blender-archive/intern/cycles/kernel/integrator/subsurface.h
Olivier Maury 1fb0247497 Cycles: approximate shadow caustics using manifold next event estimation
This adds support for selective rendering of caustics in shadows of refractive
objects. Example uses are rendering of underwater caustics and eye caustics.

This is based on "Manifold Next Event Estimation", a method developed for
production rendering. The idea is to selectively enable shadow caustics on a
few objects in the scene where they have a big visual impact, without impacting
render performance for the rest of the scene.

The Shadow Caustic option must be manually enabled on the light, caustic
receiver, and caster objects. For such light paths, the Filter Glossy option
will be ignored and replaced by sharp caustics.

Currently this method has various limitations:

* Only caustics in shadows of refractive objects work, which means no caustics
  from reflection and no caustics outside of shadows. Only up to 4 refractive
  caustic bounces are supported.
* Caustic caster objects should have smooth normals.
* Metal GPU rendering is not currently supported.

In the future this method may be extended for more general caustics.

TECHNICAL DETAILS

This code adds manifold next event estimation through refractive surface(s) as a
new sampling technique for direct lighting, i.e. finding the point on the
refractive surface(s) along the path to a light sample, which satisfies Fermat's
principle for a given microfacet normal and the path's end points. This
technique involves walking on the "specular manifold" using a pseudo-Newton
solver. Such a manifold is defined by the specular constraint matrix from the
manifold exploration framework [2]. For each refractive interface, this
constraint is defined by enforcing that the generalized half-vector projection
onto the interface's local tangent plane is null. The Newton solver guides the
walk by linearizing the manifold locally before reprojecting the linear solution
onto the refractive surface. See paper [1] for more details about the technique
itself and [3] for the half-vector light transport formulation, from which it is
derived.
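
To make the constraint concrete, here is a minimal, self-contained C++ sketch
of the per-interface constraint the Newton walk drives to zero. It is
illustrative only, not the Cycles kernel code, and all names in it are
hypothetical. For incoming direction wi, outgoing direction wo (both pointing
away from the vertex) and relative IOR eta, the generalized half-vector is
H = normalize(wi + eta * wo), and the constraint is its projection onto the
local tangent frame (t, b):

#include <cmath>

struct Vec3 {
  float x, y, z;
};

static inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static inline Vec3 normalize(Vec3 a)
{
  return a * (1.0f / std::sqrt(dot(a, a)));
}

/* Tangent-plane coordinates of the generalized half-vector; the walk seeks
 * u = v = 0 (or, for rough interfaces, the sampled microfacet offset). */
struct Constraint2D {
  float u, v;
};

Constraint2D refraction_constraint(Vec3 wi, Vec3 wo, float eta, Vec3 t, Vec3 b)
{
  const Vec3 H = normalize(wi + wo * eta);
  return {dot(H, t), dot(H, b)};
}

Each solver iteration evaluates this constraint at the current interface
vertices, linearizes it to compute an update, and reprojects the updated
positions back onto the refractive surface, as described above.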

[1] Manifold Next Event Estimation
Johannes Hanika, Marc Droske, and Luca Fascione. 2015.
Comput. Graph. Forum 34, 4 (July 2015), 87–97.
https://jo.dreggn.org/home/2015_mnee.pdf

[2] Manifold exploration: a Markov Chain Monte Carlo technique for rendering
scenes with difficult specular transport. Wenzel Jakob and Steve Marschner.
2012. ACM Trans. Graph. 31, 4, Article 58 (July 2012), 13 pages.
https://www.cs.cornell.edu/projects/manifolds-sg12/

[3] The Natural-Constraint Representation of the Path Space for Efficient
Light Transport Simulation. Anton S. Kaplanyan, Johannes Hanika, and Carsten
Dachsbacher. 2014. ACM Trans. Graph. 33, 4, Article 102 (July 2014), 13 pages.
https://cg.ivd.kit.edu/english/HSLT.php

The code for this sampling technique was inserted at the light sampling stage
(direct lighting). If the walk is successful, it turns off path regularization
using a specialized flag in the path state (PATH_MNEE_SUCCESS). This flag tells
the integrator not to blur the BSDF roughness further down the path (in a child
ray created from BSDF sampling). In addition, using a cascading mechanism of
flag values, we cull connections to caustic lights for this ray and its
children, which should be resolved through MNEE.
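
As a rough illustration of that cascading mechanism (the flag names and logic
below are hypothetical, not the actual Cycles enum), a child ray demotes "the
walk succeeded at this vertex" into "an ancestor resolved caustics via MNEE",
which is what the light-connection culling then tests:

/* Sketch only: cascading MNEE path flags across bounces. */
enum MneePathFlag : unsigned {
  MNEE_NONE = 0,
  MNEE_SUCCESS = 1 << 0,           /* Walk succeeded at the current vertex. */
  MNEE_RECEIVER_ANCESTOR = 1 << 1, /* An ancestor vertex was resolved via MNEE. */
};

/* When spawning a child ray from BSDF sampling, cascade the flag down. */
unsigned cascade_mnee_flag(unsigned flag)
{
  return (flag & (MNEE_SUCCESS | MNEE_RECEIVER_ANCESTOR)) ? MNEE_RECEIVER_ANCESTOR : MNEE_NONE;
}

/* Connections to caustic lights are culled while the ancestor flag is set,
 * since MNEE already accounts for that transport. */
bool cull_caustic_light_connection(unsigned flag, bool light_is_caustic)
{
  return light_is_caustic && (flag & MNEE_RECEIVER_ANCESTOR);
}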

This mechanism also cancels the MIS BSDF counterpart at the caustic receiver
depth, in essence leaving MNEE as the only sampling technique from receivers
through refractive casters to caustic lights. This choice might not be optimal
when the light gets large with respect to the receiver, though that is usually
not the case when you want to use MNEE.
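
In MIS terms, the effect at the receiver can be pictured as the light-sampling
estimator keeping full weight instead of the usual balance-heuristic split; a
hedged sketch with hypothetical names:

/* Sketch only: MIS weight for the light-sampling (NEE) technique, with the
 * BSDF-sampling counterpart cancelled when MNEE resolved the connection. */
float nee_mis_weight(float pdf_nee, float pdf_bsdf, bool mnee_resolved)
{
  if (mnee_resolved) {
    return 1.0f; /* MNEE is the only technique for this connection. */
  }
  return pdf_nee / (pdf_nee + pdf_bsdf); /* Balance heuristic otherwise. */
}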

This connection culling strategy removes a fair amount of fireflies, at the cost
of introducing a slight bias. Because of the selective nature of the culling
mechanism, reflective caustics still benefit from the native path
regularization, which further removes fireflies on other surfaces (bouncing
light off casters).

Differential Revision: https://developer.blender.org/D13533
2022-04-01 17:45:39 +02:00

/* SPDX-License-Identifier: Apache-2.0
 * Copyright 2011-2022 Blender Foundation */

#pragma once

#include "kernel/camera/projection.h"

#include "kernel/bvh/bvh.h"

#include "kernel/closure/alloc.h"
#include "kernel/closure/bsdf_diffuse.h"
#include "kernel/closure/bsdf_principled_diffuse.h"
#include "kernel/closure/bssrdf.h"
#include "kernel/closure/volume.h"

#include "kernel/integrator/intersect_volume_stack.h"
#include "kernel/integrator/path_state.h"
#include "kernel/integrator/shader_eval.h"
#include "kernel/integrator/subsurface_disk.h"
#include "kernel/integrator/subsurface_random_walk.h"

CCL_NAMESPACE_BEGIN

#ifdef __SUBSURFACE__
ccl_device int subsurface_bounce(KernelGlobals kg,
                                 IntegratorState state,
                                 ccl_private ShaderData *sd,
                                 ccl_private const ShaderClosure *sc)
{
  /* We should never have two consecutive BSSRDF bounces, the second one should
   * be converted to a diffuse BSDF to avoid this. */
  kernel_assert(!(INTEGRATOR_STATE(state, path, flag) & PATH_RAY_DIFFUSE_ANCESTOR));

  /* Setup path state for intersect_subsurface kernel. */
  ccl_private const Bssrdf *bssrdf = (ccl_private const Bssrdf *)sc;

  /* Setup ray into surface. */
  INTEGRATOR_STATE_WRITE(state, ray, P) = sd->P;
  INTEGRATOR_STATE_WRITE(state, ray, D) = bssrdf->N;
  INTEGRATOR_STATE_WRITE(state, ray, t) = FLT_MAX;
  INTEGRATOR_STATE_WRITE(state, ray, dP) = differential_make_compact(sd->dP);
  INTEGRATOR_STATE_WRITE(state, ray, dD) = differential_zero_compact();

  /* Pass along object info, reusing isect to save memory. */
  INTEGRATOR_STATE_WRITE(state, subsurface, Ng) = sd->Ng;

  uint32_t path_flag = (INTEGRATOR_STATE(state, path, flag) & ~PATH_RAY_CAMERA) |
                       ((sc->type == CLOSURE_BSSRDF_BURLEY_ID) ? PATH_RAY_SUBSURFACE_DISK :
                                                                 PATH_RAY_SUBSURFACE_RANDOM_WALK);

  /* Compute weight, optionally including Fresnel from entry point. */
  float3 weight = shader_bssrdf_sample_weight(sd, sc);

#  ifdef __PRINCIPLED__
  if (bssrdf->roughness != FLT_MAX) {
    path_flag |= PATH_RAY_SUBSURFACE_USE_FRESNEL;
  }
#  endif

  if (sd->flag & SD_BACKFACING) {
    path_flag |= PATH_RAY_SUBSURFACE_BACKFACING;
  }

  INTEGRATOR_STATE_WRITE(state, path, throughput) *= weight;
  INTEGRATOR_STATE_WRITE(state, path, flag) = path_flag;

  /* Advance random number offset for bounce. */
  INTEGRATOR_STATE_WRITE(state, path, rng_offset) += PRNG_BOUNCE_NUM;

  if (kernel_data.kernel_features & KERNEL_FEATURE_LIGHT_PASSES) {
    if (INTEGRATOR_STATE(state, path, bounce) == 0) {
      INTEGRATOR_STATE_WRITE(state, path, pass_diffuse_weight) = one_float3();
      INTEGRATOR_STATE_WRITE(state, path, pass_glossy_weight) = zero_float3();
    }
  }

  /* Pass BSSRDF parameters. */
  INTEGRATOR_STATE_WRITE(state, subsurface, albedo) = bssrdf->albedo;
  INTEGRATOR_STATE_WRITE(state, subsurface, radius) = bssrdf->radius;
  INTEGRATOR_STATE_WRITE(state, subsurface, anisotropy) = bssrdf->anisotropy;

  return LABEL_SUBSURFACE_SCATTER;
}
ccl_device void subsurface_shader_data_setup(KernelGlobals kg,
                                             IntegratorState state,
                                             ccl_private ShaderData *sd,
                                             const uint32_t path_flag)
{
  /* Get bump mapped normal from shader evaluation at exit point. */
  float3 N = sd->N;
  if (sd->flag & SD_HAS_BSSRDF_BUMP) {
    N = shader_bssrdf_normal(sd);
  }

  /* Setup diffuse BSDF at the exit point. This replaces shader_eval_surface. */
  sd->flag &= ~SD_CLOSURE_FLAGS;
  sd->num_closure = 0;
  sd->num_closure_left = kernel_data.max_closures;

  const float3 weight = one_float3();

#  ifdef __PRINCIPLED__
  if (path_flag & PATH_RAY_SUBSURFACE_USE_FRESNEL) {
    ccl_private PrincipledDiffuseBsdf *bsdf = (ccl_private PrincipledDiffuseBsdf *)bsdf_alloc(
        sd, sizeof(PrincipledDiffuseBsdf), weight);

    if (bsdf) {
      bsdf->N = N;
      bsdf->roughness = FLT_MAX;
      sd->flag |= bsdf_principled_diffuse_setup(bsdf, PRINCIPLED_DIFFUSE_LAMBERT_EXIT);
    }
  }
  else
#  endif /* __PRINCIPLED__ */
  {
    ccl_private DiffuseBsdf *bsdf = (ccl_private DiffuseBsdf *)bsdf_alloc(
        sd, sizeof(DiffuseBsdf), weight);

    if (bsdf) {
      bsdf->N = N;
      sd->flag |= bsdf_diffuse_setup(bsdf);
    }
  }
}
ccl_device_inline bool subsurface_scatter(KernelGlobals kg, IntegratorState state)
{
  RNGState rng_state;
  path_state_rng_load(state, &rng_state);

  Ray ray ccl_optional_struct_init;
  LocalIntersection ss_isect ccl_optional_struct_init;

  if (INTEGRATOR_STATE(state, path, flag) & PATH_RAY_SUBSURFACE_RANDOM_WALK) {
    if (!subsurface_random_walk(kg, state, rng_state, ray, ss_isect)) {
      return false;
    }
  }
  else {
    if (!subsurface_disk(kg, state, rng_state, ray, ss_isect)) {
      return false;
    }
  }

#  ifdef __VOLUME__
  /* Update volume stack if needed. */
  if (kernel_data.integrator.use_volumes) {
    const int object = ss_isect.hits[0].object;
    const int object_flag = kernel_tex_fetch(__object_flag, object);

    if (object_flag & SD_OBJECT_INTERSECTS_VOLUME) {
      float3 P = INTEGRATOR_STATE(state, ray, P);

      integrator_volume_stack_update_for_subsurface(kg, state, P, ray.P);
    }
  }
#  endif /* __VOLUME__ */

  /* Pretend ray is coming from the outside towards the exit point. This ensures
   * correct front/back facing normals.
   * TODO: find a more elegant solution? */
  ray.P += ray.D * ray.t * 2.0f;
  ray.D = -ray.D;

  integrator_state_write_isect(kg, state, &ss_isect.hits[0]);
  integrator_state_write_ray(kg, state, &ray);

  /* Advance random number offset for bounce. */
  INTEGRATOR_STATE_WRITE(state, path, rng_offset) += PRNG_BOUNCE_NUM;

  const int shader = intersection_get_shader(kg, &ss_isect.hits[0]);
  const int shader_flags = kernel_tex_fetch(__shaders, shader).flags;
  const int object_flags = intersection_get_object_flags(kg, &ss_isect.hits[0]);
  const bool use_caustics = kernel_data.integrator.use_caustics &&
                            (object_flags & SD_OBJECT_CAUSTICS);
  const bool use_raytrace_kernel = (shader_flags & SD_HAS_RAYTRACE) || use_caustics;

  if (use_raytrace_kernel) {
    INTEGRATOR_PATH_NEXT_SORTED(DEVICE_KERNEL_INTEGRATOR_INTERSECT_SUBSURFACE,
                                DEVICE_KERNEL_INTEGRATOR_SHADE_SURFACE_RAYTRACE,
                                shader);
  }
  else {
    INTEGRATOR_PATH_NEXT_SORTED(DEVICE_KERNEL_INTEGRATOR_INTERSECT_SUBSURFACE,
                                DEVICE_KERNEL_INTEGRATOR_SHADE_SURFACE,
                                shader);
  }

  return true;
}

#endif /* __SUBSURFACE__ */

CCL_NAMESPACE_END