blender-archive/source/blender/draw/engines/eevee/shaders/prepass_frag.glsl
Clément Foucault 7f7e683099 EEVEE: Refactor closure_lit_lib.glsl
This refactor was needed for several reasons:
- closure_lit_lib.glsl was unreadable and could not be easily extended to use new features.
- It was generating ~5K lines of code for any shader, slowing down compilation.
- Some calculations were incorrect, and the BSDF/Closure code had lots of workarounds/hacks.

What this refactor does:
- Add some macros to define the light object loops / eval.
- Clear separation between closures, which now each have their own file. Each closure implements its own eval functions (see the sketch after this list).
- Make principled BSDF a bit more correct in some cases (specular coloring, mix between glass and opaque).
- The BSDF terms are applied outside of the eval function and on the whole lighting (it was applied separately per light before).
- Make light iteration last to avoid carrying more data than needed.
- Makes sure that all inputs are within correct ranges before evaluating the closures (using `safe_normalize` on normals).
- Making each BSDF isolated means that we might carry duplicated data (normals, for instance), but this should be optimized away by compilers.
- Makes Translucent BSDF its own closure type to avoid having to disable raytraced shadows using hacks.
- Separate transmission roughness is now working on Principled BSDF.
- Makes the Principled shader variations use constants, removing a lot of duplicated code. This needed `const` keyword detection in `gpu_material_library.c`.
- SSR/SSS masking and data loading is a bit more consistent and defined outside of the closure eval. The loading functions act as accumulators if the lighting is not to be separated.
- The SSR pass now does a full deferred lighting evaluation, including lights, in order to avoid interference with the closure eval code. However, it seems that the cost of having a global SSR toggle uniform makes the surface shader more expensive (which is already the case, by the way).
- A Principled BSDF with a fully black specular tint now returns black instead of white.
- This fixed some artifact issues on my AMD computer on normal surfaces (which might have been caused by uninitialized variables).
- This touched Ambient Occlusion because it needs to be evaluated for each closure. To avoid the cost of this, we use another approach: we just pass the result of the occlusion on the interpolated normals and modify it using the bent normal for each Closure. This tends to reduce shadowing. I'm still looking into improving this, but it is out of the scope of this patch.
- Performance might be slightly worse with this patch since it is more oriented towards code modularity, but not by a lot.
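
To make the closure-eval structure above concrete, here is a minimal sketch of the pattern (hypothetical names only; `ClosureInputDiffuse`, `closure_diffuse_eval`, `LIGHT_LOOP_EVAL`, `light_count`, `light_direction_get` and `light_radiance_get` are placeholders, not the actual EEVEE symbols; `safe_normalize` is the helper mentioned above):

/* Sketch only: each closure gets an isolated input struct. Duplicated data
 * (e.g. normals) is expected to be optimized away by the compiler. */
struct ClosureInputDiffuse {
  vec3 N;      /* Shading normal, sanitized with safe_normalize() before eval. */
  vec3 albedo; /* BSDF color term, applied after the light loop, not per light. */
};

/* Hypothetical declarations standing in for the real light data access. */
uniform int light_count;
vec3 light_direction_get(int i); /* Direction towards light i. */
vec3 light_radiance_get(int i);  /* Incoming radiance from light i. */

/* Each closure implements its own eval function, returning un-colored radiance
 * for a single light sample. */
vec3 closure_diffuse_eval(ClosureInputDiffuse cl_in, vec3 L, vec3 radiance_in)
{
  return radiance_in * max(dot(cl_in.N, L), 0.0);
}

/* A macro instantiates the light loop so that light iteration happens last and
 * carries as little data as possible. */
#define LIGHT_LOOP_EVAL(cl_in, out_radiance) \
  for (int i = 0; i < light_count; i++) { \
    out_radiance += closure_diffuse_eval(cl_in, light_direction_get(i), light_radiance_get(i)); \
  }

vec3 example_diffuse_lighting(vec3 interpolated_normal, vec3 base_color)
{
  ClosureInputDiffuse cl_in;
  cl_in.N = safe_normalize(interpolated_normal); /* Keep inputs in valid ranges. */
  cl_in.albedo = base_color;

  vec3 radiance = vec3(0.0);
  LIGHT_LOOP_EVAL(cl_in, radiance);
  /* The BSDF term is applied once, on the whole accumulated lighting. */
  return radiance * cl_in.albedo;
}

The diffuse case stands in for any of the closures listed above (glossy, translucent, refraction), each of which lives in its own closure_eval_*_lib.glsl file.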

Render tests need to be updated after this.

Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D10390

# Conflicts:
#	source/blender/draw/engines/eevee/eevee_shaders.c
#	source/blender/draw/engines/eevee/shaders/common_utiltex_lib.glsl
#	source/blender/draw/intern/shaders/common_math_lib.glsl
2021-02-13 18:43:09 +01:00


/* Required by some nodes. */
#pragma BLENDER_REQUIRE(common_hair_lib.glsl)
#pragma BLENDER_REQUIRE(common_utiltex_lib.glsl)
#pragma BLENDER_REQUIRE(common_view_lib.glsl)
#pragma BLENDER_REQUIRE(common_uniforms_lib.glsl)
#pragma BLENDER_REQUIRE(closure_type_lib.glsl)
#pragma BLENDER_REQUIRE(closure_eval_lib.glsl)
#pragma BLENDER_REQUIRE(closure_eval_diffuse_lib.glsl)
#pragma BLENDER_REQUIRE(closure_eval_glossy_lib.glsl)
#pragma BLENDER_REQUIRE(closure_eval_translucent_lib.glsl)
#pragma BLENDER_REQUIRE(closure_eval_refraction_lib.glsl)
#pragma BLENDER_REQUIRE(surface_lib.glsl)
#ifdef USE_ALPHA_HASH
/* From the paper "Hashed Alpha Testing" by Chris Wyman and Morgan McGuire */
float hash(vec2 a)
{
  return fract(1e4 * sin(17.0 * a.x + 0.1 * a.y) * (0.1 + abs(sin(13.0 * a.y + a.x))));
}
float hash3d(vec3 a)
{
  return hash(vec2(hash(a.xy), a.z));
}
float hashed_alpha_threshold(vec3 co)
{
  /* Find the discretized derivatives of our coordinates. */
  float max_deriv = max(length(dFdx(co)), length(dFdy(co)));
  float pix_scale = 1.0 / (alphaHashScale * max_deriv);
  /* Find two nearest log-discretized noise scales. */
  float pix_scale_log = log2(pix_scale);
  vec2 pix_scales;
  pix_scales.x = exp2(floor(pix_scale_log));
  pix_scales.y = exp2(ceil(pix_scale_log));
  /* Compute alpha thresholds at our two noise scales. */
  vec2 alpha;
  alpha.x = hash3d(floor(pix_scales.x * co));
  alpha.y = hash3d(floor(pix_scales.y * co));
  /* Factor to interpolate lerp with. */
  float fac = fract(log2(pix_scale));
  /* Interpolate alpha threshold from noise at two scales. */
  float x = mix(alpha.x, alpha.y, fac);
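  /* Note: `x` is a blend of two independent, uniformly distributed hash values, so its
   * distribution is trapezoidal rather than uniform. The branch below applies the CDF of
   * that blended distribution (probability integral transform) so the final threshold is
   * uniformly distributed again and the expected alpha-test coverage stays correct, as
   * described in the Hashed Alpha Testing paper. */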
  /* Pass into CDF to compute uniformly distrib threshold. */
  float a = min(fac, 1.0 - fac);
  float one_a = 1.0 - a;
  float denom = 1.0 / (2 * a * one_a);
  float one_x = (1 - x);
  vec3 cases = vec3((x * x) * denom, (x - 0.5 * a) / one_a, 1.0 - (one_x * one_x * denom));
  /* Find our final, uniformly distributed alpha threshold. */
  float threshold = (x < one_a) ? ((x < a) ? cases.x : cases.y) : cases.z;
  /* Jitter the threshold for TAA accumulation. */
  threshold = fract(threshold + alphaHashOffset);
  /* Avoids threshold == 0. */
  threshold = clamp(threshold, 1.0e-6, 1.0);
  return threshold;
}
#endif
void main()
{
#if defined(USE_ALPHA_HASH)
  Closure cl = nodetree_exec();
  float opacity = saturate(1.0 - avg(cl.transmittance));
  /* Hashed Alpha Testing */
  if (opacity < hashed_alpha_threshold(worldPosition)) {
    discard;
  }
#endif
}