Previously, the lighting of SSS materials was not present in reflection probes or the irradiance grid.
This does not compute the SSS correctly, but it at least outputs the corresponding irradiance power to the correct output.
This option prevents the albedo color applied to the SSS from being automatically blurred.
While this is great for preserving details, it can bleed more light onto nearby objects since the blurring will be done on pure "white" irradiance.
This issue is to be tackled in a separate commit.
This cleanup removes the need for gigantic code duplication for each closure.
It also brings some performance improvement since it removes some branches and duplicated loops.
It also fixes some mismatches (between Cycles and EEVEE) with the Principled shader.
This adds the possibility to simulate things like red ears with strong backlight or materials with high scattering distances.
To enable it, turn on the "Subsurface Translucency" option in the "Options" tab of the Material Panel (and, of course, have "regular" SSS enabled in both the render settings and the material options).
Since the effect adds more overhead, I prefer to make it optional. But this is open to discussion.
Be aware that the effect only works for direct lights (so no indirect/world lighting) that have shadowmaps, and is affected by the shadowmap's "softness" and resolution.
Technical notes:
This is inspired by http://www.iryoku.com/translucency/ but goes a bit beyond that.
We do not apply a sum of Gaussians based on the object thickness; instead we precompute a 1D kernel texture.
This texture stores the light transmitted to a point at the back of an infinite slab of material of varying thickness.
We make the assumption that the slab is perpendicular to the light, so no Fresnel or diffusion term is taken into account.
The incoming light is considered constant.
If the setup is similar to the one assumed during the profile baking, the realtime render matches the Cycles reference.
Due to these assumptions, the computed transmitted light is in most cases too bright for curvy objects.
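As an illustration only, a minimal GLSL sketch of such a lookup, assuming a hypothetical 1D kernel texture and a thickness reconstructed from the shadow map (names and bindings are not the actual EEVEE code):

    /* Illustrative sketch only: names and bindings are hypothetical. */
    uniform sampler1D sssTransmittanceTex; /* Precomputed: transmitted light vs. slab thickness. */
    uniform float sssMaxThickness;         /* Thickness at which transmission falls to ~0 at bake time. */

    vec3 translucency(float thickness, vec3 light_color)
    {
      /* Map the reconstructed thickness to the parametrization used when baking the kernel. */
      float u = clamp(thickness / sssMaxThickness, 0.0, 1.0);
      /* The kernel assumes an infinite slab perpendicular to the light with constant
       * incoming light, hence the overestimation on curvy objects. */
      return texture(sssTransmittanceTex, u).rgb * light_color;
    }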
Finally, we jitter the shadow map sample per pixel so we can simulate dispersion inside the medium.
The radius of the dispersion is in world space and derived from the "soft" shadowmap parameter.
The idea for this comes from this presentation: http://www.iryoku.com/stare-into-the-future (slide 164).
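A hedged sketch of that jitter step, with hypothetical names (the real code projects into the shadow map differently):

    /* Illustrative sketch: jitter the shaded point inside a world-space disc facing
     * the light before the shadow map lookup, to fake dispersion in the medium. */
    vec3 jitter_shadow_position(vec3 world_pos, vec3 light_x, vec3 light_y,
                                float soft_radius, vec2 rand /* per pixel, in [0,1] */)
    {
      float angle = rand.x * 6.2831853; /* 2 * pi */
      float dist = rand.y * soft_radius;
      return world_pos + (cos(angle) * light_x + sin(angle) * light_y) * dist;
    }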
Samples: pretty self-explanatory.
Jitter Threshold: reduce cache misses and (greatly) improve performance by lowering this value. This setting lets the user decide how many samples should be jittered (rotated) to reduce banding artifacts.
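One possible reading of this setting, sketched with hypothetical names (not the actual implementation): only the first threshold * sample_count samples get the per-pixel rotation, the rest keep their cache-friendly, unrotated offsets.

    /* Illustrative sketch only. */
    vec2 sample_offset(int i, int sample_count, float jitter_threshold,
                       vec2 kernel[64], float rand_angle)
    {
      /* Samples below the threshold are rotated per pixel (removes banding but
       * hurts the texture cache); the others keep their original direction. */
      if (float(i) < jitter_threshold * float(sample_count)) {
        float c = cos(rand_angle);
        float s = sin(rand_angle);
        return mat2(c, s, -s, c) * kernel[i];
      }
      return kernel[i];
    }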
How to use:
- Enable subsurface scattering in the render options.
- Add Subsurface BSDF to your shader.
- Check "Screen Space Subsurface Scattering" in the material panel options.
This initial implementation has a few limitations:
- Only supports Gaussian SSS.
- Does not support the Principled shader.
- The radius parameters are baked down to a number of samples and then put into a UBO (see the sketch after this list). This means the radius input socket cannot be used; you need to tweak the default vector directly.
- The "texture blur" is considered as always set to 1.
This is quite basic as it only supports bounding boxes.
But the material can refine the volume shape in any way the user likes.
To overcome this limitation, a voxelisation of the mesh should be done (generating an SDF, maybe?) and tested against every volumetric cell.
The system now uses several 3D textures in order to decouple every step of the volumetric rendering.
See https://www.ea.com/frostbite/news/physically-based-unified-volumetric-rendering-in-frostbite for more details.
On the technical side, instead of using a compute shader to populate the 3D textures we use layered rendering with a geometry shader to render 1 fullscreen triangle per 3D texture slice.
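A minimal sketch of such a geometry shader, under the assumption that one fullscreen triangle is drawn per slice and the slice index arrives from the vertex shader (identifiers are illustrative):

    /* Illustrative sketch only. Layered rendering: the target 3D texture slice
     * is selected per primitive through gl_Layer. */
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in int sliceIn[]; /* Slice index forwarded from the vertex shader (gl_InstanceID). */
    out float sliceZ; /* Normalized slice depth for the fragment shader. */

    uniform int sliceCount;

    void main()
    {
      for (int v = 0; v < 3; v++) {
        gl_Layer = sliceIn[0];
        gl_Position = gl_in[v].gl_Position;
        sliceZ = (float(sliceIn[0]) + 0.5) / float(sliceCount);
        EmitVertex();
      }
      EndPrimitive();
    }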
There was noise correlation between the rotation random number and the radius random number used in the contact shadow algorithm.
We hack a new distribution from the old one (which may not be ideal, because its discrepancy may be high).
Also distribute samples evenly on the shadow disc (add a sqrt), as sketched below.
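The sqrt is the standard warp for a uniform disc distribution; a small illustrative sketch:

    /* Illustrative sketch: uniformly distributed point on the shadow disc.
     * Taking sqrt of the radius random number avoids clumping at the center. */
    vec2 sample_shadow_disc(float rand_radius, float rand_angle, float disc_radius)
    {
      float r = sqrt(rand_radius) * disc_radius;
      float a = rand_angle * 6.2831853; /* 2 * pi */
      return r * vec2(cos(a), sin(a));
    }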
Fix the "bias floating shadows", was cause by the discarding of backfacing geom which makes no sense in this case.
This adds the possibility to use screen space raytraced shadows to fix light leaking caused by shadow maps.
These inherit the same artifacts as other screen space methods.
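As a rough sketch of the idea only (not the actual implementation; it assumes a hypothetical texture storing linear view-space depth): march a short ray towards the light in screen space and compare it against the depth buffer.

    /* Illustrative sketch only. */
    float screen_space_shadow(sampler2D view_depth_tx, vec3 vP /* view-space position */,
                              vec3 vL /* view-space direction to the light */,
                              mat4 ProjectionMatrix, float ray_length,
                              int step_count, float thickness)
    {
      for (int i = 1; i <= step_count; i++) {
        vec3 sP = vP + vL * (ray_length * float(i) / float(step_count));
        vec4 hP = ProjectionMatrix * vec4(sP, 1.0);
        vec2 uv = (hP.xy / hP.w) * 0.5 + 0.5;
        if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0)))) {
          break; /* Ray left the screen: no occlusion information. */
        }
        float scene_depth = texture(view_depth_tx, uv).r; /* Positive, increases with distance. */
        float ray_depth = -sP.z;                          /* View space looks down -Z. */
        if (ray_depth > scene_depth && ray_depth - scene_depth < thickness) {
          return 0.0; /* Occluder found: fully shadowed. */
        }
      }
      return 1.0; /* No occluder hit. */
    }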
This required some small changes to the data display shaders so that they match the way the object mode renders them.
Strangely enough, I had to remove the normal attribute from the display code because it was no longer being bound as soon as I created another rendering call in object mode. The problem may be deeper, but I did not have time for this, so I derive the normal from the sphere position.
This adds TAA to EEVEE. The only important thing to note is that we need to keep the unjittered depth buffer so that the other engines are composited correctly.
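For illustration, a minimal sketch of an accumulation resolve (a plain running average, which assumes a static view; names are hypothetical):

    /* Illustrative sketch only: simple running average over the history buffer. */
    uniform sampler2D historyBuffer;
    uniform sampler2D currentSample;
    uniform float sampleCount; /* Number of samples accumulated so far, >= 1. */

    out vec4 fragColor;

    void main()
    {
      ivec2 texel = ivec2(gl_FragCoord.xy);
      vec4 history = texelFetch(historyBuffer, texel, 0);
      vec4 current = texelFetch(currentSample, texel, 0);
      /* Blend the new jittered sample into the history. The depth buffer used to
       * composite the other engines stays unjittered. */
      fragColor = mix(history, current, 1.0 / sampleCount);
    }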
The problem was that orthographic views can have hit positions that are negative, so we cannot encode the hit in the sign of the Z component.
The workaround is to store the hit position in screen space. But since we are using a floating point render target, we lose quite a bit of precision.
TODO: use RGBA16 instead of RGBA16F. But that means encoding the pdf value somehow.
Add a sanitizer. I wanted to stay away from this because I think we should fix what causes the NaNs in the first place. But there can be too many different factors causing NaNs, and they can come from user inputs.
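The sanitizer can be as small as a NaN/Inf clamp applied before the values spread through the filtering and accumulation passes; an illustrative sketch:

    /* Illustrative sketch: replace non-finite values with something harmless. */
    vec4 safe_color(vec4 c)
    {
      if (any(isnan(c)) || any(isinf(c))) {
        return vec4(0.0, 0.0, 0.0, 1.0);
      }
      return c;
    }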
The branching introduced by the uniform caused problems on Mesa + AMD in the resolve stage.
This patch creates one shader per sample count, without branching.
This improves performance in the single ray per pixel case (3.0ms against 3.6ms in my testing).
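A sketch of the approach: the sample count becomes a compile-time constant per shader variant instead of a uniform, so the resolve loop has a fixed trip count (illustrative names, not the actual resolve code):

    /* Illustrative sketch: one shader is compiled per value of RESOLVE_SAMPLES,
     * e.g. by prepending "#define RESOLVE_SAMPLES 4" from the C side. */
    #ifndef RESOLVE_SAMPLES
    #  define RESOLVE_SAMPLES 1
    #endif

    vec4 resolve_rays(sampler2DArray hit_buffer_tx, vec2 uv)
    {
      vec4 accum = vec4(0.0);
      /* Fixed trip count: no dynamic branch, and the driver can unroll. */
      for (int i = 0; i < RESOLVE_SAMPLES; i++) {
        accum += texture(hit_buffer_tx, vec3(uv, float(i)));
      }
      return accum / float(RESOLVE_SAMPLES);
    }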
You can now use a transparent shader as a completely transparent BSDF, and use any alpha mask in a mix shader between a transparent BSDF and another BSDF.
- Replace Poisson by concentric samples: less variance. They are sorted by radius, then by angle (see the sketch after this list).
- Separate the filtering into 2 blurs. The first blur is a 3x3 box blur. The second is user dependent.
- Group fetches by groups of 4.
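For reference, a sketch of how such a concentric sample can be generated (the Shirley-Chiu square-to-disc mapping; the actual kernel is precomputed and sorted beforehand, and the names here are illustrative):

    /* Illustrative sketch: Shirley-Chiu concentric mapping of a point on the
     * unit square to the unit disc. The resulting kernel has less variance than
     * a Poisson set and can be sorted by radius, then by angle. */
    vec2 concentric_sample(vec2 rand) /* rand in [0,1]^2 */
    {
      vec2 offset = 2.0 * rand - 1.0;
      if (offset.x == 0.0 && offset.y == 0.0) {
        return vec2(0.0);
      }
      float r, theta;
      if (abs(offset.x) > abs(offset.y)) {
        r = offset.x;
        theta = (3.14159265 / 4.0) * (offset.y / offset.x);
      }
      else {
        r = offset.y;
        theta = (3.14159265 / 2.0) - (3.14159265 / 4.0) * (offset.x / offset.y);
      }
      return r * vec2(cos(theta), sin(theta));
    }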
This brings some data structure changes.
Shared shadow data is stored in ShadowData (in GLSL), aka EEVEE_Shadow in C.
This structure contains the array indices of the first shadow element of this shadow "object".
It also contains how many shadows to evaluate (to be used for multiple shadow maps).
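An illustrative sketch of that kind of layout (the actual ShadowData / EEVEE_Shadow fields differ):

    /* Illustrative sketch only: the real struct has different fields. */
    struct ShadowData {
      vec4 near_far_bias_exp; /* Packed common shadow parameters. */
      int shadow_start;       /* Index of the first shadow element of this shadow "object". */
      int data_start;         /* Index of the first cube/cascade data element. */
      int shadow_count;       /* How many shadows to evaluate (multiple shadow maps). */
      int _pad;
    };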
The filtering is noisy and needs improvement.
- Use only one 2D texture array to store all shadowmaps (see the lookup sketch after this list).
- Allow changing the shadow map resolution.
- Do not output radial distance when rendering shadowmaps. This will allow fast rendering of shadowmaps once we drop the use of geometry shaders.
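With all shadow maps packed into a single 2D array texture, a lookup is just an indexed layer fetch, e.g. (illustrative):

    /* Illustrative sketch: all shadow maps live in one 2D array texture, so a
     * shadow test only needs the layer index stored in ShadowData. */
    float shadow_test(sampler2DArrayShadow shadow_tx, vec3 uv_depth, int layer)
    {
      /* uv_depth.xy: shadow map coords, uv_depth.z: receiver depth to compare. */
      return texture(shadow_tx, vec4(uv_depth.xy, float(layer), uv_depth.z));
    }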