This was the cause of some issues with normal mapping. This approach is cleaner
since it does not modify the state of the drawcalls and avoids other ad-hoc
solutions to fix the problems down the road. Unfortunately, it does require
fixing every sampling coordinate for this texture.
Fix T62215: flipped normals in reflection plane
This is a parameter that makes the interpolation between irradiance
cells of the same Irradiance Volume smoother, reducing the weight of the
light leaking correction factors.
It is useful in some cases to avoid the harsh lighting transitions that can
happen when a sample point is near a surface (see the sketch below).
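A minimal sketch of how such a smoothing factor could be applied, assuming a
per-cell trilinear weight and a separate leak-correction factor; all names
here are illustrative, not the actual EEVEE code:

    /* Hypothetical sketch: `smoothing` in [0, 1] fades the leak-correction
     * factor out, blending back toward the plain trilinear weight. */
    float cell_weight(float trilinear_w, float leak_correction, float smoothing)
    {
      float corrected = trilinear_w * leak_correction;
      /* smoothing = 0: full correction; smoothing = 1: plain interpolation. */
      return corrected + (trilinear_w - corrected) * smoothing;
    }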
This separates probe rendering from viewport rendering, making it possible to
run the baking in another thread (non-blocking and faster).
The baked lighting is saved in the blend file. Nothing needs to be
recomputed on load.
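For illustration only, the non-blocking bake boils down to kicking the work
off the main thread; this is a generic sketch, not the actual job system used:

    #include <pthread.h>

    static void *bake_job(void *arg)
    {
      /* Render the probes and fill the light cache here. */
      (void)arg;
      return NULL;
    }

    void start_bake(void)
    {
      pthread_t thread;
      /* The viewport thread keeps drawing while the bake runs. */
      pthread_create(&thread, NULL, bake_job, NULL);
      pthread_detach(thread);
    }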
There are a few missing bits / bugs:
- The cache cannot be saved to disk as a separate file; it is saved in the DNA
for now, making files larger and memory usage higher.
- Auto-updating only the cubemaps also updates the grids (bug).
- Probes cannot be updated individually (they are all considered dynamic).
- The Light Cache cannot be (re)generated during render.
Patch D3205 by Kanzaki Wataru
Only implemented in Eevee for now. Collapse a closure to RGBA so we can
do NPR stuff on the resulting color.
Use an emission shader to convert the color back to a closure.
Doing this will break PBR and will kill any SSR and SSS effects the shader
relies on. That said, screen space refraction and ambient occlusion
are supported due to the way they are implemented.
This is an optimization / cleanup commit.
The use of a global UBO removes lots of uniform lookups and only transfers data when needed (see the sketch below).
Lots of renaming for a more consistent code style.
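A minimal sketch of the global UBO pattern in raw GL; illustrative only, not
the actual EEVEE code. It assumes a current GL context with UBO support, and
a matching std140 uniform block bound to binding point 0 on the GLSL side:

    typedef struct CommonUniforms {
      float view_matrix[16];
      float projection_matrix[16];
      float camera_position[4];
    } CommonUniforms;

    void upload_common_uniforms(const CommonUniforms *data)
    {
      GLuint ubo;
      glGenBuffers(1, &ubo);
      glBindBuffer(GL_UNIFORM_BUFFER, ubo);
      glBufferData(GL_UNIFORM_BUFFER, sizeof(*data), data, GL_DYNAMIC_DRAW);
      /* Bind once to a fixed binding point; every shader reads the same
       * block, so per-drawcall uniform lookups and re-uploads go away. */
      glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    }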
This augments the existing irradiance grid with a new visibility precomputation.
We store a small shadowmap for each grid sample so that light does not leak through walls and the like.
The visibility parameters are similar to the ones used by Variance Shadow Maps for point lights.
Technical details:
We store the visibility in the same texture (array) as the irradiance itself (in order to reduce the number of samplers).
But the irradiance and the visibility are not the same data, so we must encode them in order to use the same texture format.
We use an RGBA8 normalized texture and encode the irradiance as RGBE (shared exponent).
Using RGBE encoding instead of R11_G11_B10 may lead to some lighting changes, but quality seems nearly the same in my test cases.
Using full RGBA16/32F may be a future option, but that would require much more memory and reduce performance significantly.
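For reference, a sketch of the classic shared-exponent (RGBE) encoding into
four 8-bit channels; the actual shader code may differ in its details:

    #include <math.h>

    static void rgb_to_rgbe(const float rgb[3], unsigned char rgbe[4])
    {
      float maxc = fmaxf(rgb[0], fmaxf(rgb[1], rgb[2]));
      if (maxc < 1e-32f) {
        rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
        return;
      }
      int e;
      /* maxc = m * 2^e with m in [0.5, 1); all channels share exponent e. */
      float m = frexpf(maxc, &e);
      float scale = m * 256.0f / maxc;
      rgbe[0] = (unsigned char)(rgb[0] * scale);
      rgbe[1] = (unsigned char)(rgb[1] * scale);
      rgbe[2] = (unsigned char)(rgb[2] * scale);
      rgbe[3] = (unsigned char)(e + 128); /* biased exponent in alpha */
    }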
Visibility moments (VSM) are encoded as 16-bit fixed point over a special range. This seems to retain enough precision for our needs.
Also, interpolation does not seem to be a big problem (even though it's incorrect).
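For context, the standard two-moment Chebyshev visibility test that the VSM
approach relies on (a sketch; the 16-bit fixed-point range remapping is
omitted here):

    /* Baked per sample: m1 = depth and m2 = depth * depth (the two moments),
     * quantized to 16-bit fixed point over a known depth range. */
    static float vsm_visibility(float m1, float m2, float receiver_depth)
    {
      if (receiver_depth <= m1) {
        return 1.0f; /* receiver is closer than the average occluder */
      }
      float variance = m2 - m1 * m1;
      float diff = receiver_depth - m1;
      /* Chebyshev upper bound on the fraction of unoccluded light. */
      return variance / (variance + diff * diff);
    }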
It now uses a quality slider instead of a stride.
Lower quality takes larger strides between samples and uses lower mips when tracing rough rays.
Raytracing is now done entirely in homogeneous coordinate space, which runs much faster (see the sketch below).
It should be fairly optimized; we are still bandwidth bound.
Add a line-line intersection refinement.
Add a ray jitter between the multiple rays per pixel to fill in some undersampling in mirror reflections.
The tracing now stops if it goes behind an object. This needs some work to allow it to continue even when behind objects.
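A minimal sketch of why marching in homogeneous space is fast: clip-space
coordinates of points on a 3D line can be interpolated linearly, so each
sample needs only one perspective divide. Illustrative code, not the actual
tracer:

    typedef struct { float x, y, z, w; } Vec4;

    /* p0/p1: ray endpoints already multiplied by the projection matrix. */
    static void march(Vec4 p0, Vec4 p1, int steps)
    {
      for (int i = 1; i <= steps; i++) {
        float t = (float)i / (float)steps;
        /* Linear interpolation is perspective correct in homogeneous space. */
        Vec4 p = {
          p0.x + (p1.x - p0.x) * t,
          p0.y + (p1.y - p0.y) * t,
          p0.z + (p1.z - p0.z) * t,
          p0.w + (p1.w - p0.w) * t,
        };
        /* One divide yields the screen position and depth of the sample. */
        float sx = p.x / p.w, sy = p.y / p.w, depth = p.z / p.w;
        /* Compare `depth` with the depth buffer at (sx, sy) and stop on a
         * hit (buffer lookup omitted in this skeleton). */
        (void)sx; (void)sy; (void)depth;
      }
    }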
This adds the possibility of using planar probe information to create SSR.
This has 2 advantages:
- Tracing is less expensive since the hit is found much more quickly.
- We have far fewer artifacts due to missing information.
There is still room for improvement.
This commit separates the depth texture into another texture array.
This removes the need to output radial depth into the alpha channel.
Unfortunately, it's difficult to recover position from the non-linear depth buffer when applying reflections without adding a bunch of extra computation (see the sketch below).
This is in preparation for SSR planar reflections.
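For context, recovering view-space distance from a non-linear [0, 1] depth
value requires an extra per-pixel computation like the following, assuming GL
conventions and a standard perspective projection with clip distances
`near` and `far`:

    /* Non-linear depth buffer value -> positive view-space distance.
     * d = 0 maps to `near`, d = 1 maps to `far`. */
    static float linear_depth(float d, float near, float far)
    {
      return (near * far) / (far - d * (far - near));
    }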