This is an important change: from now on, all lights have a finite
influence radius (similar to the old Sphere option in BI).
To avoid costly setup time, this distance is first computed automatically
from a light threshold. The distance is evaluated at the light origin using
the inverse square falloff. The setting can be found in the render settings
panel > Shadow tab.
This light threshold does not take the light shape into account and may not
suit every case. That's why we provide a per-lamp override where you can
just set the cut-off distance (Light Properties Panel > Light >
Custom Distance).
The influence distance is also used as the shadow far clip distance.
This influence distance does not concern sun lights, which still have a
separate far clip distance.
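For reference, a minimal sketch of how such a cutoff can be derived, assuming
a pure inverse square falloff (the function and parameter names are
illustrative, not the actual implementation):

    #include <float.h>
    #include <math.h>

    /* Illustrative only: with inverse square falloff the intensity received
     * at distance d is power / d^2, so the influence radius is the distance
     * at which this drops below the light threshold. */
    static float light_influence_radius(float power, float threshold)
    {
      if (threshold <= 0.0f) {
        return FLT_MAX; /* No cutoff requested. */
      }
      return sqrtf(power / threshold);
    }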
---
This change is important because it makes it possible to cull lights
and improve performance drastically in the future.
I made an empirical test with a 100% diffuse sphere and manually tweaked the
power of a sun lamp, trying to match Cycles and Eevee as closely as I could.
Then I plotted the results and found a rough fit with an equation that
seems to work pretty well.
This makes it possible to tweak indirect lighting in the shader.
Only a subset of the outputs is supported and the ray depth does not have
exactly the same meaning:
Is Camera : Supported.
Is Shadow : Supported.
Is Diffuse : Supported.
Is Glossy : Supported.
Is Singular : Not supported. Same as Is Glossy.
Is Reflection : Not supported. Same as Is Glossy.
Is Transmission : Not supported. Same as Is Glossy.
Ray Length : Not supported. Defaults to 1.0.
Ray Depth : Indicates the current bounce when baking the light cache.
Diffuse Depth : Same as Ray Depth but only when baking diffuse light.
Glossy Depth : Same as Ray Depth but only when baking specular light.
Transparent Depth : Not supported. Defaults to 0.
Transmission Depth : Not supported. Same as Glossy Depth.
Caveat: Is Glossy does not work with Screen Space Reflections but does work
with reflection planes (whether or not they are used together with SSR).
We have to render the world twice for that to work.
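A rough sketch of how these outputs can behave in a rasterizer, assuming they
are resolved to per-pass constants during the light cache bake (all names here
are hypothetical, not Eevee's actual code):

    /* Hypothetical illustration: since Eevee traces no real rays, the light
     * path outputs become constants that depend on what is being rendered
     * (camera view, shadow map, diffuse bake or specular bake). */
    typedef struct LightPathConsts {
      float is_camera, is_shadow, is_diffuse, is_glossy;
      float ray_depth, diffuse_depth, glossy_depth;
    } LightPathConsts;

    static LightPathConsts light_path_for_diffuse_bake(int bounce)
    {
      LightPathConsts consts = {0};
      consts.is_diffuse = 1.0f;             /* Baking diffuse light. */
      consts.ray_depth = (float)bounce;     /* Current bounce of the bake. */
      consts.diffuse_depth = (float)bounce; /* Same as Ray Depth for diffuse. */
      return consts;
    }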
This new option is located in the shadow options in the render settings.
This approach is simple: it just randomizes the shadow map position (not
the lamp itself) and lets the temporal supersampling average all the
shadowing. The downside is that it needs quite a large number of samples
to give smooth results, and individual sample positions can remain
visible.
Enabling this option will make the viewport refresh all shadow maps on every
redraw, so it has a serious performance impact.
This approach is not physically based at all and will not match Cycles.
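A minimal sketch of the idea, assuming the shadow map is rendered from a
randomly offset position inside the lamp radius each sample (names are
illustrative, not the actual Eevee code):

    #include <stdlib.h>

    /* Illustrative only: jitter the position used to render the shadow map
     * and let the temporal supersampling average the result over redraws. */
    static float rand_signed(void)
    {
      return 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
    }

    static void shadow_jittered_position(const float lamp_pos[3], float radius, float r_pos[3])
    {
      float ofs[3];
      /* Rejection-sample a point inside the unit sphere. */
      do {
        ofs[0] = rand_signed();
        ofs[1] = rand_signed();
        ofs[2] = rand_signed();
      } while (ofs[0] * ofs[0] + ofs[1] * ofs[1] + ofs[2] * ofs[2] > 1.0f);

      for (int i = 0; i < 3; i++) {
        r_pos[i] = lamp_pos[i] + ofs[i] * radius;
      }
    }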
----
The sampling for point lamps (spheres) is not
This separates probe rendering from viewport rendering, making it possible
to run the baking in another thread (non-blocking and faster).
The baked lighting is saved in the blend file. Nothing needs to be
recomputed on load.
There are a few missing bits / bugs:
- The cache cannot be saved to disk as a separate file; it is saved in the DNA
for now, making files larger and memory usage higher.
- Auto update of only cubemaps does update the grids too (bug).
- Probes cannot be updated individually (considered as dynamic).
- Light Cache cannot be (re)generated during render.
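A minimal sketch of the non-blocking setup, using a plain pthread instead of
Blender's actual job system (all names here are hypothetical):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative only: the bake fills its own light cache on a worker
     * thread while the viewport keeps drawing and polls for completion. */
    typedef struct LightCacheBakeJob {
      struct LightCache *cache; /* Result, written into the .blend on save. */
      bool done;
    } LightCacheBakeJob;

    static void *light_cache_bake_thread(void *arg)
    {
      LightCacheBakeJob *job = arg;
      /* ... render all probes into job->cache here ... */
      job->done = true;
      return NULL;
    }

    static void light_cache_bake_async(LightCacheBakeJob *job)
    {
      pthread_t thread;
      pthread_create(&thread, NULL, light_cache_bake_thread, job);
      pthread_detach(thread); /* Viewport is not blocked; it polls job->done. */
    }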
The implementation is pretty straightforward.
In Cycles, sampling the shapes is currently done w.r.t. area instead of solid angle.
There is a paper on solid angle sampling for disks [1], but the described algorithm is based on
simply sampling the enclosing square and rejecting samples outside of the disk, which is not exactly
great for Cycles' RNG (we'd need to set up an LCG for the repeated sampling) and for GPU divergence.
Even worse, the algorithm is only defined for disks. For ellipses, the basic idea still works, but a
way to analytically calculate the solid angle is required. This is technically possible [2], but the
calculation is extremely complex and still requires a lookup table for the Heuman Lambda function.
Therefore, I've decided not to implement that for now; we could still look into it later on.
In Eevee, the code uses the existing ltc_evaluate_disk to implement the lighting calculations.
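For comparison, a minimal sketch of area sampling for an ellipse light
(illustrative code, not the actual Cycles kernel). The returned pdf is with
respect to area, so shading code still has to convert it to a solid angle pdf
with the usual distance^2 / cos(theta) factor:

    #include <math.h>

    /* Illustrative only: pick a point uniformly over the ellipse area by
     * sampling the unit disk and scaling it to the two half-axes. */
    static void ellipse_sample_by_area(float halfwidth, float halfheight,
                                       float rand_u, float rand_v,
                                       float *r_x, float *r_y, float *r_area_pdf)
    {
      const float pi = 3.14159265358979f;
      const float r = sqrtf(rand_u);
      const float phi = 2.0f * pi * rand_v;
      *r_x = r * cosf(phi) * halfwidth;
      *r_y = r * sinf(phi) * halfheight;
      *r_area_pdf = 1.0f / (pi * halfwidth * halfheight); /* 1 / ellipse area. */
    }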
[1]: "Solid Angle Sampling of Disk and Cylinder Lights"
[2]: "Analytical solution for the solid angle subtended at any point by an ellipse via a point source radiation vector potential"
Reviewers: sergey, brecht, fclem
Differential Revision: https://developer.blender.org/D3171
We handle doversion for the scene properties, but not for the
view layer overrides.
Overrides will be implemented in a different way via dynamic overrides.
For now this data is completely lost.
For some we may add per-object overrides, but for most we plan to keep them
strictly per-viewport settings. Display settings from the mesh still need to
be moved here; only the collection ones were done, to remove that code.
This was the other way around before. But since we can have a different size
for the cube texture now, we can compute the correct-ish texture size.
This will give us on average the same texture appearance when we add
support for real cubemap shadows.
As much as I want to give freedom to the user, 1.5 GB of VRAM for a
single shadow is a bit of a stability issue.
So we limit it to 4096 for now; we may remove this limit in the future.
This means we can now have different shadow resolutions for both.
However, each shadow type keeps the same size across all lamps because of
future "real" cube shadow map limitations and to save texture sampler slots.
That said, the cascade shadow resolution could (in the future) still be
made adjustable per sun lamp.
It's useful in some scenarios to tweak the specular intensity of a light
without modifying the diffuse contribution.
Cycles allows it via the lamp's material, which we currently do not support
in Eevee. This is a good workaround for now.
Because:
- Less redundancy.
- Better suffixes.
Also a few modifications to GPU_texture_create_* to simplify the API:
- make the format explicit to the texture creation process.
- remove the component count, as it is specified by the GPUTextureFormat.
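Purely for illustration, the kind of signature change this implies (these are
not the exact Blender prototypes):

    /* Hypothetical declarations, not the real API. */
    typedef struct GPUTexture GPUTexture;
    typedef enum GPUTextureFormat { GPU_RGBA16F, GPU_R32F } GPUTextureFormat;

    /* Before: the caller passed a component count, the format was implicit. */
    GPUTexture *GPU_texture_create_2D_old(int w, int h, int components, const float *pixels);

    /* After: the format is explicit, the component count is implied by it. */
    GPUTexture *GPU_texture_create_2D_new(int w, int h, GPUTextureFormat format, const float *pixels);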
This gets rid of the need for a geometry shader and instancing.
Both are pretty slow compared to the new method.
The only case where the old method could be better is when the scene is
filled with lots of objects and most of the objects in the shadow map appear
on every layer.
But even then, we could optimize the culling and minimize the overhead.
This refactor modernises the use of framebuffers.
It also touches a lot of files, so breaking down the changes we have:
- GPUTexture: Allow textures to be attached to more than one GPUFrameBuffer.
This allows creating and configuring more FBOs without the need to attach
and detach textures at drawing time.
- GPUFrameBuffer: The wrapper starts to mimic OpenGL a bit more closely. This
allows configuring the framebuffer inside a context other than the one
that will be rendering the framebuffer. We do the actual configuration
when binding the FBO. We also keep track of config validity and save the
drawbuffers state in the FBO. We remove the different bind/unbind
functions; these make little sense now that we have separate contexts.
- DRWFrameBuffer: We replace the DRW_framebuffer functions with GPU_framebuffer
ones to avoid another layer of abstraction. We move the DRW convenience
functions to GPUFramebuffer instead and even add new ones. The macro
GPU_framebuffer_ensure_config is pretty much all you need to create and
configure a GPUFramebuffer (a usage sketch follows this list).
- DRWTexture: Due to the removal of DRWFrameBuffer, we needed to create
functions to create textures for those framebuffers. Pool textures now
use default texture parameters for the requested texture type.
- DRWManager: Make sure no framebuffer object is bound when doing cache
filling.
- GPUViewport: Add new color_only_fb and depth_only_fb along with FB API
usage updates. This lets draw engines render to a color-only or depth-only
target without the need to attach/detach textures.
- WM_window: Assert when a framebuffer is bound when changing context.
This balances the fact that we do not track the OpenGL context inside
GPUFrameBuffer.
- Eevee, Clay, Mode engines: Update to new API. This comes with a lot of
code simplification.
This also comes with some cleanups in some engine code.
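A usage sketch of that macro, assuming the textures were created beforehand
(attachment helper names follow the pattern used in the patch but may not
match the final API exactly; the first slot is the depth attachment, the
following ones are color attachments):

    #include "GPU_framebuffer.h"

    static void my_engine_framebuffer_ensure(struct GPUFrameBuffer **main_fb,
                                             struct GPUTexture *depth_tx,
                                             struct GPUTexture *color_tx)
    {
      /* Creates the FBO on first use and (re)validates its attachments;
       * the actual GL configuration only happens when the FBO gets bound. */
      GPU_framebuffer_ensure_config(main_fb, {
          GPU_ATTACHMENT_TEXTURE(depth_tx),
          GPU_ATTACHMENT_TEXTURE(color_tx),
      });
    }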
Instead of creating a new instancing shading group without attribs, we now have instancing calls. The benefit is that they can be culled.
They can be used in conjunction with the standard and generate calls, but the shader must support it (which is generally not the case).
We store a pointer to the actual count so that the number can be tweaked between redraws.
This will make multi-layer rendering more efficient.
This removes the need for custom attribs for instancing.
Instancing works fully with dynamic batches & Gwn_VertFormat now.
This is in preparation for the VAO manager patch.
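A hypothetical sketch of the data stored for such an instancing call (not the
actual DRW structs), showing the pointer-to-count idea:

    /* Illustrative only: the call keeps a pointer to the instance count, so
     * the caller can change the number of instances between redraws without
     * rebuilding the call. */
    typedef struct DRWCallInstancesExample {
      struct Gwn_Batch *geom;    /* Geometry to draw instanced. */
      const unsigned int *count; /* Owned by the caller, read at draw time. */
    } DRWCallInstancesExample;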