This refactor modernises the use of framebuffers.
It also touches a lot of files, so here is a breakdown of the changes:
- GPUTexture: Allow textures to be attached to more than one GPUFrameBuffer.
This makes it possible to create and configure more FBOs without the need
to attach and detach textures at drawing time.
- GPUFrameBuffer: The wrapper now mimics OpenGL a bit more closely. This
allows configuring the framebuffer inside a context other than the one
that will render it. We do the actual configuration when binding the FBO.
We also keep track of config validity and save the drawbuffers state in
the FBO. We remove the different bind/unbind functions, which make little
sense now that we have separate contexts.
- DRWFrameBuffer: We replace the DRW_framebuffer functions with
GPU_framebuffer ones to avoid another layer of abstraction. We move the
DRW convenience functions to GPUFrameBuffer instead and even add new ones.
The macro GPU_framebuffer_ensure_config is pretty much all you need to
create and configure a GPUFrameBuffer (see the usage sketch below).
- DRWTexture: Due to the removal of DRWFrameBuffer, we needed to create
functions to create textures for those framebuffers. Pool textures now
use the default texture parameters for the requested texture type.
- DRWManager: Make sure no framebuffer object is bound when doing cache
filling.
- GPUViewport: Add new color_only_fb and depth_only_fb along with FB API
usage updates. This lets draw engines render to color-only or depth-only
targets without the need to attach/detach textures.
- WM_window: Assert when a framebuffer is bound while changing context.
This balances the fact that we do not track the OpenGL context inside
GPUFrameBuffer.
- Eevee, Clay, mode engines: Update to the new API. This comes with a lot
of code simplification.
This also comes with some cleanups in some of the engine code.
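For illustration, a minimal usage sketch of the macro (the texture names
are hypothetical; the first slot is the depth attachment, the following
ones are color attachments):

  GPU_framebuffer_ensure_config(&fbl->my_fb, {
    GPU_ATTACHMENT_TEXTURE(txl->my_depth_tx),
    GPU_ATTACHMENT_TEXTURE(txl->my_color_tx),
  });

The FBO is created on first use and revalidated on later calls, which is
what makes the attach/detach dance at drawing time unnecessary.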
Calling the rendering operator seems to kill any other running WM_job,
leaving uncompiled materials in a GPU_MAT_QUEUED state. This then made the
probe update loop indefinitely (all_materials_updated remaining false).
To fix this, we resume compilation for materials that are in this state.
Cancelling a render before all materials have compiled could leave certain
materials uncompiled; fortunately, this is not allowed as of now.
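A minimal sketch of the resume logic, assuming the GPU_material_status()
query; the re-queue helper here is a hypothetical stand-in for the
engine's actual deferred compilation entry point:

  /* A killed WM_job leaves the material stuck in GPU_MAT_QUEUED,
   * so all_materials_updated can never become true. Re-queue it. */
  if (GPU_material_status(gpumat) == GPU_MAT_QUEUED) {
    material_queue_compilation(gpumat); /* hypothetical helper */
  }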
Instead of creating a new instancing shading group without attribs, we now have instancing calls. The benefit is that they can be culled.
They can be used in conjunction with the standard and generate calls, but the shader must support it (which is generally not the case).
We store a pointer to the actual count so that the number can be tweaked between redraws.
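As a sketch (API names as I recall them from the draw manager of that
time; the exact signature may differ), the shading group stores the
pointer rather than a copy:

  static uint instance_len = 0;
  DRW_shgroup_call_instances_add(shgrp, geom, ob->obmat, &instance_len);
  /* ... later, between redraws: */
  instance_len = new_count; /* No shading group rebuild needed. */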
This will make multi-layer rendering more efficient.
This is an optimization/cleanup commit.
The use of a global UBO removes lots of uniform lookups and only transfers data when needed.
Lots of renaming for a more consistent code style.
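In raw GL terms, the pattern looks like the sketch below (the real code
goes through the GPU module's UBO wrapper; CommonBlock and COMMON_BINDING
are illustrative):

  GLuint ubo;
  glGenBuffers(1, &ubo);
  glBindBuffer(GL_UNIFORM_BUFFER, ubo);
  glBufferData(GL_UNIFORM_BUFFER, sizeof(CommonBlock), &common_data, GL_DYNAMIC_DRAW);
  /* Bind once to a fixed binding point shared by all shaders. */
  glBindBufferBase(GL_UNIFORM_BUFFER, COMMON_BINDING, ubo);
  /* Upload only when a value actually changed, instead of setting
   * many individual uniforms on every shader. */
  glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(CommonBlock), &common_data);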
This is an improvement on the old spinning quad method, which gave artifacts when the reflection ray was nearly aligned with the sphere center.
This might be a bit heavier, but it's worth it.
This leads to a ~3ms improvement in CPU time during drawing.
This prevents the rendering from being stalled while waiting for the texture data to be transferred.
Hashed alpha materials are now stable when moving the camera or when not using TAA.
They also converge to a noise-free image when using TAA. No more numerical imprecision.
There can still be situations with multiple overlapping transparent surfaces that lead to residual noise.
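A minimal C sketch of the idea (the real version runs in the fragment
shader, and the hash function here is an arbitrary stand-in):

  #include <math.h>
  #include <stdbool.h>

  /* Cheap deterministic hash in [0, 1). */
  static float hash3(float x, float y, float z)
  {
    float h = sinf(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return h - floorf(h);
  }

  /* Hashing object-space coordinates keeps the noise glued to the
   * surface (stable under camera motion); adding a per-TAA-sample
   * offset makes the accumulation converge to the exact coverage. */
  static bool hashed_alpha_discard(float alpha, const float co_os[3], float taa_offset)
  {
    float threshold = hash3(co_os[0], co_os[1], co_os[2]) + taa_offset;
    threshold -= floorf(threshold); /* Wrap back into [0, 1). */
    return alpha < threshold;
  }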
Using a GL_RG16I texture for the hit coordinates tremendously increases the precision of the hit.
The sign of the integer is used to store 2 flags (has_hit and is_planar).
We do not store the depth; it is retrieved from the depth buffer instead (at the cost of +8 bits/px of bandwidth).
The PDF is stored in another GL_R16F texture.
We remove the raycount for simplicity and to reduce compilation time (less branching in the refraction shader).
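To illustrate the packing (a sketch, not the actual shader code): the
hit pixel coordinates fit in the 15 magnitude bits of each GL_RG16I
component, leaving each sign bit free for one flag:

  /* Bias coordinates by 1 so that 0 keeps an unambiguous sign. */
  static void encode_hit(short out[2], int px_x, int px_y,
                         bool has_hit, bool is_planar)
  {
    out[0] = (short)(has_hit ? (px_x + 1) : -(px_x + 1));
    out[1] = (short)(is_planar ? -(px_y + 1) : (px_y + 1));
  }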
This augments the existing irradiance grids with a new visibility precomputation.
We store a small shadowmap for each grid sample so that light does not leak through walls and such.
The visibility parameters are similar to the ones used by Variance Shadow Maps for point lights.
Technical details:
We store the visibility in the same texture (array) as the irradiance itself (in order to reduce the number of samplers).
But the irradiance and the visibility are not the same data, so we must encode them in order to use the same texture format.
We use an RGBA8 normalized texture and encode irradiance as RGBE (shared exponent).
Using RGBE encoding instead of R11_G11_B10 may lead to some lighting changes, but quality seems to be nearly the same in my test cases.
Using full RGBA16/32F may be a future option, but it would require much more memory and reduce performance significantly.
Visibility moments (VSM) are encoded as 16-bit fixed-point values using a special range. This seems to retain enough precision for our needs.
Also, interpolation does not seem to be a big problem (even though it's incorrect).
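For reference, a sketch of the classic RGBE encoding into the four bytes
of an RGBA8 texel (the shader version may differ in details):

  #include <math.h>

  static void rgb_to_rgbe(unsigned char out[4], float r, float g, float b)
  {
    float maxc = fmaxf(r, fmaxf(g, b));
    if (maxc < 1e-32f) {
      out[0] = out[1] = out[2] = out[3] = 0;
      return;
    }
    int exponent;
    /* frexpf gives maxc = m * 2^exponent with m in [0.5, 1). */
    float scale = frexpf(maxc, &exponent) * 256.0f / maxc;
    out[0] = (unsigned char)(r * scale);
    out[1] = (unsigned char)(g * scale);
    out[2] = (unsigned char)(b * scale);
    out[3] = (unsigned char)(exponent + 128);
  }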
Previously, the lighting of SSS materials was not present in reflection probes or irradiance grids.
This does not compute the SSS correctly, but at least it outputs the corresponding irradiance power to the correct output.
This option prevents the albedo color applied to the SSS from being automatically blurred.
While this is great for preserving details, it can bleed more light onto nearby objects, since the blurring will be done on pure "white" irradiance.
This issue is to be tackled in a separate commit.
This cleanup removes the need for gigantic code duplication for each closure.
This also brings some performance improvement, since it removes some branches and duplicated loops.
It also fixes some mismatches (between Cycles and Eevee) with the principled shader.
The RenderResult struct still has a listbase of RenderLayer, but that's ok
since this is strictly for rendering.
* Subversion bump (to 2.80.2)
* DNA low level doversion (renames) - only for .blend created since 2.80 started
Note: We can't use DNA_struct_elem_find or get the file version in init_structDNA,
so we are manually iterating over the array of SDNA elements instead (see the
sketch below).
Note 2: This doversion change with renames can be reverted in a few months. But
so far it's required for 2.8 files created between October 2016 and now.
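A sketch of that manual pass (assuming SDNA's flat name table; the member
names are placeholders):

  /* In init_structDNA() we cannot query the file version yet, so walk
   * the raw name table and patch old names in place. */
  for (int nr = 0; nr < sdna->nr_names; nr++) {
    if (STREQ(sdna->names[nr], "old_member_name")) {
      sdna->names[nr] = "new_member_name";
    }
  }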
Reviewers: campbellbarton, sergey
Differential Revision: https://developer.blender.org/D2927
This adds the possibility to simulate things like red ears with strong backlighting, or materials with high scattering distances.
To enable it you need to turn on the "Subsurface Translucency" option in the "Options" tab of the Material panel (and of course have "regular" SSS enabled in both the render settings and the material options).
Since the effect adds overhead, I prefer to make it optional. But this is open to discussion.
Be aware that the effect only works for direct lights that have shadowmaps (so no indirect/world lighting), and it is affected by the "softness" of the shadowmap and its resolution.
Technical notes:
This is inspired by http://www.iryoku.com/translucency/ but goes a bit beyond that.
We do not use a sum of gaussians applied with regard to the object thickness; instead we precompute a 1D kernel texture.
This texture stores the light transmitted to a point at the back of an infinite slab of material of varying thickness.
We make the assumption that the slab is perpendicular to the light so that no fresnel or diffusion term is taken into account.
The light is considered constant.
If the setup is similar to the one assumed during profile baking, the realtime render matches the Cycles reference.
Due to these assumptions the computed transmitted light is in most cases too bright for curvy objects.
Finally we jitter the shadow map sample per pixel so we can simulate dispersion inside the medium.
The radius of the dispersion is in world space and is derived from the "soft" shadowmap parameter.
The idea for this comes from this presentation: http://www.iryoku.com/stare-into-the-future (slide 164).
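As a sketch of the precomputation: for a gaussian profile with variance v,
integrating the profile over an infinite slab of thickness d has the closed
form exp(-d^2 / (2 v)), so each texel of the kernel is cheap to evaluate
(names and ranges here are illustrative):

  #include <math.h>

  /* Texel i maps to a slab thickness in [0, max_thickness]. */
  static void bake_translucency_kernel(float *kernel, int size,
                                       float max_thickness, float variance)
  {
    for (int i = 0; i < size; i++) {
      float d = max_thickness * (float)i / (float)(size - 1);
      kernel[i] = expf(-(d * d) / (2.0f * variance));
    }
  }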
Samples: pretty self-explanatory.
Jitter Threshold: Reduce cache misses and improve performance (greatly) by lowering this value. This setting lets the user decide how many samples should be jittered (rotated) to reduce banding artifacts.
How to use:
- Enable subsurface scattering in the render options.
- Add Subsurface BSDF to your shader.
- Check "Screen Space Subsurface Scattering" in the material panel options.
This initial implementation has a few limitations:
- Only supports gaussian SSS.
- Does not support the principled shader.
- The radius parameters are baked down to a number of samples and then put into a UBO. This means the radius input socket cannot be used. You need to tweak the default vector directly.
- The "texture blur" is considered as always set to 1.
This also:
- makes sure to only compile the shaders needed by the active effects.
- does the same for the shading groups.
- disables TAA if motion blur is active (to avoid infinite refresh).
This is quite basic, as it only supports bounding boxes.
But the material can refine the volume shape in any way the user likes.
To overcome this limitation, a voxelization should be done on the mesh (generating an SDF maybe?) and tested against every volumetric cell.
The system now uses several 3D textures in order to decouple every step of the volumetric rendering.
See https://www.ea.com/frostbite/news/physically-based-unified-volumetric-rendering-in-frostbite for more details.
On the technical side, instead of using a compute shader to populate the 3D textures, we use layered rendering with a geometry shader to render one fullscreen triangle per 3D texture slice.
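In raw GL terms, the setup looks like this sketch (in the real code it
goes through the framebuffer wrapper):

  /* Attaching the 3D texture without selecting a layer makes the
   * attachment layered; the geometry shader picks the slice by
   * writing gl_Layer. */
  glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, volume_tex_3d, 0);
  /* One instance per slice: the geometry shader copies gl_InstanceID
   * to gl_Layer and emits one fullscreen triangle. */
  glDrawArraysInstanced(GL_TRIANGLES, 0, 3, volume_tex_depth);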