This revisits the render pipeline to support time slicing for better motion
blur.
We support accumulation with or without the Post-process motion blur.
If the post-process is used, we reuse the next-motion data from the last step
to avoid another scene re-evaluation.
This also adds support for hair motion blur, which is handled in a similar
way to mesh motion blur.
The total number of samples is distributed evenly across all time steps to
avoid sample weighting issues. For this reason, the sample count is
(internally) rounded up to the next multiple of the step count.
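For illustration, a minimal sketch of that rounding (hypothetical helper, not
the actual Blender function):

  /* Round the requested sample count up to the next multiple of the
   * time step count so every step gets the same number of samples. */
  static int sample_count_round_up(int samples, int steps)
  {
    return ((samples + steps - 1) / steps) * steps;
  }

  /* e.g. 100 samples with 8 time steps becomes 104, i.e. 13 per step. */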
Only FX Motion Blur: {F8632258}
FX Motion Blur + 4 time steps: {F8632260}
FX Motion Blur + 32 time steps: {F8632261}
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D8079
This adds object motion blur vectors for EEVEE as well as better noise
reduction for it.
For TAA reprojection we compute the motion vector on the fly from the camera
motion and the depth buffer. This makes it possible to store another motion
vector used only for blurring, which is not useful for TAA history fetching.
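A rough sketch of that reprojection (hypothetical helpers and matrix
conventions, not the actual EEVEE GLSL): rebuild the NDC position of a pixel
from the depth buffer, reproject it with the previous frame's matrices, and
take the screen-space difference.

  /* Assumes row-major 4x4 matrices and column vectors v[4]. */
  static void mul_m4_v4(const float m[4][4], const float v[4], float r[4])
  {
    for (int i = 0; i < 4; i++) {
      r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2] + m[i][3] * v[3];
    }
  }

  /* prev_persmat: previous frame world -> NDC.
   * curr_persinv: current frame NDC -> world. */
  static void camera_motion_vector(const float prev_persmat[4][4],
                                   const float curr_persinv[4][4],
                                   const float uv[2], float depth,
                                   float r_motion[2])
  {
    /* Current pixel in normalized device coordinates. */
    float ndc[4] = {uv[0] * 2.0f - 1.0f, uv[1] * 2.0f - 1.0f,
                    depth * 2.0f - 1.0f, 1.0f};
    float world[4], prev_ndc[4];

    /* Back to (homogeneous) world space with the current camera, then into
     * the previous camera's clip space and perspective divide. */
    mul_m4_v4(curr_persinv, ndc, world);
    mul_m4_v4(prev_persmat, world, prev_ndc);

    float prev_u = (prev_ndc[0] / prev_ndc[3]) * 0.5f + 0.5f;
    float prev_v = (prev_ndc[1] / prev_ndc[3]) * 0.5f + 0.5f;

    r_motion[0] = uv[0] - prev_u;
    r_motion[1] = uv[1] - prev_v;
  }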
Motion Data is saved per object & per geometry if using deformation blur.
We support deformation motion blur by saving the previous VBOs and modifying
the actual GPUBatch for the geometry to include these VBOs.
We store previous and next frame motion in the same motion vector buffer
(RG for previous, BA for next). This makes non-linear motion blur (like
rotating objects) less prone to outward/inward blur.
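A minimal sketch of that packing (illustrative only, not the actual shader
output):

  /* Previous-frame motion goes into RG, next-frame motion into BA, so the
   * post process can blur along both halves of a curved motion path. */
  static void pack_motion_vectors(const float motion_prev[2],
                                  const float motion_next[2],
                                  float r_rgba[4])
  {
    r_rgba[0] = motion_prev[0]; /* R */
    r_rgba[1] = motion_prev[1]; /* G */
    r_rgba[2] = motion_next[0]; /* B */
    r_rgba[3] = motion_next[1]; /* A */
  }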
We also improve the motion blur post process to expand outside the objects'
borders. We use a tile-based approach, and the maximum blur size is set via
a new render setting.
We use a background reconstruction method that needs another setting
(Background Separation).
Sampling is done with a fixed 8 dithered samples per direction. The final
render samples will clear up the noise like other stochastic effects.
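As an illustration of that sampling scheme (hypothetical helper, only
computing the sample positions):

  /* Eight sample positions along one blur direction, offset by a per-pixel
   * dither value in [0, 1) so the pattern differs between neighboring
   * pixels and averages out over the final render samples. */
  static void blur_sample_uvs(const float uv[2], const float dir[2],
                              float dither, float r_uvs[8][2])
  {
    for (int i = 0; i < 8; i++) {
      float t = ((float)i + dither) / 8.0f;
      r_uvs[i][0] = uv[0] + dir[0] * t;
      r_uvs[i][1] = uv[1] + dir[1] * t;
    }
  }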
One caveat is that hair particles are not yet supported. Support will
come in another patch.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D7297
These are the modifications:
- With the DRW modifications we reduce the number of passes we need to populate.
- Rename passes for consistent naming.
- Reduce complexity in code compilation.
- Clean up how renderpass accumulation passes are set up, using pass instances.
- Make sculpt mode compatible with shadows.
- Make hair passes compatible with SSS.
- Error shader and lookdev materials now use standalone materials.
- Support default shaders (world and material) using a default nodetree internally.
- Change BLEND_CLIP to be emulated by the GPU nodetree, resulting in fewer shader variations.
- Use BLI_memblock for cache memory allocation.
- Renderpasses are handled by switching a UBO ref bind.
One major hack in this patch is the use of modified pointers as ghash keys.
This relies on the assumption that the keys will never overlap, because the
number of options per key will never be bigger than the size of the pointed-to
struct.
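A simplified sketch of that trick (stand-in types and names, not the actual
code):

  #include <assert.h>
  #include <stddef.h>

  /* The ghash key is the struct pointer plus a small option index. Two
   * distinct structs are at least sizeof(struct) bytes apart, so the
   * derived keys cannot collide as long as the option count stays below
   * that size. */
  static void *options_key(void *struct_ptr, size_t struct_size, int option)
  {
    assert((size_t)option < struct_size);
    return (char *)struct_ptr + option;
  }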
The use of a single nodetree to support the default materials is also a bit hacky
since it won't support concurrent usage of this nodetree.
(see EEVEE_shader_default_surface_nodetree)
Another change is that objects with shader errors now appear solid magenta instead
of shaded magenta. This is only for code-reuse purposes but could be changed
if really needed.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D7642
The problem is that the RenderEngines will change the RenderData cfra when
rendering (when time remapping is used -- at least workbench/eevee/
gpencil do a combination of BKE_scene_frame_get() plus
RE_GetCameraWindow() which alters the RenderData cfra).
Later on in the pipeline, the Compositor will use this RenderData cfra
to determine the output file name for the FileOutput node. (In contrast
to this, the 'regular' Output will use the Scene's RenderData -- not the
Render's -- cfra [which hasn't been altered])
It is not entirely clear why RE_GetCameraWindow was setting the cfra on
the Render, but it appears to be legacy OGL rendering related and is not
needed anymore.
Removing this will keep the cfra as needed for the Compositor FileOutput
node.
Only the volume drawing part is really finished and exposed to the user. Hair
plugs into the existing hair rendering code and is fairly straightforward. The
pointcloud drawing is a hack using overlays rather than Eevee and workbench.
The most tricky part for volume rendering is the case where each volume grid
has a different transform, which requires an additional matrix in the shader
and non-trivial logic in Eevee volume drawing. In the common case where all the
transforms match, we don't use the additional per-grid matrix in the shader.
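A sketch of that decision (hypothetical helper): the per-grid matrix is only
needed when at least one grid transform differs from the others.

  #include <math.h>
  #include <stdbool.h>

  static bool grids_share_transform(const float (*grid_mats)[4][4], int count)
  {
    for (int i = 1; i < count; i++) {
      for (int r = 0; r < 4; r++) {
        for (int c = 0; c < 4; c++) {
          if (fabsf(grid_mats[i][r][c] - grid_mats[0][r][c]) > 1e-6f) {
            return false; /* Need the per-grid matrix in the shader. */
          }
        }
      }
    }
    return true; /* Common case: a single object matrix is enough. */
  }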
Ref T73201, T68981
Differential Revision: https://developer.blender.org/D6955
This patch adds new render passes to EEVEE. These passes include:
* Emission
* Diffuse Light
* Diffuse Color
* Glossy Light
* Glossy Color
* Environment
* Volume Scattering
* Volume Transmission
* Bloom
* Shadow
With these passes it will be possible to use EEVEE effectively for
compositing. During development we kept a close eye on how to get results
similar to Cycles' render passes; there are some differences that are related
to how EEVEE works. For EEVEE we combined the passes into `Diffuse` and
`Specular`. There are no transmittance or SSS passes anymore.
Cycles will be changed accordingly.
Cycles' volume transmittance is added to multiple surface color passes. For
EEVEE we left the volume transmittance as a separate pass.
Known Limitations
* All materials that use alpha blending will not be rendered in the render
passes. Other transparency modes are supported.
* More GPU memory is required to store the render passes. When rendering
an HD image with all render passes enabled, at most an extra 570 MB of GPU
memory is required.
Implementation Details
An overview of the render passes has been described in
https://wiki.blender.org/wiki/Source/Render/EEVEE/RenderPasses
Future Developments
* In this implementation the materials are re-rendered for the Diffuse/Glossy
and Emission passes. We could use multi-target rendering to improve the
render speed.
* Other passes can be added later.
* Don't render material-based passes when only AO or Shadow is requested.
* Add more passes to the system. These could include Cryptomatte, AOVs, Vector,
ObjectID, MaterialID, UV.
Reviewed By: Clément Foucault
Differential Revision: https://developer.blender.org/D6331
EEVEE soft shadows were not rendered correctly during viewport
rendering. The reason for this is that during viewport rendering the
shadow buffers were only updated once, not once per sample. This resulted
in all samples calculating the same shadow.
This fix moves the call to `EEVEE_shadows_update` from cache finished to
draw scene. This needs to happen before `EEVEE_lightprobes_refresh`.
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D6538
This patch allows the user to select the EEVEE renderpass to be
shown in the viewport; by default the combined pass is shown.
Limitations:
* Viewport rendering stores the result in a `RenderResult`. RenderResult
is not aware of the type of data it holds. In many places where RenderResult
is used it is assumed that it stores a combined pass and the display+view
transform are applied.
I will propose to fix this in a future patch. But that is still being
designed and discussed.
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D6319
Most of the renderpasses in EEVEE used post-processing on the CPU. For
final image rendering this is sufficient, but when we want to display
the data to the user we don't want to transfer it to the CPU for
post-processing only to upload it back to the GPU to display the result.
This patch moves the renderpass post-processing to a GLSL shader.
This is the first step before enabling the renderpasses in the viewport.
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D6206
When rendering the subsurface scattering lighting render layer with a high
sample count, render artifacts can appear. This patch removes these
render artifacts by using a more precise texture format when the sample count
is larger than 128. As with the new EEVEE shadows, it is more common
to use a higher number of samples.
The reason it was visible in the subsurface scattering is that every
sample can change the color, and accumulating these differing values loses
precision as the number of samples grows.
The subsurface color render layer also has this issue, but it is not noticeable as
its colors tend to be close to each other, so most of the time they only shift
within the available precision and hold up better.
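A toy demonstration of the precision loss (not Blender code; the low precision
buffer is emulated by rounding the accumulator to a 10-bit mantissa, similar
to a 16-bit float render target):

  #include <math.h>
  #include <stdio.h>

  static float round_to_bits(float v, int bits)
  {
    int e;
    float m = frexpf(v, &e); /* v = m * 2^e with 0.5 <= |m| < 1 */
    float s = ldexpf(1.0f, bits);
    return ldexpf(roundf(m * s) / s, e);
  }

  int main(void)
  {
    const int samples = 4096;
    float accum_low = 0.0f;
    double accum_ref = 0.0;
    for (int i = 0; i < samples; i++) {
      /* Once the accumulator is large, small contributions round away. */
      accum_low = round_to_bits(accum_low + 0.25f, 10);
      accum_ref += 0.25;
    }
    printf("low precision avg: %f, reference avg: %f\n",
           (double)accum_low / samples, accum_ref / samples);
    return 0;
  }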
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D6245
Alpha blended transparency now uses dual source blending, making it
fully compatible with Cycles' Transparent BSDF.
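A sketch of the blend setup (assumes an OpenGL 3.3+ context and a fragment
shader with two color outputs; not the actual EEVEE code):

  /* The second fragment output holds the per-channel transmittance, so the
   * blend becomes: dst = src_color + dst * src_transmittance, which is what
   * a Transparent BSDF needs. */
  #include <GL/gl.h> /* or the GL loader used by the application */

  void blend_setup_transparent(void)
  {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_SRC1_COLOR);
  }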
The Multiply and Additive blend modes can be achieved using nodes and are
going to be removed.
We check if the previous iteration (sample) was using a valid double buffer.
If it wasn't, we request another iteration.
This fixes the issue for the viewport, viewport render and image render.
Related to T65761 Eevee render inconsistency between 3D View, Viewport render, and F12 Render
This is to simplify the usage of volumetrics.
Now it automatically detects if there is any volumetric material in the
view and allocates the needed buffer if so.
This is to fix the slowdown issue experienced on windows when rendering
from command line.
Fix T59649 Eevee in command-line batch mode is slow with particles/duplis
This was due to the environment not being rendered with alpha blending, so its
color was still written and contributed to the final render color. Now
we multiply by the background alpha so that any background pixel intensity
is removed.
This made the (incorrect) final premult unnecessary.
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading
especially when splitting files.
It's more useful to git-blame to find out who has developed the code.
See P901 for script to perform these edits.
Object visibility is now handled by the depsgraph iterator, but this API
was incomplete as it made no distinction for visibility of the object itself,
particles and generated instances.
The depsgraph iterator API now includes information about which part of the
object is visible, and this is used by Cycles to replace the old custom logic.
Cycles and EEVEE visibility should now be consistent, which unfortunately does
mean some subtle compatibility breakage for both.
Fixes T58956, T58202, T59284.
Differential Revision: https://developer.blender.org/D4109
This makes it possible to tweak indirect lighting in the shader.
Only a subset of the outputs is supported, and the ray depth does not have
exactly the same meaning:
Is Camera : Supported.
Is Shadow : Supported.
Is Diffuse : Supported.
Is Glossy : Supported.
Is Singular : Not supported. Same as Is Glossy.
Is Reflection : Not supported. Same as Is Glossy.
Is Transmission : Not supported. Same as Is Glossy.
Ray Length : Not supported. Defaults to 1.0.
Ray Depth : Indicates the current bounce when baking the light cache.
Diffuse Depth : Same as Ray Depth but only when baking diffuse light.
Glossy Depth : Same as Ray Depth but only when baking specular light.
Transparent Depth : Not supported. Defaults to 0.
Transmission Depth : Not supported. Same as Glossy Depth.
Caveat: Is Glossy does not work with Screen Space Reflections but does work
with reflection planes (whether used with SSR or not).
We have to render the world twice for that to work.
This option makes the internal render size larger than the output size in
order to minimize screen-space effects disappearing at the render edges.
The overscan size added around the render is the maximum dimension
multiplied by the overscan percentage.
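For clarity, that size computation as a hypothetical helper:

  /* Overscan size (in pixels) added around the render, as described above. */
  static int overscan_pixels(int width, int height, float overscan_percent)
  {
    int max_dim = (width > height) ? width : height;
    return (int)((float)max_dim * overscan_percent / 100.0f);
  }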
This new option is located in the shadows options in the render settings.
This approach is simple: it just randomizes the shadow map position (not
the lamp itself) and lets the temporal supersampling average all the
shadowing. The downside is that it needs quite a large number of samples
to give smooth results, and individual sample positions can remain
visible.
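A rough sketch of the idea (hypothetical helper, not the actual sampling
code): offset the shadow map origin by a random point within the lamp's
radius before each sample.

  #include <stdlib.h>

  static void jitter_shadow_origin(const float lamp_co[3], float radius,
                                   float r_origin[3])
  {
    /* Uniform random point inside a unit sphere by rejection sampling. */
    float p[3];
    do {
      for (int i = 0; i < 3; i++) {
        p[i] = 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
      }
    } while (p[0] * p[0] + p[1] * p[1] + p[2] * p[2] > 1.0f);

    /* The lamp itself stays put; only the shadow map position moves.
     * Temporal supersampling then averages the hard shadows into a
     * soft penumbra. */
    for (int i = 0; i < 3; i++) {
      r_origin[i] = lamp_co[i] + p[i] * radius;
    }
  }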
Enabling this option will make the viewport refresh all shadow maps every
redraw so it has a serious performance impact.
This approach is not physically based at all and will not match Cycles.
----
The sampling for point lamps (spheres) is not