Alpha blended transparency now uses dual source blending, making it
fully compatible with Cycles' Transparent BSDF.
The Multiply and Additive blend modes can be achieved with shader nodes
(see the sketch below) and are going to be removed.
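As a rough sketch of the node-based replacement (names and values here are
only illustrative, not from this commit): a Transparent BSDF tints whatever
is behind the surface by its color, which reproduces a multiply-style blend
under the Alpha Blend method; an additive look can likewise be built with an
Emission shader combined through an Add Shader.

    import bpy

    # Illustrative material: node-based "multiply" blending.
    mat = bpy.data.materials.new("MultiplyLike")
    mat.use_nodes = True
    mat.blend_method = 'BLEND'  # alpha blended transparency

    nodes = mat.node_tree.nodes
    links = mat.node_tree.links
    nodes.clear()

    output = nodes.new("ShaderNodeOutputMaterial")
    transparent = nodes.new("ShaderNodeBsdfTransparent")
    # The Transparent BSDF color acts as the multiply factor on the background.
    transparent.inputs["Color"].default_value = (0.8, 0.2, 0.2, 1.0)

    links.new(transparent.outputs["BSDF"], output.inputs["Surface"])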
We check if the previous iteration (sample) was using a valid double buffer.
If it wasn't, we request another iteration.
This fixes the issue for the viewport, viewport render and image render.
Related to T65761 Eevee render inconsistency between 3D View, Viewport render, and F12 Render
This simplifies the usage of volumetrics.
EEVEE now automatically detects whether there is any volumetric material in
the view and allocates the needed buffers only in that case.
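For reference, a small bpy sketch of how this looks from the user side
(property and node names as exposed in the 2.8x Python API; assigning the
material to a visible object is left out): the quality settings stay explicit,
and it is the presence of a volume shader that makes EEVEE allocate the
volumetric buffers.

    import bpy

    scene = bpy.context.scene
    # Quality settings are still explicit; only buffer allocation is automatic.
    scene.eevee.volumetric_tile_size = '8'
    scene.eevee.volumetric_samples = 64

    # A volume material in view is what triggers the allocation.
    mat = bpy.data.materials.new("Fog")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    volume = nodes.new("ShaderNodePrincipledVolume")
    output = nodes.get("Material Output")
    mat.node_tree.links.new(volume.outputs["Volume"], output.inputs["Volume"])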
This fixes the slowdown experienced on Windows when rendering
from the command line.
Fixes T59649: Eevee in command-line batch mode is slow with particles/duplis
This was due to the environment not being rendered with alpha blending, so
its color was still written and contributed to the final render color. Now
we multiply by the background alpha so that any background pixel intensity
is removed.
For this reason, the (incorrect) final premultiply is unnecessary.
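As a plain-Python illustration of the compositing math being described (not
the actual EEVEE code; the function and values are made up): multiplying the
world contribution by the background alpha removes it entirely when that
alpha is 0, so no final premultiply pass is needed.

    # Illustrative only: composite a surface onto the world background.
    def composite_pixel(surface_rgb, surface_alpha, world_rgb, world_alpha):
        # The world only shows through where the surface is not opaque,
        # and is scaled by the background alpha.
        return tuple(s + w * world_alpha * (1.0 - surface_alpha)
                     for s, w in zip(surface_rgb, world_rgb))

    # Opaque background keeps the environment behind transparent pixels.
    print(composite_pixel((0.2, 0.2, 0.2), 0.5, (0.8, 0.8, 0.8), 1.0))
    # Transparent background (alpha 0) removes any background intensity.
    print(composite_pixel((0.2, 0.2, 0.2), 0.5, (0.8, 0.8, 0.8), 0.0))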
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non-license comments as separate comment blocks.
No need for separator text.
- Contributors
This information is often invalid, outdated or misleading,
especially when files are split.
It is more useful to use git blame to find out who developed the code.
See P901 for the script used to perform these edits.
Object visibility is now handled by the depsgraph iterator, but this API
was incomplete, as it made no distinction between the visibility of the object
itself, its particles and its generated instances.
The depsgraph iterator API now includes information about which part of the
object is visible, and this is used by Cycles to replace the old custom logic.
Cycles and EEVEE visibility should now be consistent, which unfortunately
means some subtle compatibility breakage for both.
Fixes T58956, T58202, T59284.
Differential Revision: https://developer.blender.org/D4109
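The same distinction is visible from the evaluated depsgraph in Python; a
minimal sketch (attribute names as exposed in the 2.80 Python API, output is
only for illustration):

    import bpy

    depsgraph = bpy.context.evaluated_depsgraph_get()

    # Each entry reports which part of the object is visible: the object
    # itself, its particles, or a generated instance.
    for instance in depsgraph.object_instances:
        obj = instance.object
        if instance.is_instance:
            print(obj.name, "instanced by", instance.parent.name)
        elif instance.show_self:
            print(obj.name, "object geometry is visible")
        if instance.show_particles:
            print(obj.name, "particles are visible")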
This makes it possible to tweak indirect lighting in the shader.
Only a subset of the outputs is supported, and the ray depth does not have
exactly the same meaning:
- Is Camera: Supported.
- Is Shadow: Supported.
- Is Diffuse: Supported.
- Is Glossy: Supported.
- Is Singular: Not supported. Same as Is Glossy.
- Is Reflection: Not supported. Same as Is Glossy.
- Is Transmission: Not supported. Same as Is Glossy.
- Ray Length: Not supported. Defaults to 1.0.
- Ray Depth: Indicates the current bounce when baking the light cache.
- Diffuse Depth: Same as Ray Depth, but only when baking diffuse light.
- Glossy Depth: Same as Ray Depth, but only when baking specular light.
- Transparent Depth: Not supported. Defaults to 0.
- Transmission Depth: Not supported. Same as Glossy Depth.
Caveat: Is Glossy does not work with Screen Space Reflections, but it does
work with reflection planes (whether they are used with SSR or not).
We have to render the world twice for that to work.
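A minimal bpy sketch using the supported subset (material and node setup are
only illustrative): hide an emissive surface from camera rays while keeping
its contribution to the baked indirect lighting.

    import bpy

    mat = bpy.data.materials.new("IndirectOnlyEmitter")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    light_path = nodes.new("ShaderNodeLightPath")
    emission = nodes.new("ShaderNodeEmission")
    transparent = nodes.new("ShaderNodeBsdfTransparent")
    mix = nodes.new("ShaderNodeMixShader")
    output = nodes.get("Material Output")

    emission.inputs["Strength"].default_value = 10.0

    # "Is Camera Ray" is supported: camera rays see a transparent surface,
    # every other ray (e.g. light cache baking) sees the emission.
    links.new(light_path.outputs["Is Camera Ray"], mix.inputs["Fac"])
    links.new(emission.outputs["Emission"], mix.inputs[1])
    links.new(transparent.outputs["BSDF"], mix.inputs[2])
    links.new(mix.outputs["Shader"], output.inputs["Surface"])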
This option makes the internal render size larger than the output size in
order to minimize screen space effects disappearing at the render edges.
The overscan size added around the render is the maximum dimension
multiplied by the overscan percentage.
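As exposed in the Python API (assuming the scene.eevee.use_overscan and
overscan_size properties), a quick sketch of the setting and the resulting
extra border:

    import bpy

    scene = bpy.context.scene
    render = scene.render

    # The internal render is enlarged, then cropped back to the output size.
    scene.eevee.use_overscan = True
    scene.eevee.overscan_size = 10.0  # percent

    extra = max(render.resolution_x, render.resolution_y) * scene.eevee.overscan_size / 100.0
    print("overscan added around the render:", extra, "px")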
This new option is located in the shadow options in the render settings.
This approach is simple: it just randomizes the shadow map position (not
the lamp itself) and lets the temporal supersampling average all the
shadowing. The downside is that it needs quite a large number of samples
to give smooth results, and individual sample positions can remain visible.
Enabling this option will make the viewport refresh all shadow maps on every
redraw, so it has a serious performance impact.
This approach is not physically based at all and will not match Cycles.
----
The sampling for point lamps (spheres) is not
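For reference, the option as exposed in the Python API (assuming
scene.eevee.use_soft_shadows); raising the sample counts helps hide the
individual shadow map positions mentioned above:

    import bpy

    scene = bpy.context.scene

    # Randomized shadow map soft shadows; needs many samples to converge.
    scene.eevee.use_soft_shadows = True
    scene.eevee.taa_samples = 64          # viewport samples
    scene.eevee.taa_render_samples = 128  # final render samples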
Encountered on Nvidia + Linux: it seems that doing everything all at once
can make the driver give up on the whole command list and return nothing as
the output of the render.
This separates probe rendering from viewport rendering, making it possible to
run the baking in another thread (non-blocking and faster).
The baked lighting is saved in the blend file. Nothing needs to be
recomputed on load.
There are a few missing bits / bugs:
- The cache cannot be saved to disk as a separate file; it is saved in the DNA
for now, making files larger and memory usage higher.
- Auto-update of only cubemaps also updates the grids (bug).
- Probes cannot be updated individually (considered as dynamic).
- Light Cache cannot be (re)generated during render.
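The baking entry points as exposed in the Python API (a sketch; the cache info
string is printed only for illustration). Since the cache is stored in the
.blend, saving the file after baking keeps the result for the next session:

    import bpy

    scene = bpy.context.scene

    # Bake irradiance grids and reflection cubemaps into the light cache.
    bpy.ops.scene.light_cache_bake()

    # The result is written into the .blend on save, so nothing needs to be
    # recomputed on load.
    print("light cache:", scene.eevee.gi_cache_info)

    # Free the cache if it needs to be regenerated from scratch.
    # bpy.ops.scene.light_cache_free()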
Note: Metaballs only support the first material slot. Splitting them per
material would create empty batches. In order to overcome this we set
the batch for the other materials to NULL. We added extra checks in EEVEE
and Workbench to not draw when the geometry is NULL.
We handle doversion for the scene properties, but not for the
view layer overrides.
Overrides will be implemented in a different way, via dynamic overrides.
For now this data is completely lost.
For some settings we may add per-object overrides, but for most we plan to
keep them strictly per-viewport settings. Display settings from the mesh still
need to be moved here; only collections were handled, in order to remove that
code.
Because:
- Less redundancy.
- Better suffixes.
Also a few modifications to GPU_texture_create_* to simplify the API:
- Make the format explicit to the texture creation process.
- Remove the component count, as it is specified by the GPUTextureFormat.
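The same convention is what the Python gpu module exposes, so as a loose
illustration of the design (assuming a Blender version where
gpu.types.GPUTexture is available): the format enum carries both the channel
count and the data type, so no separate component count is passed.

    import gpu

    # 'RGBA16F' implies 4 half-float components; no channel count argument.
    tex = gpu.types.GPUTexture((256, 256), format='RGBA16F')
    print(tex.width, tex.height, tex.format)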