This commit introduces the computation of a depth pyramid containing the min and max depth values of the original depth buffer.
This is useful for Clustered Light Culling, but also for ray tracing against the depth buffer (e.g. SSR).
It is also useful for fetching higher mips in order to improve texture cache usage.
As of now, the 1st mip (highest resolution) is half the resolution of the depth buffer, but everything is already in place to make the 1st mip a full-resolution copy of the depth buffer instead of a downsample.
Also, the texture used is RG_32F, which is a bit too much but enough to cover the 24 bits of the depth buffer. Reducing the texture size would make things noticeably faster.
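For illustration, here is a minimal CPU-side sketch of the min/max downsample step, using hypothetical names and assuming depth in [0, 1]; the actual implementation runs in a downsample shader on the GPU:

typedef struct MinMaxDepth {
	float min_depth;
	float max_depth;
} MinMaxDepth;

/* Build one level of the min/max pyramid: each destination texel stores the
 * min and max of the corresponding 2x2 block of the source level. */
static void depth_pyramid_downsample(const MinMaxDepth *src, int src_w, int src_h,
                                     MinMaxDepth *dst, int dst_w, int dst_h)
{
	for (int y = 0; y < dst_h; y++) {
		for (int x = 0; x < dst_w; x++) {
			MinMaxDepth out = {1.0f, 0.0f};
			for (int dy = 0; dy < 2; dy++) {
				for (int dx = 0; dx < 2; dx++) {
					/* Clamp to the source edge for odd-sized levels. */
					int sx = (2 * x + dx < src_w) ? 2 * x + dx : src_w - 1;
					int sy = (2 * y + dy < src_h) ? 2 * y + dy : src_h - 1;
					const MinMaxDepth *s = &src[sy * src_w + sx];
					if (s->min_depth < out.min_depth) out.min_depth = s->min_depth;
					if (s->max_depth > out.max_depth) out.max_depth = s->max_depth;
				}
			}
			dst[y * dst_w + x] = out;
		}
	}
}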
This special-case function enables rendering to a mip level while using the mip levels above it as texture input.
This is needed for some algorithms (e.g. creating a min/max depth pyramid texture).
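A hedged sketch of what this looks like in raw OpenGL, assuming a GL 3.3+ context with a loader such as GLEW; the actual code goes through the draw manager's GPU wrappers:

#include <GL/glew.h>  /* assumption: any loader exposing GL 3.x entry points works */

/* Restrict texture fetches to the level above the target, then attach the
 * target mip level to the framebuffer and draw the downsample pass. */
static void render_to_mip_level(GLuint fbo, GLuint pyramid_tex, int target_mip)
{
	glBindTexture(GL_TEXTURE_2D, pyramid_tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, target_mip - 1);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, target_mip - 1);

	glBindFramebuffer(GL_FRAMEBUFFER, fbo);
	glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
	                       GL_TEXTURE_2D, pyramid_tex, target_mip);

	/* ... bind the downsample shader and draw a fullscreen triangle ... */
}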
We should be able to differentiate between OpenGL render (viewport
render), offline render (F12), and view render (viewport draw).
This allows us to have the offline render skip mode drawing, the grid, and so on.
Even the OpenGL render can benefit from this, by forcing higher quality
anti-aliasing and sampling than the viewport drawing.
I'm not sure if it's clever to keep the memset(0x00) outside the render
loop function as it is in this patch. An alternative is to just pass the
render "type" as a flag to the render function, and set DST.options
inside it. (I may change this tomorrow; I will wait to hear from
Campbell on that.)
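As a purely illustrative sketch of that alternative (the enum names, flag values and stand-in DST struct below are made up, not the actual DRW API), the render type could be passed in and DST.options derived from it inside the render function:

enum {
	DRW_RENDER_VIEWPORT = (1 << 0),  /* interactive viewport draw */
	DRW_RENDER_OPENGL   = (1 << 1),  /* OpenGL (viewport) render */
	DRW_RENDER_OFFLINE  = (1 << 2),  /* offline (F12) render */
};

enum {
	DRW_OPTION_SKIP_OVERLAYS = (1 << 0),  /* no mode drawing, grid, ... */
	DRW_OPTION_HIGH_QUALITY  = (1 << 1),  /* better AA and sampling */
};

/* Stand-in for the real draw manager state. */
static struct { unsigned int options; } DST;

static void draw_render_loop(int render_type)
{
	/* Set the options here instead of memset(0x00)-ing DST outside. */
	DST.options = 0;

	if (render_type & DRW_RENDER_OFFLINE) {
		DST.options |= DRW_OPTION_SKIP_OVERLAYS;
	}
	if (render_type & (DRW_RENDER_OFFLINE | DRW_RENDER_OPENGL)) {
		DST.options |= DRW_OPTION_HIGH_QUALITY;
	}

	/* ... proceed with the usual draw loop ... */
}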
Early implementation. Slow and still has quality issues.
3 ways of storing irradiance:
- Spherical Harmonics: has problems with directional lighting.
- HL2 diffuse cube: very low resolution but smooth transitions (see the sketch below).
- Diffuse cube: high storage requirement.
Also includes some name changes.
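For reference, a minimal sketch of how an HL2-style diffuse/ambient cube can be evaluated: six irradiance colors, one per axis direction, blended by the squared components of the surface normal. This is illustrative only, not the engine's actual code:

typedef struct vec3f { float x, y, z; } vec3f;

/* cube[0..5] holds irradiance for +X, -X, +Y, -Y, +Z, -Z.
 * 'n' is the normalized surface normal. */
static vec3f ambient_cube_eval(const vec3f cube[6], vec3f n)
{
	vec3f n2 = {n.x * n.x, n.y * n.y, n.z * n.z};
	const vec3f *cx = (n.x >= 0.0f) ? &cube[0] : &cube[1];
	const vec3f *cy = (n.y >= 0.0f) ? &cube[2] : &cube[3];
	const vec3f *cz = (n.z >= 0.0f) ? &cube[4] : &cube[5];
	vec3f out = {
		n2.x * cx->x + n2.y * cy->x + n2.z * cz->x,
		n2.x * cx->y + n2.y * cy->y + n2.z * cz->y,
		n2.x * cx->z + n2.y * cy->z + n2.z * cz->z,
	};
	return out;
}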
We still do it a few times, but that already helps. Related to T51718.
Note that it also reinforces the idea that any geometry datablock will
have a generated copy-on-write Mesh provided by Depsgraph.
Use indirect access to it via the object.
We were already flushing from base to object; now we can avoid such flushing.
It is still weird to have the selection color filled in by the dependency
graph, but at least there is no synchronization going on now.
This is supposed to help catch bugs when referencing stack data outside of
the draw loop context.
No change is supposed to happen for users (especially because the changes
here happen mostly in debug builds).
It includes a change in the render loop logic to make sure DST is not
accessed before we enter it (contribution by Campbell Barton).
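A hedged sketch of the idea, with a made-up stand-in for the state struct (the real DST has many more members): poison the state in debug builds once the draw loop ends, so stale references blow up immediately instead of silently misbehaving:

#include <string.h>

/* Stand-in for the real draw manager state, which holds pointers to
 * per-draw (stack allocated) data while the draw loop is running. */
static struct DRWManager {
	void *viewport;
	void *vmempool;
} DST;

static void draw_loop(void)
{
	/* ... fill DST, run the engines, draw ... */

#ifdef DEBUG
	/* Poison the state so any use of DST (or of the stack data it pointed
	 * to) outside of the draw loop context is caught right away. */
	memset(&DST, 0xff, sizeof(DST));
#endif
}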
Now dupli groups, objects, particles, ... are all working.
This introduces a flag for the iterator to determine whether or not we
go over set and dupli objects.
It is important to remember to keep the DEG_ iteration read-only.
Cycles is not working well with dupli groups, and it is leaking memory
with dupli particles. So for now we iterate over main and set objects
only, not duplis.
To change that, go into rna_scene.c and change:
-DEG_OBJECT_ITER(graph, ob, DEG_OBJECT_ITER_FLAG_SET)
+DEG_OBJECT_ITER(graph, ob, DEG_OBJECT_ITER_FLAG_ALL)
Review and suggestions by Sergey Sharybin
Do not use the inverse perspective matrix inside the shader to recover world positions.
That leads to severe float imprecision, resulting in nasty artifacts.
Instead, we compute the world-space view vector for each pixel and do a ray/plane intersection.
We still get low-precision derivatives when going far away from the origin, and thus artifacts.
This commit also fixes the non-appearing negative Z axis in 3D view.
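A minimal sketch of the ray/plane approach with assumed helper types and names (the real code lives in the grid shader): intersect the per-pixel world-space view ray with the grid plane to recover the world position:

typedef struct vec3f { float x, y, z; } vec3f;

static float dot_v3(vec3f a, vec3f b)
{
	return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Recover the world position of a pixel by intersecting its world-space view
 * ray (camera position + normalized view direction) with the grid plane,
 * instead of unprojecting with the inverse perspective matrix. */
static vec3f ray_plane_intersect(vec3f cam_pos, vec3f view_dir,
                                 vec3f plane_co, vec3f plane_no)
{
	vec3f to_plane = {plane_co.x - cam_pos.x,
	                  plane_co.y - cam_pos.y,
	                  plane_co.z - cam_pos.z};
	/* Caller must handle rays nearly parallel to the plane (denominator ~= 0). */
	float t = dot_v3(to_plane, plane_no) / dot_v3(view_dir, plane_no);
	vec3f hit = {cam_pos.x + view_dir.x * t,
	             cam_pos.y + view_dir.y * t,
	             cam_pos.z + view_dir.z * t};
	return hit;
}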
- Use 11_11_10 buffers for HDR content (see the sketch below).
- EEVEE compositing shares one buffer if bloom and DOF are both enabled.
- Fix slowdown when resizing the EEVEE viewport.
- Removed the DRW_BUF_*** enums, which were causing confusion.
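For clarity, "11_11_10" refers to the packed R11F_G11F_B10F float format. A hedged raw-OpenGL sketch of allocating such an HDR render target (the actual code goes through the DRW/GPU texture API):

#include <GL/glew.h>  /* assumption: any loader exposing GL 3.x entry points works */

/* Allocate an R11F_G11F_B10F texture: enough range for HDR color while
 * using only 32 bits per pixel (no alpha channel). */
static GLuint hdr_buffer_create(int width, int height)
{
	GLuint tex;
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, width, height, 0,
	             GL_RGB, GL_FLOAT, NULL);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	return tex;
}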