How to use:
- Enable subsurface scattering in the render options.
- Add Subsurface BSDF to your shader.
- Check "Screen Space Subsurface Scattering" in the material panel options.
This initial implementation has a few limitations:
- Only supports Gaussian SSS.
- Does not support the Principled shader.
- The radius parameter is baked down to a number of samples and then put into a UBO. This means the radius input socket cannot be used; you need to tweak the default vector directly (see the sketch after this list).
- The "Texture Blur" input is treated as always set to 1.
The algorithm averages normals from nearby surfaces. It uses the same
sampling strategy as BSSRDFs, casting rays along the normal and two
orthogonal axes, and combining the samples with MIS.
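As a rough illustration of the MIS combination (balance heuristic shown here as an assumption; this is not the actual Cycles code), a sample drawn from one of the three axes is weighted by its pdf relative to the sum of all three pdfs:

  #include <stdio.h>

  /* Balance-heuristic MIS weight for a sample drawn from one strategy,
   * given the pdfs of all three axis-aligned sampling strategies. */
  static float mis_weight_balance(float pdf_taken, float pdf_a, float pdf_b, float pdf_c)
  {
    return pdf_taken / (pdf_a + pdf_b + pdf_c);
  }

  int main(void)
  {
    /* Hypothetical pdfs of one sample evaluated under each axis strategy. */
    float pdf_n = 0.8f, pdf_t = 0.3f, pdf_b = 0.1f;
    /* The sample was drawn along the normal axis, so weight by pdf_n. */
    float w = mis_weight_balance(pdf_n, pdf_n, pdf_t, pdf_b);
    printf("MIS weight = %.4f\n", w); /* estimator contribution: w * value / pdf_n */
    return 0;
  }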
The main concern here is that we are introducing raytracing inside
shader evaluation, which could be quite bad for GPU performance and
stack memory usage. In practice it doesn't seem so bad though.
Note that using this feature can easily slow down renders by 20%, and
that if you care about performance then it's better to use a bevel
modifier. Mainly this is useful for baking, and for cases where the
mesh topology makes it difficult for the bevel modifier to work well.
Differential Revision: https://developer.blender.org/D2803
It should behave like Cycles.
Even though it is not efficient at all, we still do the same create / draw / free process that was done in the old viewport, to save VRAM (maybe not really the case now) and to not have to care about the simulation's GPU texture state sync.
This is quite basic as it only supports bounding boxes.
But the material can refine the volume shape in any way the user likes.
To overcome this limitation, the mesh should be voxelized (generating an SDF maybe?) and tested against every volumetric cell.
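For clarity, the bounding-box test amounts to checking each volumetric cell against the object's AABB; a minimal sketch in C with hypothetical names, not the actual Eevee code:

  #include <stdbool.h>
  #include <stdio.h>

  typedef struct { float min[3], max[3]; } BoundBox; /* hypothetical AABB type */

  /* Decide whether a volumetric cell should evaluate the material:
   * is the cell center inside the object's bounding box? */
  static bool cell_inside_boundbox(const float cell_center[3], const BoundBox *bb)
  {
    for (int i = 0; i < 3; i++) {
      if (cell_center[i] < bb->min[i] || cell_center[i] > bb->max[i]) {
        return false;
      }
    }
    return true;
  }

  int main(void)
  {
    BoundBox bb = {{-1.0f, -1.0f, -1.0f}, {1.0f, 1.0f, 1.0f}};
    float cell[3] = {0.2f, -0.5f, 0.9f};
    printf("inside: %d\n", cell_inside_boundbox(cell, &bb));
    return 0;
  }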
You can now use a Transparent shader as a completely transparent BSDF, and use any alpha mask in a Mix Shader between a Transparent BSDF and another BSDF.
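The blend itself is just a linear mix driven by the mask; a minimal sketch in C (hypothetical names, closures reduced to plain colors for illustration):

  #include <stdio.h>

  /* Mix-shader style blend: alpha = 0 gives the fully transparent BSDF
   * (light passes straight through), alpha = 1 gives the other BSDF. */
  static void mix_closure(const float transparent_rgb[3], const float bsdf_rgb[3],
                          float alpha, float out_rgb[3])
  {
    for (int i = 0; i < 3; i++) {
      out_rgb[i] = (1.0f - alpha) * transparent_rgb[i] + alpha * bsdf_rgb[i];
    }
  }

  int main(void)
  {
    float transparent[3] = {1.0f, 1.0f, 1.0f};
    float diffuse[3] = {0.8f, 0.2f, 0.2f};
    float out[3];
    mix_closure(transparent, diffuse, 0.25f, out);
    printf("%g %g %g\n", out[0], out[1], out[2]);
    return 0;
  }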
Since the change to prevent shader recompilation at every update, we got
a regression when clearcoat was used.
Basically, at shader build time we would determine if the shader
needed clearcoat, and if it didn't, we would build a different GLSL
program.
However if later the user updated the clearcoat value so that it would
then require the full clearcoat shader, the user wouldn't get it until
manually forcing the shader to recompile, or reopening the file.
We now handle the optimization in the GLSL code. That adds a minimal
overhead due to branching, but the overall performance seems unchanged
(tested on Linux on AMD and NVidia).
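Conceptually the specialization moved from compile time (a separate program variant) to run time (a branch on the clearcoat value). A C sketch of the idea, not the actual GLSL:

  #include <stdio.h>

  /* Before: a separate program was compiled without the clearcoat code path.
   * Now: a single program always contains it, and a runtime branch skips
   * the extra work when the clearcoat weight is zero. */
  static float shade_specular(float base_spec, float clearcoat, float clearcoat_response)
  {
    float result = base_spec;
    if (clearcoat > 0.0f) { /* cheap branch instead of a shader variant */
      result += clearcoat * clearcoat_response;
    }
    return result;
  }

  int main(void)
  {
    printf("%f\n", shade_specular(0.5f, 0.0f, 0.25f)); /* clearcoat disabled */
    printf("%f\n", shade_specular(0.5f, 1.0f, 0.25f)); /* clearcoat enabled  */
    return 0;
  }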
Reviewers: pascal, brecht, fclem
Differential Revision: https://developer.blender.org/D2822
This reverts commit 3888227a7b.
This "Fix" was made while ORCO was broken. Now that orco itself is fixed
this is no longer required, otherwise Tangent node produces different
results in Cycles and Eevee.
Orco should behave the same if it comes from unconnected vec inputs, or
from the Texture Coordinate -> Generated node output.
Fixup for 11e7e0769a.
This is related to T52528. The coordinates are still different between
Eevee and Cycles, but at least it behaves consistently within itself.
In fact the shader should now be correct, but the orco attributes we are
passing to the shader seem to be where the problem is. That is to be
tackled separately.
Since we started supporting the (Cycles) Material Output node, old files
stopped working. There is no reason to keep the original Eevee material
output anymore.
This includes doversion for old files.
Diffuse was not outputting the right normal (this is not actually a problem with SSR).
Glass did not have a proper ssr_id and was receiving environment lighting twice.
It also did not have proper fresnel on lamps.
Use mesh batch cache for mesh selection.
Note that we could create the batches and free them immediately
so they don't take up memory.
This resolves a problem where selection was limited
to immediate-mode buffer size.
* Outlines of keyframes were too thick and ugly.
* Size differences between keyframe types were being swallowed
by the pixel-fudge factor, leaving colour as the only distinguishing
factor (bad!)
This commit separates the depth texture into another texture array.
This removes the need to output radial depth into the alpha channel.
Unfortunately, it's difficult to recover position from the non-linear depth buffer when applying reflections without adding a bunch of stuff.
This is in preparation for SSR planar reflections.
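For reference, recovering linear view-space depth from a standard non-linear perspective depth buffer needs the projection's near/far planes; a minimal C sketch assuming an OpenGL-style depth in [0, 1] (not the actual Eevee code):

  #include <stdio.h>

  /* Convert a non-linear depth buffer value (0..1, OpenGL convention)
   * back to linear view-space depth, given the projection near/far planes. */
  static float linearize_depth(float depth, float znear, float zfar)
  {
    float z_ndc = depth * 2.0f - 1.0f; /* back to normalized device coordinates */
    return (2.0f * znear * zfar) / (zfar + znear - z_ndc * (zfar - znear));
  }

  int main(void)
  {
    printf("%f\n", linearize_depth(0.5f, 0.1f, 100.0f));
    return 0;
  }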
- Encode normals for other opaque BSDFs so they are not rejected by the normal facing test.
- Early out on non-reflective surfaces.
- Add a small offset to the raytrace to avoid self-intersection (see the sketch after this list).
- Fix fallback probes not appearing.
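A minimal C sketch of the self-intersection offset (the bias value is hypothetical, not the one used in the commit):

  #include <stdio.h>

  #define RAY_BIAS 1e-4f /* hypothetical bias value */

  /* Push the ray start point slightly along the surface normal so the
   * first intersection test does not hit the surface it started from. */
  static void offset_ray_origin(const float P[3], const float N[3], float out[3])
  {
    for (int i = 0; i < 3; i++) {
      out[i] = P[i] + N[i] * RAY_BIAS;
    }
  }

  int main(void)
  {
    float P[3] = {0.0f, 0.0f, 0.0f}, N[3] = {0.0f, 0.0f, 1.0f}, O[3];
    offset_ray_origin(P, N, O);
    printf("%g %g %g\n", O[0], O[1], O[2]);
    return 0;
  }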
Output normals, specular color and roughness in 2 buffers.
This way we can raytrace in a deferred fashion and blend the exact contribution of the specular lobe on top of the opaque pass.
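A possible layout for those two buffers, sketched as C structs; the exact packing is an assumption, not the actual Eevee layout:

  #include <stdio.h>

  /* Hypothetical packing of the specular data into two render targets. */
  typedef struct {
    float encoded_normal[2]; /* encoded surface normal */
    float pad[2];
  } SsrBufferA;

  typedef struct {
    float specular_color[3]; /* specular reflectance */
    float roughness;         /* microfacet roughness */
  } SsrBufferB;

  int main(void)
  {
    printf("%zu %zu\n", sizeof(SsrBufferA), sizeof(SsrBufferB));
    return 0;
  }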
Goal is to make them more modular, to allow more variants (variable
single-color, thickness, ...) to be added without having to
copy-and-change-one-line of whole chain of shaders.
Hashed alpha transparency offers a noisy output but has the benefit of being correctly ordered. The noise can be attenuated with multisampling / anti-aliasing.
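The core of hashed alpha transparency is a stochastic alpha test: a fragment is kept only if its alpha exceeds a per-fragment hash value, so average coverage matches alpha while depth ordering stays correct. A minimal C sketch with a simple integer hash chosen for illustration (not the actual Eevee hash):

  #include <stdbool.h>
  #include <stdio.h>

  /* Small integer hash mapped to [0, 1) -- stand-in for the per-fragment hash. */
  static float hash_to_float(unsigned int x)
  {
    x ^= x >> 16;
    x *= 0x7feb352dU;
    x ^= x >> 15;
    x *= 0x846ca68bU;
    x ^= x >> 16;
    return (x & 0xFFFFFFu) / (float)0x1000000u;
  }

  /* Keep the fragment only if its alpha beats the hash threshold. */
  static bool hashed_alpha_test(float alpha, unsigned int fragment_seed)
  {
    return alpha > hash_to_float(fragment_seed);
  }

  int main(void)
  {
    int kept = 0;
    for (unsigned int i = 0; i < 10000; i++) {
      kept += hashed_alpha_test(0.3f, i);
    }
    printf("coverage ~ %.3f (expected ~0.3)\n", kept / 10000.0f);
    return 0;
  }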
The default anisotropic tangent computation could fail in some cases,
leading to NaNs and artifacts. Use a simpler formulation that doesn't
suffer from this.
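One common robust formulation is to cross the normal with whichever fixed axis it is least aligned with, which can never produce a zero-length tangent; a C sketch of that idea (an assumption, not necessarily the exact formulation used here):

  #include <math.h>
  #include <stdio.h>

  static void cross3(const float a[3], const float b[3], float out[3])
  {
    out[0] = a[1] * b[2] - a[2] * b[1];
    out[1] = a[2] * b[0] - a[0] * b[2];
    out[2] = a[0] * b[1] - a[1] * b[0];
  }

  /* Build a default tangent from the normal without risking a zero-length
   * cross product: pick the world axis the normal points along the least. */
  static void default_tangent(const float N[3], float T[3])
  {
    float axis[3] = {0.0f, 0.0f, 0.0f};
    int least = (fabsf(N[0]) < fabsf(N[1])) ?
                ((fabsf(N[0]) < fabsf(N[2])) ? 0 : 2) :
                ((fabsf(N[1]) < fabsf(N[2])) ? 1 : 2);
    axis[least] = 1.0f;
    cross3(N, axis, T);
    float len = sqrtf(T[0] * T[0] + T[1] * T[1] + T[2] * T[2]);
    for (int i = 0; i < 3; i++) {
      T[i] /= len;
    }
  }

  int main(void)
  {
    float N[3] = {0.0f, 0.0f, 1.0f}, T[3];
    default_tangent(N, T);
    printf("%g %g %g\n", T[0], T[1], T[2]);
    return 0;
  }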
This allows specialized shaders to redefine the closure interface to fit their needs.
For instance, the volumetric closure needs to pass more than one vec4 (absorption vec3, scattering vec3, anisotropy float).
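As a C stand-in for that interface (hypothetical struct; the GLSL side is what actually changed):

  /* Hypothetical stand-in for a volumetric closure that needs
   * more than one vec4 worth of data. */
  typedef struct VolumetricClosure {
    float absorption[3]; /* per-channel absorption coefficient */
    float scattering[3]; /* per-channel scattering coefficient */
    float anisotropy;    /* phase function anisotropy (g) */
  } VolumetricClosure;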