It still seems useful in cases where the particles are distributed in
a particular order or pattern, to colorize them according to that. This isn't
really well defined, but we might as well avoid breaking backwards compatibility
for now.
This is essentially the only way to add variety to hair which is created
using simple children. Used here for the hair.
Maybe not ideal, but time will tell.
This adds midlevel and object/world space for displacement, and a
vector displacement node with tangent/object/world space, midlevel
and scale.
Note that tangent space vector displacement is still not exactly
compatible with maps created by other software; this will require
changes to the tangent computation.
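For illustration, a minimal bpy sketch of such a setup; the node identifier
ShaderNodeVectorDisplacement, its 'space' enum and the socket names are
assumptions and may differ between versions:

    import bpy

    mat = bpy.data.materials.new("VectorDisplaceTest")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    # Vector displacement node with space, midlevel and scale.
    vdisp = nodes.new('ShaderNodeVectorDisplacement')
    vdisp.space = 'TANGENT'                       # or 'OBJECT' / 'WORLD'
    vdisp.inputs['Midlevel'].default_value = 0.0
    vdisp.inputs['Scale'].default_value = 1.0

    links.new(vdisp.outputs['Displacement'],
              nodes['Material Output'].inputs['Displacement'])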
Differential Revision: https://developer.blender.org/D1734
Previously only scalar displacement along the normal was supported,
now displacement can go in any direction. For backwards compatibility,
a Displacement node will be automatically inserted in existing files.
This will make it possible to support vector displacement maps in the
future. It's already possible to use them to some extent, but this requires
a manual shader node setup (sketched below). For tangent space maps the right
tangent may also not be available yet, depending on the map.
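For reference, a hedged bpy sketch of a manual setup that is roughly what the
automatically inserted Displacement node provides for a scalar height map; the
identifiers (ShaderNodeDisplacement, mat.cycles.displacement_method) are
assumptions and may differ per version:

    import bpy

    mat = bpy.data.materials["Material"]           # hypothetical existing material
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    height_tex = nodes.new('ShaderNodeTexImage')   # scalar height map
    disp = nodes.new('ShaderNodeDisplacement')     # turns height into a displacement vector
    out = nodes['Material Output']

    links.new(height_tex.outputs['Color'], disp.inputs['Height'])
    links.new(disp.outputs['Displacement'], out.inputs['Displacement'])

    # Use true displacement rather than bump mapping, if the setting is available.
    mat.cycles.displacement_method = 'DISPLACEMENT'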
Differential Revision: https://developer.blender.org/D3015
This converts object space height to world space displacement, to be
linked to the new vector displacement material output.
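Roughly, the conversion can be sketched as follows (a simplification; the
function name and the exact handling of object scale are assumptions):

    from mathutils import Matrix, Vector

    def height_to_world_displacement(height, midlevel, scale, normal_os, object_to_world):
        # Offset along the object-space normal, then bring the vector into world space.
        offset_os = normal_os.normalized() * (height - midlevel) * scale
        return object_to_world.to_3x3() @ offset_os

    # Example: unit height on an unscaled object, displacing along +Z.
    print(height_to_world_displacement(1.0, 0.5, 1.0, Vector((0, 0, 1)), Matrix.Identity(4)))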
Differential Revision: https://developer.blender.org/D3015
This would lead to sock.default_value pointing to the wrong data type,
possibly causing crashes. Unfortunately, this bug will still exist for
older Blender versions that try to load newer files, which makes
changing the type of a node socket problematic.
This is a hack to let the user control the SSS radius even though the profile is baked with the default radius values.
This is completely against UI principles, since you cannot edit the profile radii while something is plugged into the radius socket.
A better solution would be to either have a dedicated node value for the RGB radii, or an SSS scale socket used only by Eevee.
The RenderResult struct still has a listbase of RenderLayer, but that's ok
since this is strictly for rendering.
* Subversion bump (to 2.80.2)
* DNA low level doversion (renames) - only for .blend files created since 2.80 started
Note: We can't use DNA_struct_elem_find or get file version in init_structDNA,
so we are manually iterating over the array of the SDNA elements instead.
Note 2: This doversion change with renames can be reverted in a few months. But
so far it's required for 2.8 files created between October 2016 and now.
Reviewers: campbellbarton, sergey
Differential Revision: https://developer.blender.org/D2927
This patch moves all the functionality previously in SceneRenderLayer to SceneLayer.
If we want to rename some of these structs, now would be a good time to do it, before they are in SceneLayer.
Everything should be working, though I will test things further tomorrow. Once this is committed, depsgraph can get
rid of the workaround added in rna_Main_meshes_new_from_object and finish whatever this patch was preventing from being finished.
This patch also adds a few placeholders for the overrides (samples, ...). These are obviously not working, so some unittests that rely on 'lay' and 'zmask' will fail.
This patch does not address the change of moving samples to ViewRender (I have this as a separate patch; it needs separate discussion).
Following are the individual notes for the parts that were committed.
Note 1: It is up to Cycles to still get rid of exclude_layer internally.
Note 2: Cycles still needs to handle its own doversion for the use_layer_samples cases and
(1) Remove the override as it is
(2) Add a new override (scene.cycles.samples) if scene.cycles.use_layer_samples != IGNORE,
respecting the expected behaviour when scene.cycles.use_layer_samples == BOUNDED (sketched below).
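A rough sketch of the intended sample resolution, assuming use_layer_samples
keeps its USE/BOUNDED/IGNORE semantics and that a per-layer value of 0 means
"use the scene samples":

    def effective_samples(scene_samples, layer_samples, use_layer_samples):
        if use_layer_samples == 'IGNORE' or layer_samples == 0:
            return scene_samples
        if use_layer_samples == 'BOUNDED':
            # Per-layer samples may only lower the scene sample count.
            return min(scene_samples, layer_samples)
        return layer_samples  # 'USE'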
Note 3: Cycles still needs to implement the per-object holdout
(similar to how we do shadow catcher).
Note 4: There are parts of the old (Blender Internal) rendering pipeline that are still
using lay, e.g., in shi->lay.
Honestly, it will be easier to purge the entire Blender Internal code than to keep taking things from it bit by bit.
Reviewers: sergey, campbellbarton, brecht
Differential Revision: https://developer.blender.org/D2919
This seems to be a correct implementation of the same diffusion profile as Cycles uses by default.
There are a few biases though:
- We consider _A_, the albedo, to be 1 when evaluating _s_.
- We use a factor of 0.6 when computing _d_ to more or less match Cycles' results (the resulting profile is sketched below).
Note that doing per-pixel jittering biases the result even further (loss of energy).
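For reference, a small sketch of the profile being matched, based on the
published Christensen-Burley fit with the two biases above applied; exactly
where the 0.6 factor enters is an assumption:

    from math import exp, pi

    def burley_profile(r, radius):
        A = 1.0                                # albedo hard-coded to 1, as noted above
        s = 1.9 - A + 3.5 * (A - 0.8) ** 2     # published fit for the scaling factor
        d = 0.6 * radius / s                   # 0.6 bias to roughly match Cycles
        # Normalized Burley diffusion profile.
        return (exp(-r / d) + exp(-r / (3.0 * d))) / (8.0 * pi * d * r)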
How to use:
- Enable subsurface scattering in the render options.
- Add a Subsurface Scattering BSDF to your shader.
- Check "Screen Space Subsurface Scattering" in the material panel options.
This initial implementation has a few limitations:
- Only supports Gaussian SSS.
- Does not support the Principled shader.
- The radius parameter is baked down to a number of samples and then put into a UBO. This means the radius input socket cannot be used. You need to tweak the default vector directly (see the sketch after this list).
- The "texture blur" is considered to always be set to 1.
The algorithm averages normals from nearby surfaces. It uses the same
sampling strategy as BSSRDFs, casting rays along the normal and two
orthogonal axes, and combining the samples with MIS.
The main concern here is that we are introducing raytracing inside
shader evaluation, which could be quite bad for GPU performance and
stack memory usage. In practice it doesn't seem so bad though.
Note that using this feature can easily slow down renders 20%, and
that if you care about performance then it's better to use a bevel
modifier. Mainly this is useful for baking, and for cases where the
mesh topology makes it difficult for the bevel modifier to work well.
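As a usage illustration, assuming the feature is exposed as the Bevel shader
node (ShaderNodeBevel in Python) and the material has a Principled BSDF:

    import bpy

    mat = bpy.data.materials.new("BeveledEdges")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    bevel = nodes.new('ShaderNodeBevel')
    bevel.samples = 4                              # more samples: smoother but slower
    bevel.inputs['Radius'].default_value = 0.02

    # Feed the averaged normal into the BSDF instead of the true surface normal.
    links.new(bevel.outputs['Normal'], nodes['Principled BSDF'].inputs['Normal'])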
Differential Revision: https://developer.blender.org/D2803
It should behave like Cycles.
Even though it is not efficient at all, we still do the same create/draw/free process that was done in the old viewport, to save VRAM (maybe not really the case now) and to avoid caring about the simulation's GPU texture state sync.
This is quite basic as it only supports bounding boxes.
But the material can refine the volume shape in any way the user likes.
To overcome this limitation, a voxelisation should be done on the mesh (generating an SDF maybe?) and tested against every volumetric cell.
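A tiny sketch of the bounding-box test described above (the cell-center
heuristic and naming are assumptions):

    def cell_has_volume(cell_center, bbox_min, bbox_max):
        # Current approach: a volumetric cell participates if its center
        # lies inside the object's bounding box; an SDF lookup could
        # replace this test later to follow the actual mesh shape.
        return all(bbox_min[i] <= cell_center[i] <= bbox_max[i] for i in range(3))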