It still seems useful in cases where the particles are distributed in
a particular order or pattern, to colorize them accordingly. This isn't
really well defined, but we might as well avoid breaking backwards
compatibility for now.
This is essentially the only way to add variety to hair created using
simple children, and it is used here for the hair.
Maybe not ideal, but time will tell.
Burley SSS uses a somewhat strange setup where the albedo and closure weight
are different, which makes the subsurface color indirectly act a bit like a
subsurface radius, due to the way the Burley SSS profile works.
This can't work for random walk SSS though, and it's not clear to me that it
is actually a good idea, since it's really the subsurface radius that is
supposed to control this. For now I'll leave Burley SSS working the same to
avoid breaking backwards compatibility.
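For illustration only, a minimal sketch of the distinction between closure
weight and albedo; the struct and field names are assumptions, not the actual
Cycles data layout:

  /* Conceptual sketch only; names are assumptions, not the Cycles closure layout. */
  struct float3 { float x, y, z; };

  struct BurleySSSClosure {
    float3 weight; /* closure weight: scales the radiance leaving the surface */
    float3 albedo; /* parametrizes the Burley diffusion profile fit */
    float3 radius; /* per-channel scattering radius */
  };

  /* Because the profile shape is derived from albedo while the final tint
   * comes from weight, changing the subsurface color also changes where light
   * exits, which is the radius-like behavior described above. */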
It is basically brute force volume scattering within the mesh, but implemented
as part of the SSS code for better performance. The main difference from actual
volume scattering is that we assume the boundaries are diffuse and that
all lighting is coming through this boundary from outside the volume.
This gives much more accurate results for thin features and low density.
Some challenges remain however:
* Significantly noisier than BSSRDF. Adding Dwivedi sampling may help
here, but it's still unclear how much it helps in real-world cases.
* Due to this being a volumetric method, geometry like eyes or mouth can
darken the skin on the outside. We may be able to reduce this effect,
or users can compensate for it by reducing the scattering radius in
such areas.
* Sharp corners are quite bright. This matches actual volume rendering
and the results from some other renderers, but perhaps not so much
real-world objects.
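For intuition, here is a heavily simplified, self-contained sketch of the
brute force idea: an isotropic random walk inside a unit sphere standing in
for the mesh interior. The constants and names are made up for illustration;
this is not the Cycles kernel code.

  /* Simplified random-walk sketch, not the Cycles kernel: isotropic
   * scattering inside a unit sphere that stands in for the mesh interior. */
  #include <cmath>
  #include <cstdio>
  #include <random>

  struct Vec3 { float x, y, z; };

  static Vec3 advance(Vec3 p, Vec3 d, float t) { return {p.x + t * d.x, p.y + t * d.y, p.z + t * d.z}; }
  static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

  int main()
  {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);

    const float sigma_t = 20.0f; /* extinction coefficient, assumed */
    const float albedo = 0.9f;   /* single-scattering albedo, assumed */
    int exited = 0, absorbed = 0;

    for (int sample = 0; sample < 10000; sample++) {
      Vec3 p = {0.0f, 0.0f, 0.999f}; /* entry point just below the surface */
      Vec3 d = {0.0f, 0.0f, -1.0f};  /* walk starts inward */
      for (int bounce = 0; bounce < 256; bounce++) {
        /* Free-flight distance sampled from the exponential distribution. */
        float t = -std::log(1.0f - u(rng)) / sigma_t;
        p = advance(p, d, t);
        if (length(p) > 1.0f) { exited++; break; }  /* left the "mesh" */
        if (u(rng) > albedo) { absorbed++; break; } /* absorbed inside */
        /* Isotropic scattering: pick a uniform random direction. */
        float z = 1.0f - 2.0f * u(rng);
        float phi = 6.2831853f * u(rng);
        float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
        d = {r * std::cos(phi), r * std::sin(phi), z};
      }
    }
    std::printf("exited: %d, absorbed: %d\n", exited, absorbed);
    return 0;
  }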
Differential Revision: https://developer.blender.org/D3054
This patch changes the huge list of projects in Visual Studio into a nice tree matching the source folder structure. See D2823 for details.
Differential Revision: http://developer.blender.org/D2823
nvcc is very picky regarding compiler versions, severely limiting the compilers we can use. This commit adds an nvrtc-based compiler that allows us to build the cubins even if the host compiler is unsupported. For details see D2913.
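For reference, a minimal host-side sketch of runtime compilation with nvrtc;
the kernel string and options here are placeholders, not the actual code from
D2913. The PTX produced this way can afterwards be assembled into a cubin with
ptxas.

  /* Minimal NVRTC sketch: compile a CUDA source string to PTX at runtime. */
  #include <nvrtc.h>
  #include <cstdio>
  #include <vector>

  int main()
  {
    const char *source =
        "extern \"C\" __global__ void fill(float *out, float value)\n"
        "{\n"
        "  out[blockIdx.x * blockDim.x + threadIdx.x] = value;\n"
        "}\n";

    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, source, "fill.cu", 0, nullptr, nullptr);

    const char *opts[] = {"--gpu-architecture=compute_52", "--use_fast_math"};
    nvrtcResult result = nvrtcCompileProgram(prog, 2, opts);

    /* Always fetch the log; it holds the errors if compilation failed. */
    size_t log_size = 0;
    nvrtcGetProgramLogSize(prog, &log_size);
    std::vector<char> log(log_size + 1, '\0');
    nvrtcGetProgramLog(prog, log.data());
    std::printf("%s\n", log.data());

    if (result == NVRTC_SUCCESS) {
      size_t ptx_size = 0;
      nvrtcGetPTXSize(prog, &ptx_size);
      std::vector<char> ptx(ptx_size);
      nvrtcGetPTX(prog, ptx.data());
      /* ptx can now be loaded with the driver API or assembled with ptxas. */
    }

    nvrtcDestroyProgram(&prog);
    return result == NVRTC_SUCCESS ? 0 : 1;
  }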
Differential Revision: http://developer.blender.org/D2913
This adds midlevel and object/world space for displacement, and a
vector displacement node with tangent/object/world space, midlevel
and scale.
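As a rough sketch of how midlevel and scale combine in the scalar case; this
illustrates the intended behavior, not the node implementation, and the
object/world/tangent space transforms are left out.

  /* Rough illustration of midlevel and scale for scalar displacement. */
  struct float3 { float x, y, z; };

  static float3 displace(float3 P, float3 N, float height, float midlevel, float scale)
  {
    /* Heights at the midlevel stay put, values above push the surface out
     * along the normal, values below pull it in. */
    float offset = (height - midlevel) * scale;
    return {P.x + N.x * offset, P.y + N.y * offset, P.z + N.z * offset};
  }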
Note that tangent space vector displacement is still not exactly
compatible with maps created by other software; this will require
changes to the tangent computation.
Differential Revision: https://developer.blender.org/D1734
There was a check for volume bounces at every surface intersection. That could lead to a volume-scattered path being terminated
when passing through a transparent surface. This check was superfluous, as the volume shader evaluation already checks the
number of volume bounces and, once it passes the max, volume shaders will not return scatter events any more.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, #cycles
Tags: #cycles
Maniphest Tasks: T53914
Differential Revision: https://developer.blender.org/D3024
Previously we stored each color channel in a single closure, which was
convenient for sampling a closure and channel together. But this doesn't
work so well for algorithms where we want to render multiple color
channels together.
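A conceptual before/after sketch of that storage change; these structs are
hypothetical and do not match the Cycles code.

  struct float3 { float r, g, b; };

  /* Before: one closure per color channel, convenient for sampling a closure
   * and a channel together. */
  struct ChannelClosure {
    float weight; /* weight of this one channel */
    float radius; /* scattering radius of this one channel */
  };

  /* After: one closure carries all three channels, so algorithms that render
   * the color channels together can work on a single closure. */
  struct RGBClosure {
    float3 weight;
    float3 radius;
  };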
Previously only scalar displacement along the normal was supported,
now displacement can go in any direction. For backwards compatibility,
a Displacement node will be automatically inserted in existing files.
This will make it possible to support vector displacement maps in the
future. It's already possible to use them to some extent, but this
requires a manual shader node setup. For tangent space maps the right
tangent may also not be available yet, depending on the map.
Differential Revision: https://developer.blender.org/D3015
This converts object space height to world space displacement, to be
linked to the new vector displacement material output.
Differential Revision: https://developer.blender.org/D3015
This way we can introduce other types of BVH, for example wider ones, without
causing too much mess around boolean flags.
Thoughts:
- Ideally, device info should probably return a bitflag of which BVH types it
supports; a possible shape for this is sketched after this list.
This can be implemented with simple logic in device/ and mesh.cpp; the rest
of the changes will stay the same.
- Not happy with the workarounds in util_debug and the duplicated enum in the
kernel. Maybe the enum should be stored in the kernel, but then it's kind of
weird to include kernel types from utils. Sounds like a cyclic dependency.
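Purely to illustrate the bitflag idea; these names are made up and do not
correspond to existing Cycles enums.

  enum BVHLayoutMask {
    BVH_LAYOUT_BVH2 = (1 << 0),
    BVH_LAYOUT_BVH4 = (1 << 1), /* a wider BVH, for example */
    BVH_LAYOUT_BVH8 = (1 << 2),
  };

  struct DeviceInfo {
    int bvh_layout_mask; /* bitwise OR of the layouts the device supports */
  };

  static bool device_supports_layout(const DeviceInfo &info, BVHLayoutMask layout)
  {
    return (info.bvh_layout_mask & layout) != 0;
  }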
Reviewers: brecht, maxim_d33
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3011
Adds the code to get screen size of a point in world space, which is
used for subdividing geometry to the correct level. The approximate
method of treating the point as if it were directly in front of the
camera is used, as panoramic projections can become very distorted
near the edges of an image. This should be fine for most uses.
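A rough sketch of that approximation with a plain perspective model and
assumed names, not the Cycles camera code: the world-space width covered by
one pixel at the point's distance, as if the point sat straight ahead of the
camera.

  #include <cmath>

  /* World-space width covered by one pixel at distance `dist` from the
   * camera, for a horizontal field of view `fov` in radians and an image
   * `resolution_x` pixels wide. */
  static float pixel_world_width(float dist, float fov, int resolution_x)
  {
    return 2.0f * dist * std::tan(0.5f * fov) / (float)resolution_x;
  }

  /* The dicing code can then compare an edge length against this footprint
   * to decide how many times to subdivide. */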
There is also no support yet for offscreen dicing scale, though
panoramic cameras are often used for full 360° renders anyway.
Fixes T49254.
Differential Revision: https://developer.blender.org/D2468
This can be enabled in the Film panel, with an option to control the
transmission roughness below which glass becomes transparent.
Differential Revision: https://developer.blender.org/D2904
SVM nodes need to read all data to get the right offset for the following node.
This is quite weak; a more generic solution would be good in the future.
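A tiny self-contained sketch of why every entry has to be consumed; the node
layout here is hypothetical, not the actual SVM stream.

  #include <cstdio>
  #include <vector>

  int main()
  {
    /* Each node here is three entries: opcode, payload, payload. The
     * interpreter walks a flat array, and the next node starts wherever the
     * offset ends up. */
    std::vector<int> stream = {1, 10, 11, 2, 20, 21};

    size_t offset = 0;
    while (offset < stream.size()) {
      int opcode = stream[offset++];
      int a = stream[offset++];
      int b = stream[offset++]; /* must be read (or skipped) even if unused,
                                   otherwise the next opcode is misread */
      std::printf("node %d: %d %d\n", opcode, a, b);
    }
    return 0;
  }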
Our own implementation was behaving differently compared to OSL and GPU:
on the border pixels OSL and CUDA were interpolating with black, while we
were clamping the coordinate.
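A standalone sketch of the two border behaviors for a 2D bilinear lookup,
clamping the coordinate to the edge versus treating texels outside the image
as black; this is an illustration, not the Cycles image sampling code.

  #include <algorithm>
  #include <cmath>

  /* Clamp behavior: coordinates outside the image reuse the edge texel. */
  static float texel_clamp(const float *image, int w, int h, int x, int y)
  {
    x = std::min(std::max(x, 0), w - 1);
    y = std::min(std::max(y, 0), h - 1);
    return image[y * w + x];
  }

  /* Clip behavior (the OSL/CUDA border result): texels outside the image are
   * treated as black, so border pixels blend towards black. */
  static float texel_clip(const float *image, int w, int h, int x, int y)
  {
    if (x < 0 || x >= w || y < 0 || y >= h)
      return 0.0f;
    return image[y * w + x];
  }

  static float bilinear(const float *image, int w, int h, float u, float v, bool clip)
  {
    float fx = u * w - 0.5f, fy = v * h - 0.5f;
    int x = (int)std::floor(fx), y = (int)std::floor(fy);
    float tx = fx - x, ty = fy - y;
    auto texel = clip ? texel_clip : texel_clamp;
    return (1 - tx) * (1 - ty) * texel(image, w, h, x, y) +
           tx * (1 - ty) * texel(image, w, h, x + 1, y) +
           (1 - tx) * ty * texel(image, w, h, x, y + 1) +
           tx * ty * texel(image, w, h, x + 1, y + 1);
  }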
This partially fixes issue reported in T53452.
A similar change should perhaps also be done for 3D interpolation, but this
is to be investigated separately.
Previously, the NLM kernels would be launched once per offset with one thread per pixel.
However, with the smaller tile sizes that are now feasible, there wasn't enough work to fully occupy GPUs, which resulted in a significant slowdown.
Therefore, the kernels are now launched in a single call that handles all offsets at once.
This has two downsides: Memory accesses to accumulating buffers are now atomic, and more importantly, the temporary memory now has to be allocated for every shift at once, increasing the required memory.
On the other hand, of course, the smaller tiles significantly reduce the amount of memory required.
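A sketch of the indexing behind the combined launch, in plain C++ with
hypothetical names standing in for the GPU kernel.

  /* One flat thread index covers all (offset, pixel) pairs in a single
   * launch, where before there was one launch per offset. */
  struct WorkItem { int offset_index; int pixel_index; };

  static WorkItem decode_flat_index(int flat_index, int num_pixels)
  {
    WorkItem item;
    item.offset_index = flat_index / num_pixels; /* which shift this thread handles */
    item.pixel_index = flat_index % num_pixels;  /* which pixel within that shift */
    return item;
  }

  /* Since several offsets can now accumulate into the same pixel of the
   * shared buffers concurrently, those adds have to be atomic on the GPU,
   * and the temporary memory for all shifts has to be resident at once. */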
The main bottleneck right now is the construction of the transformation: there is nothing to parallelize there, so one thread per pixel is the maximum.
I tried to parallelize the SVD implementation by storing the matrix in shared memory and launching one block per pixel, but that wasn't really going anywhere.
To make the new code somewhat readable, the handling of rectangular regions was cleaned up a bit and commented; it should now be easier to understand what's going on.
Also, some variables have been renamed to make the difference between buffer width and stride more apparent, along with some general style cleanup.