It was possible for render buffers and scene kernel data to get out of sync,
because the reset and the scene update happen under different locks.
This is a similar issue to the one we fixed in the Cycles X branch, so the
relevant changes were backported from there.
This change removes what seems to be an unused feature kernel.
Differential Revision: https://developer.blender.org/D11828
When compiling BSDF nodes, we only assign stack space to the normal and
tangent inputs if they are linked. However, the ConstantFolder may have
removed the link, and checking only whether a link exists fails to take
this into account.
To fix this, a flag was added to ShaderInput to keep track of whether a
constant was folded into the input, and it is used alongside the link check
to decide whether the socket needs stack space.
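A minimal sketch of the idea, with illustrative member names (the actual ShaderInput and SVM compiler code differ):

```cpp
/* Hedged sketch: track constant folding on the input so stack assignment
 * treats a folded input like a linked one. */
struct ShaderOutput;

struct ShaderInput {
  ShaderOutput *link = nullptr;
  bool constant_folded_in = false; /* set by the ConstantFolder when it removes the link */
};

static bool needs_stack_space(const ShaderInput &input)
{
  /* Previously only `link != nullptr` was checked; an input that had a
   * constant folded into it must also be treated as linked. */
  return input.link != nullptr || input.constant_folded_in;
}
```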
Reviewed By: brecht
Maniphest Tasks: T70615
Differential Revision: https://developer.blender.org/D11731
Offset rays from the flat surface to match where they would be for a smooth
surface as specified by the normals. In the shading panel there is now a
Shading Offset (existing option) and Geometry Offset (new).
The Geometry Offset works as follows:
* 0: disabled
* 0.001: only triangles at the shadow terminator (normal points to the light,
geometry doesn't) are affected
* 0.1 (default): triangles at grazing angles are affected, and the effect
fades out
* 1: all triangles are affected
Limitations:
* The artifact is still visible in some cases; it could be that some quads
need to be treated specifically as quads.
* Inconsistent normals cause artifacts.
* If small objects cast shadows onto a big low-poly surface, the shadows can
appear to be in the wrong place, because the surface is moved slightly above
the geometry. This is only noticeable at grazing angles to the light.
* Approximated surfaces of two non-intersecting low-poly objects can overlap,
which causes off-the-wall shadows.
Generally, using one or a few levels of subdivision can get rid of artifacts
faster than before.
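A toy illustration of the smooth-surface offset described above (not the actual kernel code; the fade as a function of the Geometry Offset parameter is more involved than the simple blend shown here, and all names are illustrative):

```cpp
struct Vec3 {
  float x, y, z;
};

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Approximate where the hit point would lie on a smooth surface: project the
 * flat hit point P onto the tangent plane of each triangle vertex (position
 * V[i], smooth normal N[i]) and combine with the barycentric weights, similar
 * to Phong tessellation. `offset` stands in for the Geometry Offset value. */
static Vec3 smooth_surface_offset(Vec3 P, const Vec3 V[3], const Vec3 N[3],
                                  const float weights[3], float offset)
{
  Vec3 P_smooth = {0.0f, 0.0f, 0.0f};
  for (int i = 0; i < 3; i++) {
    const Vec3 to_plane = N[i] * dot(V[i] - P, N[i]);
    P_smooth = P_smooth + (P + to_plane) * weights[i];
  }
  /* Blend between the flat and smooth positions. */
  return P + (P_smooth - P) * offset;
}
```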
Differential Revision: https://developer.blender.org/D11065
This adds support for importing UV sets which are defined per vertex,
instead of per face corner. Such UV sets can be generated when the
mesh is split according to UV islands, or when there is only one UV
island; in those cases only a single UV value per vertex is needed,
since vertices will never be on a seam.
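A minimal sketch of expanding such a per-vertex UV set to a per-face-corner layout (hypothetical helper and stand-in types, not the actual procedural code):

```cpp
#include <cstddef>
#include <vector>

struct Float2 {
  float x, y;
}; /* stand-in for Cycles' float2 */

/* Per-vertex UVs can be expanded to per-face-corner values simply by indexing
 * with the corner's vertex index, since no vertex lies on a seam. */
static std::vector<Float2> expand_vertex_uvs(const std::vector<Float2> &uv_per_vertex,
                                             const std::vector<int> &corner_vertex_indices)
{
  std::vector<Float2> uv_per_corner(corner_vertex_indices.size());
  for (std::size_t i = 0; i < corner_vertex_indices.size(); i++) {
    uv_per_corner[i] = uv_per_vertex[corner_vertex_indices[i]];
  }
  return uv_per_corner;
}
```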
Implemented for Cycles, Eevee, OSL, Geo, and Attribute.
This operator provides consistency with the standard Math node, and allows users to use a single node instead of two for this common operation.
Reviewed By: HooglyBoogly, brecht
Differential Revision: https://developer.blender.org/D10808
When an `AttributeSet` is tagged as modified, which happens after the addition or
removal of an `Attribute` from the set, the following GeometryManager device
update repacks the kernel data for all attribute types. However, if we only add
or remove a `float` attribute, `float2` or `float3` attributes should not be
repacked, for efficiency.
This patch adds some mechanisms to detect which attribute types are modified from
the AttributeSet.
Firstly, this adds an `AttrKernelDataType` to map the data type of an `Attribute` to
the one used in the kernel, as there is no one-to-one match between the two: e.g.
`Transform` and `float4` data are stored as `float3`s in the kernel.
Then, this replaces the `AttributeSet.modified` boolean with a set of flags to detect
which types have been modified. There is no specific flag type (e.g.
`enum ModifiedType`), rather the flags used derive simply from the
`AttrKernelDataType` enumeration, to keep things synchronized.
The logic to remove an `Attribute` from the `AttributeSet` and tag the latter as modified
is centralized in a new `AttributeSet.remove` method taking an iterator as input.
Lastly, as some attributes, like standard normals, are not stored in the various
kernel attribute arrays (`DeviceScene::attribute_*`), the modified flags are only
set if the associated attribute standard corresponds to an attribute which will be
stored in the kernel's attribute arrays. This ensures that adding or removing such
attributes does not trigger an unnecessary repack of attributes of other types.
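A rough sketch of the flag bookkeeping described above (the names follow the commit text, but the actual Cycles code differs in detail):

```cpp
#include <cstdint>

/* Kernel-side data types; e.g. Transform and float4 attributes map to FLOAT3. */
enum AttrKernelDataType { ATTR_FLOAT = 0, ATTR_FLOAT2 = 1, ATTR_FLOAT3 = 2, ATTR_UCHAR4 = 3 };

struct AttributeSetFlags {
  uint32_t modified_flag = 0;

  /* One bit per kernel data type, derived directly from the enum value so the
   * flags stay synchronized with AttrKernelDataType. */
  void tag_modified(AttrKernelDataType type) { modified_flag |= (1u << type); }
  bool modified(AttrKernelDataType type) const { return (modified_flag & (1u << type)) != 0; }
  void clear_modified() { modified_flag = 0; }
};
```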
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D11373
These seem to be causing some stability issues, and really are just not that
useful in practice. Compiling them is slow already, so it does not improve
the user experience much to show an AO preview if it's not nearly instant.
We use the schema for attribute lookups, so that we can access top level
attributes as well. This is already done for polygon meshes and curves,
so this only modifies the behavior for subdivision objects.
This issue originates from missing BVH packing of the visibility data
when it is modified.
To fix this, this adds update flags to the managers to carry the
modified visibility information from the Objects' modified flag to the
GeometryManager.
Another set of flags is added to determine which data need to be packed:
geometry, vertices, or visibility. Those flags are then used when
packing the primitives.
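A sketch of what such pack flags could look like (hypothetical names, not the exact enum added by this patch):

```cpp
#include <cstdint>

/* Bit flags describing which parts of the BVH data need repacking; a change
 * in object visibility only sets PACK_VISIBILITY, avoiding a full repack. */
enum PackFlags : uint32_t {
  PACK_NONE = 0,
  PACK_GEOMETRY = (1u << 0),   /* BVH nodes and primitive indices. */
  PACK_VERTICES = (1u << 1),   /* Mesh vertex coordinates. */
  PACK_VISIBILITY = (1u << 2), /* Per-primitive visibility masks. */
  PACK_ALL = PACK_GEOMETRY | PACK_VERTICES | PACK_VISIBILITY,
};
```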
Reviewed By: brecht
Maniphest Tasks: T87929
Differential Revision: https://developer.blender.org/D11219
The render session is keeping track of the scene update, which includes
kernel loading time.
This fixes negative render times reported when CUDA kernels are compiled
at runtime.
The logic is a bit fragile and could be re-implemented using use-counted
scope utility classes, so that only the outermost time skip is applied.
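A sketch of that suggested follow-up, a use-counted RAII scope so only the outermost scope measures the time to exclude (hypothetical, not existing Cycles code):

```cpp
#include <chrono>

/* Nested scopes increment a shared depth counter; only the outermost scope
 * records the duration that should be skipped from the reported render time. */
class ScopedTimeSkip {
 public:
  ScopedTimeSkip(int &depth, double &skipped_seconds)
      : depth_(depth), skipped_seconds_(skipped_seconds)
  {
    if (depth_++ == 0) {
      start_ = std::chrono::steady_clock::now();
    }
  }

  ~ScopedTimeSkip()
  {
    if (--depth_ == 0) {
      const std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start_;
      skipped_seconds_ += elapsed.count();
    }
  }

 private:
  int &depth_;
  double &skipped_seconds_;
  std::chrono::steady_clock::time_point start_;
};
```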
This precomputes vertex normals in the procedural and caches them when none
are found in the archive. This only applies to polygon meshes; subdivision
meshes are yet to be subdivided, so the computation would be useless for them.
The goal here is to speed up data updates between frames, as computing normals
shows up in profiles even for large objects. This saves around 16% of update time
for a production file.
This splits the data reading logic from the AlembicObject class and moves it to
separate files, to better enforce a separation of concerns. The goal was to simplify
and improve the logic to read data from an Alembic archive.
Since the procedural loads data for the entire animation, this requires looping
over the frame range and looking up data for each frame. Previously those loops
were duplicated all over the code, causing divergences in how we might
skip or deduplicate data across frames (if only some data change over time and
not others on the same object, e.g. vertices and triangles might not have the
same animation times), and therefore bugs.
Now, we only use a single function with a callback to loop over the geometry data
for each requested frame, and another one to loop over attributes. Given how
attributes are accessed, it is a bit tricky to simplify further and use only a
single function; however, this is left as a further improvement, as it is not
impossible.
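A simplified sketch of the loop-with-callback pattern (illustrative signature only, not the actual function in the procedural):

```cpp
#include <functional>

/* All per-frame skipping and deduplication decisions live in the callback, so
 * the frame loop itself is written only once. */
static void read_over_frames(int frame_start,
                             int frame_end,
                             double frames_per_second,
                             const std::function<void(int frame, double time)> &read_frame)
{
  for (int frame = frame_start; frame <= frame_end; frame++) {
    const double time = frame / frames_per_second; /* Alembic samples are addressed by time. */
    read_frame(frame, time);
  }
}
```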
To read the data, we now use a set of structures to hold which data to read.
Those structures might seem redundant with the Alembic schemas, as they are
somewhat a copy of the schemas' structures; however, they will allow us in the
long run to treat the data of one object type as the data of another object
type (e.g. to ignore subdivision, or to only load the vertices as a point cloud).
For attributes, this new system allows us to read arbitrary attributes, although
still with some limitations:
* only subdivision and polygon meshes are supported, due to a lack of examples for
curve data;
* some data types might be missing: we support float, float2, float3, booleans,
normals, UVs, RGB, and RGBA at the moment, other types can be trivially added;
* some attribute scopes (or domains) are not handled, again due to a lack of example
files;
* color types are always interpreted as vertex colors.
Shaders are only compiled if they are used by some other Node (Geometry, Light, etc.).
This usage detection is done before updating the Scene; however, it fails to detect
Shaders used by Procedurals not known to Cycles (e.g. ones defined by third party
applications), as Procedurals are only updated after the shaders are compiled.
To remedy this, we now use the Node reference counting mechanism to detect whether a
Shader is used and therefore should be compiled.
This removes `ShaderManager::update_shaders_used` as it is not needed anymore.
However, since it would also update the Shader ids, this is now performed in
`ShaderManager::device_update`, and a new virtual `device_update_specific` method was
added to handle device updates for SVM and OSL.
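A sketch of how the reference count could drive compilation (hypothetical names and fields, not the actual Shader class):

```cpp
#include <vector>

struct ShaderStub {
  int users = 0;    /* incremented whenever a node socket references this shader */
  bool used = false;
};

/* Replaces the old usage scan: a shader counts as used, and thus gets
 * compiled, if anything in the scene graph holds a reference to it, including
 * shaders referenced only by Procedurals unknown to Cycles. */
static void tag_used_shaders(std::vector<ShaderStub *> &shaders)
{
  for (ShaderStub *shader : shaders) {
    shader->used = (shader->users > 0);
  }
}
```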
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D10965
This adds a reference count to Nodes which is incremented or decremented
whenever they are added to or removed from a socket, which will help us
track used Nodes throughout the scene graph generically without having to
add an explicit count or flag on specific Node types. This is especially
useful to track Nodes defined through Procedurals out of Cycles' control.
This also modifies the order in which nodes are deleted to ensure that
upon deletion, a Node does not attempt to decrement the reference
count of another Node which was already freed or deleted.
This is not currently used, but will be in the next commit.
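A minimal sketch of the socket-based reference counting (illustrative only, not the exact Node API):

```cpp
class NodeStub {
 public:
  void reference() { users_++; }
  void dereference() { users_--; }
  int reference_count() const { return users_; }

 private:
  int users_ = 0;
};

/* Assigning a node socket dereferences the previous value and references the
 * new one, so usage is tracked generically across the scene graph. */
static void set_node_socket(NodeStub *&socket_value, NodeStub *new_value)
{
  if (new_value) {
    new_value->reference();
  }
  if (socket_value) {
    socket_value->dereference();
  }
  socket_value = new_value;
}
```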
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D10965
For indirect light rays, don't assume any hit is opaque; rather, if the hit has
transparency or emission, do the shading but don't do any further bounces.
Naturally this is slower when there are transparent surfaces; however,
without this, cutout opacity doesn't give sensible results.
Differential Revision: https://developer.blender.org/D10985
For Cycles, when enabling the Persistent Data option, the full render data
will be preserved from frame-to-frame in animation renders and between
re-renders of the scene. This means that any modifier evaluation, BVH
building, OpenGL vertex buffer uploads, etc, can be done only once for
unchanged objects. This comes at an increased memory cost.
Previously the option was named Persistent Images and had a more limited
impact on render time and memory.
When using multiple view layers, only data from a single view layer is
preserved, to keep memory usage somewhat under control. However, objects
shared between view layers are preserved, so this can speed up such
renders as well, even single-frame renders.
For Eevee and Workbench this option is not available, however these engines
will now always reuse the depsgraph for animation and multiple view layers.
This can significantly speed up rendering.
These engines do not support sharing the depsgraph between re-renders, due
to technical issues regarding OpenGL contexts. Support for this could be added
if those are solved, see the code comments for details.
This simulates the effect of a honeycomb or grid placed in front of a softbox.
In practice, it works by attenuating rays coming off-angle as a function of the
provided spread angle parameter.
Setting the parameter to 180 degrees poses no restrictions on the rays, making
the light behave the same way as before this patch.
The total light power is normalized based on the spread angle, so that the
light strength remains the same.
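A toy version of the attenuation idea (not the exact Cycles formula; the power normalization mentioned above would be applied separately, and all names here are illustrative):

```cpp
#include <cmath>

/* `angle` is the angle between the ray and the light normal; `spread` is the
 * full spread angle, both in radians. Rays at the cone edge are fully blocked,
 * rays along the normal pass, and a spread of 180 degrees effectively leaves
 * every ray unattenuated, matching the old behavior. */
static float spread_attenuation(float angle, float spread)
{
  const float half_spread = 0.5f * spread;
  if (angle >= half_spread) {
    return 0.0f; /* Outside the cone formed by the grid. */
  }
  /* Fraction of light not hidden by the slats of the grid at this angle. */
  return std::fmax(0.0f, 1.0f - std::tan(angle) / std::tan(half_spread));
}
```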
Differential Revision: https://developer.blender.org/D10594