This is done to ensure that building with newer OptiX SDK releases that add new struct fields gives
deterministic results (so that no uninitialized fields, and therefore no random data, are passed to OptiX).
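The pattern boils down to value-initializing the SDK structs before filling them in; a minimal sketch (the specific struct shown here is just an example, not necessarily one touched by this change):

```cpp
#include <optix.h>

/* Sketch only: value-initialization ("= {}") zeroes all members of an SDK
 * struct, including fields added by newer OptiX releases, so no uninitialized
 * data is ever passed to the API. */
static OptixAccelBuildOptions make_build_options()
{
  OptixAccelBuildOptions options = {};              /* every field starts out zeroed */
  options.operation = OPTIX_BUILD_OPERATION_BUILD;  /* then set only what is needed */
  return options;
}
```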
Separate tile buffers on all devices only need to exist while denoising is active (so that overlapping
tiles rendered simultaneously do not write to the same memory region).
When denoising is not active, they can be distributed like all other memory when peer
memory support is available.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D10858
This patch changes the `MEM_DEVICE_ONLY` type to only allocate on the device and to fail when
that is no longer possible due to an out-of-memory condition (since OptiX acceleration structures may
not be allocated in host memory). It also fixes high peak memory usage during OptiX
acceleration structure building.
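A rough sketch of the intended behaviour; the helper names (`try_device_alloc`, `report_out_of_memory`, `spill_to_host`) are hypothetical stand-ins for the real device backend:

```cpp
#include <cstddef>

enum MemoryType { MEM_GLOBAL, MEM_TEXTURE, MEM_DEVICE_ONLY };

/* Hypothetical helpers standing in for the real device backend. */
bool try_device_alloc(size_t size);
void report_out_of_memory();
bool spill_to_host(size_t size);

/* Sketch: MEM_DEVICE_ONLY allocations must stay on the GPU, so running out of
 * device memory becomes a hard failure instead of a silent fallback to host memory. */
bool allocate(MemoryType type, size_t size)
{
  if (try_device_alloc(size)) {
    return true;
  }
  if (type == MEM_DEVICE_ONLY) {
    report_out_of_memory();  /* e.g. OptiX acceleration structures cannot live in host memory */
    return false;
  }
  return spill_to_host(size);  /* other memory types may still fall back to the host */
}
```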
Reviewed By: brecht
Maniphest Tasks: T85985
Differential Revision: https://developer.blender.org/D10535
Commit 6e74a8b69f changed the denoiser input passes default to
include the normal pass. This does not always produce optimal images though, which is why the
default was previously set to only include the color and albedo passes. This restores that behavior, so
that viewport denoising with OptiX produces the same results as before.
The "cuda_mem_map_mutex" was potentially being locked recursively during the call to
"CUDADevice::move_textures_to_host", which crashed. This moves around the locking and
unlocking of "cuda_mem_map_mutex", so that it doesn't call a function that locks it while
still holding the lock.
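The general shape of the fix, as a minimal sketch with std::mutex (the actual Cycles code paths are more involved):

```cpp
#include <mutex>

std::mutex cuda_mem_map_mutex;

void move_textures_to_host()
{
  std::lock_guard<std::mutex> lock(cuda_mem_map_mutex);  /* takes the lock itself */
  /* ... move texture allocations to host memory ... */
}

void generic_alloc()
{
  {
    std::lock_guard<std::mutex> lock(cuda_mem_map_mutex);
    /* ... bookkeeping that needs the lock ... */
  }
  /* The lock is released before calling a function that also takes it,
   * avoiding the recursive lock that previously crashed. */
  move_textures_to_host();
}
```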
Reviewed By: pmoursnv
Maniphest Tasks: T85089, T84734
Differential Revision: https://developer.blender.org/D10219
This optimizes device updates (during user edits or frame changes in
the viewport) by avoiding unnecessary computations. To achieve this,
we use a combination of the sockets' update flags as well as some new
flags passed to the various managers when tagging for an update to tell
exactly what the tagging is for (e.g. shader was modified, object was
removed, etc.).
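A simplified sketch of how such per-reason flags can drive a manager's update (flag and class names here are illustrative, not the actual Cycles enums):

```cpp
#include <cstdint>

/* Illustrative update flags passed to a manager when tagging for an update,
 * so it knows exactly why it was tagged and can skip unrelated work. */
enum UpdateFlags : uint32_t {
  UPDATE_NONE = 0,
  SHADER_MODIFIED = 1 << 0,
  OBJECT_REMOVED = 1 << 1,
  MESH_ADDED = 1 << 2,
};

struct ExampleManager {
  uint32_t update_flags = UPDATE_NONE;

  void tag_update(uint32_t flags) { update_flags |= flags; }

  void device_update()
  {
    if (update_flags == UPDATE_NONE) {
      return;  /* nothing changed, skip recomputation and device upload */
    }
    if (update_flags & SHADER_MODIFIED) {
      /* ... update only shader-related device data ... */
    }
    update_flags = UPDATE_NONE;
  }
};
```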
Besides avoiding recomputations, we also avoid resending to the devices
unmodified data arrays, thus reducing bandwidth usage. For OptiX and
Embree, BVH packing was also multithreaded.
The performance improvements may vary depending on the device used (CPU
or GPU) and the content of the scene. Simple scenes (e.g. with no adaptive
subdivision or volumes) rendered using OptiX will benefit from this work
the most.
On average, for a variety of animated scenes, this gives a 3x speedup.
Reviewed By: #cycles, brecht
Maniphest Tasks: T79174
Differential Revision: https://developer.blender.org/D9555
Branched path tracing is not supported for OptiX, yet the number of AA samples from the branched
path settings was still used when the user had enabled branched path earlier and it was then
automatically disabled and hidden in the UI because OptiX was in use.
Ref D10159
If the extensions string is longer than 1024 characters, the old code would have
reported an empty string instead of the extensions.
Now the code dynamically allocates a string to store the result of the request,
similar to what is done in `OpenCLInfo::get_hardware_id`.
The code looks a bit ugly, but it didn't really change much with this
patch. In other words, the code could be made more modern and clear, but
that is considered to be outside the scope of this change.
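The size-query-then-allocate pattern looks roughly like this sketch for the OpenCL case (not the exact Cycles code; the fixed 1024-byte buffer is what the old code used):

```cpp
#include <string>
#include <vector>

#include <CL/cl.h>

/* Sketch: query the required size first, then allocate a buffer of exactly
 * that size instead of using a fixed 1024-byte array. */
static std::string get_device_extensions(cl_device_id device)
{
  size_t size = 0;
  if (clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, NULL, &size) != CL_SUCCESS || size == 0) {
    return "";
  }

  std::vector<char> extensions(size);
  if (clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, size, extensions.data(), NULL) != CL_SUCCESS) {
    return "";
  }
  return std::string(extensions.data());
}
```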
Differential Revision: https://developer.blender.org/D10135
In my testing this works, but it requires me to remove the min(start_sample...) part in the
adaptive sampling kernel, and I assume there's a reason why it was there?
Reviewed By: brecht
Maniphest Tasks: T82351
Differential Revision: https://developer.blender.org/D9445
Commit d259e7dcfb increased the instance limit, but only provided
a fallback for the host code for older OptiX SDKs, not for the kernel code. This caused a mismatch when
an old SDK was used (as is currently the case on buildbot) and subsequent rendering artifacts. This
fixes that by moving the bit that is checked to a common location that works with both old and new
SDK versions.
For a while now OptiX has supported 28 bits of instance IDs, instead of the initial 24 bits (see also the
value reported by OPTIX_DEVICE_PROPERTY_LIMIT_MAX_INSTANCE_ID). This change makes use of
that and also adds an error that is reported when the number of instances an OptiX acceleration structure
is created with exceeds the limit, to make this clear instead of just rendering an image with artifacts.
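A simple sketch of such a check (the exact error message and plumbing in the actual code differ):

```cpp
#include <cstddef>
#include <cstdio>

/* With 28-bit instance IDs, valid IDs are 0 .. 2^28 - 1, so at most 2^28
 * instances fit into one acceleration structure. */
static bool check_instance_count(size_t num_instances)
{
  const size_t max_instances = size_t(1) << 28;
  if (num_instances > max_instances) {
    fprintf(stderr,
            "Error: %zu instances exceed the OptiX limit of %zu per acceleration structure\n",
            num_instances, max_instances);
    return false;
  }
  return true;
}
```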
Maniphest Tasks: T81431
Rendering on the CPU uses the Embree BVH layout, whether the OptiX denoiser is enabled or not.
This means the "build_bvh" function gets a "BVHEmbree" object to fill and not a "BVHMulti" as it
was assuming before, which caused crashes due to memory getting overwritten incorrectly. This
fixes that by redirecting Embree BVH builds to the Embree device.
Maniphest Tasks: T83925
Based on testing by Intel, rendering on Iris GPUs and upcoming Xe GPUs
should work. This is enabled on Windows and Linux.
More testing is needed to verify correctness and performance in production
scenes, but our basic benchmark files seem to give correct results.
Adds support for building multiple BVH types in order to support using both CPU and OptiX
devices for rendering simultaneously. Primitive packing for Embree and OptiX is now
standalone, so it only needs to be run once and can be shared between the two. Additionally,
BVH building was made a device call, so that each device backend can decide how to
perform the building. The multi-device, for instance, creates a special multi-BVH that holds
references to several sub-BVHs, one for each sub-device.
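Conceptually, the multi-device BVH is just a container of per-device BVHs, roughly like this simplified sketch (class layout and names are illustrative, not the actual declarations):

```cpp
#include <memory>
#include <vector>

/* Simplified sketch: each device backend builds its own BVH, and the
 * multi-device owns one sub-BVH per sub-device. */
class BVH {
 public:
  virtual ~BVH() = default;
};

class BVHMulti : public BVH {
 public:
  /* One entry per sub-device; sub-BVH i is used when rendering on device i. */
  std::vector<std::unique_ptr<BVH>> sub_bvhs;
};
```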
Reviewed By: brecht, kevindietrich
Differential Revision: https://developer.blender.org/D9718
This enables support for baking when OptiX is active, but uses CUDA for that behind the scenes, since
the way baking is currently implemented does not work well with OptiX.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D9784
Support for the AO and bevel shader nodes requires calling "optixTrace" from within the shading
VM, which is only allowed from functions inlined into the raygen program or from callables. This patch
therefore converts the shading VM to use direct callables to make it work. To prevent performance
regressions a separate kernel module is compiled and used for this purpose.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D9733
This encapsulates Node socket members behind a set of specific methods;
as such it is no longer possible to directly access Node class members
from exporters and parts of Cycles.
The methods are defined via the NODE_SOCKET_API macros in `graph/
node.h`, and are for getting or setting a specific socket's value, as
well as querying or modifying the state of its update flag.
The setters will check whether the value has changed and tag the socket
as modified appropriately. This will let us know how a Node has changed
and what to update, which is the first concrete step toward a more
granular scene update system.
Since the setters will tag the Node sockets as modified when passed
different data, this patch also removes the various `modified` methods
on Nodes in favor of `Node::is_modified`, which checks the sockets'
update flag status.
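The setters behave roughly like this hand-written sketch (the real ones are generated by the NODE_SOCKET_API macros; the socket name here is purely illustrative):

```cpp
/* Sketch of a generated socket setter: the value is only written, and the
 * socket only tagged as modified, when it actually changed. */
class ExampleNode {
 public:
  void set_samples(int samples)
  {
    if (samples_ != samples) {
      samples_ = samples;
      samples_modified_ = true;
    }
  }

  int get_samples() const { return samples_; }

  /* Stands in for Node::is_modified(): true if any socket changed. */
  bool is_modified() const { return samples_modified_; }

 private:
  int samples_ = 0;
  bool samples_modified_ = false;
};
```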
Reviewed By: brecht
Maniphest Tasks: T79174
Differential Revision: https://developer.blender.org/D8544
This avoids recomputing the BVH for geometries whose topology is unchanged but whose vertices are modified (as in a simple character animation), and gives up to a 40% speedup for BVH building.
This is only available for viewport renders at the moment.
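The decision amounts to something like this sketch (helper names are hypothetical):

```cpp
/* Hypothetical helpers for illustration only. */
void rebuild_bvh();  /* full rebuild: new tree topology, slow */
void refit_bvh();    /* keep the tree, only update node bounds, fast */

enum class GeometryChange { NONE, VERTICES_ONLY, TOPOLOGY };

/* Sketch: deformation-only changes (e.g. character animation) refit the
 * existing BVH instead of rebuilding it from scratch. */
void update_bvh(GeometryChange change)
{
  switch (change) {
    case GeometryChange::TOPOLOGY:
      rebuild_bvh();
      break;
    case GeometryChange::VERTICES_ONLY:
      refit_bvh();
      break;
    case GeometryChange::NONE:
      break;
  }
}
```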
Reviewed By: pmoursnv, brecht
Differential Revision: https://developer.blender.org/D9353
While Cycles already supports using both CPU and GPU at the same time, there
currently is a large problem with it: Since the CPU grabs one tile per thread,
at the end of the render the GPU runs out of new work but the CPU still needs
quite some time to finish its current tiles.
Having smaller tiles helps somewhat, but especially OpenCL rendering tends to
lose performance with smaller tiles.
Therefore, this commit adds support for tile stealing: When a GPU device runs
out of new tiles, it can signal the CPU to release one of its tiles.
This way, at the end of the render, the GPU quickly finishes the remaining
tiles instead of having to wait for the CPU.
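A minimal sketch of the handshake (the real scheduler is more involved; all names here are illustrative):

```cpp
#include <atomic>

/* Set by a GPU device that has run out of tiles; checked by CPU workers
 * between samples of their current tile. */
std::atomic<bool> tile_steal_requested{false};

void gpu_request_tile_steal()
{
  tile_steal_requested.store(true, std::memory_order_relaxed);
}

bool cpu_should_release_tile()
{
  /* exchange() ensures only one CPU worker answers a given request. */
  return tile_steal_requested.exchange(false, std::memory_order_relaxed);
}
```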
Thanks to AMD for sponsoring this work!
Differential Revision: https://developer.blender.org/D9324
Rather than just printing a message and falling back to the CPU. For render
farms it's better to avoid a potentially slow render on the CPU if the intent
was to render on the GPU.
Ref T82193, D9086
NanoVDB is a platform-independent sparse volume data structure that makes it possible to
use OpenVDB volumes on the GPU. This patch uses it for volume rendering in Cycles,
replacing the previous usage of dense 3D textures.
Since it has a big impact on memory usage and performance and changes the OpenVDB
branch used for the rest of Blender as well, this is not enabled by default yet; that will
happen only after 2.82 has been branched off. To enable it, build both dependencies and Blender
itself with the "WITH_NANOVDB" CMake option.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D8794
Commit 009971ba7a changed it so Cycles creates a separate
Embree device for each Cycles device, but missed the multi-device case. A multi-device with an
Embree BVH can occur when CPU rendering is used with OptiX denoising; BVH creation then
failed to get a valid pointer to the Embree device, which crashed. This fixes that by providing the
correct device pointer in the multi-device case as well.
* Support precompiled libraries on Linux
* Add license headers
* Refactoring to deduplicate code
Includes work by Ray Molenkamp and Grische for precompiled libraries.
Ref D8769
Before, Cycles was using a shared Embree device across all instances.
This could result in crashes when viewport rendering and material
preview were using Cycles simultaneously.
Fixes issue T80042
Maniphest Tasks: T80042
Differential Revision: https://developer.blender.org/D8772
We forgot to update this code as part of D3108. I'd like to include this in 2.90;
it's entirely broken now, so it can't really get any worse.
Differential Revision: https://developer.blender.org/D8738
The code uses OpenGL functionality, so it needs to be linked against
OpenGL libraries.
This makes it easier to integrate with Cycles using CMake.
Differential Revision: https://developer.blender.org/D8371
This splits the volume related data (properties for rendering and attributes) of the Mesh node
into a new `Volume` node type.
This `Volume` node derives from the `Mesh` class since we generate a mesh for the bounds of the
volume; as such, we can safely work on `Volumes` as if they were `Meshes`, e.g. for BVH creation.
However, such code should still check whether the geometry type of the object is `MESH` or `VOLUME`,
which may be bug prone if this is forgotten.
This is part of T79131.
Reviewed By: brecht
Maniphest Tasks: T79131
Differential Revision: https://developer.blender.org/D8538
This moves `Session::get_requested_device_features`,
`Session::load_kernels`, and `Session::update_scene` out of `Session`
and into `Scene`, as mentioned in D8544.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D8590
The OptiX kernels are compiled for target "compute_sm_52", which is only supported on second
generation Maxwell GPUs and newer, so disable support for older ones.
This patch changes the discovery of pre-compiled kernels, to look for any PTX, even if
it does not match the current architecture version exactly. It works because the driver can
JIT-compile PTX generated for architectures less than or equal to the current one.
This e.g. makes it possible to render on a new GPU architecture even if no pre-compiled
binary kernel was distributed for it as part of the Blender installation.
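The lookup rule amounts to something like this sketch (architecture numbers such as 52 for sm_52; the helper name is hypothetical):

```cpp
#include <algorithm>
#include <vector>

/* Sketch: among the shipped PTX files, pick the newest architecture that does
 * not exceed the device's compute capability; the driver can JIT-compile any
 * PTX built for an equal or older architecture. */
static int find_best_ptx_arch(const std::vector<int> &available_archs, int device_arch)
{
  int best = -1;
  for (const int arch : available_archs) {
    if (arch <= device_arch) {
      best = std::max(best, arch);
    }
  }
  return best; /* -1 means no compatible PTX was found */
}
```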
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D8332