Volatile fields were introduced to the RenderResult struct years ago[1].
However, volatile is most likely not doing what it was intended to do
in this instance, and is problematic when moving files to C++ (see
discussion from D13962). There are complex rules around what happens to
these fields, but none of them guarantee what the above commit alluded to.
This patch drops the volatile and cleans up the APIs surrounding it.
[1] rB7930c40051ef1b1a26140629cf1299aa89eed859
Passing on all platforms:
https://builder.blender.org/admin/#/builders/18/builds/338
Differential Revision: https://developer.blender.org/D14298
This was an old issue, but recent image partial update changes made this more
likely to happen in some cases. Now ensure that whenever the rendered scene
switches, the image is updated.
Use a shorter/simpler license convention; this stops the header from taking
up so much space.
Follow the SPDX license specification: https://spdx.org/licenses
- C/C++/objc/objc++
- Python
- Shell Scripts
- CMake, GNUmakefile
While most of the source tree has been included:
- `./extern/` was left out.
- `./intern/cycles` & `./intern/atomic` are also excluded because they
use different header conventions.
doc/license/SPDX-license-identifiers.txt has been added to list all used
SPDX identifiers.
See P2788 for the script that automated these edits.
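For example, a Python file in the tree now carries a single SPDX line in place
of the full license block (GPL-2.0-or-later shown here; the exact identifier
depends on the file):

```
# SPDX-License-Identifier: GPL-2.0-or-later
```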
Reviewed By: brecht, mont29, sergey
Ref D14069
Currently there are many function declarations in `BKE_node.h` that
don't actually have implementations in blenkernel. This commit moves
the declarations to `NOD_composite.h`, `NOD_texture.h`, and
`NOD_shader.h` instead. This helps to clarify the purpose of the
different modules.
Differential Revision: https://developer.blender.org/D13869
Goals of this refactor:
* More unified approach to updating everything that needs to be updated
after a change in a node tree.
* The updates should happen in the correct order and quadratic or worse
algorithms should be avoided.
* Improve detection of changes to the output to avoid tagging the depsgraph
when it's not necessary.
* Move towards a more declarative style of defining nodes by having a
more centralized update procedure.
The refactor consists of two main parts:
* Node tree tagging and update refactor.
* Generally, when changes are done to a node tree, it is tagged dirty
until a global update function is called that updates everything in
the correct order.
* The tagging is more fine-grained compared to before, to allow for more
precise depsgraph update tagging.
* Depsgraph changes.
* The shading-specific depsgraph node for node trees has been removed.
* Instead, there is a new `NTREE_OUTPUT` depsgraph node, which is only
tagged when the output of the node tree changed (e.g. the Group Output
or Material Output node).
* The copy-on-write relation from node trees to the data block they are
embedded in is now non-flushing. This avoids e.g. triggering a material
update after the shader node tree changed in unrelated ways. Instead,
the material now has a flushing relation to the new `NTREE_OUTPUT` node.
* The depsgraph no longer reports data block changes to Cycles through
`Depsgraph.updates` when only the node tree changed in ways
that do not affect the output.
Avoiding unnecessary updates seems to work well for geometry nodes and Cycles.
The situation is a bit worse when there are drivers on the node tree, but that
could potentially be improved separately in the future.
Avoiding updates in Eevee and the compositor is more tricky, but also less urgent.
* Eevee updates are triggered by calling `DRW_notify_view_update` in
`ED_render_view3d_update` indirectly from `DEG_editors_update`.
* Compositor updates are triggered by `ED_node_composite_job` in `node_area_refresh`.
This is triggered by calling `ED_area_tag_refresh` in `node_area_listener`.
Removing updates always has the risk of breaking some dependency that no
one was aware of. It's not unlikely that this will happen here as well. Adding
back missing updates should be quite a bit easier than getting rid of
unnecessary updates though.
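For scripts and render engine add-ons, the main observable effect is in
`Depsgraph.updates`: a node tree data block should now only show up there when
its output actually changed. A minimal illustration (a standard
`depsgraph_update_post` handler; not part of this patch):

```
import bpy

def on_depsgraph_update(scene, depsgraph):
    # After this refactor, node trees should only be reported here when
    # their output changed, not for every unrelated node edit.
    for update in depsgraph.updates:
        if isinstance(update.id, bpy.types.NodeTree):
            print("node tree output changed:", update.id.name)

bpy.app.handlers.depsgraph_update_post.append(on_depsgraph_update)
```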
Differential Revision: https://developer.blender.org/D13246
Previously we would only free animation strip data when doing final
renders. If not doing a final render or just playing back videos in the
VSE, we would not free decoders or non-VSE cache data from the
strips.
This would lead to memory usage exploding in complex VSE scenes.
Now we instead use the dumb approach of freeing everything that is not
currently visible.
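Conceptually the approach is just: walk the strips and release cached data for
any strip not visible at the current frame. An illustrative sketch (the
hypothetical `free_cache()` stands in for the real decoder/cache freeing in the
sequencer code):

```
def free_unused_strip_data(strips, current_frame):
    """Free decoders and cached frames for strips not currently visible."""
    for strip in strips:
        visible = strip.frame_final_start <= current_frame < strip.frame_final_end
        if not visible:
            strip.free_cache()  # hypothetical stand-in for the real freeing call
```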
The related issue which is fixed by this change is the missing noisy
image pass when denoising and border render are used.
Need to allocate passes after the passes have been copied from the
original render result.
This includes much improved GPU rendering performance, viewport interactivity,
new shadow catcher, revamped sampling settings, subsurface scattering anisotropy,
new GPU volume sampling, improved PMJ sampling pattern, and more.
Some features have also been removed or changed, breaking backwards
compatibility. This includes the removal of the OpenCL backend, for which
alternatives are under development.
Release notes and code docs:
https://wiki.blender.org/wiki/Reference/Release_Notes/3.0/Cycles
https://wiki.blender.org/wiki/Source/Render/Cycles
Credits:
* Sergey Sharybin
* Brecht Van Lommel
* Patrick Mours (OptiX backend)
* Christophe Hery (subsurface scattering anisotropy)
* William Leeson (PMJ sampling pattern)
* Alaska (various fixes and tweaks)
* Thomas Dinges (various fixes)
For the full commit history, see the cycles-x branch. This squashes together
all the changes, since intermediate changes would often fail to build or pass tests.
Ref T87839, T87837, T87836
Fixes T90734, T89353, T80267, T77185, T69800
Ran into it when re-working tiles in the Cycles X project.
Make sure the storage of highlighted tiles is emptied when the
render is finished or cancelled.
The error can only happen if the engine did not do something
correctly, but it is still good to handle such situations
more gracefully.
Should be no visible change on user side.
Preparing for render parts removal as part of Cycles X project.
Differential Revision: https://developer.blender.org/D12317
The idea is to only allocate pixel storage when there is actual
data to be written to it.
This moves the code towards better support of high-res rendering, where
pixel storage is not allocated until the render engine is ready to provide
pixel data.
No functional changes are expected for either users or external
engines. The only difference is that the motion and depth passes will
be displayed as transparent until the render engine provides a tile
result (at which point the pixels will be allocated and initialized to
infinite depth).
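A rough sketch of the lazy-allocation idea (illustrative Python, not the actual
RenderResult/RenderPass code):

```
import numpy as np

class LazyPass:
    """Render pass whose pixel storage is only allocated once tile data arrives."""

    def __init__(self, width, height, channels, fill=float("inf")):
        self.width, self.height, self.channels = width, height, channels
        self.fill = fill   # e.g. infinite depth for the depth pass
        self.rect = None   # no pixels allocated yet; displayed as transparent

    def write_tile(self, x, y, tile):
        # Allocate and initialize storage only when the engine provides data.
        if self.rect is None:
            self.rect = np.full((self.height, self.width, self.channels),
                                self.fill, dtype=np.float32)
        h, w = tile.shape[:2]
        self.rect[y:y + h, x:x + w] = tile
```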
Differential Revision: https://developer.blender.org/D12195
In this bug report it resulted in rendering animations stopping too early,
but this affected more areas.
After the previous cleanup commit, it becomes clear that frame and ctime
values were mixed up.
Confusingly, BKE_scene_frame_get did not match the frame number as expected by
BKE_scene_frame_set. Instead it returned the value after time remapping, which
is commonly named "ctime".
* Rename BKE_scene_frame_get to BKE_scene_ctime_get
* Add a new BKE_scene_frame_get that matches BKE_scene_frame_set
* Use int/float depending on whether a fractional frame is expected
For some custom rendering engines it's advantageous not to write the image files to disk.
An example would be a network rendering engine which does its own image writing.
This feature is only supported when bl_use_postprocess is also disabled, since render
engines can't influence the saving behavior of the sequencer or compositor.
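In a RenderEngine add-on this would look roughly as follows (assuming the new
flag is exposed as a `bl_use_image_save` class attribute, alongside the
existing `bl_use_postprocess`):

```
import bpy

class NetworkRenderEngine(bpy.types.RenderEngine):
    bl_idname = "EXAMPLE_NETWORK_RENDER"
    bl_label = "Example Network Render"

    # Assumed name of the new option: skip Blender's own image file writing.
    bl_use_image_save = False
    # Required: the option only takes effect with post-processing disabled.
    bl_use_postprocess = False

    def render(self, depsgraph):
        # The engine writes (or sends over the network) its own images here.
        pass
```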
Differential Revision: https://developer.blender.org/D11512
For Cycles, when enabling the Persistent Data option, the full render data
will be preserved from frame-to-frame in animation renders and between
re-renders of the scene. This means that any modifier evaluation, BVH
building, OpenGL vertex buffer uploads, etc, can be done only once for
unchanged objects. This comes at an increased memory cost.
Previously this option was named Persistent Images and had a more limited
impact on render time and memory.
When using multiple view layers, only data from a single view layer is
preserved to keep memory usage somewhat under control. However, objects
shared between view layers are preserved, and so this can speed up such
renders as well, even single frame renders.
For Eevee and Workbench this option is not available, however these engines
will now always reuse the depsgraph for animation and multiple view layers.
This can significantly speed up rendering.
These engines do not support sharing the depsgraph between re-renders, due
to technical issues regarding OpenGL contexts. Support for this could be added
if those are solved, see the code comments for details.
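For scripted animation renders this is toggled through the render settings,
e.g. (assuming the option keeps its existing `use_persistent_data` RNA name):

```
import bpy

scene = bpy.context.scene
# Keep render data between frames and re-renders (assumed RNA name).
scene.render.use_persistent_data = True

# Unchanged objects are then exported/built only once for the whole animation.
bpy.ops.render.render(animation=True)
```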
Eevee is now used for Freestyle rendering by default, since other engines are
unlikely to have support for this. Workbench and Cycles do their own rendering.
RenderEngine add-ons can do their own Freestyle rendering by setting
bl_use_custom_freestyle = True.
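For example, a minimal engine opting in to its own Freestyle rendering:

```
import bpy

class MyEngine(bpy.types.RenderEngine):
    bl_idname = "EXAMPLE_ENGINE"
    bl_label = "Example Engine"

    # Tell Blender this engine does its own Freestyle rendering,
    # instead of relying on the Eevee fallback.
    bl_use_custom_freestyle = True

    def render(self, depsgraph):
        # Engine-specific rendering, including Freestyle lines, goes here.
        pass
```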
Differential Revision: https://developer.blender.org/D8335