This quick fix will populate the runtime orig pointers to avoid
crashes when a grease pencil object uses layer transforms, parenting
or modifiers.
This will have to be revisited and fixed with a better solution.
This commit renames enums related to the "Curve" object type and ID type
to add `_LEGACY` to the end. The idea is to make our aspirations clearer
in the code and to avoid ambiguities between `CURVE` and `CURVES`.
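For example, checks in code now read along these lines (an illustrative
sketch using the renamed identifiers):

  /* The un-suffixed names are freed up for the new `Curves` type. */
  const bool is_legacy_curve_object = (ob->type == OB_CURVES_LEGACY); /* was OB_CURVE */
  const bool is_legacy_curve_id = (GS(id->name) == ID_CU_LEGACY);     /* was ID_CU */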
Ref T95355
To summarize for the record, the plans are:
- In the short/medium term, replace the `Curve` object data type with
`Curves`
- In the longer term (no immediate plans), use a proper data block for
3D text and surfaces.
Differential Revision: https://developer.blender.org/D14114
Use a shorter/simpler license convention; this stops the header from
taking up so much space.
Follow the SPDX license specification: https://spdx.org/licenses
- C/C++/objc/objc++
- Python
- Shell Scripts
- CMake, GNUmakefile
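For C/C++ sources, for example, the whole license header now reduces to a
single SPDX line (Python, shell scripts and CMake use the `#` comment form):

  /* SPDX-License-Identifier: GPL-2.0-or-later */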
While most of the source tree has been included, there are some exceptions:
- `./extern/` was left out.
- `./intern/cycles` & `./intern/atomic` are also excluded because they
use different header conventions.
doc/license/SPDX-license-identifiers.txt has been added to list all
used SPDX identifiers.
See P2788 for the script that automated these edits.
Reviewed By: brecht, mont29, sergey
Ref D14069
This implements the update cache described in T95401.
The cache is currently only used for drawing strokes and
sculpting (using the push brush).
**Note: Making use of the cache throughout grease pencil will
have to be done incrementally in other patches.**
The update cache stores what elements have changed in the
original data-block since the last time the eval object
was updated. Additionally, the update cache can store multiple
updates to the data and minimizes the number of elements
that need to be copied.
Elements can be tagged using `BKE_gpencil_tag_full_update` and
`BKE_gpencil_tag_light_update`. A full update means that the element
itself will be copied but also all of the content inside. E.g. when a
layer is tagged for a full update, the layer, all the frames inside the
layer and all the strokes inside the frames will be copied.
A light update means that only the properties of the element are copied
without any of the content. E.g. if a layer is tagged with a light
update, it will copy the layer name, opacity, transform, etc.
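A minimal sketch of the intended use, assuming the tag functions take the
data-block plus optional layer/frame/stroke pointers (NULL meaning the tag
is not scoped to that level):

  /* Copy the layer and everything inside it (frames and strokes). */
  BKE_gpencil_tag_full_update(gpd, gpl, NULL, NULL);
  /* Copy only the layer's own properties (name, opacity, transform, ...). */
  BKE_gpencil_tag_light_update(gpd, gpl, NULL, NULL);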
When the update cache is in use (e.g. elements have been tagged) then
the depsgraph will not trigger a copy-on-write, but an update-on-write.
This means that the update cache will be used to determine what elements
have changed and then only those elements will be copied over to the
eval object.
If the update cache is empty or the data block was tagged with a full
update, we always fall back to a copy-on-write.
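Roughly, the decision made during the depsgraph copy step looks like this
(a sketch; the predicate name is illustrative):

  if (update_cache == NULL || cache_is_tagged_full_update(update_cache)) {
    /* Regular copy-on-write: copy the whole data-block. */
  }
  else {
    /* Update-on-write: copy only the tagged elements into the eval object. */
  }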
Currently, the update cache is only used by the active depsgraph. This
is because the update cache needs to be freed (reset) after an
update-on-write, and we need to make sure it is not freed or read by
other depsgraphs.
Co-authored-by: @yann-lty
This patch was contributed by The SPA Studios.
Reviewed By: sergey, antoniov, #dependency_graph, pepeland, mendio
Maniphest Tasks: T95401
Differential Revision: https://developer.blender.org/D13984
Based on discussions from T95355 and T94193, the plan is to use
the name "Curves" to describe the data-block container for multiple
curves. Eventually this will replace the existing "Curve" data-block.
However, it will be a while before the curve data-block can be replaced,
so in order to distinguish the two curve types in the UI, "Hair Curves"
will be used, but eventually changed back to "Curves".
This patch renames "hair-related" files, functions, types, and variable
names to this convention. A deep rename is preferred to keep code
consistent and to avoid any "hair" terminology from leaking, since the
new data-block is meant for all curve types, not just hair use cases.
The downside of this naming is that the difference between "Curve"
and "Curves" has become important. That was considered during
design discussions and deemed acceptable, especially given the
non-permanent nature of the somewhat common conflict.
Some points of interest:
- All DNA compatibility is lost, just like rBf59767ff9729.
- I renamed `ID_HA` to `ID_CV` so there is no complete mismatch.
- `hair_curves` is used where necessary to distinguish from the
existing "curves" plural.
- I didn't rename any of the cycles/rendering code function names,
since that is also used by the old hair particle system.
Differential Revision: https://developer.blender.org/D14007
Part of T91671.
Not much else to say, this is mainly a massive deletion of code.
Note that a few cleanups possible after this proxy removal were kept out
of this commit to try to reduce its size a bit.
Reviewed By: sergey, brecht
Maniphest Tasks: T91671
Differential Revision: https://developer.blender.org/D13995
The issue was happening with a specific file where the ID management
code was not fully copying all modifiers because of the extra check
in `BKE_object_support_modifier_type_check()`.
While it is arguable that copy-on-write should be a 1:1 copy, there is
no real need to maintain the per-modifier pointer to its original.
Use its SessionUUID to perform the lookup in the original data-block.
The downside of this approach is that it is a linear lookup instead of
direct pointer access, but the upside is that there are fewer pointers
to manage and that the file with unsupported modifiers behaves
correctly without any asserts.
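A sketch of the lookup, assuming the usual listbase and session-UUID
helpers (the function itself is illustrative):

  /* Find the original modifier matching the session UUID of an evaluated one. */
  static ModifierData *modifier_orig_find(Object *object_orig,
                                          const SessionUUID *session_uuid)
  {
    LISTBASE_FOREACH (ModifierData *, md, &object_orig->modifiers) {
      if (BLI_session_uuid_is_equal(&md->session_uuid, session_uuid)) {
        return md;
      }
    }
    return NULL;
  }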
Differential Revision: https://developer.blender.org/D13993
The modifiers are mapped between original and evaluated objects based on
their session IDs. The pointer to original modifier is no longer needed
for the backup: it remained from the initial implementation which was
rewritten at some point.
This is a preparation for removal of the pointer to original modifier.
Blender would have crashed when renaming a bone in Edit Mode, saving,
and then selecting/deselecting.
Caused by a mistake in 0f89bcdbeb: we cannot "short-circuit" the
CoW update if it was explicitly requested.
The safest solution for now seems to be to store whether the CoW component
has been explicitly tagged, so that the following configuration can be
supported:
DEG_id_tag_update(id, ID_RECALC_GEOMETRY);
DEG_id_tag_update(id, ID_RECALC_COPY_ON_WRITE);
Differential Revision: https://developer.blender.org/D13966
The evaluated mesh is a result of evaluated modifiers, and references
other evaluated IDs such as materials.
It cannot be stored in the EditMesh structure, which is intended to be
re-used by many areas. Such sharing was causing ownership errors, leading
to bugs like
T93855: Cycles crash with edit mode and simultaneous viewport and final render
The proposed solution is to store the evaluated edit mesh and its cage in
the object's runtime field. The motivation is as follows:
- It avoids ownership problems like the ones in the linked report.
- The object level is chosen over the mesh level because the evaluated
mesh is affected by modifiers, which are on the object level.
This patch allows the modifier stack of an object that shares its mesh with
an object in edit mode to be properly taken into account (before this
change, the modifier stack from the active object was used for all
objects sharing the mesh).
There is a change in the way copy-on-write is handled in edit mode, to
allow proper state updates when changing the active scene (or having two
windows with different scenes). Previously, the copy-on-write would have
been ignored by skipping tagging of the CoW component. Now it is ignored
from within the CoW operation callback. This allows updating edit pointers
for objects which are not from the current depsgraph and where the
edit_mesh was never assigned, in the case when the depsgraph was evaluated
prior to the active depsgraph.
No user-level changes are expected from the CoW handling changes: they
should affect neither performance nor memory consumption.
Tested scenarios:
- Various modifier configurations of objects sharing a mesh and being
part of the same scene.
- Steps from the reports: T93855, T82952, T77359
This also fixes T76609, T72733 and perhaps other reports.
Differential Revision: https://developer.blender.org/D13824
- Added space below non-doc-string comments to make it clear
these aren't comments for the symbols directly below them.
- Use doxy sections for some headers.
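The doxy sections follow the usual convention, e.g. (the section name is
just an example):

  /* -------------------------------------------------------------------- */
  /** \name Pose Utilities
   * \{ */

  /* ... declarations belonging to this section ... */

  /** \} */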
Ref T92709
Not sure why this bug was only discovered via such elaborate steps and
why it took so long to surface. The root of the issue is that in
956c539e59 the typical tag+flush+evaluate flow was violated, and tagging
for the visibility change happened after the flush.
Copy-on-write data-blocks could be referenced from Python but were not
properly managing Python reference counting.
This would leak memory for any evaluated data-blocks accessed by Python.
Reviewed By: sergey
Ref D12850
Seems to be residue from the early 2.80 days: the placeholder code path
is no longer used.
Remove all the tricky code for this, and make it clear that
`deg_expand_copy_on_write_datablock` is used on a non-expanded
data-block.
Should be no functional changes. And should help simplify D12850.
Differential Revision: https://developer.blender.org/D12852
This reverts commit 0558907ae6.
Based on discussion with Sergey, having Python references to un-expanded
data should not happen - this change needs to be reconsidered.
With this commit, curve objects support the geometry nodes modifier.
Curve objects now evaluate to `CurveEval` unless there was a previous
implicit conversion (tessellating modifiers, mesh modifiers, or the
settings in the curve "Geometry" panel). In the new code, curves are
only considered to be the wire edges; any generated surface is a mesh
instead, stored in the evaluated geometry set.
The consolidation of concepts mentioned above allows removing a lot of
code that had to do with maintaining the `DispList` type temporarily
for modifiers and rendering. Instead, render engines see a separate
object for the mesh from the mesh geometry component, and when the
curve object evaluates to a curve, the `CurveEval` is always used for
drawing wire edges.
However, currently the `DispList` type is still maintained and used as
an intermediate step in implicit mesh conversion. In the future, more
uses of it could be changed to use `CurveEval` and `Mesh` instead.
This mostly does not change behavior; it is just a formalization of
existing logic after recent fixes for 2.8 versions last year and two
years ago. Also, in the future more functionality can be converted
to nodes, removing cases of implicit conversions. For more discussion
on that topic, see T89676.
The `use_fill_deform` option is removed. It has not worked properly
since 2.62, and the choice for filling a curve before or after
deformation will work much better and be clearer with a node system.
Applying the geometry nodes modifier to generate a curve is not
implemented with this commit, so applying the modifier won't work
at all. This is a separate technical challenge, and should be solved
in a separate step.
Differential Revision: https://developer.blender.org/D11597
Evaluating the dependency graph potentially executes Python code when
evaluating drivers. In specific situations (see T91046) this could
deadlock Blender entirely. Temporarily releasing the GIL when evaluating
the depsgraph resolves this.
This is an improved version of
rBfc460351170478e712740ae1917a2e24803eba3b, thanks @brecht for the diff!
Manifest task: T91046
It is causing crashes in rendering, when releasing the GIL in render threads
while the main thread is holding it.
Ref T91046
This reverts commit fc46035117.
Evaluating the dependency graph potentially executes Python code when
evaluating drivers. In specific situations (see T91046) this could deadlock
Blender entirely. Temporarily releasing the GIL when evaluating the depsgraph
resolves this.
Calling the `BPy_BEGIN_ALLOW_THREADS` macro is relatively safe, as it's
a no-op when the current thread does not have the GIL.
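A sketch of the pattern, assuming the depsgraph refresh entry point; the
macro pair releases and later re-acquires the GIL only if the current
thread holds it:

  BPy_BEGIN_ALLOW_THREADS;
  DEG_evaluate_on_refresh(depsgraph);
  BPy_END_ALLOW_THREADS;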
Developed in collaboration with @sergey
Manifest task: T91046
We now use a for_each function with a callback to iterate through all sequences in the scene.
This has the benefit that we now only loop over the sequences in the scene once.
Before, we would loop over them twice and allocate memory to store temporary data.
The allocation of temporary data led to unintentional memory leaks if the code used early returns to exit out of the iteration loop.
The new for_each callback method doesn't allocate any temporary data and only iterates through all sequences once.
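A sketch of the pattern, assuming an entry point along the lines of
`SEQ_for_each_callback(seqbase, callback, user_data)` where returning false
from the callback stops the iteration (names are illustrative):

  static bool count_muted_strips_cb(Sequence *seq, void *user_data)
  {
    int *count = (int *)user_data;
    if (seq->flag & SEQ_MUTE) {
      (*count)++;
    }
    return true; /* Keep iterating; returning false would stop early. */
  }

  int count = 0;
  SEQ_for_each_callback(&scene->ed->seqbase, count_muted_strips_cb, &count);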
Reviewed By: Richard Antalik, Bastien Montagne
Differential Revision: https://developer.blender.org/D12278
During a mesh transformation in edit mode (Move, Rotate...), only part of
the batch cache needs to be updated.
This commit allows updating only the drawn batches seen in
`BKE_object_data_eval_batch_cache_deform_tag` if the new
`ID_RECALC_GEOMETRY_DEFORM` flag is used.
This new flag is used in the transform operations for edit-mesh and
results in a 1.6x overall speedup on a heavy subdiv cube.
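For example, the edit-mesh transform code can now tag only the
deform-dependent batches instead of the full geometry (a sketch):

  /* Rebuild only the deform-dependent batches of the evaluated mesh. */
  DEG_id_tag_update((ID *)ob->data, ID_RECALC_GEOMETRY_DEFORM);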
Differential Revision: https://developer.blender.org/D11599
Delay depsgraph visibility update tagging until it is known that
graph relations are up to date, and until it is known that the graph
actually needs to be evaluated.
Differential Revision: https://developer.blender.org/D11660
After looking into task isolation issues with Sergey, we couldn't find the
reason behind the deadlocks that we are getting in T87938 and a Sprite Fright
file involving motion blur renders.
There is no apparent place where we are adding or waiting on tasks in a task group
from different isolation regions, which is what is known to cause problems. Yet
it still hangs. Either we do not understand some limitation of TBB isolation,
or there is a bug in TBB, but we could not figure it out.
Instead, the idea is to use isolation only where we know we need it: when
holding a mutex lock and then doing some multithreaded operation within that
locked region. Three places where we do this now:
* Generated images
* Cached BVH tree building
* OpenVDB lazy grid loading
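The pattern in those places looks roughly like this (a sketch with
illustrative names; the raw TBB call may be wrapped by our own task
headers):

  #include <mutex>
  #include <tbb/task_arena.h>

  std::lock_guard<std::mutex> lock(cache_mutex);
  /* Isolate the parallel work done while the lock is held, so a waiting
   * thread cannot steal an unrelated task and re-enter the same lock. */
  tbb::this_task_arena::isolate([&]() {
    build_cached_bvh_tree(mesh); /* Hypothetical multithreaded helper. */
  });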
Compared to the more automatic approach previously used, there is the downside
that it is easy to miss places where we need isolation. Yet doing it more
automatically is also causing unexpected issues and bugs that we found no
solution for, so this seems better.
Patch implemented by Sergey and me.
Differential Revision: https://developer.blender.org/D11603
Under some circumstances using task isolation can cause deadlocks.
Previously, our task pool implementation would run all tasks in an
isolated region. Now using task isolation is optional and can be
turned on/off for individual task pools.
Task pools that spawn new tasks recursively should never enable
task isolation. There is a new check that finds these cases at runtime.
Right now this check is disabled, so that this commit is a pure refactor.
It will be enabled in an upcoming commit.
This fixes T88598.
Differential Revision: https://developer.blender.org/D11415
Share the pointer with the original mesh instead; this matches the
behavior of all other objects' edit-mode data.
Duplicating the edit-mesh pointer makes updates to edit-mesh require
a COPY_ON_WRITE update, which is currently an expensive operation
(copying the entire mesh).
Notes:
- This change is from 802027f3f8, made so that the edit-mesh's object
pointer `BMEditMesh.ob` referenced the COW
version of the object. This pointer has since been removed, so the
copy is no longer needed.
- Having a separate edit-mesh pointer could be used so linked duplicates
could have their own generated meshes. For this to be supported,
many other changes would be needed: see D10920.
Resolve ownership ambiguity with shared physics pointers.
Previously, LIB_ID_CREATE_NO_MAIN allowed pointer sharing with
the source ID so physics caches could be shared between original and
evaluated data (Object.soft.shared & Object.rigidbody_object.shared).
This only worked properly for LIB_TAG_COPIED_ON_WRITE IDs,
as LIB_TAG_NO_MAIN can be used in situations where the original ID's
lifetime is limited by its original data.
This commit adds `LIB_ID_COPY_SET_COPIED_ON_WRITE` so IDs only share
memory with original data for IDs evaluated in the depsgraph.
For all other uses, a full copy of physics data is made.
Ref D11228#287094
The issue was caused by the anim handle being reset by
`DEG_evaluate_on_framechange()`.
Preserve the anim handle by backing it up in
`blender::deg::SequenceBackup`.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D11059
Fix `nullptr` dereference when setting 'orig_data' pointers on CoW copies,
by stopping the loop also when `element_cow == nullptr`. This avoids a
crash of Blender when the original list of pointers is longer than the
CoW list of pointers.
I've also added a `BLI_assert()` that checks for equal lengths of the
two `ListBase`s, so that problems like these aren't hidden away completely.
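The corrected loop shape, roughly (variable names are illustrative):

  BLI_assert(BLI_listbase_count(&orig_list) == BLI_listbase_count(&cow_list));
  /* Stop as soon as either list runs out, not only the original one. */
  while (element_orig != nullptr && element_cow != nullptr) {
    element_cow->orig_data = element_orig;
    element_orig = element_orig->next;
    element_cow = element_cow->next;
  }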
The root cause of the crash was actually a modifier that was assigned to
an object of the wrong type (an Armature object doesn't support modifiers).
This caused the list of modifiers on the CoW copy to be shorter than the
list of modifiers on the original Object. It's still a mystery how that
object got that modifier in the first place.
This removes a lot of unnecessary code; the compiler generates these
functions automatically anyway.
In very few cases, a defaulted destructor in a .cc file is
still necessary, because of forward declarations in the header.
I removed some defaulted virtual destructors, because they are not
necessary when the parent class already has a virtual destructor.
Defaulted constructors are only necessary when there is another
constructor, but the class should still be default constructible.
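The remaining case where a defaulted destructor in the .cc file is still
needed looks like this (a sketch; the type names are illustrative):

  /* In the header: `Impl` is only forward-declared, so the destructor of
   * `std::unique_ptr<Impl>` cannot be instantiated here. */
  #include <memory>

  class Impl;

  class Evaluator {
    std::unique_ptr<Impl> impl_;

   public:
    Evaluator();
    ~Evaluator(); /* Declared only; defined as defaulted in the .cc file. */
  };

  /* In the .cc file, where `Impl` is fully defined: */
  Evaluator::Evaluator() = default;
  Evaluator::~Evaluator() = default;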
Differential Revision: https://developer.blender.org/D10911
Handle Lattice objects the same way as Mesh objects. This is mostly to
execute the `object->data = data_eval;` line, which ensures that the
evaluated mesh is assigned to the evaluated object, and thus prevents
the lattice from un-deforming.
Rename:
- `BKE_animsys_store_rna_setting` → `BKE_animsys_rna_path_resolve`
- `BKE_animsys_read_rna_setting` → `BKE_animsys_read_from_rna_path`
- `BKE_animsys_write_rna_setting` → `BKE_animsys_write_to_rna_path`
The concept of "RNA setting" is unclear; the new names reflect better
what the functions actually do.
No functional changes.