The existing logic to copy `BMesh` custom data layers to `Mesh`
attribute arrays was quite complicated, and incorrect in some cases
when the source and destination didn't have the same layers.
The functions leave a lot to be desired in general, since they
repeat a lot of redundant layer-matching logic for every element.
The problem in #104154 was that the "rest_position" attribute overwrote
the mesh positions, since it has the same type as the positions layer and
the positions themselves weren't copied. The same problem has shown up in
boolean attribute conversion in the past. Other changes fixed some specific
cases, but I think a larger change is the only proper solution.
This patch adds preprocessing before looping over all elements to
find the basic information for copying the relevant layers, taking
layer names into account. The preprocessing makes the hot loops
simpler.
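Schematically, the approach looks something like this (a minimal sketch with stand-in types; `Layer` and `CopyInfo` are illustrative, not Blender's actual `CustomData` API):
```lang=cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

/* Stand-in for a custom data layer: a name, a type enum and a data array. */
struct Layer {
  std::string name;
  int type;
  void *data;
  size_t elem_size;
};

/* One precomputed copy operation, resolved before the hot loop runs. */
struct CopyInfo {
  const void *src;
  void *dst;
  size_t elem_size;
};

static std::vector<CopyInfo> preprocess_layers(const std::vector<Layer> &src_layers,
                                               std::vector<Layer> &dst_layers)
{
  std::vector<CopyInfo> infos;
  for (Layer &dst : dst_layers) {
    for (const Layer &src : src_layers) {
      /* Matching by name *and* type prevents e.g. a "rest_position" layer
       * from being paired with the positions layer just because the types
       * happen to be the same. */
      if (src.name == dst.name && src.type == dst.type) {
        infos.push_back({src.data, dst.data, dst.elem_size});
        break;
      }
    }
  }
  return infos;
}

static void copy_elements(const std::vector<CopyInfo> &infos, const size_t elems_num)
{
  /* The hot loop stays trivial: no name or type logic per element. */
  for (const CopyInfo &info : infos) {
    for (size_t i = 0; i < elems_num; i++) {
      std::memcpy(static_cast<char *>(info.dst) + i * info.elem_size,
                  static_cast<const char *>(info.src) + i * info.elem_size,
                  info.elem_size);
    }
  }
}
```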
In a simple file with a 1 million vertex grid, I observed a 6%
improvement in animation playback framerate in edit mode with a simple
geometry nodes modifier, from 5 to 5.3 FPS.
Fixes #104154, #104348
Pull Request #104421
The issue was caused by a rather recent refactor in 7dea18b3aa.
The root of the issue lies in the fact that the optical center was updated
on the Blender side after the solution was run. There was a mistake in the
code which double-corrected for the pixel aspect ratio.
Added a comment in the code about this, so that it does not look suspicious.
Pull Request #104711
## Cleanup: Refactor NLATrack / NLAStrip Remove
This PR adds 3 new methods:
* BKE_nlatrack_remove_strip
* BKE_nlastrip_remove
* BKE_nlastrip_remove_and_free
These named BKE methods are really just replacements for BLI_remlink, but with added checks and enhanced readability.
Co-authored-by: Nate Rupsis <nrupsis@gmail.com>
Pull Request #104437
Don't create caps when using cyclic profile splines with two or fewer
points.
This case wasn't handled yet, leading to invalid meshes or crashes.
Co-authored-by: Leon Schittek <leon.schittek@gmx.net>
Pull Request #104594
Previously, the node used the "true" normal of every looptri. Now it uses the
"loop normals", which take e.g. smooth faces and custom normals into account.
The true normal can still be used on the points by capturing it before the
Distribute node.
We do intend to expose the smooth normals separately in geometry nodes as well,
but this is an important first step.
It's also necessary for generating child hair between guide hair strands
without visible artifacts at face boundaries.
For perfect backward compatibility, the node still has a "Legacy Normal" option
in the sidebar. Unfortunately, recreating the exact same behavior with existing
nodes isn't really possible because of the specifics of how the Distribute
node used to compute normals from looptris.
Pull Request #104414
Add a `contains_group` method to the Python API for the `NodeTree` type,
clean up the `ntreeHasTree` function, and reuse `ntreeHasTree` in more
places in the code.
The algorithm has been changed to avoid rechecking trees by using a set.
Performance gains from avoiding already-checked node trees: in tests,
for large files with a huge number of trees, opening the search menu in
the node editor became ~200 times faster (for really large projects
with 16 individual groups nested 6 levels deep). Group insert
operations are also accelerated, though the speedup varies between cases.
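A rough sketch of the set-based deduplication (stand-in types, not the actual `ntreeHasTree` code): every tree is visited at most once, so nested groups that are referenced many times no longer cause repeated traversals.
```lang=cpp
#include <unordered_set>
#include <vector>

/* Stand-in node tree: just the trees referenced by its group nodes. */
struct NodeTree {
  std::vector<const NodeTree *> group_trees;
};

static bool tree_contains(const NodeTree &tree, const NodeTree &query,
                          std::unordered_set<const NodeTree *> &checked)
{
  for (const NodeTree *group : tree.group_trees) {
    if (group == &query) {
      return true;
    }
    /* Skip trees that were already checked; this is what turns the repeated
     * re-traversal of shared nested groups into a single linear walk. */
    if (checked.insert(group).second && tree_contains(*group, query, checked)) {
      return true;
    }
  }
  return false;
}
```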
Pull Request #104465
Using callback functions didn't scale well as more arguments were added.
It got very confusing to pass them when the arguments weren't always used.
Instead use a `FunctionRef` with indices for arguments. Also remove
unused edge arguments to topology mapping functions.
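The pattern, sketched with `std::function` standing in for Blender's `FunctionRef` (illustrative names, not the actual conversion code): the mapping function hands out only indices, and the caller's lambda captures whatever data it actually needs.
```lang=cpp
#include <functional>
#include <vector>

/* The topology mapping only provides indices; it no longer needs to know
 * about the data being copied or carry unused arguments. */
static void foreach_vert_pair(const int verts_num,
                              const std::function<void(int src_i, int dst_i)> &fn)
{
  for (int i = 0; i < verts_num; i++) {
    fn(i, i); /* A real mapping would look up the source index. */
  }
}

int main()
{
  const std::vector<float> src = {0.0f, 1.0f, 2.0f};
  std::vector<float> dst(src.size());
  /* The lambda captures the arrays directly; no `void *user_data`. */
  foreach_vert_pair(int(src.size()), [&](int src_i, int dst_i) {
    dst[dst_i] = src[src_i];
  });
}
```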
Because of T95965, some attributes are stored as generic attributes
in Mesh but have special handling for the conversion to BMesh.
Expose a function to tell whether certain attribute names are handled
specially in the conversion, and refactor the error checking process
to use it. Also check for generic attributes on the face domain, which
wasn't done before.
Author: Hans Goudey
Reviewed By: Joseph Eagar
Co-authored-by: Joseph Eagar <joeedh@gmail.com>
Pull Request #104567
When merging two gpencil layers, if the destination layer had a keyframe
where the source layer did not, the strokes of the previous keyframe
in the source layer were lost in that frame.
This happened because the merge operator was looping through the
frames of the source layer and appending strokes to the
corresponding destination layer, but never filling in frames
other than the ones existing in the source layer.
This patch fixes it by first adding to the source layer
all the frames that exist in the destination layer.
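Schematically, the fix looks like this (a sketch with stand-in data structures, not the actual grease pencil types):
```lang=cpp
#include <map>
#include <vector>

using Frame = std::vector<int>;     /* Stand-in for a keyframe's strokes. */
using Layer = std::map<int, Frame>; /* Keyframes indexed by frame number. */

/* Before appending strokes, give the source layer a keyframe wherever the
 * destination layer has one, duplicating the previous source keyframe so
 * its strokes survive the merge. */
static void fill_missing_frames(Layer &src, const Layer &dst)
{
  for (const auto &item : dst) {
    const int frame_number = item.first;
    if (src.count(frame_number)) {
      continue;
    }
    /* Find the last source keyframe at or before this frame number. */
    auto prev = src.upper_bound(frame_number);
    if (prev == src.begin()) {
      continue; /* No previous keyframe to duplicate. */
    }
    --prev;
    src.emplace(frame_number, prev->second);
  }
}
```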
Co-authored-by: Amelie Fondevilla <amelie.fondevilla@les-fees-speciales.coop>
Pull Request #104558
When nodes are copied to the clipboard, they don't need their declaration.
For nodes with a dynamic declaration that might depend on the node tree itself,
the declaration could not be built anyway, because the node clipboard does
not have a node tree.
Pull Request #104432
Add a new node that groups faces inside of boundary edge regions.
This is the opposite of the existing "Face Group Boundaries"
node. It's also the same as some of the "Initialize Face Sets"
options in sculpt mode.
Discussion in #102962 has favored "Group" for a name for these
sockets rather than "Set", so that is used here.
Pull Request #104428
As described in #95966, replace the `ME_EDGEDRAW` flag with a bit
vector in mesh runtime data. Currently the flag is only ever set
to false for the "optimal display" feature of the subdivision surface
modifier. When creating an "original" mesh in the main data-base,
the flag is always supposed to be true.
The bit vector is now created by the modifier only as necessary, and
is cleared for topology-changing operations. This fixes incorrect
interpolation of the flag as noted in #104376. Generally it isn't
possible to interpolate it through topology-changing operations.
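A stand-in sketch of the runtime storage (hypothetical field name, simplified types): the flag lives next to the mesh rather than in every edge, and an empty vector means every edge is drawn.
```lang=cpp
#include <vector>

/* Stand-in for the mesh runtime data; the real code uses a bit vector. */
struct MeshRuntime {
  /* Empty means "draw all edges"; only the subdivision surface modifier's
   * "optimal display" feature fills this in. */
  std::vector<bool> optimal_display_edges;
};

static bool edge_is_drawn(const MeshRuntime &runtime, const int edge_i)
{
  if (runtime.optimal_display_edges.empty()) {
    return true;
  }
  return runtime.optimal_display_edges[edge_i];
}
```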
After this, only the seam status needs to be removed from edges before
we can replace them with the generic `int2` type (or something similar)
and reduce memory usage by 1/3.
Related:
- 10131a6f62
- 145839aa42
In the future `BM_ELEM_DRAW` could be removed as well. Currently it is
used and aliased by other defines in some non-obvious ways though.
Pull Request #104417
Subdivision surface efficiency relies on caching pre-computed topology
data for evaluation between frames. However, while eed45d2a23
introduced a second GPU subdiv evaluator type, it still only kept
one slot for caching this runtime data per mesh.
The result is that if the mesh is also needed on CPU, for instance
due to a modifier on a different object (e.g. shrinkwrap), the two
evaluators are used at the same time and fight over the single slot.
This causes the topology data to be discarded and recomputed twice
per frame.
Since avoiding duplicate evaluation is a complex task, this fix
simply adds a second, separate cache slot for the GPU data, so that
the cost is running the subdivision twice, not recomputing the
topology twice.
To help diagnostics, I also added a message to the modifier panel that
shows when GPU evaluation is actually used. Two frame counters are used
to suppress flicker in the UI panel.
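The shape of the fix, as a stand-in sketch (hypothetical field names, simplified types):
```lang=cpp
struct Subdiv; /* Opaque handle to the pre-computed topology data. */

/* Stand-in for the mesh's subdivision runtime data: with one slot per
 * evaluator, CPU and GPU evaluation no longer evict each other's cache. */
struct SubsurfRuntimeData {
  Subdiv *subdiv_cpu = nullptr;
  Subdiv *subdiv_gpu = nullptr;
};
```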
Differential Revision: https://developer.blender.org/D17117
Pull Request #104441
Straightforward port. I took the opportunity to remove some C vector
functions (e.g. copy_v2_v2).
This makes some changes to DRWView to accommodate the alignment
requirements of the float4x4 type.
Remove the use of a separate contiguous positions array now that
positions are stored that way in the first place. This allows removing the
complexity of tracking whether it is allocated and deformed in the
mesh modifier stack.
Instead of deferring the creation of the final mesh until after the
positions have been copied and deformed, create the final mesh
first and then deform its positions.
Since vertex and face normals are calculated lazily, we can rely on
individual modifiers to calculate them as necessary and simplify
the modifier stack. This was hard to change before because of the
separate array of deformed positions.
Differential Revision: https://developer.blender.org/D16971
The existing `BKE_main_namemap_destroy` is too specific when an entire Main
needs to have its namemaps cleared, since it would not handle the
Library ones.
While the current code is fine in regular situations, ID management code that
may also edit linked data needs a wider, simpler clearing tool.
The current `BKE_id_remapper_add` would not replace an already existing
mapping rule; the new `BKE_id_remapper_add_overwrite` allows that behavior
when necessary.
If identity pairs (i.e. the old ID pointer being the same as the new one) were
forbidden, then this should be asserted against in the code defining the
remapping, not in the code applying it.
But it is actually sometimes useful to allow identity pairs, so
simply early-return on these instead of asserting.
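A sketch of the combined semantics (a stand-in implementation, not the real `IDRemapper`): `add` keeps an existing rule, `add_overwrite` replaces it, and identity pairs are simply passed through when applying.
```lang=cpp
#include <unordered_map>

struct ID; /* Opaque stand-in for Blender's ID type. */

struct IDRemapper {
  std::unordered_map<ID *, ID *> rules;

  /* Keeps an already existing rule for `old_id`. */
  void add(ID *old_id, ID *new_id) { rules.try_emplace(old_id, new_id); }

  /* Replaces an already existing rule for `old_id`. */
  void add_overwrite(ID *old_id, ID *new_id) { rules[old_id] = new_id; }

  ID *apply(ID *id) const
  {
    const auto it = rules.find(id);
    if (it == rules.end()) {
      return id;
    }
    /* Identity pairs (old == new) are allowed; no assert, just return. */
    return it->second;
  }
};
```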
The goal is to give technical artists the ability to optimize modifier usage
and/or geometry node groups for performance. In the long term, it
would be useful if Blender could provide its own UI to display profiling
information to users. However, right now, there are too many open
design questions, making it infeasible to tackle this in the short term.
This commit uses a simpler approach: instead of adding new UI for
profiling data, it exposes the execution time of modifiers in the Python
API. This allows technical artists to access the information and to build
their own UI to display the relevant information. In the long term this
will hopefully also help us integrate a native UI for this in Blender
by observing how users use this information.
Note: The execution time of a modifier highly depends on what other
things the CPU is doing at the same time. For example, in many more
complex files, many objects and therefore modifiers are evaluated at
the same time by multiple threads, which makes the measurement
much less reliable. For best results, make sure that only one object
is evaluated at a time (e.g. by changing it in isolation) and that no
other process on the system keeps the CPU busy.
As shown below, the execution time has to be accessed on the
evaluated object, not the original object.
```lang=python
import bpy
depsgraph = bpy.context.view_layer.depsgraph
ob = bpy.context.active_object
ob_eval = ob.evaluated_get(depsgraph)
modifier_eval = ob_eval.modifiers[0]
print(modifier_eval.execution_time, "s")
```
Differential Revision: https://developer.blender.org/D17185
The merge down operator was sometimes copying the wrong frame, which altered the animation.
While merging the layers, it is sometimes needed to duplicate a keyframe,
when the lowest layer does not have a keyframe but the highest layer does.
Instead of duplicating the previous keyframe of the lowest layer, the code
was actually duplicating the active frame of the layer, which was the frame
currently displayed in the timeline.
This patch fixes the issue by setting the previous keyframe of the layer as its active frame before duplication.
Related issue: T104371.
Differential Revision: https://developer.blender.org/D17214
There were two errors with the function used to convert face sets
to the legacy mesh format for keeping forward compatibility:
- It was moved before `CustomData_blend_write_prepare` so it
operated on an empty span.
- It modified the mesh when it was only supposed to change the copy
of the layers written to the file.
Differential Revision: https://developer.blender.org/D17210
Use the `SharedCache` concept introduced in D16204 to share lazily
calculated evaluated data between original and multiple evaluated
curves data-blocks. Combined with D14139, this should basically remove
most of the cost associated with copying large curves data-blocks (though
the shared caches add slightly higher constant overhead). The caches should
interact well with undo steps, limiting recalculations on undo/redo.
Options for avoiding the new overhead associated with the shared
caches are described in T104327 and can be addressed separately.
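The core idea, as a minimal stand-in (simplified, and without the thread safety a real implementation would need): copies of a data-block share one lazily computed cache until one of them changes the underlying data.
```lang=cpp
#include <memory>

template<typename T> struct SharedCache {
  /* Copying the owning data-block copies only this pointer, so original and
   * evaluated copies share the same computed values. */
  std::shared_ptr<const T> data;

  /* Compute the cached value on first use; later calls are free. */
  template<typename Fn> const T &ensure(Fn compute)
  {
    if (!data) {
      auto value = std::make_shared<T>();
      compute(*value);
      data = std::move(value);
    }
    return *data;
  }

  /* Called when e.g. positions change; the next `ensure` recomputes. */
  void tag_dirty() { data.reset(); }
};
```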
Simple situations affected by this change include using any of the following
data on an evaluated curves data-block without first invalidating it:
- Evaluated offsets (size of evaluated curves)
- Evaluated positions
- Evaluated tangents
- Evaluated normals
- Evaluated lengths (spline parameter node)
- Internal Bezier and NURBS caches
In a test with 4m points and 170k curves, using curve normals in a
procedural setup that didn't change positions procedurally gave 5x
faster playback speeds. Avoiding recalculating the offsets on every
update saved about 3 ms for every sculpt update for brushes that don't
change topology.
Differential Revision: https://developer.blender.org/D17134