As an inherent property of matrix-based transformation math, non-
uniform scaling of a parent bone induces shear into the transform
matrix of any rotated child. Such matrices cannot be cleanly
decomposed into a combination of location/rotation/scale, which
causes issues for rigging and animation tools.
Blender bones have options to exclude rotation and/or scale from the
inherited transformation, but don't have any support for removing the
often undesired shear component. Achieving that requires replacing
simple parenting with a combination of multiple bones and constraints.
The same is true of inheriting some scale while completely avoiding
shear.
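
For illustration, a minimal mathutils sketch of the problem (not part
of the patch itself):

    from math import radians
    from mathutils import Matrix

    # Non-uniform parent scale combined with a rotated child produces shear.
    parent_scale = Matrix.Diagonal((2.0, 1.0, 1.0)).to_4x4()
    child_rotation = Matrix.Rotation(radians(45.0), 4, 'Z')
    child_matrix = parent_scale @ child_rotation

    # Decomposing into location/rotation/scale and recombining silently
    # drops the shear component, so the original matrix is not reproduced.
    loc, rot, scale = child_matrix.decompose()
    recombined = (Matrix.Translation(loc) @
                  rot.to_matrix().to_4x4() @
                  Matrix.Diagonal(scale).to_4x4())
    print(child_matrix)
    print(recombined)  # differs: the shear was lost

    # A volume-preserving uniform scale (the "Average" option below)
    # sidesteps the problem entirely.
    uniform = abs(parent_scale.to_3x3().determinant()) ** (1.0 / 3.0)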
This patch replaces the old Inherit Scale checkbox with an enum that
supports multiple options:
* Full: inherit all effects of scale, as with the old Inherit Scale
checkbox enabled.
* Fix Shear: removes shear from the final inherited transformation.
The cleanup math is specifically designed to preserve the main
axis of the bone, its length and total volume, and minimally
affect roll on average. However, it will not prevent shear from
reappearing due to local rotation of the child or its children.
* Average: inherit a uniform scale that represents the parent volume.
This is the simplest foolproof solution that will inherit some
scale without ever causing shear.
* None: completely remove scale and shear.
* None (Legacy): behaves like the old disabled Inherit Scale checkbox.
This mode does not handle parent shear in any way, so the child
is likely to end up having both scale and shear. It is retained
for backward compatibility.
Since many rigging-related addons access the use_inherit_scale
property from Python, it is retained as a backward compatibility
stub that provides the old functionality.
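
For illustration, assuming the new enum is exposed to Python as
bone.inherit_scale with identifiers matching the option names above
(the property and bone names below are assumptions, not taken from the
patch):

    import bpy

    bone = bpy.context.object.data.bones["forearm"]  # hypothetical bone

    # New enum property; identifiers assumed to match the options above.
    bone.inherit_scale = 'FIX_SHEAR'

    # Backward compatibility stub used by existing addons: expected to
    # keep behaving like the old checkbox.
    bone.use_inherit_scale = False
    print(bone.inherit_scale)  # expected to report one of the "None" modes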
As a side effect of reworking the code, this also fixes a matrix
multiplication order bug in the Inherit Rotation code, which caused
the parent local scale to be applied in world space. In riggers'
opinion this option is useless in production rigs, so this fix
should not cause problems.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D5588
This change ensures that operators which need access to evaluated data
first make sure there is a dependency graph.
Other accesses to the dependency graph are now more explicit about
whether they just need a valid dependency graph pointer or whether they
expect the graph to be already evaluated.
This replaces OPTYPE_USE_EVAL_DATA which is now removed.
Some general rules about usage of accessors:
- Drawing is expected to happen from a fully evaluated dependency graph.
There is now a function to access it, which will in the future verify
that the dependency graph is actually evaluated.
This check is not yet done because there are some things to be taken
care of first: for example, post-update hooks might leave the scene in
a state where something is still tagged for update.
- All operators which need to access the evaluated state must use
CTX_data_ensure_evaluated_depsgraph().
This function replaces OPTYPE_USE_EVAL_DATA.
The call is generally to be done at the very beginning of the
operator, prior to any other logic (unless it is some comprehensive
operator which might or might not need access to an evaluated state).
This call is never to be used from a loop (see the Python-level
sketch after this list).
If some utility function requires the evaluated state of the dependency
graph, the graph is to be passed as an explicit argument. This way it
is clear that no evaluation happens in a loop or anything like that.
- All cases which need to know the dependency graph pointer, but do not
want to actually evaluate it, can use the old-style function
CTX_data_depsgraph_pointer(), assuming that the underlying code will
ensure the dependency graph is evaluated prior to accessing it.
- The new functions replace OPTYPE_USE_EVAL_DATA, so it is now explicit
and local where the dependency graph is being ensured.
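
At the Python level the same pattern roughly looks as follows, using
Context.evaluated_depsgraph_get() as the counterpart of the C-side
call (the operator and helper here are hypothetical examples, not part
of this commit):

    import bpy

    def report_evaluated_location(ob, depsgraph):
        # Hypothetical helper: receives the graph explicitly and works
        # on the evaluated copy of the object.
        ob_eval = ob.evaluated_get(depsgraph)
        print(ob_eval.name, ob_eval.matrix_world.translation)

    class OBJECT_OT_eval_access_example(bpy.types.Operator):
        """Hypothetical operator illustrating the access rules above"""
        bl_idname = "object.eval_access_example"
        bl_label = "Evaluated Access Example"

        def execute(self, context):
            # Ensure the evaluated dependency graph once, at the very
            # beginning of the operator, never from inside a loop.
            depsgraph = context.evaluated_depsgraph_get()
            for ob in context.selected_objects:
                report_evaluated_location(ob, depsgraph)
            return {'FINISHED'}

    bpy.utils.register_class(OBJECT_OT_eval_access_example)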
This commit also contains some fixes of wrong usage of evaluation
functions on original objects. Ideally these should be split out, but
in reality, with all the APIs being renamed, that is quite tricky.
Fixes T67454: Blender crash on rapid undo and select
The speculation here is that undo and selection operators are sometimes
handled in the same event loop iteration, which leaves a non-evaluated
dependency graph.
Fixes T67973: Crash on Fix Deforms operator
Fixes T67902: Crash when undo a loop cut
Reviewers: brecht
Reviewed By: brecht
Subscribers: lichtwerk
Maniphest Tasks: T67454
Differential Revision: https://developer.blender.org/D5343
Use an explicit boolean flag to indicate whether a flush to original
data is needed or not. This makes it possible to avoid confusion about
whether an evaluated depsgraph or any depsgraph can be passed to the
API. It also allows removing the depsgraph from bAnimContext.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D5379
This also makes `IDP_CopyProperty` the "opposite"
of `IDP_FreeProperty`, which is what I'd expect.
Two refactoring steps:
* rename IDP_FreeProperty to IDP_FreePropertyContent
* add a new IDP_FreeProperty function that actually frees the property
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4872
The most difficult part is handling parent-child relations correctly:
when a parent is applied, the children should be moved accordingly,
and when applying a child, it should not include transformation from
unapplied parents. All this requires walking bones as a tree, instead
of a flat list.
Limitation: Applying bones with non-uniform scaling without also applying
children will distort non-rest posing on said children for reasons related
to T54159 (basically, non-uniform scale plus rotation creates shear, and
Blender matrix decomposition utilities don't have tools to deal with it).
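
A rough sketch of the traversal order this requires (illustrative
Python only; the "apply this bone" condition and selection check are
placeholders, and the real operator works in C):

    import bpy

    def walk(pose_bone, ancestor_applied=False):
        # Visit a parent before its children, so a child knows whether
        # any ancestor had its pose applied: it can then be corrected
        # for the parent's new rest transform, or keep ignoring an
        # unapplied parent.
        applied_here = pose_bone.bone.select  # hypothetical condition
        print(pose_bone.name, "ancestor applied:", ancestor_applied)
        for child in pose_bone.children:
            walk(child, ancestor_applied or applied_here)

    pose = bpy.context.object.pose  # assumes an active armature object
    for root in (pb for pb in pose.bones if pb.parent is None):
        walk(root)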
Reviewers: campbellbarton, brecht, mont29
Differential Revision: https://developer.blender.org/D3775
As far as I can tell, there is no technical reason why the B-Bone
segment thickness scaling can't be separated into two axes. The
only downside is the increase in complexity of the B-Bone settings,
but this is inevitable due to the increase in flexibility.
Updating the file is somewhat complicated though, because F-Curves
and drivers have to be duplicated and updated to the new names.
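
In Python terms the versioning could look roughly like the sketch
below; the old/new property names and the helper are placeholders
(drivers are omitted for brevity, and the actual versioning is done
in C):

    import bpy

    # Hypothetical mapping from the old single property to two per-axis ones.
    RENAMES = {"bbone_scalein": ("bbone_scaleinx", "bbone_scaleiny"),
               "bbone_scaleout": ("bbone_scaleoutx", "bbone_scaleouty")}

    def split_bbone_scale_fcurves(action):
        for fcu in list(action.fcurves):
            for old, new_names in RENAMES.items():
                if fcu.data_path.endswith(old):
                    base = fcu.data_path[:-len(old)]
                    for new in new_names:
                        # Duplicate the curve under the new name.
                        copy = action.fcurves.new(base + new,
                                                  index=fcu.array_index)
                        copy.keyframe_points.add(len(fcu.keyframe_points))
                        for src, dst in zip(fcu.keyframe_points,
                                            copy.keyframe_points):
                            dst.co = src.co
                            dst.interpolation = src.interpolation
                    action.fcurves.remove(fcu)
                    break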
Reviewers: campbellbarton
Subscribers: icappiello, jpbouza
Differential Revision: https://developer.blender.org/D4716
This commit does not add anything new from the user perspective, but
makes it possible to paste any kind of ID, not only objects/collections.
This will be used by the new copy/paste in the outliner in the next
commit.
The convention was not to, but after discussion on 918941483f we agreed
it's best to change the convention.
Names now mostly follow RNA.
Some exceptions:
- Use 'nodetrees' instead of 'nodegroups'
since the struct is called NodeTree.
- Use 'gpencils' instead of 'grease_pencil'
since 'gpencil' is a common abbreviation in the C code.
Other exceptions:
- Leave 'wm' as it's a list of one.
- Leave 'ipo' as is for versioning.
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading
especially when splitting files.
It's more useful to git-blame to find out who has developed the code.
See P901 for script to perform these edits.
There were at least three copies of these flags:
- The OB_RECALC* family of flags, which is a rudiment of the old
dependency graph system.
- PSYS_RECALC*, which was used by the old dependency graph system
as a separate set, since the graph itself did not handle
particle systems.
- DEG_TAG_*, which was used to tag IDs.
Now there is a single set, which defines what can be tagged
and queried for an update. It also has some aggregate flags
to make queries simpler.
Let's once and for all solve the madness of those flags: stick
to a single set which will not overlap with anything or require
any extra conversion.
Technically, there shouldn't be a measurable user difference, but some
of the aggregate flags for a few dependency graph components did
change.
Fixes T58632: Particle don't update rotation settings
The cause is that FOREACH_OBJECT_IN_MODE_BEGIN assumed that the active
object is in the correct mode, which is wrong in this case. It also
only considered objects of the same type as active, which had to be
replaced with an explicit type parameter.
Bring back per-viewport localview. This is based on Blender 2.79.
We have a limit of 16 different local view viewports.
We use both the numpad / and the regular /.
Missing features:
* Hack to make sure lights are always visible.
* Make rendered mode with external engines support this as well
(probably we just need to support this in the RNA iterators).
* Support over 16 viewports by taking existing viewports out of local view.
The code can use a cleanup pass in the future to unify the test to see
if an object is visible (or we can use TESTBASE in more places).
That kind of implicit include should really only be done when totally,
absolutely necessary, and ideally only with rather simple 'second-level'
headers.
Otherwise, not being explicit with includes always ends up biting in
unexpected ways...
Notes:
* We really need to address the RNA setters case; we end up adding way
too much G.main here these days... :/
* Added a Main pointer to bAnimContext, it helps a lot in anim code ;)
This way we allow the animation system to make decisions based on which
context the dependency graph is coming from, and whether it belongs to
an active edit window or not.
This is a hacky fix so that animators can use this tool again with autokey enabled
(which they do all the time). The issue here is that the tool writes the new (0)
values to the original data, but insertkey now reads from evaluated data (so that
keying interpolated values works). However, the cleared values do not
get re-evaluated or flushed before insertkey gets to them (via auto
keying), meaning that the wrong values get keyed.
There may be better solutions for this, but for now, this is the simplest fix that
I can get working.
This operator was only partially converted to multi-object editing,
as on one hand, it was using the new "objects in mode" iterator,
while on the other hand, it was also using the context iterator inside
that, so selected bones across all armatures got included.
The depsgraph was always created within a fixed evaluation context. Passing
both risks the depsgraph and evaluation context not matching, and it
complicates the Python API, where we'd have to expose both, which is
not so easy to understand.
This also removes the global evaluation context in main, which assumed there
to be a single active scene and view layer.
Differential Revision: https://developer.blender.org/D3152