Mainly:
* Use 'for' loops instead of 'while' ones (saves many lines and regroups most loop handling on one line).
* Use float[3] pointers where possible (see the sketch below).
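A minimal, hypothetical before/after illustration of the pattern (the function names and data layout are made up, not taken from the actual cleanup):

```c
/* Before: 'while' loop with a manual counter and raw float* arithmetic. */
void offset_points_while(float *coords, int totpoint, const float delta[3])
{
	int i = 0;
	float *co = coords;
	while (i < totpoint) {
		co[0] += delta[0];
		co[1] += delta[1];
		co[2] += delta[2];
		co += 3;
		i++;
	}
}

/* After: 'for' loop regroups the loop handling on one line,
 * and a float (*)[3] pointer indexes whole vertices directly. */
void offset_points_for(float (*coords)[3], int totpoint, const float delta[3])
{
	for (int i = 0; i < totpoint; i++) {
		coords[i][0] += delta[0];
		coords[i][1] += delta[1];
		coords[i][2] += delta[2];
	}
}
```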
Based on investigation by sergey (Sergey Sharybin) and revzin (Grigory Revzin).
Based on patch D460 by revzin (Grigory Revzin).
Differential Revision: https://developer.blender.org/D460
Thing is, those functions always reallocate the whole keyblock's data memory,
while in some cases we already have the right amount of elements, so we can just
copy over. Furthermore, `BKE_key_convert_from_offset`, despite its name,
was not performing any check or allocation on the keyblock's data elements!
So the 'copy' operation itself is split out into `BKE_key_update_from_...`,
where no memory checks/operations are performed (only an assert).
Only useful in sculpt mode currently, but will also be used by the fix for T35170.
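A rough sketch of the convert/update split described above (the struct, element size and helper names are placeholders, not the actual `BKE_key_*` signatures):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder keyblock type, standing in for Blender's KeyBlock. */
typedef struct SketchKeyBlock {
	float (*data)[3];  /* one float[3] element per vertex */
	int totelem;
} SketchKeyBlock;

/* 'update' variant: no allocation, only copies over existing storage.
 * Caller must guarantee the element count already matches. */
static void sketch_key_update_from_vertcos(SketchKeyBlock *kb,
                                           const float (*vertcos)[3], int totvert)
{
	assert(kb->data != NULL && kb->totelem == totvert);
	memcpy(kb->data, vertcos, sizeof(float[3]) * (size_t)totvert);
}

/* 'convert' variant: (re)allocates the keyblock data when needed,
 * then delegates the actual copy to the update function. */
static void sketch_key_convert_from_vertcos(SketchKeyBlock *kb,
                                            const float (*vertcos)[3], int totvert)
{
	if (kb->data == NULL || kb->totelem != totvert) {
		free(kb->data);
		kb->data = malloc(sizeof(float[3]) * (size_t)totvert);
		kb->totelem = totvert;
	}
	sketch_key_update_from_vertcos(kb, vertcos, totvert);
}
```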
Small optimisation (which shouldn't have much of an effect) where we skip
complex handle calculations if the handles/verts for a Bezier curve
segment are all flat.
Patch by Campbell (T40372 -> F91346)
When the active action is an NLA strip, the keyframe indicator colors for buttons
and the 3D view indicator (i.e. the current frame indicator changing color) didn't
work correctly. This was because they were still checking for keyframes in
"global" time space, whereas they needed to be applying NLA corrections to
"look inside" the remapped action.
That's really a bummer, because currently animation data for armatures
might want to use the pose, and the pose might be missing on the object.
This happens when changing visible layers, which leads to situations where
the pose is missing or marked for recalc, the animation will change it and then
the object update will restore the pose.
This could be solved by the new dependency graph, but until then we'll
do an extra pass on the objects to ensure it's all fine.
It's done in scene_update_for_newframe() to solve possible issues with
the render engines as well.
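A hedged sketch of what such an extra pass could look like; `BKE_pose_rebuild()` and the `POSE_RECALC` flag are recalled Blender names and may not match the exact calls used in the fix:

```c
/* Illustrative only: rebuild poses that are missing or flagged for recalc
 * before animation evaluation touches them. */
#include "DNA_object_types.h"
#include "DNA_scene_types.h"
#include "BKE_armature.h"

static void scene_armature_pose_workaround(Scene *scene)
{
	Base *base;
	for (base = scene->base.first; base; base = base->next) {
		Object *ob = base->object;
		if (ob->type == OB_ARMATURE && ob->adt) {
			if (ob->pose == NULL || (ob->pose->flag & POSE_RECALC)) {
				BKE_pose_rebuild(ob, ob->data);
			}
		}
	}
}
```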
This finally solves issues we had with the Caminandes team, where Koro would be
at the scene origin instead of being properly posed.
The layer index was being obtained for loop data types, but we referenced
tessface data types.
NULLing those out, since only the data offsets are used in edit mode and
the address sanitizer complains about freed memory access.
Also a minor comment fix in texture painting.
This only fixes the crash, actually; the real issue is that vertex parenting does not handle
deletion of vertices at all currently... We'd need either some kind of static UUID for vertices,
or some mapping helpers used each time we remove or reorder verts... ugh.
Original patch by Severin (Julian Eisel).
Most of the unused functions were removed. Some of them were put behind #ifdef
because they are referenced from code which was already behind #ifdef.
Reviewers: lukastoenne, campbellbarton
Differential Revision: https://developer.blender.org/D868
Looks like material node trees are stored directly in the material. The
reason I thought this was fixed was that my test file didn't connect
the lamp data node to the rest of the tree.
Thanks to Campbell for catching this :)
In cases where the subsurf modifier is the last one in the stack and there
are only deformation modifiers before it, we can skip doing the full original
vertex lookup.
This is a rather common situation here in animatics.
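A hedged sketch of the stack check this relies on (the struct and helper below are made up; in Blender the modifier-type info is what actually classifies a modifier as deform-only):

```c
#include <stdbool.h>

/* Hypothetical minimal view of a modifier stack entry. */
typedef struct SketchModifier {
	struct SketchModifier *next;
	bool is_subsurf;      /* stands in for a subsurf modifier-type check */
	bool is_deform_only;  /* stands in for a deform-only modifier-type check */
} SketchModifier;

/* True when the last modifier is subsurf and everything before it only
 * deforms vertices, so the original-vertex lookup can be skipped:
 * deform-only modifiers never change vertex count or order. */
static bool sketch_can_skip_orig_vertex_lookup(const SketchModifier *first)
{
	const SketchModifier *md;
	for (md = first; md; md = md->next) {
		if (md->next == NULL) {
			return md->is_subsurf;
		}
		if (!md->is_deform_only) {
			return false;
		}
	}
	return false;
}
```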
Constraint space conversion ignores object scale, which is OK in most cases. But here,
we are converting a normal from world to local space, and when later converting it
into target space to actually do the BVH raycast, we use TransformSpace, which
does apply the object's scaling to normals, as expected.
The best solution here is to also take the object's scale into account when converting
from local to world space.
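A small sketch of a scale-aware normal conversion using Blender's BLI math helpers (whether these are exactly the calls used in the fix is an assumption; the inverse-transpose is the standard way to transform normals under non-uniform scale):

```c
#include "BLI_math.h"

/* Convert a normal from local to world space, taking the object's
 * (possibly non-uniform) scale into account: normals transform with the
 * inverse-transpose of the 3x3 part of the object matrix. */
static void sketch_normal_local_to_world(float r_no[3],
                                         const float no_local[3],
                                         const float obmat[4][4])
{
	float mat3[3][3], imat3[3][3];

	copy_m3_m4(mat3, obmat);     /* drop translation, keep rotation + scale */
	invert_m3_m3(imat3, mat3);
	transpose_m3(imat3);         /* imat3 is now the inverse-transpose */

	mul_v3_m3v3(r_no, imat3, no_local);
	normalize_v3(r_no);
}
```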
This was never ported to the new tracking pipeline, and now it's done using
the FrameAccessor::Transform routines. Quite straightforward, but I've changed
the order of the grayscale conversion on the Blender side relative to the call
of the transform callback.
This way it's much easier to perform rescaling on the Libmv side.
The title actually tells it all: this commit switches Blender to use the new
autotrack API from Libmv.
From the user's point of view it means that a prediction model is now used when
tracking, which gives really nice results.
All the other changes are not really visible to users; those are just frame
accessors, caches and so on for the new API.
This is only an indirect fix, in fact: this commit adds a public API to check
the maximum number of layers of a given type (`CustomData_layertype_layers_max()`),
and uses it to forbid creating too many layers in `CustomData_merge()`.
This only affects UVs/VCol data, though the merge behavior in itself is not actually
a bug; how the user managed to get thousands of different UV layer names remains
rather mysterious...
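A hedged sketch of how such a cap could be applied during a merge; `CustomData_layertype_layers_max()` is the new API from this commit, while the surrounding loop, the `CustomData_number_of_layers()` call and the "-1 means no limit" convention are assumptions about how it would be used:

```c
#include "BKE_customdata.h"

/* Illustrative only: merging layers of 'type' from 'source' into 'dest'
 * while refusing to exceed the per-type layer limit. */
static void sketch_customdata_merge_capped(const CustomData *source,
                                           CustomData *dest, int type)
{
	const int max_layers = CustomData_layertype_layers_max(type);
	int i;

	for (i = 0; i < source->totlayer; i++) {
		if (source->layers[i].type != type) {
			continue;
		}
		/* Skip the copy once the destination already holds the maximum
		 * number of layers of this type (assuming -1 means "no limit"). */
		if (max_layers != -1 &&
		    CustomData_number_of_layers(dest, type) >= max_layers)
		{
			continue;
		}
		/* ... copy/add the layer into 'dest' here ... */
	}
}
```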
Vertex parent was using the original non-modified nurbs list, simply because
it didn't have anything else to operate with.
Now we've got nurbs deformed by the pre-tessellation modifiers in the curve
cache, which can be used by the vertex parent.