Bendy Bones: implement a new curve-aware vertex to segment mapping mode. #110758
Reference: blender/blender#110758
Currently vertices are mapped to B-Bone segments without taking the
rest pose curve into account. This is very simple and fast, but causes
poor deformations in some cases where the rest curvature is significant,
e.g. mouth corners in the new Rigify face rig.
This patch implements a new mapping mode that addresses this problem.
The general idea is to do an orthogonal projection on the curve. However,
since there is no analytical solution for Bezier curves, it uses the
segment approximation:
- a partitioning search to narrow down the mapping to one segment.
- interpolation between the BSP planes.
- a smoothing pass applied to the chosen position by blending, to
  reduce the slope around the planes previously used in the BSP search.
In order to make per-vertex processing faster, a new array with the
necessary vectors converted to the pose space, as well as some
precomputed coefficients, is built.
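As a rough mental model of that per-vertex lookup, here is a hedged Python sketch of the idea (not the actual C implementation — the joint positions, plane normals, and interpolation details here are simplified assumptions):

```python
def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def map_vertex_to_segment(joints, normals, point):
    """Map `point` to a head-tail position on a segmented curve.

    `joints` holds the segment joint positions and `normals` the plane
    normals at each joint, both assumed precomputed in pose space (the
    real code builds such a per-bone array up front). Returns the
    clamped head-tail factor and the chosen segment index.
    """
    lo, hi = 0, len(joints) - 1
    # Partitioning search: each joint's normal plane splits space in two,
    # narrowing the candidate range down to one segment.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if _dot(_sub(point, joints[mid]), normals[mid]) < 0.0:
            hi = mid
        else:
            lo = mid
    # Interpolate between the two bounding BSP planes using their signed
    # distances to get a blend factor within the segment.
    d_lo = _dot(_sub(point, joints[lo]), normals[lo])
    d_hi = -_dot(_sub(point, joints[hi]), normals[hi])
    total = d_lo + d_hi
    t = d_lo / total if total > 0.0 else 0.0
    # Head-tail position along the whole curve, clamped so that points
    # with no projection fall back to the nearest curve end.
    headtail = (lo + t) / (len(joints) - 1)
    return min(max(headtail, 0.0), 1.0), lo
```

The actual patch additionally applies the smoothing pass described above to soften the slope around the planes used in the search.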
The new mode is implemented as a per-bone option in order to ensure
backward compatibility, and also because the new mode may not be
optimal for all cases due to the difference in performance, and
complications like the smoothed but still present mapping
discontinuities around the evolute surface.
The colors above represent the head-tail position along the curve that a vertex would be mapped to, with a darker blue or red fill used for points mapped exactly to the start or end of the curve. In the case of the curved mode, beyond the picture bounds to the left all lines eventually become parallel to the middle (green) line.
This Blender File reproduces the mapping dynamically using a viewport node material (it supports between 2 and 14 segments), and was used to render the first two pictures. The file also contains a script that tests the actual implemented mapping by checking the result of
PoseBone.bbone_segment_index
for a number of empties placed based on the image rendered by the shader model. There are tests for the 4, 8 and 14 segment cases, but the segment count has to be changed manually.
This Blender File contains a simple deformation test case and was used to generate the third image.
@blender-bot package
Package build started. Download here when ready.
I've only skimmed the code so far. But there are some higher-level things I'd like to discuss:
In any case, the functionality definitely looks useful to me! So from my perspective, it's mainly a matter of making sure we make the right set of trade-offs in the solution here.
This discontinuity is fundamental and is essentially caused by the loss of 1:1 continuous, monotonic and unambiguous correspondence between orthogonal projection and nearest point on curve when the original point is further than the radius of curvature. You would observe the exact same jumpy behavior from Shrinkwrap when your points are too far inside the target mesh.
Also, the location of the discontinuities is not arbitrary: they are located on the evolute of the bone curve. It's just that since the bone shape is approximated with segments, the evolute also becomes segmented in a weird way instead of smooth.
Finally, note that this mapping has to be recomputed for every vertex every time the Armature modifier runs. Thus, it cannot be too complicated to compute.
Edit: but, in practice, I wonder if this discontinuity even matters? What would be the practical use case where you would want the bone to actually deform any vertices located in the problem area? In my view of the use case for this, the bone is a part of a loop surrounding some kind of hole in the mesh, so the middle would be empty.
Mapping here has nothing to do with deformation. It essentially corresponds to different Auto Weight algorithms, but for b-bone segments (and in case of segments you don't have an option of manual weight painting unfortunately).
Edit: However, in a way your impressions are actually right here I think - the deformation itself was fixed, but segment weighting was still based on the straightened out bone shape all the way up until now.
Right, I'm aware that it's fundamental with this exact mapping. What I was suggesting is looking for a different mapping that's roughly equivalent near the bone where it matters, but transitions to something smooth where the discontinuity would otherwise be so that vertices in that region are also "well behaved" in some sense.
A trivial example of this kind of "getting rid of discontinuities" approximation is the smooth absolute value function sqrt(x*x + n), when you want something that approximates an absolute value but is continuous. And maybe such an approximation isn't possible in this case (it's certainly a much more involved mapping than the absolute value function!), in which case oh well. But if it is possible, I think that would be preferable.
Having said that, that could also be a future addition. If we have multiple mappings anyway, we can just add another to the list if we find a better one in the future. But I think taking a bit of time to try to figure that out now would still be worthwhile.
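Concretely, that smooth-absolute-value trick looks like this (a toy illustration of the idea only, not code from the patch):

```python
import math

def smooth_abs(x, n=1e-2):
    # sqrt(x*x + n) approximates |x| but has a continuous first
    # derivative at x = 0; as n -> 0 it converges to abs(x).
    return math.sqrt(x * x + n)
```

The kink of |x| at zero is replaced by a rounded valley of width roughly sqrt(n), while the function stays close to |x| everywhere else.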
I could definitely be wrong here, but it looks to me like the discontinuities aren't actually on the exact evolute. As you mention, the b-bone is segmented, and it's that segmentation here that introduces the (possible) arbitrariness I was referring to. In deviating from the analytic evolute, you have to make some choices, I think?
But I don't think that's a meaningful issue or anything. The main point was what I mentioned above, about looking for a different mapping that wouldn't have the discontinuities at all (if possible).
How do you imagine that kind of thing applying here at all? What you need to do here is map the vertex to a point on the curve (basically Head Tail slider on a Copy Transforms constraint), because each vertex is weighted to exactly two adjacent segment joints with some weight split between them.
It's not really arbitrary, it is caused by the first step of narrowing down the mapping to just one segment (pair of adjacent joints). Like I said, this mapping has to be computed from scratch for every vertex every time it is deformed, and thus has to be as fast as possible. Thus, for performance reasons this first step is done by binary partitioning of space using normal planes at joints. This means:
Do you have any reply re whether you can think of any practical mesh, rig and weight paint where this discontinuity would actually matter, i.e. you would have vertices weighted to that b-bone in that area?
I wasn't suggesting using the smooth absolute value function—that obviously doesn't apply. It was an example of taking a mapping with a discontinuity (in that case a first derivative discontinuity) and creating an approximation without one. What I was suggesting is seeing if we can figure out a continuous approximation of the current mapping in the PR.
Again, maybe that isn't possible or feasible. And I'm certainly not suggesting blocking this PR on it (apologies if I gave that impression). But I do think it would be preferable if possible, and is worth at least a cursory exploration before landing this.
The issue with discontinuities is that they tend to make things finicky, and that's usually (though not always) undesirable. The shrinkwrap modifier is a good example of this, where its functionality is good, but it's extremely finicky to use in non-trivial cases. And sometimes that's just unavoidable, which is fine. Better to have useful-but-finicky functionality than no functionality at all.
But as a rule of thumb I think it's good to avoid discontinuities in these kinds of things when feasible. So I'm just suggesting some exploration into that here.
I mean, what would even be the logic, and how would it look? Maybe you can draw on that picture to show it.
The logic of this mapping is that it does an orthogonal projection on the curve, with some approximation because of the segment representation to simplify and speed up the math. This discontinuity happens because there are points where there is no solution since the curve has ends, so they are mapped to the nearest end of the curve as a fallback, and you get locations where one point has a solution in the middle of the curve, while an adjacent point has no solution.
Well, at least one point where the problem is smaller here compared to shrinkwrap is that this mapping normally isn't animated. You can imagine this discontinuity as a cut in a rubber sheet deformed by the bone; I think it would be pretty obvious if it happens in a rig, and you can remove weight from this bone to avoid it.
Also, as seen on the image the discontinuity always starts from the boundary planes of the curve ends, and I'm pretty sure that it is guaranteed to be always outside the volume limited by the planes of both ends (so e.g. for a bone in a mouth corner never assign inner bottom lip vertices to the top lip bone and vice versa, and you should be safe).
I also made a blend file that replicates the lines of the above picture dynamically using geometry nodes (but without the color fill), so you can use it to play with the boundary for different bone shapes.
My initial thought would be something like the lines (well, planes really) curving to become parallel before they intersect. Realistically, I think that would be difficult to implement efficiently, so I'm not necessarily suggesting that. But that's an example of the kind of thing I'm thinking about.
I think for a more feasible mapping, it would probably need to be something that takes good advantage of the BSP. We know the length of each b-bone segment, and in general a BSP can tell you what side of the dividing plane a point is on and its minimum distance from that plane. And with that kind of information available, a signed-distance-field-esque approach probably has the most potential. So that's along the lines of what I'm thinking, basically.
Isn't it implicitly animated with shape keys? That comes up a lot with e.g. corrective shape keys, face rigs, etc.
The problem with that kind of thing is how wide to make it without extra options so that it is useful - too narrow a transition would still be a jump, while a too wide area would cause muddled deformation. Also, a point can be close to multiple planes and a careless approach can cause multiple small fractured discontinuities instead of a single easily understandable one...
True, I forgot about that. However, at least shape keys and the extent of their application are normally controlled by the rigger, so they can test that everything works well and an animator can't inadvertently mess it up 😄
And, in the way I imagine this being used (primarily lips, i.e. the bone is at most a quarter of a loop around a hole, and more likely 1/8th), the discontinuity would happen in an area where no vertices assigned to the bone would reasonably occur. I think something extending across the hole from one lip to another (like fangs, tusks or something) would likely warrant having their own deform bones anyway for control or rigidity.
Sure, but that's the kind of thing I'm suggesting exploring. If you just don't want to do that exploration, that's totally fair. But I think it would be worth doing if you're up to it. And maybe nothing useful comes out of it, but also maybe something does.
Yeah, that's true. But also, maybe there's a rig where I want the benefits of the better b-bone deformation and also want to allow the animators to push the shape keys to extremes sometimes. I mean, it's totally okay to basically say, "Hey, you can do one or the other, but not both." Sometimes that's just a reality of engineering features. But if feasible, it would be nice to not force users into that choice.
Here's an example that demonstrates artifacts from the discontinuities in the mapping.
Blend file:
bbone_deform_artifacts.blend
Movie (notice the mouth corners):
This is based on the Rain asset, with that asset's actual cheek puff shape keys. However, the armature deformation setup is tossed together by me with auto-weighting, because the original rig was built in a way that wouldn't really be affected by the new deform mode (the b-bones were disconnected around the lip corners, and were generally fairly straight).
Nevertheless, I think this is reasonably representative, because it demonstrates that real production shape keys can and will move vertices enough to hit the discontinuity. That could perhaps be mitigated with more careful weight painting and bone placement... but then the rigger may also have to make choices between weights and bone placement that give the best deformations vs those which avoid the discontinuities of the mapping when shapes get applied.
Again, I'm not suggesting that a smooth mapping is necessarily achievable. But I also don't think we've explored it enough to reasonably rule it out either. And this test makes me more convinced that it really would be a good idea to do that exploration before landing this.
After much experimenting using a shader model, I came up with a smoothing scheme without glitches I think.
I wonder if the coefficients in the yellow and blue nodes are the best, and whether the purple one should use Smooth Step or not.
857dd786d2 to 241b7ba307
@blender-bot package
Package build started. Download here when ready.
So, like I commented above and in the chat, I implemented a smoothed mapping (with a blend file containing a material-based model for math testing and experiments, and making pictures). However, it does not solve the problem in your test case - although vertices don't jump any more, they still move unnaturally. I think this is unavoidable through smoothing, because of how big the magnitude of the shape change actually is compared to the size of the bone.
I think this test is an instance where the test itself needs fixing. The shape key is actually pulling the mesh away from the bone structure in the mouth corner - I'd say it's not what it should be doing, especially in such a delicate area. Also, the mouth corner itself isn't placed in the best way: it should probably be a bit higher to match the edge loop, and more forward (outward) to reduce the curve the corner bones take if you look from above. Some rest pose ease values also could use reducing.
I found that some tweaks to the position of the mouth corner bone joint and ease values seemed to mostly fix the problem.
This is awesome! Thanks for putting in the time and effort.
I'm running into a weird bug with the latest build, though. When I open the file, some polygons around the mouth are missing. But it's inconsistent. Every time I open the file (no saving, literally just opening the same file) different faces are missing:
Sometimes they're missing from the corners of the mouth, sometimes from the lips, and sometimes no faces are missing and it's fine.
That aside, this is looking really good.
I agree that the vertices still move weird. But that's also the case (although to a slightly lesser extent) even with the straight mapping. I think that's just fundamental to any technique where weights are derived from vertex position.
The benefit of the smooth mapping (or at least the benefit I was looking for) isn't that it prevents all weirdness, it's that it prevents popping artifacts, which are immediately noticeable and disturbing even when fairly minor/subtle. A bit of weird movement can be smoothed over without too much effort, whereas resolving popping artifacts is a huge pain because they're quite noticeable even when made smaller.
For example:
Both of these have a subtle corrective smooth modifier applied to mitigate the artifacts. The left is with the new smooth mapping, the right is with the old discontinuous mapping. Unless you're paying close attention to the corners of the mouth, the left one looks fine. But the right one is still visually disturbing even when just casually noticing the mouth corners.
So the smooth mapping changes the problem from something that's really painstaking to address into something that's quite reasonable to address.
This test certainly isn't great rigging work, ha ha! I tossed it together quickly for illustrative purposes. However, I also didn't place the bones to intentionally trigger the artifact. I did try to place them as I actually would for a real rig, just very hastily done.
And in the general case, bone placement won't necessarily help you here, because moving a bone closer to some vertices moves it further away from others. So you're always playing a kind of balancing game. Additionally, bone placement affects deformations in more ways than just how the vertices get weighted to the b-bone segments, and the best deformations in those other respects may be better with the bones placed more center-mass rather than close to the mesh surface.
In any case, I'm pretty happy with the functionality now. Although it should probably get a bit more thorough testing. @Mets when you're back, can you take a crack at doing some rigging experiments with this, and see how it goes?
And I'll give this a proper code review soon as well.
Again, this is great work @angavrilov. Thanks a bunch!
Rather than being closer to the mesh surface, the tweaks I tried focus more on moving the corner vertically to match the position of the corner edge loop, and laterally to change the angle of the plane the corner bones lie in so that it points more forward than sideways (basically unbending the shape of the mouth corner bones a bit when viewed from above). This reduces the effect where moving the vertices forward in the global sense makes their projection on the mouth surface defined by the bone appear to move 'inward' from the corner into the middle of the mouth. After all, the discontinuity happens for vertices on the opposite side vertically when you move inward.
241b7ba307 to 2b4bde578f
@blender-bot package
Package build started. Download here when ready.
2b4bde578f to 5327a7d5d8
@blender-bot package
I tweaked the math to reduce the dependency of the width of the smoothing band on the number of segments by scaling the radius appropriately.
Package build started. Download here when ready.
5327a7d5d8 to 479d0b8fd0
@blender-bot package
Found a way to make the mapping isolines bordering the pure blue and red areas even smoother:
Package build started. Download here when ready.
@Mets @nathanvegdahl In the interest of final polishing, I would like some opinions on these questions regarding configuring the smoothing math, using the attached file containing the material based model of the mapping. The bone is keyframed in the file to allow viewing how the mapping responds to changes in the bone shape by just stepping through the timeline (note that this represents different rest shapes of the bbone in the actual implementation).
Which falloff smoothing method in the purple box should be used?
This is a tradeoff of removing kinks in the mapping isolines (caused by first derivative discontinuities) at the cost of introducing more easing into the smoothed transition where it touches areas mapped completely to one of the curve ends. The non-smoothed approach has very visible angles located where the lines cross the smoothing band border parallel to the middle green line, but the smoothed transition is relatively uniform. Half Smoothstep smooths out those kinks, but there is significant non-uniformity in the gradient. Piecewise Smoothstep is a compromise that splices a section of a parabola before the threshold and a straight line after it in a way that ensures a continuous first derivative; it basically smooths out the kinks from a sharp angle into a curved section whose size depends on how big the threshold is (zero threshold means no smoothing), and only adds easing to the outermost band.
Note that this does nothing to remove kinks caused by the original mapping rather than smoothing - they can be seen where the lines cross planes perpendicular to the curve at its ends.
This smoothing makes the picture nicer (I plan to use it in the docs as well), but I have a feeling it may not be really important for the actual deformation.
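For concreteness, the Piecewise Smoothstep variant described above can be sketched like this (my own hedged reconstruction from the description; the node setup may use different constants):

```python
def piecewise_smoothstep(x, threshold):
    """Smooth the kink of max(x, 0) at zero by splicing a parabolic
    section on [0, threshold] onto the straight line beyond it, keeping
    the first derivative continuous. A zero threshold reduces this to
    plain max(x, 0), i.e. no smoothing."""
    if x <= 0.0:
        return 0.0
    if x < threshold:
        return x * x / (2.0 * threshold)  # parabola: slope 0 at x = 0
    return x - threshold / 2.0  # slope-1 line, joined C1 at x = threshold
```

At x = threshold both pieces evaluate to threshold/2 with slope 1, so the value and the first derivative are continuous while only the region near zero gets extra easing.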
Should the smoothing radius depend on the length of the curve, or the uncurved bone (should the Multiply in the green box be unmuted)?
Basically, should the width of the smoothed transition between the ends of the curve where the lines converge far to the left of the image depend on the length of the curve itself, or on the distance between the head and tail of the bone. Depending on the length of the curve like now seems more natural, as the spacing between lines where they cross the curve obviously depends on that, but maybe the other way makes more sense? Note especially the difference in behavior in frames 35-40.
What should be the final smoothing radius specified in the blue node?
This is the actual value, the interpretation of which depends on the previous two points. Increasing it makes the transition smoother, but can cause the actual end points of the bone to be mapped further into the curve when the end curvature is high. Also, higher smoothing reduces the apparent width because of increased easing and requires increasing this value to maintain the same spacing in the middle of the transition.
Is the value in the yellow node the best?
This basically controls how strongly lines are pushed away from the center, compared to the smoothing that mostly affects the outer boundary.
Thanks for putting together this awesome blend file! Really helps.
Max Width vs Curve Length value node? 0.25 as you currently have it set seems fine to me. Tweaking it higher or lower gives results that might be better in specific cases, but 0.25 seems like a decent all-arounder. If that's not good enough, then adjusting it automatically based on maximum curvature in the b-bone might be worth exploring.
Scale asymptote node? It basically seems like the larger the value, the better, with rapidly diminishing returns. So maybe 8.0? But the value you have set (3.0) also seems fine.
(As a side note: please use unique labels to specify what nodes you're talking about. Also using colors is great, to help people scan for them quickly. But without a label I'm a bit unsure "Did I really find the specific node he was talking about?" And on top of that, I'm partially color blind.)
Also, I notice that in the "After" image, it looks like there's a now a slight discontinuity on the outside edges, which isn't in the "Before". Is that just an artifact in the visualization that's not actually in the mapping?
I just polished up further a bit what I've been using to design the smoothing math in the first place. It would have been impossible to come up with a good solution without a quick and easy way to test it. 😄
Of course smoothing has performance impact, but it's not much compared to the arctangent used by the slope based factor.
There is a demonstrable quality difference in the pictures, but probably not in the deformations. It's probably better to think about whether a more linear weight interpolation may be preferable, or a heavily eased one.
Well, basically the idea is: "vs Bone Length" unmuted is to keep the width of the smoothed band consistent relative to the length of the bone, but the ratio of compression of the bands may change depending on how curved the bone is (since at the point where they cross the curve their width depends on the length of the curve). The muted way keeps the ratio of compression of the bands consistent, but the actual width will depend on the curvature.
Muted "vs Bone Length" already adjusts it based on curvature effectively (by not deliberately adjusting it back to compensate for it), so if you think vs Bone Length is better because of consistency, then keeping this fixed makes more sense.
That constant is basically used as in arctan(3.0 * arg), so increasing it naturally gives diminishing returns. I think I probably chose 3 as a nice round value about where the diminishing returns start.
Do you mean the darker blue and red areas? I just added separate shading for exact 0 or 1 values to show where the boundary lies, since drawing lines doesn't work well for those boundaries (note how ugly they are in this file, where they are enabled for 'debug'). The Before picture is equivalent to disabling Smooth Falloff and setting "Max Width" to 0.2 instead of 0.25 (to roughly keep similar line density in the middle of the transition).
Did you check other segment counts? You can set the bone to any even value between 2 and 14 and it should work, although I found a bug in the "vs Bone Length" bit so download the attachment to this comment to fix segment counts above 8.
Yeah, that's a good point. Having said that, I was playing around with the nodes like a dumb idiot (no real logic or understanding behind it), and I stumbled upon this, which removes the arctangent entirely:
It just replaces the whole arctangent construction with a clamped multiplier by a large value. And it appears to give essentially identical results to using the arctangent with a large value in Scale asymptote.
Got it. Awesome.
The way in which unmuted feels more consistent to me is that it does a better job of ensuring the curved mapping within a consistent range around the bone, before starting to transition to the straightened mapping. This is especially obvious comparing between the quarter-circle and half-circle examples. And I think that's probably what we want for this, since the whole point is the curved mapping, so doing our best to "guarantee it" (on a best-effort basis) within a certain range of the bone seems like it makes sense.
That's also why I was wondering about adaptively changing that value, because it still doesn't do a great job of that in the ease-up example. Which makes me wonder if there's some other criteria that can give us even more consistent results in that sense.
I did, yes. It seems pretty consistent/reasonable across the different segment counts, so I'm pretty happy with that.
Ah, interesting. Now that you mention it, setting it to 0.2 does feel a little more consistent to me across the examples. So maybe that's actually the value we should go with. (Unless you want to take some time to explore different criteria, as mentioned earlier.)
Didn't read everything, but I could swear I've tested this in the past. Anyway, Nathan poked me and I tested again.
Based on all the images and the cool material set-up, I'm just gonna trust that you guys have figured out the best algorithm.
I guess it would be interesting to point out that, yes, Rain's lips are disconnected at the corners at least partially because of the issues that this patch is trying to solve; The way bendy bones behaved on mouth corners was just super wonky. At the time, I didn't even realize why this was happening.
I think the results in the demo file speak for themselves, it's a great improvement.
I have only one not-so-small suggestion, but I hate to drag out this patch even more for you, so do with it what you wish:
What about an "Automatic" setting, which will pick between "Straight" and "Curved" depending on how curved the bendy bone is in the rest pose? Just hardcode whatever threshold you find best. Maybe it's a silly idea though.
That's all I got, great patch!
Actually, the point of this part is to provide a smooth transition between no smoothing when the lines are spaced widely apart outside the curve, and narrowly spaced lines that require smoothing on the inside. A huge multiplier or removing the arctangent remove this smoothing component and cause kinks as the lines cross the curve, and some wobble; so I don't think it's a good idea.
However, what works is to replace arctan(y/x) with just y/(x+y) - it provides the same function profile at a fraction of the cost. Since this is all heuristic the exact values don't matter much; the important part is the overall shape of the function curve, and these two functions are about the same.
I see, makes sense.
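As a quick sanity check of that substitution (a standalone sketch, not the patch code): both profiles start at 0, pass through 0.5 at y = x, and saturate toward 1, which is why swapping one for the other barely changes the heuristic.

```python
import math

def slope_factor_atan(y, x):
    # Arctangent-based profile, normalized from [0, pi/2) to [0, 1).
    return math.atan2(y, x) / (math.pi / 2.0)

def slope_factor_rational(y, x):
    # Cheaper rational substitute with the same endpoints and monotone
    # saturating shape, avoiding the transcendental call entirely.
    return y / (x + y)
```

Both are monotone in y for fixed positive x, so only the intermediate easing differs slightly between the two.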
I tried to factor in the distance from the bsp plane to the nearest curve end ("Pinching attempt" box in the attached file), but that didn't produce good results. I basically clamp the half-width of the smoothing band for each plane to not be greater than 0.75 of the distance from the plane to the nearest end. Maybe there is a better way to apply this info, but in this primitive attempt the effect is either too strong, or not significant enough to warrant the complexity; it also diminishes with more or fewer bone segments than the default 8...
So unless you have some better ideas how to factor it in, it's probably better to just select the best overall Max Width value (0.2, 0.25 or whatever is best) and be done with it.
Changes in the file: max width to 0.2 (since you liked it better), smooth falloff to piecewise 0.2 (reduce easing but not completely), replaced arctangent, added muted pinching experiment, unmuted magenta bsp lines in the shader on the right.
479d0b8fd0
toffd0468044
@blender-bot package
Updated the implementation: replaced atan2(y, x) with y/(x + y).
Package build started. Download here when ready.
ffd0468044 to 2533389399
I rebased the branch on main.
Also, here are some per-vertex per-bone performance calculations, based on counting purely floating point operations and assuming division is about 3 times slower than addition or multiplication. The values are compared to the cost of transforming a vertex by one ordinary bone without preserve volume:
Additional final averaging cost: 3.91 (within the area that requires smoothing)
2533389399 to 5cfb6ab797
This isn't a full review yet, but at least a start so we can get things moving again. I'll continue next week.
@ -1526,0 +1555,4 @@
* Normally this should not be a problem, but just in case ensure that:
* - All splits between the head and the primary point away from the head.
* - All splits between the tail and the primary point towards the tail.
* - The primary split plane conforms to both requirements.
It's definitely possible to make these conditions not hold with a b-bone. And this function just returns false in that case, which isn't checked or used in the calling code.
I tried disabling the last_start < first_end if clause (lines 1578-1580), and that results in a weird mapping when the conditions don't hold, but nothing really bad happens. And the straight mapping method also results in a similarly weird mapping in these cases, so that just seems par for the course.
So I think we can just remove these checks entirely, and just accept that if people make weird extreme b-bones, they're going to end up with weird mappings.
The return value is not used, but failing the conditions still disables the curved mapping through side effects.
@ -1643,0 +1776,4 @@
int stack_idx[MAX_BBONE_SUBDIV];
int start, end, bias, stack_ptr = 0;
/* Do the first split at the specified joint index. */
If this can all just be rolled into the main binary traversal further below, I think that would be cleaner.
Well, bias cannot be rolled into it cleanly, and it ensures that the ends get the maximum number of bsp splits for symmetry and smoothness.
Biasing toward the previous rather than the first split also seems ok, so fixed.
@ -183,0 +185,4 @@
float point[3];
float tangent[3];
/* Dot product of point and tangent to speed up distance computation. */
float plane_bias;
Instead of tangent and plane_bias, maybe plane_normal and plane_offset? That feels more clear to me given how they're used.
@ -183,0 +188,4 @@
float plane_bias;
/* Inverse width of the smoothing at this level in head-tail space.
* Hack: this value is actually indexed by bsp depth (0 to bsp_depth-1), not joint index.
Nit: instead of "Hack" use "Optimization".
@ -192,0 +205,4 @@
/* Index of the primary split plane joint, or -1 if linear mode. */
short bbone_primary_split;
/* Maximum number of levels in the joint BSP tree. */
short bbone_bsp_depth;
Both bbone_primary_split and bbone_bsp_depth appear unnecessary to store precomputed here, based on looking at the rest of the code. bbone_primary_split can trivially be computed on the fly at the start of the binary search (and equivalent code will be run anyway at each depth for further splits). And bbone_bsp_depth only appears to actually be used within the same function that computes it, and is otherwise only used for an assert that I don't think really guards against anything meaningful.
bbone_primary_split can be adjusted by the above condition checks to ensure the conditions hold, even if somehow they don't hold for segments / 2.
@ -192,0 +207,4 @@
/* Maximum number of levels in the joint BSP tree. */
short bbone_bsp_depth;
/* Inverse of the total length of the segment polyline. */
float bbone_segment_scale;
I think we can just call this bbone_total_length_inverse.
5cfb6ab797 to d552481667
Overall this is looking good to me. Just a handful of additional changes.
@ -1643,0 +1735,4 @@
/* Perform a BSP binary search to narrow things down to one segment.
* Checked BSP planes are stored for the smoothing pass later. */
float boundary_dist[MAX_BBONE_SUBDIV + 1];
int stack_idx[MAX_BBONE_SUBDIV];
Minor readability improvement: stack_idx -> boundary_idx_stack. stack_idx reads to me like it's a single index into a stack (like stack_ptr), rather than a stack of indices into another array. And including boundary in the name makes it clearer what the indices in the stack are indexing into.
Another possibility might be joint_idx_stack, since that's (I think?) equivalent here. Although that makes the relationship to the boundary_dist array a little less clear.
@ -1643,0 +1776,4 @@
const float d1 = fabsf(boundary_dist[start]);
const float d2 = fabsf(boundary_dist[end]);
head_tail = segment_size * (start * d2 + end * d1) / (d1 + d2);
I'm having a little trouble following the math on this. If I understand correctly, the goal here is to compute a [0, 1] parameter that represents the [head, tail] position across the whole length of the bone. If that's the case, this code is easier for me to follow:
Maybe that's just me, but it also takes one less multiply.
I think I was thinking in terms of a 'weighted sum' here, hence the somewhat weird math expression.
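An illustrative equivalence check (my own sketch with assumed sample values; the reviewer's suggested snippet is not reproduced here): the weighted-sum form is algebraically the same as interpolating from start toward end by the fraction d1 / (d1 + d2), and the lerp form saves one multiply.

```cpp
#include <cassert>
#include <cmath>

/* Weighted-sum form from the patch: boundary indices weighted by the
 * distances to the opposite boundary. */
inline float head_tail_weighted(float segment_size, int start, int end, float d1, float d2)
{
  return segment_size * (start * d2 + end * d1) / (d1 + d2);
}

/* Lerp form: start plus the fraction of the way toward end.
 * One fewer multiply, and a clearer geometric reading. */
inline float head_tail_lerp(float segment_size, int start, int end, float d1, float d2)
{
  return segment_size * (start + d1 / (d1 + d2) * (end - start));
}
```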
@ -1643,0 +1779,4 @@
head_tail = segment_size * (start * d2 + end * d1) / (d1 + d2);
}
/* Smooth the mapping to suppress discontinuities by using BSP boundaries up the stack.
We should include a reference to this PR in this comment, since the rationale behind this as well as the blend file where you worked out the formulas for this are both there. And that gives people a lot more to work from if this ever needs to be revisited in the future.
@ -183,0 +184,4 @@
/* Boundary data in pose space. */
float point[3];
float plane_normal[3];
/* Dot product of point and tangent to speed up distance computation. */
Now that the member names have been changed, also change this comment to match: "Dot product of point and plane_normal..."
@ -1088,0 +1090,4 @@
"STRAIGHT",
0,
"Straight",
"Fast mapping method that ignores the rest pose curvature of the B-Bone"},
Maybe:
And then for the curved mapping:
Further details can be provided in the documentation.
d552481667 to 1257a9913c
Here is the latest blend file for completeness. It reflects the change in managing the bsp division bias value.
This looks good to me, but I'd like to get one more pair of eyes on it before merging.
@dr.sybren could you also give this a review?
This references "the planes", but so far there is no mention of those. What are those planes?
This references "the sharp discontinuity", which also hasn't been mentioned before. Maybe change that to "any sharp discontinuity" (to make it implicit that these in fact can occur) or explicitly describe that these can occur (and how/why).
What is this optimal representation?
👍
"boundaries between segments are used for a binary space partitioning" = planes
Since binary space partitioning is normally done using planes.
They always occur though, it was a major point in the history of the patch:
Basically converting everything from LOCAL to POSE space.
The code works, and I think it's really nice to have in 4.0!
Because things already work well, I mostly focused on clarity of the code & comments.
The word "inverse" is used quite a few times in the code. It seems to be used as 1/x. Mathematically "inverse" is typically used for operations, so this would be "the inverse under multiplication". Probably better to use the word "reciprocal", as that's actually defined as 1/x.
Since this is such a novel approach (at least for Blender), and the images in this PR really help to clarify things, how would you feel about adding a new page on https://wiki.blender.org/wiki/Source/Animation and documenting the implementation there? That might be better than referring to the PR itself.
@ -1507,3 +1508,3 @@
}
static void allocate_bbone_cache(bPoseChannel *pchan, int segments)
static void allocate_bbone_cache(bPoseChannel *pchan, int segments, bool joints)
const
Document what joints actually means. Especially since segments actually means "number of segments" (and num_segments thus would be a better name), it gets confusing when joints doesn't mean "number of joints".
The value of joints is either determined by pchan->bone->bbone_mapping_mode == BBONE_MAPPING_CURVED or pchan->runtime->bbone_use_bsp_mapping. So what's the ground truth here? And why isn't that property used everywhere, given that each call has access to the pchan and thus also its runtime data?
There's another confusion in the naming. The bone property is called ..._segment_joints, but the property that determines whether it's allocated is called ..._bsp_mapping. I'm guessing that the "planes" in the "segment joints" form a space partition, but that's guesswork. It's better if related things are actually named such that that relation is clear. Or at the bare minimum, have either side's documentation mention the other.
It is pretty simple: when computing the data, it is using the original mapping mode. However when copying it, it copies what is there. The same copy code also uses runtime.bbone_segments instead of bone->segments.
@ -1524,2 +1525,4 @@
1 + uint(segments), sizeof(DualQuat), "bPoseChannel_Runtime::bbone_dual_quats"));
}
if (joints && !runtime->bbone_segment_joints) {
Shouldn't this be reallocated when the number of bbone segments changes?
@ -1526,0 +1533,4 @@
}
}
static bool compute_bbone_joint_planes(bPoseChannel *pchan)
Document what the returned bool means.
This actually computes the bbone_segment_joints. Sometimes they're called "segment joints", sometimes "joint planes", and sometimes "bsp_mapping".
@ -1526,0 +1555,4 @@
joints[i].plane_offset = dot_v3v3(joints[i].point, joints[i].plane_normal);
}
/* Precompute the (inverted) length of the curve. */
Remove the parentheses, as the word "inverted" is not something that's optional here. It'll always do that.
@ -1526,0 +1568,4 @@
* The actual space partitioning includes two extra virtual segments for the ends. */
const int bsp_depth = int(ceilf(log2f(bone->segments + 2)));
/* Maximum half-width of the smoothing band at the primary split, in segments. */
"the" primary split -- I haven't seen anything about "primary splits" or "secondary splits" or similar terms, so talking about the primary split gets confusing. What does it mean? What is it referring to?
@ -1526,0 +1569,4 @@
const int bsp_depth = int(ceilf(log2f(bone->segments + 2)));
/* Maximum half-width of the smoothing band at the primary split, in segments. */
const float bone_length = len_v3v3(joints[0].point, joints[bone->segments].point);
If the points are on a curve, this is not the arc length of the bone, but the distance between its start & end point. Naming this something like straight_length and the length_sum to arc_length would make things clearer.
@ -1526,0 +1570,4 @@
/* Maximum half-width of the smoothing band at the primary split, in segments. */
const float bone_length = len_v3v3(joints[0].point, joints[bone->segments].point);
const float max_depth_scale = bone->segments * (bone_length / length_sum) * 0.222f;
What's the magic 0.222f?
@ -1526,0 +1572,4 @@
const float bone_length = len_v3v3(joints[0].point, joints[bone->segments].point);
const float max_depth_scale = bone->segments * (bone_length / length_sum) * 0.222f;
/* Per-layer scaling factor, aiming to reduce width to 1 at innermost layer. */
What are layers here? And why is a width of 1 desirable?
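My reading of this hunk, made concrete in a standalone sketch (assumed inputs: the default 8 segments and a straight rest curve; the helper names are mine, not patch code): "layers" are BSP depths, and evaluating the depth and scale-factor formulas shows the factor is always >= 1 because of the clamp, around 1.21 for these inputs.

```cpp
#include <cassert>
#include <cmath>

/* Number of BSP levels: enough splits to isolate segments + 2 regions
 * (the two extras are the virtual end segments). */
inline int bsp_depth_for(int segments)
{
  return int(std::ceil(std::log2(float(segments + 2))));
}

/* Per-layer width scale: chosen so that dividing the maximum half-width
 * by this factor once per layer reaches 1 at the innermost layer. */
inline float scale_factor_for(int segments, float straight_length, float arc_length)
{
  const float max_depth_scale = segments * (straight_length / arc_length) * 0.222f;
  /* The clamp to 1 guarantees the factor is never below 1. */
  return std::pow(std::fmax(max_depth_scale, 1.0f), 1.0f / (bsp_depth_for(segments) - 1));
}
```

So for the question below about the typical range: the loop shrinks the width for every layer deeper into the tree, by a factor that is always >= 1 and never flips sign.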
@ -1526,0 +1573,4 @@
const float max_depth_scale = bone->segments * (bone_length / length_sum) * 0.222f;
/* Per-layer scaling factor, aiming to reduce width to 1 at innermost layer. */
const float scale_factor = powf(max_ff(max_depth_scale, 1.0f), 1.0f / (bsp_depth - 1));
What is the typical range of this scale factor? Is it in [0, 1] (i.e. the loop below diminishes the scale for every subsequent joint), ≥ 1 (i.e. the loop increases the scale for every subsequent joint), or < 0 (toggles back & forth between positive & negative)?
@ -1547,2 +1608,4 @@
BKE_pchan_bbone_spline_setup(pchan, true, true, b_bone_rest);
/* Compute joint planes. */
runtime->bbone_use_bsp_mapping = use_joints && compute_bbone_joint_planes(pchan);
If the BSP mapping isn't going to get used, what's the use in allocating the "segment joints"? Shouldn't they either be allocated and used, or not allocated at all? Or if that's tricky, deallocated when it turns out they can't be used?
I still don't think runtime->bbone_use_bsp_mapping (now runtime->bbone_use_curved_mapping) is necessary (this is related to my other question about the allocation). If the mapping isn't going to be used, then the data shouldn't be kept allocated and the pointer can be nullptr. This means that you can simply use runtime->bbone_segment_boundaries != nullptr instead of having a separate boolean that needs to be kept in sync.
@ -1599,0 +1665,4 @@
runtime->bbone_use_bsp_mapping = use_bsp_mapping;
if (use_bsp_mapping) {
If !use_bsp_mapping, shouldn't this reset some of the properties that are set below?
@ -1629,2 +1703,2 @@
int *r_index,
float *r_blend_next)
/** Implementation of the Straight B-Bone segment mapping. */
static void find_bbone_segment_index_simple(const bPoseChannel *pchan,
Instead of calling this function "simple", just call it "straight", because that's what it implements.
@ -1640,6 +1717,138 @@ void BKE_pchan_bbone_deform_segment_index(const bPoseChannel *pchan,
pchan, y / pchan->bone->length, r_index, r_blend_next);
}
/** Computes signed distance to the plane perpendicular to the joint tangent. */
What is the joint tangent? Isn't this just the segment itself, expressed as a vector in some space or other?
This question hasn't been answered yet.
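For what it's worth, my reading of the geometry, as a simplified standalone sketch (field names mirror the struct under review, but the sketch itself is not patch code): the "joint tangent" is the curve direction at the boundary point, used as the normal of the splitting plane, and the precomputed offset reduces the per-vertex test to one dot product and a subtraction.

```cpp
#include <cassert>

/* Simplified stand-in for the per-boundary data in the patch. */
struct Boundary {
  float point[3];        /* boundary point on the rest curve, pose space */
  float plane_normal[3]; /* unit curve tangent at that point */
  float plane_offset;    /* dot(point, plane_normal), precomputed */
};

inline float dot3(const float a[3], const float b[3])
{
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* Signed distance of co to the plane through `point` with normal
 * `plane_normal`: positive on the tail side, negative on the head side. */
inline float signed_distance(const Boundary &b, const float co[3])
{
  return dot3(co, b.plane_normal) - b.plane_offset;
}
```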
@ -1641,2 +1718,4 @@
}
/** Computes signed distance to the plane perpendicular to the joint tangent. */
inline float bbone_segment_bsp_side(const bPoseChannel_BBoneSegmentJoint &joint, const float *co)
Name the function after what it does, so something like bbone_segment_bsp_signed_distance.
@ -1643,0 +1724,4 @@
}
/** Implementation of the Curved B-Bone segment mapping. */
static void find_bbone_segment_index_bsp(const bPoseChannel *pchan,
Since this implements the "curved" mode, just name it "curved" instead of "bsp".
@ -1643,0 +1735,4 @@
/* Perform a BSP binary search to narrow things down to one segment.
* Checked BSP planes are stored for the smoothing pass later. */
float boundary_dist[MAX_BBONE_SUBDIV + 1];
int boundary_idx_stack[MAX_BBONE_SUBDIV];
What is this stack?
@ -1643,0 +1794,4 @@
/* Boundary in the head-tail space. */
const float boundary_pos = boundary_idx * segment_size;
/* Distance of the original 3d point from the boundary plane,
Is "the original 3d point" the same as co?
@ -1643,0 +1824,4 @@
/* Smooth the distance based coefficient around zero. */
const float dist_coeff_smooth = dist_coeff * dist_coeff * (7.0f + 1.0f) /
(7.0f * dist_coeff + 1.0f);
Where do these constants come from (including the 3.0f above)? It's fine if they were just experimentally found. Just add some documentation that explains.
The refactor to using named constants is good, but it doesn't answer the question of where these values came from.
It comes from what looks reasonably good in the pictures.
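To help document what the experimentally chosen curve actually does, a standalone sketch (the constants are copied from the hunk; the function wrapper is mine, not patch code): the rational factor leaves the coefficient unchanged at 1 but flattens it quadratically near 0, which is what smooths the distance-based coefficient around zero.

```cpp
#include <cassert>
#include <cmath>

/* Smoothing of the distance-based coefficient around zero, as in the hunk:
 * f(t) = t^2 * (k + 1) / (k * t + 1) with the experimentally chosen k = 7.
 * f(0) = 0, f(1) = 1, and near zero f(t) ~ (k + 1) * t^2, i.e. zero slope. */
inline float smooth_around_zero(float t)
{
  const float k = 7.0f;
  return t * t * (k + 1.0f) / (k * t + 1.0f);
}
```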
@ -180,6 +180,19 @@ typedef struct bPoseChannelDrawData {
struct DualQuat;
struct Mat4;
typedef struct bPoseChannel_BBoneSegmentJoint {
Given that this is already a "bone" (often also represented as joints), add some docstring that explains what is a "segment joint". The code also refers to a plane; the relation of "segment joints" and "planes" should be documented as well.
@ -183,0 +188,4 @@
float plane_offset;
/* Inverse width of the smoothing at this level in head-tail space.
* Optimization: this value is actually indexed by bsp depth (0 to bsp_depth-1), not joint
👍 for documenting this optimisation.
@ -198,1 +218,4 @@
struct DualQuat *bbone_dual_quats;
/* Segment boundaries for curved mode. */
struct bPoseChannel_BBoneSegmentJoint *bbone_segment_joints;
If those are "segment boundaries", why is the struct name "joints", and not "boundaries"?
👍 for documenting that this is specific to curved mode.
1257a9913c to f5def8ae1c
f5def8ae1c to 5649b6e21d
@ -82,2 +82,4 @@
float oldlength;
/** Mapping of vertices to segments. */
char bbone_mapping_mode;
Since this is purely runtime data anyway, just use the eBone_BBoneMappingMode type instead of char.
What is the point of using an enum for a field that is a temporary storage of a char DNA field? It means nothing.
In other words: this declaration has the same type as the corresponding permanent DNA field.
DNA not supporting enum types doesn't mean other code cannot use them. Use enum types whenever the data is of the enum type, unless there is a specific reason why this is not possible. This is not such a reason.
@ -1602,0 +1684,4 @@
if (use_curved_mapping) {
runtime->bbone_arc_length_reciprocal = runtime_from->bbone_arc_length_reciprocal;
memcpy(runtime->bbone_segment_boundaries,
I think a better approach would be to just copy what's in the source, instead of reevaluating whether the curved mapping should be used. In other words, call allocate_bbone_cache(pchan, segments, false), then:
MEM_dupallocN is NULL-safe, so it'll just do the right thing here.
This whole function is clearly structured so as to avoid freeing and immediately re-allocating buffers. Otherwise it can all be changed to do free + dupalloc.
It's not clearly structured that way. But I get your point.
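As an illustration of the suggested pattern (a plain C++ stand-in for Blender's MEM_dupallocN, which is not reproduced here): a NULL-safe duplicate means "copy what's in the source" needs no re-evaluation of the mapping mode, because duplicating an unset pointer simply yields an unset pointer.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

/* NULL-safe duplicate of an allocated block, mimicking the contract the
 * review relies on: duplicating nullptr returns nullptr. */
inline void *dup_alloc_n(const void *src, std::size_t size)
{
  if (src == nullptr) {
    return nullptr; /* source unset: the copy stays unset */
  }
  void *dst = std::malloc(size);
  std::memcpy(dst, src, size);
  return dst;
}
```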
@ -1646,0 +1753,4 @@
float boundary_dist[MAX_BBONE_SUBDIV + 1];
/* Stack of BSP plane indices that were checked in the binary search. */
int boundary_idx_stack[MAX_BBONE_SUBDIV];
Now that I understand better what this is, just use a blender::Vector<int> to store a variable-sized array.
If you want to keep things as they are, at least move the declaration of stack_ptr here as well so that the relevant variables are declared together. Possibly rename stack_ptr (because it's not a pointer) to something like boundary_idx_stack_top (or stack_top_index, or something else non-pointery).
👍
Just one note that can be handled when landing the PR.
@ -1602,0 +1682,4 @@
memcpy(runtime->bbone_segment_boundaries,
runtime_from->bbone_segment_boundaries,
sizeof(bPoseChannel_BBoneSegmentBoundary) * (1 + segments));
}
Add an assert that runtime->bbone_segment_boundaries == nullptr in an else clause, so that it's locally clear that !use_curved_mapping implies that this pointer is nil.
cac6505037 to a5224b40af