Mesh: Parallelize vertex and edge to corner topology map creation #110707
Reference: blender/blender#110707
Make the algorithm more parallelizable. For 8,000,000 edges, this function has become 4.5x faster (509.1 ms -> 136.2 ms).
*Called from the Edge Split node for a cuboid [1].
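For context, the overall technique being parallelized here — count corners per edge, prefix-sum the counts into group offsets, scatter corner indices into their groups with an atomic counter, then sort each group — can be sketched in standalone C++. This is a simplified sequential sketch using `std::atomic` and `std::vector` rather than Blender's actual `BLI` utilities; the count, scatter, and per-group sort steps are the ones the patch runs in parallel:

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <vector>

// Build a map from each edge to the corners that use it.
// corner_edges[c] is the edge used by corner c. On return, the corners of
// edge e are r_indices[r_offsets[e] .. r_offsets[e + 1]].
static void build_edge_to_corner_map(const std::vector<int> &corner_edges,
                                     const int edges_num,
                                     std::vector<int> &r_offsets,
                                     std::vector<int> &r_indices)
{
  // Step 1: count corners per edge (parallelizable with atomic increments).
  std::vector<std::atomic<int>> counts(edges_num);
  for (auto &c : counts) {
    c.store(0);
  }
  for (const int edge : corner_edges) {
    counts[edge].fetch_add(1);
  }

  // Step 2: exclusive prefix sum -> group offsets (serial).
  r_offsets.assign(edges_num + 1, 0);
  for (int e = 0; e < edges_num; e++) {
    r_offsets[e + 1] = r_offsets[e] + counts[e].load();
    counts[e].store(0); /* Reuse as per-group write cursors. */
  }

  // Step 3: scatter corner indices. The atomic increment hands each corner
  // a unique slot within its edge's group, so this loop is parallelizable.
  r_indices.assign(r_offsets.back(), 0);
  for (int corner = 0; corner < int(corner_edges.size()); corner++) {
    const int edge = corner_edges[corner];
    const int index = counts[edge].fetch_add(1);
    r_indices[r_offsets[edge] + index] = corner;
  }

  // Step 4: sort each group so corner order is deterministic even when the
  // scatter above runs on multiple threads.
  for (int e = 0; e < edges_num; e++) {
    std::sort(r_indices.begin() + r_offsets[e], r_indices.begin() + r_offsets[e + 1]);
  }
}
```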
2x faster is quite nice! This looks very clever. But I wonder if parallel sorting is the best solution, since it's so global-- it starts by acting on the entire domain.
I didn't do my own performance testing of this PR though; with 32 threads the benefit might be larger.
Here's my guess for another approach (not tested, just a sketch!):
The downside is that it still needs the "counts" array, and the offsets and indices can't be built in parallel. But at least each chunk of work is smaller. Maybe worth looking into-- not sure.
Thanks for the other option, I'll try it out as well.
Small addition: I improved the Point to Curve node today and also applied this technique of sorting iota-groups. Apparently, separating
create_reverse_offsets
and gather_reverse
into different threads is not such a good idea. Following those results, I can assume that the sort itself is 2 times faster than a loop with random-memory increments; it's just that
create_reverse_offsets
and gather_reverse
parallelize well enough to interfere with each other. It will be necessary to make their calls sequential.
Interesting!
parallel_invoke
might make sense in a sweet spot where the individual callbacks don't have quite enough work to be worth parallelizing, but that probably doesn't make sense here yet.
I was able to achieve a 2.4x gain with only multi-step sorts. Probably, if memory were dear to us, that would be the solution.
But we want speed (since CPU speed grows more slowly than the amount of RAM). So in the end, having an array of indices, your variant turned out to be 1.7x faster than my best sort result!
So the end result is 509.1 ms -> 136.2 ms!
All tests: https://hackmd.io/@s0TMIS4lTAGwHVO20ECwpw/build_edge_to_loop_map_tests
@ -366,20 +368,39 @@ GroupedSpan<int> build_vert_to_loop_map(const Span<int> corner_verts,
return {OffsetIndices<int>(r_offsets), r_indices};
}
static void sort_groups(const OffsetIndices<int> groups, MutableSpan<int> indices)
Did you find it better to sort the entire array once at the end?
I imagined that sorting the moment each group was full would make better use of caches, since the small group of data would already be in L1 or L2 cache of that thread
The problem is that we don't have a group from the start. Iterating over the corner edge indices, we can encounter edges in random order rather than already grouped. This means the sorting can only be done at the end.
Okay, right, applying the same logic from my other comment, this makes sense. Thanks.
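That per-group sort at the end can be sketched like this (a minimal stand-in with `std::vector`/`std::sort` instead of Blender's `OffsetIndices`/`MutableSpan`; in the patch the outer loop over groups is what gets run through a parallel-for with a grain size, since each group's slice is independent):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sort each group's slice of `indices` in place. Group g occupies
// indices[offsets[g] .. offsets[g + 1]]. The groups don't overlap, so this
// loop could be distributed over threads in grain-sized chunks of groups.
static void sort_groups(const std::vector<int> &offsets, std::vector<int> &indices)
{
  for (size_t group = 0; group + 1 < offsets.size(); group++) {
    std::sort(indices.begin() + offsets[group], indices.begin() + offsets[group + 1]);
  }
}
```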
@ -383,0 +397,4 @@
const int edge = corner_edges[corner];
const IndexRange range = offsets[edge];
const int index = atomic_add_and_fetch_int32(&counts[edge], 1);
r_indices[range[index]] = int(corner);
Not sure this works-- imagine that each thread might pause for an unknown amount of time between these two lines:
So there needs to be some mechanism to prevent multiple threads from reaching
r_indices[...] =
for the same index at the same time. I think that's what I was doing with the while (true)
loop in my example.
Imagine that
counts
is an array of iterators. Each thread can make a step and get the value of a random iterator, which means every iterator will eventually reach its end.
And each thread only sees its own value of each iterator. That is, only one line (
atomic_add_and_fetch_int32
) works with a lock, and its result is a unique index within the group.
Ah, makes sense, since the order doesn't matter here, since they're sorted later. Very nice!
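To make that argument concrete, here is a small standalone check (my own sketch with `std::thread` and `std::atomic`, not Blender's `atomic_add_and_fetch_int32`) showing that the lone atomic increment hands every concurrent writer a distinct slot, so the plain writes never collide:

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Several threads scatter their ids into one shared group. Each thread's
// fetch_add returns a distinct index, so no two writes touch the same slot.
static std::vector<int> concurrent_scatter(const int writers)
{
  std::vector<int> slots(writers, -1);
  std::atomic<int> counter{0};
  std::vector<std::thread> threads;
  for (int t = 0; t < writers; t++) {
    threads.emplace_back([&slots, &counter, t]() {
      const int index = counter.fetch_add(1); /* Unique per thread. */
      slots[index] = t;                       /* Plain write, no collision. */
    });
  }
  for (auto &th : threads) {
    th.join();
  }
  // The slot order is nondeterministic, so sort before comparing --
  // matching the PR's "they're sorted later" observation.
  std::sort(slots.begin(), slots.end());
  return slots;
}
```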
@ -368,1 +370,4 @@
static void sort_groups(const OffsetIndices<int> groups, MutableSpan<int> indices)
{
const auto comparator = [&](const int index_a, const int index_b) { return index_a < index_b; };
Could you pass this lambda directly to
std::sort
? Also, no need for a capture; []
would be fine.
Is the lambda necessary at all?
Yep, just noticed this.
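Indeed, for plain `int` indices `std::sort` needs no comparator at all: the default `operator<` already matches what the lambda did. A trivial illustration (hypothetical helper name):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Equivalent to std::sort(..., [](int a, int b) { return a < b; });
// the explicit comparator is redundant for plain ints.
static std::vector<int> sort_indices(std::vector<int> indices)
{
  std::sort(indices.begin(), indices.end());
  return indices;
}
```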
Could you also look into changing the vertex to corner map in this PR? The code could be shared actually, similarly to
mapped_corner_selection_from_face
Added some assertions and also generalized the names. For the Point to Curves node, this function will be just as useful; it can be moved to another place later.
Changed title from "Geometry Nodes: Improvement of topology map function" to "Geometry Nodes: Improvement of topology map function with gather_groups"
@ -307,1 +309,4 @@
static void sort_small_groups(const OffsetIndices<int> groups,
MutableSpan<int> indices,
const int grai_size = 1024)
typo: grai_size -> grain_size
@ -308,0 +331,4 @@
for (const int64_t i : range) {
const int group = group_indices[i];
const int index = atomic_add_and_fetch_int32(&counts[group], 1);
const IndexRange range = offsets[group];
Changed title from "Geometry Nodes: Improvement of topology map function with gather_groups" to "Mesh: Parallelize vertex and edge to corner topology map creation"
I'd like to test this again on my desktop when I get back home this weekend, just to corroborate the performance numbers. Other than that, the code looks quite good now; given your explorations of alternative solutions, I'm fairly confident this is at least a good solution.
@ -308,0 +343,4 @@
Array<int> &r_offsets,
Array<int> &r_indices)
{
if (group_indices.is_empty()) {
Is this
is_empty()
case necessary? I guess it's because of the asserts in reverse_indices_in_groups
?
I found it a little odd that we are still allocating an array just for the sake of making empty groups work, so I decided to formalize that state here. Maybe later it will be possible to avoid this ...
I usually find it better to handle the "empty" case implicitly later in the code-- performance should be totally fine, especially with small buffer optimization and the fact that this code might not even get called in the "important" cases anyway.
Also, yes, the new code can have problems with empty group indices, so that case also needs to be handled.
@blender-bot build
@blender-bot build