Mesh: Parallelize vertex and edge to corner topology map creation #110707

Merged
Hans Goudey merged 16 commits from mod_moder/blender:tmp_edge_to_loop_speedup into main 2023-08-28 22:32:38 +02:00

Make the algorithm more parallelizable. For 8,000,000 edges,
this function has become 4.5x faster (509.1 ms -> 136.2 ms).
*Called from the Edge Split node for a cuboid [1].

  1. https://hackmd.io/@s0TMIS4lTAGwHVO20ECwpw/build_edge_to_loop_map_tests.
Iliya Katushenock added 7 commits 2023-08-02 00:53:51 +02:00
Iliya Katushenock requested review from Hans Goudey 2023-08-02 00:53:58 +02:00
Member

2x faster is quite nice! This looks very clever. But I wonder if parallel sorting is the best solution, since it's so global-- it starts by acting on the entire domain.

I didn't do my own performance testing of this PR, though; with 32 threads the benefit might be larger.

Here's my guess for another approach (not tested, just a sketch!):

diff --git a/source/blender/blenkernel/intern/mesh_mapping.cc b/source/blender/blenkernel/intern/mesh_mapping.cc
index 53a8200dc4d..c1d9a16d3a6 100644
--- a/source/blender/blenkernel/intern/mesh_mapping.cc
+++ b/source/blender/blenkernel/intern/mesh_mapping.cc
@@ -29,6 +29,8 @@
 
 #include "BLI_strict_flags.h"
 
+#include "atomic_ops.h"
+
 /* -------------------------------------------------------------------- */
 /** \name Mesh Connectivity Mapping
  * \{ */
@@ -374,12 +376,27 @@ GroupedSpan<int> build_edge_to_loop_map(const Span<int> corner_edges,
   r_offsets = create_reverse_offsets(corner_edges, edges_num);
   r_indices.reinitialize(r_offsets.last());
   Array<int> counts(edges_num, 0);
-
-  for (const int64_t corner : corner_edges.index_range()) {
-    const int edge = corner_edges[corner];
-    r_indices[r_offsets[edge] + counts[edge]] = int(corner);
-    counts[edge]++;
-  }
+  const OffsetIndices offsets(r_offsets.as_span());
+
+  threading::parallel_for(IndexRange(edges_num), 4096, [&](const IndexRange range) {
+    for (const int64_t corner : range) {
+      const int edge = corner_edges[corner];
+      const IndexRange range = offsets[edge];
+
+      int old_count = counts[edge];
+      while (true) {
+        old_count = counts[edge];
+        if (atomic_cas_int32(&counts[edge], old_count, old_count + 1)) {
+          r_indices[range[counts[edge]]] = int(corner);
+          break;
+        }
+      }
+      if (old_count + 1 == range.size()) {
+        MutableSpan edge_corners = r_indices.as_mutable_span().slice(range);
+        std::sort(edge_corners.begin(), edge_corners.end());
+      }
+    }
+  });
   return {OffsetIndices<int>(r_offsets), r_indices};
 }
 
 }

The downside is that it still needs the "counts" array, and the offsets and indices can't be built in parallel. But at least each chunk of work is smaller. Maybe worth looking into-- not sure.

Author
Member

Thanks for the other option, I'll try it out as well.
Small addition: I improved the Point to Curve node today and applied this technique with sorting of iota-groups there as well. Apparently, separating create_reverse_offsets and gather_reverse into different threads is not such a good idea.
Based on those results, I can assume that the sort itself is 2x faster than a loop with random-memory increments. It's just that create_reverse_offsets and gather_reverse are each parallelized well enough that they interfere with each other. Their calls will need to be made sequential ....
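For reference, here is my reading of the iota-group sorting idea as a small standalone sketch (my own illustration, not the PR code; corners_grouped_by_edge is a made-up name): the map is filled with 0..n-1 and then sorted by each corner's edge, so corners of the same edge end up contiguous. It is this sort that can be parallelized.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

/* Groups corner indices by the edge they belong to, using one global sort. */
static std::vector<int> corners_grouped_by_edge(const std::vector<int> &corner_edges)
{
  std::vector<int> indices(corner_edges.size());
  std::iota(indices.begin(), indices.end(), 0); /* 0, 1, 2, ... */
  /* Sorting by edge index makes corners of the same edge adjacent; a parallel sort
   * (e.g. std::sort with std::execution::par) is what would make this scale. */
  std::stable_sort(indices.begin(), indices.end(), [&](const int a, const int b) {
    return corner_edges[a] < corner_edges[b];
  });
  return indices;
}
```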

Member

Interesting! parallel_invoke might make sense in a sweet spot where the individual callbacks don't have quite enough work to be worth parallelizing, but that probably doesn't make sense here yet.
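As a minimal sketch of that idea (my own illustration, assuming blender::threading::parallel_invoke from BLI_task.hh; the function and its arguments are hypothetical), two independent passes that are each too small to parallelize internally can still run concurrently with each other:

```cpp
#include "BLI_span.hh"
#include "BLI_task.hh"

/* Hypothetical example: two independent counting passes, each kept serial, run
 * concurrently via a single parallel_invoke call. */
static void count_two_small_maps(const blender::Span<int> corner_verts,
                                 const blender::Span<int> corner_edges,
                                 blender::MutableSpan<int> vert_counts,
                                 blender::MutableSpan<int> edge_counts)
{
  blender::threading::parallel_invoke(
      [&]() {
        /* Serial pass: count how many corners use each vertex. */
        for (const int vert : corner_verts) {
          vert_counts[vert]++;
        }
      },
      [&]() {
        /* Serial pass: count how many corners use each edge. */
        for (const int edge : corner_edges) {
          edge_counts[edge]++;
        }
      });
}
```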

Author
Member

I was able to achieve a 2.4x gain with multi-step sorts alone. If memory were the main concern, that would probably be the solution to pick.
But we want speed (since CPU speed grows more slowly than the amount of RAM). So in the end, having the array of indices, your variant turned out to be 1.7x faster than my best sort result!
So the end result is 509.1 ms -> 136.2 ms!

All tests: https://hackmd.io/@s0TMIS4lTAGwHVO20ECwpw/build_edge_to_loop_map_tests

Iliya Katushenock added 1 commit 2023-08-02 16:40:28 +02:00
Iliya Katushenock added 1 commit 2023-08-02 16:45:08 +02:00
Hans Goudey reviewed 2023-08-02 17:03:00 +02:00
@ -366,20 +368,39 @@ GroupedSpan<int> build_vert_to_loop_map(const Span<int> corner_verts,
return {OffsetIndices<int>(r_offsets), r_indices};
}
static void sort_groups(const OffsetIndices<int> groups, MutableSpan<int> indices)
Member

Did you find it better to sort the entire array once at the end?
I imagined that sorting the moment each group was full would make better use of caches, since the small group of data would already be in L1 or L2 cache of that thread

Author
Member

The problem is that we don't have a complete group from the start. Iterating over the corner edge indices, we encounter random edges rather than already-grouped ones. This means the sorting can only be done at the end.

Member

Okay, right, applying the same logic from my other comment, this makes sense. Thanks.

HooglyBoogly marked this conversation as resolved
@ -383,0 +397,4 @@
const int edge = corner_edges[corner];
const IndexRange range = offsets[edge];
const int index = atomic_add_and_fetch_int32(&counts[edge], 1);
r_indices[range[index]] = int(corner);
Member

Not sure this works-- imagine that each thread might pause for an unknown amount of time between these two lines:

const int index = atomic_add_and_fetch_int32(&counts[edge], 1);
r_indices[range[index]] = int(corner);

So there needs to be some mechanism to prevent multiple threads from reaching r_indices[...] = for the same index at the same time. I think that's what I was doing with the while (true) loop in my example

Author
Member

Imagine that counts is an array of iterators.
Each thread can make a step on one of these iterators and get its value, which means every iterator eventually reaches its end.
And each thread only sees the value from its own step. That is, only one line (atomic_add_and_fetch_int32) is synchronized, and its result is a unique index within the group.
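To make that argument concrete, here is a tiny standalone C++ sketch (my own illustration with made-up data, not the PR code): the atomic fetch-add reserves a slot index that no other thread can get, so the plain write into the group never races, and only the order inside each group is non-deterministic, which the later sort restores.

```cpp
#include <algorithm>
#include <array>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
  /* Hypothetical input: the group (edge) index of each of 8 corners. */
  const std::vector<int> corner_to_group = {1, 0, 1, 2, 0, 2, 1, 0};
  /* Group g occupies indices [offsets[g], offsets[g + 1]). */
  const std::array<int, 4> offsets = {0, 3, 6, 8};

  std::vector<int> indices(corner_to_group.size());
  std::array<std::atomic<int>, 3> counts{}; /* Zero-initialized per-group counters. */

  auto scatter = [&](const int begin, const int end) {
    for (int corner = begin; corner < end; corner++) {
      const int group = corner_to_group[corner];
      const int slot = counts[group].fetch_add(1); /* Unique slot within the group. */
      indices[offsets[group] + slot] = corner;     /* No other thread writes this element. */
    }
  };

  std::thread a(scatter, 0, 4);
  std::thread b(scatter, 4, 8);
  a.join();
  b.join();

  /* Restore a deterministic order inside each group, as done afterwards in the PR. */
  for (int group = 0; group < 3; group++) {
    std::sort(indices.begin() + offsets[group], indices.begin() + offsets[group + 1]);
  }
  for (const int corner : indices) {
    std::printf("%d ", corner); /* Prints: 1 4 7 0 2 6 3 5 */
  }
}
```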

Member

Ah, makes sense: the order doesn't matter here, since they're sorted later. Very nice!

HooglyBoogly marked this conversation as resolved
Hans Goudey reviewed 2023-08-02 17:21:49 +02:00
@ -368,1 +370,4 @@
static void sort_groups(const OffsetIndices<int> groups, MutableSpan<int> indices)
{
const auto comparator = [&](const int index_a, const int index_b) { return index_a < index_b; };
Member

Could you pass this lambda directly to std::sort? Also, no need for a capture, [] would be fine
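(A tiny illustration of this point, not from the PR: for plain ints the comparator just restates operator<, so the capture and then the lambda itself can be dropped.)

```cpp
#include <algorithm>
#include <vector>

int main()
{
  std::vector<int> indices = {5, 2, 8, 2};
  /* Lambda without a capture; the comparator only restates operator<. */
  std::sort(indices.begin(), indices.end(), [](const int a, const int b) { return a < b; });
  /* Equivalent: operator< is the default comparison for int. */
  std::sort(indices.begin(), indices.end());
}
```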

Member

Is the lambda necessary at all?

Author
Member

Yep, just noticed this.

mod_moder marked this conversation as resolved
Member

Could you also look into changing the vertex to corner map in this PR? The code could be shared actually, similarly to mapped_corner_selection_from_face

Iliya Katushenock added 2 commits 2023-08-02 19:11:07 +02:00
Iliya Katushenock added 1 commit 2023-08-02 19:42:00 +02:00
Author
Member

Added some assertions and also generalized the names. For the Point to Curves node this function will be just as useful; it can be moved to another place later.

Iliya Katushenock changed title from Geometry Nodes: Improvement of topology map function to Geometry Nodes: Improvement of topology map function with gather_groups 2023-08-02 19:44:00 +02:00
Hans Goudey reviewed 2023-08-02 19:44:57 +02:00
@ -307,1 +309,4 @@
static void sort_small_groups(const OffsetIndices<int> groups,
MutableSpan<int> indices,
const int grai_size = 1024)
Member

typo: grai_size -> grain_size

mod_moder marked this conversation as resolved
Iliya Katushenock added 1 commit 2023-08-02 19:47:33 +02:00
Hans Goudey reviewed 2023-08-02 20:02:40 +02:00
@ -308,0 +331,4 @@
for (const int64_t i : range) {
const int group = group_indices[i];
const int index = atomic_add_and_fetch_int32(&counts[group], 1);
const IndexRange range = offsets[group];
Member
/home/hans/blender-git/blender/source/blender/blenkernel/intern/mesh_mapping.cc:334:24: error: declaration of ‘const blender::IndexRange range’ shadows a parameter [-Werror=shadow]
  334 |       const IndexRange range = offsets[group];
      |                        ^~~~~
mod_moder marked this conversation as resolved
Hans Goudey changed title from Geometry Nodes: Improvement of topology map function with gather_groups to Mesh: Parallelize vertex and edge to corner topology map creation 2023-08-02 20:08:16 +02:00
Iliya Katushenock added 1 commit 2023-08-02 20:23:43 +02:00
Iliya Katushenock added 1 commit 2023-08-02 21:09:00 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
9715b0d74b
progress
Hans Goudey approved these changes 2023-08-22 00:53:49 +02:00
Hans Goudey left a comment
Member

I'd like to test this again on my desktop when I get back home this weekend, just to corroborate the performance numbers. Other than that, the code looks quite good now; given your explorations of alternative solutions, I'm fairly confident this is at least a good solution.

@ -308,0 +343,4 @@
Array<int> &r_offsets,
Array<int> &r_indices)
{
if (group_indices.is_empty()) {
Member

Is this is_empty() case necessary? I guess it's because of the asserts in reverse_indices_in_groups?

Author
Member

I found it a little odd that we still allocate an array just so that empty groups work. I decided to formalize this state here. Maybe later it will be possible to avoid it ...

Member

I usually find it better to handle the "empty" case implicitly later in the code-- performance should be totally fine, especially with small buffer optimization and the fact that this code might not even get called in the "important" cases anyway.

Author
Member

Also, yes, the new code can have problems with empty group indices, so that case needs to be intercepted as well.

Member

@blender-bot build

Iliya Katushenock added 1 commit 2023-08-23 13:49:58 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
1ab2fba023
Merge branch 'main' into tmp_edge_to_loop_speedup
Member

@blender-bot build

Hans Goudey merged commit 226359ec48 into main 2023-08-28 22:32:38 +02:00
Iliya Katushenock deleted branch tmp_edge_to_loop_speedup 2023-08-29 13:47:53 +02:00