Regression: Inverting face selection now very slow #105046

Closed
opened 2023-02-21 20:16:13 +01:00 by Brian Greenstone · 10 comments

System Information
Operating system: macOS-13.2.1-arm64-arm-64bit 64 Bits
Graphics card: Apple M1 Max Apple 4.1 Metal - 83

Blender Version
Broken: version: 3.4.1, branch: blender-v3.4-release, commit date: 2022-12-19 17:00, hash: rB55485cb379f7
Worked: 3.3

Caused by 12becbf0df

Short description of error

Previously I was using Blender 3.3.1 and earlier. When I upgraded to 3.4.1, I found that inverting the selection on large meshes now takes a long time; on 3.3.1 it was almost instantaneous. This is noticeable on large meshes of around 1 million polys, but almost instant on smaller meshes with, say, 200K polys.

Exact steps for others to reproduce the error
Load up a mesh with at least 1 million polys. Select half the faces, and then Invert the selection. Instead of instantly changing the selection to the other faces, it sits there for several seconds thinking about it before it does it. This never happened with previous versions of Blender where it was pretty much instant.

  • Start with default scene
  • Add a SubD modifier to the Cube and manually set the levels to 9 (you'll get ~1.5 million faces)
  • Apply modifier
  • Enter Edit mode for the Cube
  • Box select the top third of the mesh
  • Hit CTRL+I a few times and observe quite poor performance
  • Compare same scenario in 3.3 LTS (a scripted version of these steps is sketched below)
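
For convenience, here's a rough scripted version of the steps above (a hypothetical sketch using Blender's Python API, not part of the original report). Depending on how operators push undo steps when run from a script, this may not capture the full undo-push cost, so the manual steps remain the authoritative reproduction.

```python
# Hypothetical repro sketch (bpy): run from the Script Editor in a default scene.
import time
import bpy

cube = bpy.data.objects["Cube"]
bpy.context.view_layer.objects.active = cube

# Subdivide to ~1.5 million faces (6 * 4^9) and apply.
mod = cube.modifiers.new(name="Subdivision", type='SUBSURF')
mod.levels = 9
bpy.ops.object.modifier_apply(modifier=mod.name)

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
# Select a subset here (e.g. box select the top third interactively),
# then time the inversion:
t = time.perf_counter()
bpy.ops.mesh.select_all(action='INVERT')
print(f"Invert took {time.perf_counter() - t:.2f} s")
```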
Brian Greenstone added the
Priority
Normal
Status
Needs Triage
Type
Report
labels 2023-02-21 20:16:14 +01:00

I can also reproduce the drastic change in performance. I'll modify the main description for folks to re-create their own simple test case.

A quick profile shows the following:

![profile-anno.png](/attachments/32ff0ea0-6105-4324-bf6d-6d50a721dd11)

It could be because of the ".select_edge" and ".select_poly" custom data layers on the mesh and the comment inside `um_arraystore_cd_compact` warning about dynamic layers.

Member

Can confirm

Pratik Borhade added
Module
Modeling
Status
Confirmed
and removed
Status
Needs Triage
labels 2023-02-22 05:09:40 +01:00
Pratik Borhade changed title from Inverting face selection now very slow to Regression: Inverting face selection now very slow 2023-02-22 06:54:03 +01:00
Pratik Borhade added
Priority
High
and removed
Priority
Normal
labels 2023-02-22 06:54:52 +01:00
Member
Caused by 12becbf0dffe06b6f28c4cc444fe0312cf9249b9 @HooglyBoogly ^
Member

Thanks for the investigation @deadpin. The dynamic layer thing seems to mean layers like vertex groups or multires masks/displacements, not simple boolean layers.

I find this whole "array store" thing to be pretty confusing. I'll run some testing, though; the difference shouldn't be that dramatic.

Thanks for the investigation @deadpin. The dynamic layer thing seems to mean layers like vertex groups or multires masks/displacements, not simple boolean layers. I find this whole "array store" thing to be pretty confusing. I'll run some testing though, the difference shouldn't be that dramatic.
Hans Goudey added this to the 3.5 milestone 2023-02-22 14:11:26 +01:00
Hans Goudey self-assigned this 2023-02-22 14:11:33 +01:00
Member

Here's some timing information that may be useful for further investigation:

```
Timer 'um_arraystore_cd_compact vert' took 3677.9 ms
Timer 'State Add: .select_vert (data_len: 1000000 stride: 1)' took 3676.6 ms
Timer 'State Add: position (data_len: 1000000 stride: 12)' took 0.5 ms

Timer 'um_arraystore_cd_compact edge' took 2744.3 ms
Timer 'State Add: .select_edge (data_len: 1998000 stride: 1)' took 2742.2 ms
Timer 'State Add:  (data_len: 1998000 stride: 12)' took 1.0 ms

Timer 'um_arraystore_cd_compact loop' took 2.9 ms
Timer 'State Add: NGon Face-Vertex (data_len: 3992004 stride: 8)' took 1.4 ms

Timer 'um_arraystore_cd_compact poly' took 166.5 ms
Timer 'State Add: .select_poly (data_len: 998001 stride: 1)' took 165.4 ms
Timer 'State Add: NGon Face (data_len: 998001 stride: 12)' took 0.5 ms
```

It looks like the "array store" system is quite slow for the boolean data that's changing. I don't know why it would take 4 entire seconds to compare 1 million booleans, though; that sounds crazy.
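
For scale, here's a quick back-of-the-envelope check (my own sketch in plain Python, outside Blender): bulk-comparing or bulk-hashing a 1 MB buffer takes well under a millisecond on typical hardware, which is what makes several seconds for a 1-million-entry boolean layer so surprising.

```python
# Rough timing sketch: bulk operations over 1 million bytes.
import hashlib
import time

a = bytes(1_000_000)  # stand-in for a ".select_*" layer: one byte per element
b = bytes(1_000_000)

t = time.perf_counter()
equal = (a == b)  # a memcmp under the hood
t_cmp = time.perf_counter() - t

t = time.perf_counter()
digest = hashlib.blake2b(a).digest()  # hash the whole buffer in one call
t_hash = time.perf_counter() - t

print(f"compare: {t_cmp * 1e3:.3f} ms, hash: {t_hash * 1e3:.3f} ms")
```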


I'm not familiar with the bmesh structure, but if its data is stored as small allocations in memory rather than in one array, then cache behavior would explain everything.

Member

@mod_moder Check out the code, it's `Mesh` that's being saved here, not `BMesh` ;)

Member

I'm having a hard time understanding the design decisions in the "array store" system. Many of them seem to trade a large computational cost for reduced memory usage when most of the data stays the same. That isn't the case here, so all of the processing ends up being pointless.

It also doesn't help that the system often processes each element individually. In this case that's one byte at a time.

Here's a more in-depth analysis of what the code is doing, in a test of a mesh with 1 million vertices.


  1. First the entire edit `BMesh` is converted to a `Mesh`. This uses the edit mesh -> original mesh conversion, so it doesn't benefit from any of the recent performance improvements to that code. The process is entirely single-threaded.
  2. In `um_arraystore_compact_ex`, the undo mesh structure is converted into the "array store" format. The four attribute domains are processed sequentially, and each custom data layer is processed sequentially (no multithreading anywhere here).
  3. When converting the boolean selection layer, it is "merged" into an existing reference with `bchunk_list_from_data_merge`. This function allocates a 64-bit integer for every element for a hash table. We've already used 8 times the memory of our original array.
  4. `hash_array_from_data` hashes ALL bytes in the boolean array individually, single-threaded, to fill the integer array (see the sketch after this list).
  5. `hash_accum` then adds up the entire array of 64-bit integers. 3 times!
  6. It looks like `key_from_chunk_ref` then hashes all the data again, but this time in smaller chunks, to make a table of accumulated hash values (the table size is 12000 hashes).
  7. Then, for every single element, `table_lookup` is called. The table seems to be mostly empty, though it looks like the behavior might become quadratic (over the size of the table) if it were to fill up.
  8. Then in `bchunk_list_append_data_n`, the data is duplicated into small chunks of 248 bytes, which are added to a linked list. This is all in one thread.
  9. After all this, the mesh `layer->data` is just freed in the end.
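
To make steps 4-6 concrete, here's a simplified toy re-creation of the per-byte hashing pattern described above (a Python sketch, not the actual C code from the array store): one 64-bit hash slot per source byte, then repeated full-array accumulation passes so each slot summarizes a window of the bytes that follow.

```python
# Toy sketch of the pattern in steps 4-6 above (not the real implementation).
MASK = (1 << 64) - 1

def hash_array_from_data(data: bytes) -> list[int]:
    # Step 4: one 64-bit hash per source byte -- 8x the memory of the input.
    return [(b * 0x9E3779B97F4A7C15) & MASK for b in data]

def hash_accum(hashes: list[int], iter_steps: int) -> None:
    # Step 5: each pass folds a neighboring slot into the current one, so
    # every pass is another full O(n) sweep over the array.
    for step in range(iter_steps, 0, -1):
        for i in range(len(hashes) - step):
            hashes[i] = (hashes[i] + hashes[i + step]) & MASK

flags = bytes(1_000_000)        # 1M boolean flags, one byte each
h = hash_array_from_data(flags)
hash_accum(h, 3)                # three full passes, matching the profile
```

Even done natively, that's several full passes over the data, plus the per-element `table_lookup` calls in step 7, all for data that in this scenario has wholly changed anyway.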
Hans Goudey removed their assignment 2023-03-08 21:20:42 +01:00
Member

@ideasman42, could you look into this bottleneck? It looks like you're the original author of the array store system. To me it looks like it's working as intended here, but the amount of processing doesn't work well for boolean attributes when there are large changes in the data.


@HooglyBoogly I'll check on this; if it's simply compacting the array that's slow, I'll look into optimizing it.

Blender Bot added
Status
Resolved
and removed
Status
Confirmed
labels 2023-03-11 02:03:30 +01:00