Mesh Draw Extraction Refactor #116901

Open
opened 2024-01-08 15:27:08 +01:00 by Hans Goudey · 7 comments
Member

Current Issues

  • Virtual function call per element, per data
  • Unnecessary branching in hot loops for constant checks
  • Poor use of CPU cache when accessing many arrays in a single loop
  • Unnecessarily iterating over faces when iterating over corners would work
  • Code is unnecessarily object-oriented, which makes it more confusing (for example, extractor "overrides")
  • Too much casting, since each callback gets a void pointer

Proposal

  • Use a more data-oriented design, with standard threading functions like `parallel_for`
  • Iterate over a single array at a time
  • Use functions with span arguments instead of callbacks (see the sketch after this list)
  • Eventually replace the `MeshExtract` class completely
  • Replace over-engineered dependencies between buffers with if statements and execution order
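
As a rough illustration of the proposed shape, here is a minimal sketch, assuming hypothetical function and parameter names rather than the actual extraction API: one plain function per buffer, taking spans directly and threading with `parallel_for`, with no callback table, virtual calls, or `void *` casts.

```cpp
#include "BLI_math_vector_types.hh"
#include "BLI_span.hh"
#include "BLI_task.hh"

namespace blender::draw {

/* Hypothetical extractor: one plain function per buffer. */
static void extract_positions(const Span<float3> vert_positions,
                              const Span<int> corner_verts,
                              MutableSpan<float3> vbo_data)
{
  /* Iterate over corners directly; no per-element virtual calls or casts. */
  threading::parallel_for(corner_verts.index_range(), 4096, [&](const IndexRange range) {
    for (const int corner : range) {
      vbo_data[corner] = vert_positions[corner_verts[corner]];
    }
  });
}

}  // namespace blender::draw
```

Dependencies between buffers then become ordinary control flow: run the fills that are needed, in the order they are needed.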
Hans Goudey added the Module: EEVEE & Viewport and Type: Design labels 2024-01-08 15:27:08 +01:00

> Iterate over a single array at a time

Not sure if this is related to this particular refactor, but one thing to watch out for when writing directly into a GPU buffer (i.e. not a regular CPU buffer you have allocated yourself, but a pointer you get from the graphics API/driver to write into GPU memory directly): you really, really want to write data linearly, and *never* read from it.

So if, for example, the destination vertex buffer uses an interleaved "pos0, normal0, pos1, normal1, pos2, normal2" layout, then you want to avoid the "write all the positions, leaving gaps, then later write all the normals into the gaps" pattern. If the source data is separate positions and normals arrays, this means you do want to iterate over both source arrays at once.

The reason for all of this is that when writing directly into the vertex buffer, in many (but not all) cases it uses a "CPU uncached" (write-combined) type of memory, where leaving gaps or reading from it can be very, very slow.
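
To make the recommended pattern concrete, here is a small sketch (the struct layout and raw array parameters are assumptions for illustration): both source arrays are read in one pass, so every byte of the destination is written exactly once, in order, and never read back.

```cpp
#include <cstring>

/* Assumed interleaved layout, matching the "pos, normal" example above. */
struct PosNor {
  float pos[3];
  float nor[3];
};

static void fill_interleaved(const float (*positions)[3],
                             const float (*normals)[3],
                             PosNor *dst, /* Possibly write-combined GPU memory. */
                             const int verts_num)
{
  for (int i = 0; i < verts_num; i++) {
    /* Assemble the full element locally, then store it; `dst` is never read. */
    PosNor elem;
    std::memcpy(elem.pos, positions[i], sizeof(elem.pos));
    std::memcpy(elem.nor, normals[i], sizeof(elem.nor));
    dst[i] = elem;
  }
}
```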

Author
Member

Thanks, great information! I believe the vertex buffers from the GPU API are still allocated completely by our own CPU allocator (see `GLVertBuf::acquire_data()`, which uses `MEM_mallocN`). I'm assuming we'd want to change that in the future, though, so it seems right to keep this in mind.

In #116902 I already ran into a situation like that, though the "gaps" are only the 2 bits in the `w` field of `GPUPackedNormal`. I wanted to reduce branching by filling in those bits in separate loops. Maybe an alternative is to calculate the normals into small local chunks and copy those to the VBO data?

Side note: in that patch I'm trying out dropping the interleaved normals and positions in favor of using the existing separate "lnor" VBO instead. I'm guessing there are more aspects of how that interacts with GPU performance to keep in mind.


> Thanks, great information! I believe the vertex buffers from the GPU API are still allocated completely by our own CPU allocator

Then yeah, it would not be an issue right now.

> In #116902 I already ran into a situation like that, though the "gaps" are only the 2 bits in the `w` field of `GPUPackedNormal`. I wanted to reduce branching by filling in those bits in separate loops

Using bitfields like that, if this were writing into GPU memory directly, would be pretty bad, since bitfields generally work by "read some bytes, modify the needed bits, write the bytes back", and the "read" part from GPU memory would be slow. But as you mention above, that's not an issue today.

If it were an issue at some point, then even something simple like declaring a local variable for the compressed normal, calculating into that, and doing a `memcpy` of the four bytes into the destination would likely be fine.
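
A minimal sketch of that suggestion (the bitfield layout follows Blender's `GPUPackedNormal` declaration; the 10-bit packing helper is a simplified stand-in for the real conversion math):

```cpp
#include <cstring>

/* Layout as declared for GPUPackedNormal: 3x 10-bit components + 2 flag bits. */
struct GPUPackedNormal {
  int x : 10;
  int y : 10;
  int z : 10;
  int w : 2;
};

/* Simplified signed-normalized 10-bit packing, for illustration only. */
static int pack_i10(const float f)
{
  return int(f * 511.0f);
}

static void write_packed_normal(const float normal[3], const int w_flag, void *dst)
{
  GPUPackedNormal packed{};
  packed.x = pack_i10(normal[0]);
  packed.y = pack_i10(normal[1]);
  packed.z = pack_i10(normal[2]);
  packed.w = w_flag; /* Fill the 2 flag bits in the same pass, no second loop. */
  /* One 4-byte store: the destination is written once and never read. */
  std::memcpy(dst, &packed, sizeof(packed));
}
```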

Member

Writing directly to GPU driver-allocated memory is something we were thinking of, but it might require conversions, as not all drivers support all format combinations. The Vulkan backend has some tooling, `vk_data_conversions`, to help with this specific task, but it isn't yet designed to be shared with other backends.

Author
Member

Adding a separate conversion step *after* initially writing to the buffer doesn't seem like a good solution to me, unless we don't care about performance for that backend. We should be able to insert the correct conversion step directly into the "extraction" process: i.e. if some backend doesn't support `GPUPackedNormal`, we should be able to compile the existing extraction with a conversion to `GPUNormalForOtherAPI` instead.

Member

Our Vulkan backend does this on the fly when "uploading" to the driver. We should promote such a feature to the vertex buffer builder. It requires some changes to the API, but could remove the intermediate buffer completely.
Conversion would then happen inline when adding data to the builder, written directly to driver-owned memory.
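
To make the idea concrete, a hypothetical sketch of a builder that converts inline while appending (illustrative names only, not the actual GPU module API):

```cpp
#include <cstddef>
#include <cstdint>

struct float3 {
  float x, y, z;
};

/* Converts one attribute into `dst` and returns the number of bytes written;
 * chosen once per buffer based on what the device/driver supports. */
using ConvertFn = size_t (*)(const float3 &src, uint8_t *dst);

class VertBufBuilder {
  uint8_t *mapped_;   /* Driver-owned memory, e.g. from vkMapMemory(). */
  size_t offset_ = 0;
  ConvertFn convert_;

 public:
  VertBufBuilder(uint8_t *mapped, ConvertFn convert) : mapped_(mapped), convert_(convert) {}

  void append(const float3 &value)
  {
    /* Convert inline, straight into mapped memory: no intermediate buffer. */
    offset_ += convert_(value, mapped_ + offset_);
  }
};
```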


> Writing directly to GPU driver-allocated memory is something we were thinking of, but it might require conversions, as not all drivers support all format combinations. The Vulkan backend has some tooling, `vk_data_conversions`, to help with this specific task, but it isn't yet designed to be shared with other backends.

This would be a very welcome change for UMA architectures like Apple Silicon. Currently we have Shared buffers, in which CPU data is instantly visible to the GPU; equally, for fallback support, standard mapping/unmapping via Managed buffers, which flush host data back to the GPU, works well too.

(Some of this may have been discussed above or in the blender.chat threads, however.)

Another change I would advocate for is allowing the GPU module to own the synchronization for buffer updates. At the moment, if we require up-to-date mesh data, the stall happens on the host side prior to command encoding (the `BLI_task_pool_work_and_wait()` call).
Instead, splitting buffer allocation from data population would allow better pipelining of the GPU and CPU, as command encoding could happen in parallel with host-side data population. Buffer allocation would still happen up front, but the synchronization for population could execute in parallel with command encoding and even some initial GPU processing.

The backend could then either decide to wait for vertex buffer data to be ready at submission time, or, for the APIs that allow it, submit the GPU work and then stall pending completion of the CPU threads.

For Metal specifically, this can be achieved with an `MTLSharedEvent`, whereby certain parts of the GPU work are stalled until a flag is set. Flags could be specified on a per-vertex-buffer basis, and we could simply encode waits in the GPU command stream, which would allow the GPU to make partial progress on a frame.

From a rough calculation looking at the Tree-Creatures animation stack, only around 75% of the frame is spent on animation fetching, while the remaining 25% is spent on general command encoding and scene processing, which could theoretically run in parallel alongside mesh extraction if the GPU backend did not need to wait for fully populated vertex buffers.
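
As a minimal sketch of that split, with hypothetical stand-ins for the GPU module calls: allocate up front, populate on worker threads, and defer the wait to submission time (on Metal, the host-side `wait()` below is exactly what a GPU-side `MTLSharedEvent` wait would replace).

```cpp
#include <future>
#include <vector>

/* Hypothetical stand-ins for GPU module types/calls, for illustration only. */
struct VertBuf {
  std::vector<float> data;
};

static void extract_mesh_data_into(VertBuf &vbo)
{
  /* Stand-in for the threaded mesh extraction work. */
  for (float &v : vbo.data) {
    v = 0.0f;
  }
}

static void draw_frame()
{
  /* Allocate up front, so command encoding can already reference the buffer. */
  VertBuf vbo{std::vector<float>(1 << 20)};

  /* Population runs concurrently with command encoding. */
  std::future<void> populated = std::async(std::launch::async, [&vbo] {
    extract_mesh_data_into(vbo);
  });

  /* ... encode commands referencing `vbo` here, with no stall ... */

  /* Wait only at submission time. On Metal, signaling an MTLSharedEvent that
   * the GPU command stream waits on would avoid even this host-side stall. */
  populated.wait();
  /* ... submit ... */
}
```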
