WIP: Work sets for better work distribution with a multi-GPU setup #108147

Draft
William Leeson wants to merge 51 commits from leesonw/blender-cluster:work_sets into main

Member

Why

To improve the distribution of work between GPUs by giving each of them a set of work that should take roughly the same amount of time.

What

This patch adds an option, "device scale factor", to the Cycles performance category that determines how many vertical splits occur in a render tile (or in the whole image, if tiling is not used). The tile or image is broken into n parts, each of which is given to a different device. The tile is split by giving runs of 1/s * w_{device} of its scan lines to each GPU, where s is the "device scale factor" (an integer value) and w_{device} is the proportion of work assigned to that device. This way all devices render a similar cross-section of the image, which results in each work set taking a similar amount of time. For instance, with 2 GPUs the initial weights would be equal at 0.5 each, so for a 256x256 tile each GPU would get 128 scan lines.
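As an illustration of the split described above, here is a minimal sketch in plain Python (the function name and return shape are hypothetical; the actual implementation is in Cycles' C++ device code):

```python
def split_scanlines(height, weights, scale_factor):
    """Assign interleaved runs of scan lines to devices.

    Each device d gets scale_factor runs of roughly
    height * weights[d] / scale_factor rows, so every device renders a
    similar cross-section of the tile. Returns (device, num_rows) pairs
    in image order.
    """
    runs = []
    for _ in range(scale_factor):
        for device, weight in enumerate(weights):
            runs.append((device, int(round(height * weight / scale_factor))))
    # Hand any rows lost to rounding to the last run.
    device, rows = runs[-1]
    runs[-1] = (device, rows + height - sum(r for _, r in runs))
    return runs
```

With 2 GPUs at equal weights and a scale factor of 1, a 256-row tile splits into 128 rows per device, matching the example above; a higher scale factor interleaves smaller runs.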

Added some performance profiles here: https://docs.google.com/spreadsheets/d/1Hqg7vJ2C2aIdSvizWxymoeqZTvRZGWMV1LIdDMFyUBE/edit?usp=sharing. I also exported some of the graphs below. Note that I need to redo "attic gpu map", as it was done with NVTX markers, which slowed it down.

William Leeson added 12 commits 2023-05-22 14:28:06 +02:00
William Leeson added 9 commits 2023-05-25 10:25:29 +02:00
cbdb1379e2 Use a single master buffer to hold all the slices
This replaces n slice buffers with a single master buffer and n
slices which reference into the master buffer. This allows a
single copy to upload or download the data.
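The master-buffer-plus-slices arrangement in this commit can be illustrated with NumPy views (an illustration only; the buffer shape and row runs here are hypothetical, and the real mechanism is Cycles' C++ device_memory):

```python
import numpy as np

# One master buffer holding the whole render tile (rows, cols, channels).
HEIGHT, WIDTH, CHANNELS = 256, 256, 4
master = np.zeros((HEIGHT, WIDTH, CHANNELS), dtype=np.float32)

# Hypothetical per-device row runs: device 0 owns rows 0..127,
# device 1 owns rows 128..255.
row_runs = [(0, 128), (128, 128)]

# Each "slice" is a view referencing into the master buffer,
# so a single copy of `master` moves all slices at once.
slices = [master[start:start + count] for start, count in row_runs]

# Writing through a slice writes into the master buffer.
slices[1][:] = 1.0
```

This mirrors the stated benefit: because the slices alias one allocation, uploading or downloading the master buffer transfers every slice in a single copy.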
9565aaa6fe FIX: Denoise buffer when more than one work_set
Previously, when there was more than one worker (aka device) and
multiple work_sets, the denoising was skipped.
6050b49ee4 FIX: Use remaining rows in the last work item
The remaining rows were not added to the last work item because it
was detected incorrectly.
f0185ed234 FIX: Don't copy 0 height slices to display
Attempting to copy a zero height slice resulted in CUDA errors.
Also switches back to using the master buffer when zeroing all
the slices.
William Leeson added 4 commits 2023-05-30 13:47:39 +02:00
William Leeson added 2 commits 2023-05-31 13:32:28 +02:00
0e0056707e FIX: Master buffer is now copied directly using its buffer
CPU data was not being copied properly due to the buffer not
having the correct parameters.

I think this is not implemented at the right level (but maybe that is planned as a further step already?). In this PR a device seems to fully complete rendering a slice before moving on to the next slice. But then as you increase the number of slices there is more overhead from synchronizing each time. And probably worse, GPU occupancy and coherence will drop as there is less work to keep all cores busy, especially near the end when fewer and fewer paths remain.

For example, imagine a shot with a character where its hair is the most expensive part to render but takes up only a small part of the entire image. Getting the slices small enough to distribute that work as more GPUs are added is going to require a high device scale factor, probably with too much overhead? So you'd want to be able to make that really granular, maybe even just one scanline per device.

It should be rendering pixels from all slices simultaneously, all in the same path state array. I would expect kernel/device/gpu/work_stealing.h to have the logic to map a global work index to a slice and coordinate in the slice.

I think every device should have just one render buffer to work with, not multiple. I'm not sure the device_memory.slice mechanism will work well for this: if you do path tracing for multiple slices at once, you would pass just a single render buffer device pointer, and this abstraction can't really be used to hide that.
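The mapping the review asks for, from a device-local work index back to a row in the full image, could look roughly like this (a hypothetical Python sketch, not the actual work_stealing.h code, which is a CUDA/GPU kernel header):

```python
def local_to_image_row(local_row, runs):
    """Map a device-local scan line index to its row in the full image.

    runs: list of (image_start_row, num_rows) pairs owned by this device,
    e.g. two interleaved 64-row runs of a 256-row tile. With this mapping
    the device can keep one flat path state array over all its rows and
    still write each path to the correct place in a single render buffer.
    """
    for start, count in runs:
        if local_row < count:
            return start + local_row
        local_row -= count
    raise IndexError("local_row outside this device's runs")
```

A global work index would first be decomposed into (local_row, x) by dividing by the tile width, then remapped with a function like this.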

Brecht Van Lommel requested changes 2023-06-02 14:03:58 +02:00
@ -973,6 +973,13 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
min=8, max=8192,
)
device_scale_factor: IntProperty(

This may be ok for debugging, but I don't think the final implementation should require tuning a parameter like this.

@ -435,6 +435,17 @@ void Device::free_memory()
metal_devices.free_memory();
}
bool Device::alloc_host(void *&shared_pointer, size_t size, bool /* pinned */) {

Please use clang-format.

@ -45,2 +46,3 @@
void *ptr = util_aligned_malloc(size, MIN_ALIGNMENT_CPU_DATA_TYPES);
void *ptr = NULL;
if(!pinned_mem) {

I'm confused by what non-pinned memory means now, since here it's just a regular memory allocation, but in the CUDA device it allocates with the CUDA API.

@ -630,3 +620,1 @@
if (knode->type == LIGHT_TREE_INSTANCE) {
/* Switch to the node with the subtree. */
*node_index = knode->instance.reference;
if (selected_index != -1) {

Unrelated change, leave out.

Author
Member

> I think this is not implemented at the right level (but maybe that is planned as a further step already?). In this PR a device seems to fully complete rendering a slice before moving on to the next slice. But then as you increase the number of slices there is more overhead from synchronizing each time. And probably worse, GPU occupancy and coherence will drop as there is less work to keep all cores busy, especially near the end when fewer and fewer paths remain.
>
> For example, imagine a shot with a character where its hair is the most expensive part to render but takes up only a small part of the entire image. Getting the slices small enough to distribute that work as more GPUs are added is going to require a high device scale factor, probably with too much overhead? So you'd want to be able to make that really granular, maybe even just one scanline per device.
>
> It should be rendering pixels from all slices simultaneously, all in the same path state array. I would expect kernel/device/gpu/work_stealing.h to have the logic to map a global work index to a slice and coordinate in the slice.
>
> I think every device should have just one render buffer to work with, not multiple. I'm not sure the device_memory.slice mechanism will work well for this: if you do path tracing for multiple slices at once, you would pass just a single render buffer device pointer, and this abstraction can't really be used to hide that.

Yes, I totally agree. I am working on a change that will use just one render buffer and render all the slices into it. That way, as you say, you don't need to restart the path tracing for each slice. I was doing this another way, but now that you point it out it's probably better to use kernel/device/gpu/work_stealing.h to do the mapping instead of the slices and work_sets. I have noticed that many of the wins here come from the rebalance happening less often and being quicker: the slices are smaller, so copying to and from the GPUs is much less expensive. This is particularly important with setups that have many GPUs (10 in our case, where it really made a difference).
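The rebalance being discussed adjusts the per-device weights from measured render times. A speculative sketch of the idea (not Cycles' exact code; the function name and normalization are assumptions):

```python
def rebalance_weights(weights, render_times):
    """Give faster devices a larger share of the scan lines.

    A device's speed is estimated as its current weight divided by the
    time it took to render that share; the new weights are the
    normalized speeds, so they still sum to 1.
    """
    speeds = [w / t for w, t in zip(weights, render_times)]
    total = sum(speeds)
    return [s / total for s in speeds]
```

For example, if one of two equally weighted devices takes three times as long, its weight drops from 0.5 to 0.25 and the faster device's rises to 0.75. Smaller slices make each such rebalance cheaper because less buffer data has to be redistributed between GPUs.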

William Leeson added 7 commits 2023-06-09 10:42:33 +02:00
2bbf552f09 Render all slices in one go
Previously, render_samples iterated over all the WorkSets.
However, this was not ideal due to overheads and was not good at
keeping the GPU busy. Now info is passed in the WorkTile to enable
the GPU to render all the slices in one pass.
7d1379f95d Path rng_hash uses image pixel coordinates
Previously it was using the slice coordinates
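Why this matters can be shown with a toy pixel hash (a stand-in for Cycles' real hash, which I have not reproduced here): seeding the RNG with slice-local coordinates would make the same rows of different slices reuse the same hashes, correlating the sample patterns across devices, while image coordinates keep the hash unique per pixel.

```python
def pixel_hash(x, y):
    # Tiny integer mix used only for this illustration.
    h = (x * 73856093) ^ (y * 19349663)
    return h & 0xFFFFFFFF

# Slice-local coordinates: row 0 of every slice would get identical
# hashes, so two devices would produce correlated noise patterns.
same_local = pixel_hash(10, 0) == pixel_hash(10, 0)

# Image coordinates: row 0 of slice 0 and row 128 of the image (the
# first row of a second slice) hash differently, as intended.
distinct_image = pixel_hash(10, 0) != pixel_hash(10, 128)
```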
cf298a1d7e FIX: Update slice buffer offset into the master buffer on change
The slice buffers' offsets into the master buffer were only updated
when the master buffer was reallocated. This ignored the fact that
the resolution scaler could resize the buffer and the slices even
though the master buffer was not reallocated.
William Leeson added 12 commits 2023-06-13 16:15:39 +02:00
7460500a81 FIX: RenderBuffers state now setup correctly for NODE
The parameters for slices added to the RenderBuffers were not set up
correctly for use with nodes. This adds the necessary setup code.

Also, switched the render buffer to not use pinned memory.
67116b0844 FIX: Fixes debug build
The debug build was failing because dna_type_offsets.h was not
always generated when building. This also occasionally affected the
release build.
7c4f3aa0d4 Remove the need for using the WorkSet size()
Adds a device_scale_factor_ member variable to PathTraceWork for
iterating over the slices.
d9442b3969 Adaptive sampling uses only one parallel_for on CPU
Also fixes an issue where the render_samples_impl was getting the
incorrect height.
William Leeson added 1 commit 2023-06-26 12:20:53 +02:00
William Leeson added 1 commit 2023-06-27 14:25:34 +02:00
William Leeson added 1 commit 2023-06-29 18:35:03 +02:00
4e424d384f Make PassAccessor and master_buffers_ aware of slice structure
Adds the correct BufferParams to the master_buffers and also
changes the PassAccessor code for both CPU and GPU to copy the
images according to the slice structure.
William Leeson added 1 commit 2023-06-30 14:32:06 +02:00
dcb9476e9c FIX: Stop (get/set)_render_tile_pixels using work_sets
Sets the master_buffers as the effective_buffer_params so they
only iterate once instead of per buffer, as the device_pointers
cannot be used as regular pointers.
William Leeson added 1 commit 2023-07-04 16:30:55 +02:00
This pull request has changes conflicting with the target branch.
  • intern/cycles/integrator/pass_accessor_cpu.cpp
  • intern/cycles/integrator/path_trace_display.cpp
  • intern/cycles/integrator/path_trace_work_gpu.cpp

Checkout

From your project repository, check out a new branch and test the changes.
git fetch -u work_sets:leesonw-work_sets
git checkout leesonw-work_sets
Reference: blender/blender#108147