WIP: Interleaved slices for better work distribution with a Multi-GPU setup #110348

Draft
William Leeson wants to merge 82 commits from leesonw/blender-cluster:work_sets_similar into main

When changing the target branch, be careful to rebase the branch in your fork to match. See documentation.
Member

Why

To improve the distribution of work between GPUs by giving them a set of work that should take roughly the same amount of time.

What

This patch adds an option to the Cycles performance category called "interleaved slices". This splits the workload by giving each device a set of scanlines such that the device with the smallest weight $w_{smallest}$ gets 1 scanline and the others get $n_i = w_i / w_{smallest}$. The scanlines for each device are interleaved such that the first device takes the first $n_0$ scanlines, the second gets the next $n_1$, and so on. The sets of scanlines are reassigned each time the weights change.
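
As an illustration, here is a minimal sketch of the slice-size rule described above (the function and variable names are hypothetical, not the actual Cycles code):

```
/* Minimal sketch, assuming per-device performance weights are known.
 * The slowest device gets 1 scanline per interleave group; a device that
 * is k times faster gets k consecutive scanlines per group. */
#include <algorithm>
#include <vector>

std::vector<int> interleaved_slice_sizes(const std::vector<double> &weights)
{
  const double w_smallest = *std::min_element(weights.begin(), weights.end());
  std::vector<int> sizes;
  sizes.reserve(weights.size());
  for (const double w : weights) {
    /* Clamp to at least 1 row so every device receives work. */
    sizes.push_back(std::max(1, int(w / w_smallest)));
  }
  return sizes;
}
```

For example, weights {2.0, 1.0} yield sizes {2, 1}: the first device renders two consecutive scanlines for every one rendered by the second.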

This pull request is based on the one in [108147](https://projects.blender.org/blender/blender/pulls/108147); I could not find a way to change the branch being merged from, so I started a new one.

I put some performance statistics [here](https://docs.google.com/spreadsheets/d/1eG5_AvD_tAY-wJLkEEkZNQcgZYqS5PpSkmc4RNFrdXo/edit?usp=sharing).

William Leeson added 65 commits 2023-07-21 15:42:32 +02:00
cbdb1379e2 Use a single master buffer to hold all the slices
This replaces n slice buffers with a single master buffer and n
slices which reference into the master buffer. This allows a
single copy to upload or download the data.
9565aaa6fe FIX: Denoise buffer when more than one work_set
Previously, when there was more than one worker (aka device) and
multiple work_sets, denoising was skipped.
6050b49ee4 FIX: Use remaining rows in the last work item
The remaining rows were not added to the last work item, as it was
detected incorrectly.
f0185ed234 FIX: Don't copy 0 height slices to display
Attempting to copy a zero height slice resulted in CUDA errors.
Also switches back to using the master buffer when zeroing all
the slices.
0e0056707e FIX: Master buffer is now copied directly using its buffer
CPU data was not being copied properly due to the buffer not
having the correct parameters.
2bbf552f09 Render all slices in one go
Previously the render_samples iterated over all the WorkSets.
However, this was not ideal due to overheads and was not good at
keeping the GPU busy. Now info is passed in the WorkTile to enable
the GPU to render all the slices in one pass.
7d1379f95d Path rng_hash uses image pixel coordinates
Previously it was using the slice coordinates.
cf298a1d7e FIX: Update slice buffer offset into the master buffer on change
The slice buffers' offsets into the master buffer were only updated
when the master buffer was reallocated. This ignored the fact that
the resolution scaler could resize the buffer and the slices even
though the master buffer was not reallocated.
7460500a81 FIX: RenderBuffers state now setup correctly for NODE
The parameters for slices added to the RenderBuffers were not set up
correctly for use with Nodes. This adds the necessary setup code.

Also, switched the render buffer to not use pinned memory.
67116b0844 FIX: Fixes debug build
The debug build was failing because dna_type_offsets.h was not
always generated when building. This also, more rarely, affected
the release build.
7c4f3aa0d4 Remove the need for using the WorkSet size()
Adds device_scale_factor_ member variable to PathTraceWork for
iterating over the slices.
d9442b3969 Adaptive sampling uses only one parallel_for on CPU
Also fixes an issue where the render_samples_impl was getting the
incorrect height.
4e424d384f Make PassAccessor and master_buffers_ aware of slice structure
Adds the correct BufferParams to the master_buffers and also
changes the PassAccessor code for both CPU and GPU to copy
the images according to the slice structure.
dcb9476e9c FIX: Stop (get/set)_render_tile_pixels using work_sets
Sets the master_buffers as the effective_buffer_params so they
only iterate once instead of per buffer, as the device_pointers
cannot be used as regular pointers.
989dd1d3ef FIX: Correctly account for partial slices
The last slices that are not full sized need to take the current_y
into account to determine how many scanlines are left.
5e532bb022 FIX: Baking now reads the correct scanlines into the RenderBuffers
The scanlines were just copied serially, not taking the slices
into account; this is now corrected.
514f8a7990 FIX: For Baking don't use interleaved slices
For some reason, baking roughness does not currently work with
the scanlines interleaved between the devices. So for now it
reverts to just using 2 big slices.
b86a443aaa FIX: padding takes interleaved scanlines into account
Previously padding did not use the interleaved scanlines to pad
the data and instead just wrote to the first n scanlines. Now it
iterates over the correct scanlines updating the correct set
based on the data in the BufferParams.
c703a7c8ac Change device_scale_factor to interleaved_slices
Replace the device_scale_factor with a check box to enable or
disable the interleaved slices.
William Leeson added 1 commit 2023-07-21 16:31:24 +02:00
a6e4771b27 FIX: Devices cannot be assigned more rows than scanlines
If the compute difference was huge it was possible for devices to
be assigned way too many rows, sometimes more than there are
scanlines. This is prevented by ensuring each device gets at
least 1 row.
William Leeson added 1 commit 2023-07-21 16:34:38 +02:00
William Leeson changed title from WIP: Work sets for better work distribution with a Multi-GPU setup to WIP: Interleaved slices for better work distribution with a Multi-GPU setup 2023-07-21 16:35:50 +02:00
William Leeson requested review from Brecht Van Lommel 2023-07-21 16:36:13 +02:00
William Leeson requested review from Sergey Sharybin 2023-07-21 16:37:14 +02:00
William Leeson added the Interest/Cycles and Interest/Render & Cycles labels 2023-07-21 17:44:17 +02:00
William Leeson added 1 commit 2023-07-24 10:57:22 +02:00
William Leeson added 1 commit 2023-07-24 17:02:08 +02:00
William Leeson added 1 commit 2023-07-25 09:53:12 +02:00
ac246b3601 Remove pinned memory
This change is not required for this to work.
William Leeson added 1 commit 2023-07-25 10:02:32 +02:00
William Leeson added 1 commit 2023-07-25 10:05:31 +02:00
William Leeson added 2 commits 2023-07-25 14:41:10 +02:00
d7d5a4127d Unify slice size calculation algorithm
The 2 branches for interleaved or consecutive slices have been
replaced with a simpler check that determines the size of the
slice to share between the devices and the minimum number of
slices each device should have.
William Leeson added 1 commit 2023-07-25 16:00:14 +02:00
William Leeson added 1 commit 2023-07-26 08:35:38 +02:00
William Leeson added 1 commit 2023-07-26 13:03:09 +02:00
William Leeson added 1 commit 2023-08-01 12:55:20 +02:00
William Leeson added 1 commit 2023-08-07 14:26:00 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
415b4c0487
Merge branch 'upstream_main' into work_sets_similar

@blender-bot package

Member

Package build started. [Download here](https://builder.blender.org/download/patch/PR110348) when ready.

Sergey Sharybin reviewed 2023-08-08 18:16:15 +02:00
Sergey Sharybin left a comment
Owner

Did an initial pass of review. Nothing big yet, but I am still trying to figure out what would be a good way to see how close to the ideal scaling we are. It is a bit hard to measure speedup on the dual GP100 setup we currently have here. Need to think a bit about what the worst case scenario for the current algorithm is, to get a good performance comparison.

From the testing it seems that if you start a viewport render, then enable the OptiX denoiser, Blender stalls. Can reproduce reliably with this PR, but not with the current buildbot build of the main branch.

What I am not sure about is how this patch deals with adaptive sampling. Does every slice have overscan around it? Or are there "gaps" (information-wise) in the scanlines rendered on a particular device? The former would mean quite high memory usage; the latter would mean the result does not converge to the same result as when rendering on a single device. Or is something completely different going on, and the result converges to the same result no matter how many devices you're rendering on?

For some reason I can not get the Samples Count pass to work on the test machine and the pabellon file; I'll dig into it later. But I think comparing this pass will be a good measure of whether the patch converges the same way on multi-device as it does on a single device.

@ -978,6 +978,12 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
min=8, max=8192,
)
interleaved_slices: BoolProperty(

Outside of development purposes I don't think this should be an option.

If we plan to keep it for future development/investigation, better to move it to the debug panel.
Otherwise perhaps just remove the option.

@ -46,3 +46,3 @@
destination.num_components;
parallel_for(0, buffer_params.window_height, [&](int64_t y) {
/* Calculate how many full plus partial slices there are */

Full stops in the comments.

@ -49,0 +65,4 @@
/* Copy over each slice to the destination */
parallel_for(0, slices, [&](int slice) {
//for(int slice = 0;slice < slices;++slice) {

Remove the dead code. Also, it seems that clang-format is not properly configured in your setup.

@ -254,0 +298,4 @@
slice_sizes[largest_weight] += leftover_scanlines;
slice_stride++;
} else if(leftover_scanlines < 0) {
VLOG_WARNING << "#######Used to many scanlines";

Not sure about the details, but it seems that `VLOG_WARNING << "Used to many scanlines"` would be a much better fit here.

If this is something that must get developer attention, `DCHECK_GE(leftover_scanlines, 0)` can be used above.

@ -254,0 +301,4 @@
VLOG_WARNING << "#######Used to many scanlines";
}
VLOG_INFO << "===================SLICE allocatable:" << allocatable_slices << " fixed:"<< fixed_slices << "================";

Not sure I'd keep such explicit separation. Surely it is important for this PR, but outside of this PR it feels like it will draw too much attention.

@ -412,2 +463,4 @@
});
const double work_time = time_dt() - start_time;
VLOG(3) << "render time total for frame: "

I don't think it is a total time. It is the path tracing time for the current work. There might be more path tracing needing to be done, and there are other types of work as well.

Perhaps it will be much better to log the "bare" time from the `RenderScheduler::report_*_time()` functions.

@ -78,2 +78,4 @@
kg, state, render_buffer, scheduled_sample, tile->sample_offset);
/* Map the buffer coordinates to the image coordinates */
int tile_y = y - tile->slice_start_y;

Should the same mapping be done for `init_from_bake`?


Still not sure what's up with the Samples Count pass in the Pabellon file, but it works fine in the Junkshop, so that is what I've used for verification.

Unfortunately, the convergence is indeed different with this patch.

Attached the Samples Count pass with the single device render and dual-GPU render. There are noticeable stripe artifacts in the samples distribution.

| Single Device | Multiple devices |
| -- | -- |
| ![Screenshot from 2023-08-10 16-58-03.png](/attachments/5e5f93c2-3b74-4f6d-ad5c-f6ca97096d51) | ![Screenshot from 2023-08-10 16-57-09.png](/attachments/31b5c3cd-907a-4e47-8df2-f1c97d3e4bf9) |

To reproduce the issue get the Junkshop benchmark scene, enable adaptive samples, Samples Count pass, and render.

In the current main branch both single and multi-GPU renders converge to the same result, and their Sample Count pass matches. This is done by applying some overscan around the slices, so that the adaptive sampling filter has enough data to propagate to/from.

Doing so for the interleaved case would be very costly. Not sure what the best solution would be. The quick idea is to perform the filtering on a single device. This would require copying pixels from/to multiple devices, but we can limit it to the single required pass only, and doing so every N samples is probably not so bad. Maybe we can also utilize half-float, but that could make it harder to deal with the over-exposed areas.
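
To make the overscan idea above concrete, here is a rough sketch (the `SliceRange` type and names are assumptions, not the actual Cycles code). With contiguous slices only the two borders of each slice need the extra rows; with interleaved scanlines nearly every scanline is a border, which is why overscan becomes costly there.

```
#include <algorithm>

struct SliceRange {
  int start_y; /* First scanline of the slice in the full image. */
  int height;  /* Number of scanlines in the slice. */
};

/* Extend a slice by `overscan` scanlines on each side, clamped to the image,
 * so a neighborhood filter has valid data at the slice borders. */
SliceRange add_overscan(const SliceRange &slice, int overscan, int full_height)
{
  const int start = std::max(0, slice.start_y - overscan);
  const int end = std::min(full_height, slice.start_y + slice.height + overscan);
  return {start, end - start};
}
```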

Author
Member

> Still not sure what's up with the Samples Count pass in the Pabellon file, but it works fine in the Junkshop, so that is what I've used for verification.
>
> Unfortunately, the convergence is indeed different with this patch.
>
> Attached the Samples Count pass with the single device render and dual-GPU render. There are noticeable stripe artifacts in the samples distribution.
>
> | Single Device | Multiple devices |
> | -- | -- |
> | ![Screenshot from 2023-08-10 16-58-03.png](/attachments/5e5f93c2-3b74-4f6d-ad5c-f6ca97096d51) | ![Screenshot from 2023-08-10 16-57-09.png](/attachments/31b5c3cd-907a-4e47-8df2-f1c97d3e4bf9) |
>
> To reproduce the issue get the Junkshop benchmark scene, enable adaptive samples, Samples Count pass, and render.
>
> In the current main branch both single and multi-GPU renders converge to the same result, and their Sample Count pass matches. This is done by applying some overscan around the slices, so that the adaptive sampling filter has enough data to propagate to/from.
>
> Doing so for the interleaved case would be very costly. Not sure what the best solution would be. The quick idea is to perform the filtering on a single device. This would require copying pixels from/to multiple devices, but we can limit it to the single required pass only, and doing so every N samples is probably not so bad. Maybe we can also utilize half-float, but that could make it harder to deal with the over-exposed areas.

Thanks for taking a look at this.

This would kind of make sense, as the interleaved scanlines would have different noise and therefore convergence. The same issue would probably appear in the previous division method, but only along the single line of division.

Copying between devices would probably cause a large slowdown. I'll take a look at that code and see what I can come up with.


> The same issue would probably appear in the previous division method, but only along the single line of division.

It shouldn't. This is what the overscan for the render result is meant to solve. While it is theoretically possible that propagation will affect more than the current overscan size, we did not see such a scene in practice, let alone a big difference from single-device rendering.

> Copying between devices would probably cause a large slowdown.

I think by default filtering only happens every 16 samples, which isn't that often. Also, on the one hand we'll be copying a single pass across devices to do the filtering, but on the other hand we'll be able to disable overscan.

So to me it is not that obvious that there will be a slowdown.


One thing I've realized is that the render buffers store passes as interleaved pixels, which it seems will make it harder to copy a single pass.

I am kind of curious whether it would make sense to store them non-interleaved. It could help running denoising tasks, and maybe will help path tracing performance as well?
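
For illustration, here are hypothetical index helpers contrasting the two layouts discussed above (not the actual Cycles code; the `pass_stride`/`pass_offset`-style naming is just an assumption here):

```
#include <cstddef>

/* Interleaved (pixel-major) layout: all passes of a pixel are adjacent.
 * pass_stride is the number of floats per pixel, pass_offset the offset of
 * the pass within a pixel. Copying one pass means a strided gather. */
inline size_t interleaved_index(size_t pixel, size_t pass_offset, size_t pass_stride)
{
  return pixel * pass_stride + pass_offset;
}

/* Non-interleaved (planar) layout: each pass is a contiguous plane, so
 * copying one pass across devices is a single contiguous transfer. */
inline size_t planar_index(size_t pixel,
                           size_t component,
                           size_t pass_begin, /* Start of this pass's plane. */
                           size_t pass_components)
{
  return pass_begin + pixel * pass_components + component;
}
```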


@leesonw One more thing. The change to switch from overscan to filtering on a single device would need to be submitted in a separate PR.

William Leeson added 2 commits 2023-08-14 16:56:00 +02:00

I was giving the change a deeper check and look into the code.

For the performance checks I used a slightly modified Victor scene where I've hidden Victor and Frank, to force the worst scenario for the current algorithm, where the top half of the scene is trivial. I've also increased the resolution to 150% to ensure both GPUs have enough work to do.

The scalability of the current algorithm is somewhere between 3% and 10% behind the ideal scaling (as in, rendering on 2 GPUs is not really a 2x speedup, but more like 1.9x). The proposed algorithm was always within a couple of percent of the ideal scaling. And I guess the benefit only gets better with more GPUs.

The code I found a bit tricky to follow. I think the biggest issue is that I am now not really sure about the semantic meaning of fields in the `RenderBuffer`. I think once we make those clear the rest of the code will be easier to understand, modify if needed, and maintain in the future.

For the `width` and `height` I think it is fair to state that they are `/* Width/height of the allocated buffer. NOTE: It does not necessarily translate to a continuous region within a camera space. */`. Something like this, but the point being: make it clear that it is not really a "continuous region" within camera space.

The window parameters I am not sure we'll need in the longer term. Currently they only seem to be used for adaptive sampling, and to get adaptive sampling to work with such interleaved scanlines a different solution is needed, as mentioned above. If we can have a different algorithm for adaptive sampling filtering which does not require overscan and works with interleaved scanlines, that will simplify this part of the render buffer and the related scheduler logic.

The `full_{x,y}` is where things become quite fuzzy to me.
If I remember things correctly, currently for multi-GPU rendering the `full_y` is offset for every device. So in a way, RenderBuffers is configured similarly to a border render. I think this is an easy way to think about it.
But with interleaved scanlines I am not sure what those offsets intuitively mean. Do all devices have render buffers with the same offset to the full buffer, and is the way to think of it that it is an offset within the full buffer, calculated prior to slicing the result to be passed to individual devices (which would make it simple: typically, just border render information)?
This needs to be clarified in the comment of those fields.

The `slice_stride` and `slice_height` are also something I don't fully understand:

  • What exactly is a slice? How is a slice different from a "smaller" RenderBuffer (smaller width, height, offset using full_x, full_y)? In a way it feels that the extra information we need to store is more about "interleaving" rather than "slicing".
  • How is the slice_height different from height?
  • What is slice_stride measured in? When talking about a stride it typically helps to define it in terms of the number of elements between two consecutive scanlines.

Not sure I understood their meaning correctly, so the following suggestion might be a bit wrong. Anyway, if I were to define this part of the RenderBuffers, I'd do something like the following:

```
struct RenderBuffers {
  /**
   * Note on interleaved scanlines information.
   *
   * When rendering on multiple devices the full render buffer is split in a
   * way that allows scheduling its scanlines in an interleaved fashion. For
   * example, in an ideal world with a dual matched-GPU setup the first GPU
   * renders every even scanline of the full result, and the second GPU
   * renders every odd scanline.
   *
   * Since it is not uncommon to have non-matched (performance-wise) GPUs, the
   * scheduling allows rendering more scanlines on one GPU than on another.
   * This is done by scheduling more than one consecutive scanline to the
   * faster GPU.
   *
   * Interleaved example A:
   *   if the first GPU is 2x faster than the second, then
   *   the first GPU will render scanlines { 0, 1, 3, 4, 6, 7, ... } and the
   *   second GPU will render scanlines { 2, 5, 8, ... }.
   */

  /* Width/height of the physical allocated buffer.
   *
   * Note that it is not necessarily holding pixels of continuous scanlines of
   * the full render result: when rendering happens on multiple devices the
   * buffer holds the render result of interleaved scanlines. */
  int width = 0;
  int height = 0;

  /* The number of consecutive scanlines of the full render which are rendered
   * into this render buffer.
   *
   * In the "Interleaved example A" above it will be set to 2 for the render
   * buffer of the faster GPU, and to 1 for the slower GPU. */
  int num_interleaved_scanlines;

  /* The number of scanlines of the full result which are skipped from this
   * render result.
   *
   * In the "Interleaved example A" above it will be set to 1 for the render
   * buffer of the faster GPU, and to 2 for the slower GPU. */
  int num_skip_interleaved_scanlines;
};
```

Could elaborate further to make it more explicit how the scanlines are stored in the buffer itself. Also, I didn't go into an explanation of `full_x`/`full_y` as I am not yet sure what they mean for interleaved scheduling.

In any case, I hope it gives a good starting point for clarifying the RenderBuffers structure.

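As a small illustration of the mapping these suggested fields imply, here is a hypothetical helper (assuming the interleave group repeats uniformly, i.e. `num_interleaved + num_skip` gives the same total group height for every device, and treating `full_y` as the device's first scanline in the full image, which is one possible interpretation):

```
/* Map a row of a device-local buffer to the corresponding scanline of the
 * full image. Hypothetical sketch, not the actual patch code. */
int local_to_full_scanline(int local_y,
                           int num_interleaved_scanlines,
                           int num_skip_interleaved_scanlines,
                           int full_y)
{
  const int group = local_y / num_interleaved_scanlines;
  const int within_group = local_y % num_interleaved_scanlines;
  /* Scanlines covered by one full interleave group in the full image. */
  const int group_height = num_interleaved_scanlines + num_skip_interleaved_scanlines;
  return full_y + group * group_height + within_group;
}
```

In "Interleaved example A", the faster GPU would use (num_interleaved = 2, num_skip = 1, full_y = 0) and map local rows 0, 1, 2, 3 to full scanlines 0, 1, 3, 4; the slower GPU (num_interleaved = 1, num_skip = 2, full_y = 2) maps rows 0, 1, 2 to 2, 5, 8.
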
William Leeson added 1 commit 2023-08-17 12:43:33 +02:00
This pull request has changes conflicting with the target branch.
  • intern/cycles/integrator/pass_accessor_cpu.cpp
  • intern/cycles/integrator/path_trace_display.cpp

Reference: blender/blender#110348