WIP: Initial version of a single-frame job compiler #104189

Closed
k8ie wants to merge 26 commits from k8ie/flamenco:single-frame into main

First-time contributor

I took it upon myself to create a job type for single-frame renders in Flamenco, as described in #104188. This is also my first ever Blender contribution, yay! 🎉

The submission menu looks like this:
![image](/attachments/d02e8f80-4839-47d9-afb3-007ecee2f414)

Something that is not implemented yet is compositing because I'm not sure how to approach that.

The job type is mostly based on the default Simple Blender Render job compiler, although it includes quite a bit of Python, mainly for the merging logic.

Like I mentioned, this is my first time contributing, so improvements can almost certainly be made.
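
For context on the merging logic mentioned above: conceptually the merge is just a sample-weighted average of the per-chunk renders. Below is a minimal sketch of that idea, not the actual code from this branch; the function and its arguments are made up for illustration, and it assumes each chunk image is already the plain average of its own samples.

```
import numpy as np

def merge_sample_chunks(chunks: list[np.ndarray], samples: list[int]) -> np.ndarray:
    """Combine per-chunk renders into a single image.

    Each chunk is assumed to be a float HxWxC array rendered with samples[i]
    samples; the merge is then a sample-weighted average.
    """
    if not chunks or len(chunks) != len(samples):
        raise ValueError("need exactly one sample count per chunk")
    total = sum(samples)
    merged = np.zeros_like(chunks[0], dtype=np.float64)
    for image, count in zip(chunks, samples):
        merged += image.astype(np.float64) * (count / total)
    return merged.astype(np.float32)
```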

k8ie added 2 commits 2023-02-27 21:16:07 +01:00
k8ie changed title from Initial version of a single-frame job compiler to WIP: Initial version of a single-frame job compiler 2023-02-28 11:29:38 +01:00
k8ie added 1 commit 2023-02-28 12:40:39 +01:00
k8ie added 1 commit 2023-02-28 13:04:56 +01:00

You may find https://projects.blender.org/studio/flamenco/src/branch/flamenco-v2-server/flamenco/job_compilers/blender_render_progressive.py interesting as well. It's a completely different code base, namely of Flamenco v2, and also for a different goal (progressive video rendering). It does use Cycles sample chunks, though, and has a way to merge these chunks into a final image (which then gets merged into a preview video). Not exactly what you need, but might give you some inspiration on how to tackle this.

k8ie added 1 commit 2023-02-28 18:53:26 +01:00
k8ie added 1 commit 2023-02-28 19:03:09 +01:00
Author
First-time contributor

Alright, unless I manage to find something functionally wrong, I think it's ready for review.

I'll test it out for a bit and I'll let you know.

Author
First-time contributor

Alright, I threw everything I could think of at it and couldn't find any issue. I think it's ready for review @dr.sybren

Sybren A. Stüvel requested changes 2023-03-03 12:14:42 +01:00
Sybren A. Stüvel left a comment
Owner

I think the approach to do sample chunking is too complex. Cycles has built-in functionality for this.

You can call Blender like this:

blender -a thefile.blend -- --cycles-resumable-num-chunks 10 --cycles-resumable-start-chunk 3 --cycles-resumable-end-chunk 5

That way Cycles takes care of setting up the sample counts for you. More importantly, if there are other parameters that need to be taken into account, Cycles does this internally as well.

You can see more about how this was done in Flamenco v2 in the [worker implementation of the commands](https://projects.blender.org/archive/flamenco-worker/src/branch/master/flamenco_worker/commands.py), most importantly the `BlenderRenderProgressiveCommand` and `MergeProgressiveRendersCommand` commands.
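
For illustration only, here is roughly how a job compiler could hand each worker one chunk of the samples with those flags. This is a sketch under the assumption that the Blender build still ships the resumable-render options, adapted to a single frame (`-f`) rather than an animation; the helper name is made up.

```
def chunk_render_args(blendfile: str, frame: int, num_chunks: int, chunk: int) -> list[str]:
    """Blender CLI arguments to render one sample chunk of a single frame (chunk is 1-based)."""
    if not 1 <= chunk <= num_chunks:
        raise ValueError("chunk must be in 1..num_chunks")
    return [
        "blender", "-b", blendfile, "-f", str(frame), "--",
        "--cycles-resumable-num-chunks", str(num_chunks),
        "--cycles-resumable-start-chunk", str(chunk),
        "--cycles-resumable-end-chunk", str(chunk),
    ]

# One render task per chunk, e.g. 10 chunks for frame 1:
tasks = [chunk_render_args("thefile.blend", 1, 10, i) for i in range(1, 11)]
```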

@ -0,0 +9,4 @@
{ key: "chunk_size", type: "int32", default: 128, propargs: {min: 1}, description: "Number of samples to render in one Blender render task",
visible: "submission" },
{ key: "denoising", type: "bool", required: true, default: false,
description: "Toggles OpenImageDenoise" },

I think this can be moved into the hidden settings, with `expr: "bpy.context.scene.cycles.use_denoising"`. That way the denoising is managed via the normal setting, and not yet again by Flamenco.

k8ie marked this conversation as resolved
@ -0,0 +24,4 @@
// Automatically evaluated settings:
{ key: "blendfile", type: "string", required: true, description: "Path of the Blend file to render", visible: "web" },
{ key: "format", type: "string", required: true, eval: "C.scene.render.image_settings.file_format", visible: "web" },
{ key: "uses_compositing", type: "bool", required: true, eval: "C.scene.use_nodes and C.scene.render.use_compositing", visible: "web" },

The properties are typically named as 'imperative' ("do X", "use Y") and not 'descriptive' ("does X", "uses Y"). So this could be `use_compositing`.

Author
First-time contributor

Should I change `denoising` to be `use_denoising` or `use_compositing` to be just `compositing`? Just to keep things consistent.


Use `use_denoising` and `use_compositing`.

k8ie marked this conversation as resolved
Author
First-time contributor

> Cycles has built-in functionality for this.

Ohhh, that makes sense. Now my way of using Python for the chunking feels pretty hacky. I'll change it as soon as possible.

Author
First-time contributor

> Cycles has built-in functionality for this.

Ohhh, that makes sense. Now my way of using Python for the chunking feels pretty hacky. I'll change it as soon as possible.

k8ie closed this pull request 2023-03-03 14:16:22 +01:00
Author
First-time contributor

Are you sure these options are still available?

I dug around a bit, but I couldn't find any reference to them in any documentation or anything.

I did find the [commit where the options were introduced](https://projects.blender.org/blender/blender/commit/f8b9f4e9bbc10675de9bbc3088f2841381e23f78) tho.

Author
First-time contributor

Digging a little more, it looks like these options were removed with Cycles X: https://projects.blender.org/blender/blender/commit/08031197250aeecbaca3803254e6f25b8c7b7b37

Author
First-time contributor

Wait, how the heck did this happen, I didn't mean to post [this](https://projects.blender.org/studio/flamenco/pulls/104189#issuecomment-894680) or close the PR 🤦

Can you reopen it from your side or should I start a new one?

k8ie requested review from Sybren A. Stüvel 2023-03-03 20:37:19 +01:00

Gitea says: "This pull request cannot be reopened because the branch was deleted."

So that's why it got closed. I guess the only thing that's left is to just create a new PR.

> Digging a little more, it looks like these options were removed with Cycles X. 0803119725

Good find. I'll ask the Cycles developers how to approach this with modern Blenders.


This is what Sergey and I discussed in #blender-coders:

  • Sybren: so would a tile-based approach be better? Like using border render to render 1/4th of the frame, and create 4 render tasks for it?
  • Sergey: The render border solves some things, but introduces other issues. It makes it possible to use path guiding (I think so anyway), but in order to make adaptive sampling work you need to apply overscan on the "tiles". I think 16 pixels is what we use for the similar slicing for multi-device rendering.
  • Sybren: I think that's not a problem, we can account for that.
  • Sergey: It also will surely make the shadow catcher work well, but for denoising you'll need to enable the Denoising Data passes and run the denoiser as a post-processing step.
  • Sergey: Ah, and performance on GPU might be horrible. Especially with adaptive sampling enabled: there will not be enough data to keep the GPU busy.
  • Sybren: Also when you're rendering at high resolutions? I'm thinking printing-a-poster-at-300-dpi kind of images.
  • Sergey: having a "tile" of 2k by 2k pixels should keep the GPU busy enough
  • Sergey: also, the Time Limit option cannot really be supported
  • Sybren: yeah, I can imagine that that would be tricky, given that at a constant time you'd get too jarring quality differences per tile.

With the added disclaimer from Sergey that he may have missed something ;-)

Author
First-time contributor

Right. I'll list out the pros and cons of both approaches.

Tile-based splitting:

Pros:

  • Adaptive sampling could be supported
  • Path guiding (I'm not sure, but I believe that Sergey is right)

Cons:

  • GPUs would get poor utilization with smaller tile sizes (which would be common with moderately sized images)

Sample-based splitting:

Pros:

  • Full GPU utilization because each worker renders the full image
  • Failure to render one of the tasks still results in a coherent image (albeit with fewer samples than desired)

Cons:

  • Adaptive sampling cannot be supported

I'm not sure about path guiding with sample-based splitting. My guess is it would probably still work. At least I can't think of a reason why it shouldn't.

Author
First-time contributor

I'm honestly not sure what the best answer is here. I'm leaning a bit towards tiles tho.

Author
First-time contributor

I do have an idea tho, maybe we should split it into two job types (or just add the option to choose between tiles or samples). Give the user a choice instead of choosing for them.

We could also decide which method to use based on whether GPU rendering is enabled.

Although doing both approaches would add quite a bit of complexity to what is already a more complex task than I initially expected.


I think the tile-based one is the most attractive option right now. When blender/blender#92569 gets implemented we can look at sample-based approaches again.

That task is very interesting for other purposes as well, like running Flamenco on cheap cloud computers that can be shut down on very short notice (Flamenco could then signal Blender to pause the render & save its state, then shut down the machine and re-queue the task for resuming somewhere else). That's for a different PR though ;-)

Author
First-time contributor

blender/blender#92569 does sound really nice.

Alright, I'll start rewriting with tile-based splitting. Do you have any tips for how to approach this? Any functions or parameters Blender has that I could use?


Sergey said that it is likely to not matter much what the size is. Going for somewhat-square-ish is probably best. So for 6 chunks, 'regular' rectangular frames could be divided into 3x2 or 2x3 tiles, but thinner ones might be better to just do 1x6 or 6x1.

The render size should be 16 pixels bigger in each direction (top, bottom, left, and right) to make the adaptive sampling work well. These pixels can then be cropped off before stitching the result together.

The denoising has to happen all the way at the end, after the stitching. That might also need some extra data as separate layers in the EXR files, not sure about that though.

Blender can do 'border renders' where you can set a rectangle and it'll only render that. Not sure how to set that, but if there are no CLI arguments for it I'm confident you can figure out the Python code!
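
A rough sketch of how those tips could translate into the render-side Python: pick a near-square grid, pad each tile with 16 pixels of overscan, and drive Blender's border render from bpy. The helper names are mine; the bpy properties used (render.use_border, use_crop_to_border, border_min_x, and so on) are the standard border-render settings, and the stitcher would still need to trim the overscan back off.

```
import math
import bpy

OVERSCAN = 16  # pixels of padding per tile side, so adaptive sampling behaves at the seams

def tile_grid(num_tiles: int, width: int, height: int) -> tuple[int, int]:
    """Pick a (columns, rows) layout that keeps the tiles roughly square."""
    best, best_score = (num_tiles, 1), float("inf")
    for cols in range(1, num_tiles + 1):
        if num_tiles % cols:
            continue
        rows = num_tiles // cols
        score = abs(math.log((width / cols) / (height / rows)))  # 0 for perfectly square tiles
        if score < best_score:
            best, best_score = (cols, rows), score
    return best

def set_render_border(scene, col: int, row: int, cols: int, rows: int) -> None:
    """Configure a border render for one tile, padded by OVERSCAN pixels on each side."""
    res_x, res_y = scene.render.resolution_x, scene.render.resolution_y
    x_min = max(col * res_x // cols - OVERSCAN, 0)
    x_max = min((col + 1) * res_x // cols + OVERSCAN, res_x)
    y_min = max(row * res_y // rows - OVERSCAN, 0)
    y_max = min((row + 1) * res_y // rows + OVERSCAN, res_y)

    scene.render.use_border = True
    scene.render.use_crop_to_border = True  # write just the tile, not a full-size frame
    scene.render.border_min_x = x_min / res_x
    scene.render.border_max_x = x_max / res_x
    scene.render.border_min_y = y_min / res_y
    scene.render.border_max_y = y_max / res_y
```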

Author
First-time contributor

Ideally I'd like to make the tile size user-configurable.

Otherwise thanks for the tips, I'll start working on it as soon as I can ;)


Pull request closed
