WIP: Initial version of a single-frame job compiler #104189
I took it upon myself to create a job type for single-frame renders in Flamenco, as described in #104188. This is also my first ever Blender contribution, yay! 🎉
The submission menu looks like this:
Compositing is not implemented yet, because I'm not sure how to approach it.
The job type is mostly based on the default Simple Blender Render job compiler, although it includes quite a bit of Python, mainly for the merging logic.
Like I mentioned, this is my first time contributing, so improvements can almost certainly be made.
Title changed from Initial version of a single-frame job compiler to WIP: Initial version of a single-frame job compiler

You may find https://projects.blender.org/studio/flamenco/src/branch/flamenco-v2-server/flamenco/job_compilers/blender_render_progressive.py interesting as well. It's a completely different code base, namely of Flamenco v2, and also for a different goal (progressive video rendering). It does use Cycles sample chunks, though, and has a way to merge these chunks into a final image (which then gets merged into a preview video). Not exactly what you need, but it might give you some inspiration on how to tackle this.
Alright, unless I manage to find something functionally wrong, I think it's ready for review.
I'll test it out for a bit and I'll let you know.
Alright, I threw everything I could think of at it and couldn't find any issue. I think it's ready for review @dr.sybren
I think this approach to sample chunking is too complex. Cycles has built-in functionality for this.
You can call Blender like this:
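Roughly like this (a sketch of the old resumable-chunk invocation; the exact arguments and their placement may differ):

blender --background shot.blend \
    --render-output /render/shot_chunk3_ \
    --render-frame 100 \
    -- --cycles-resumable-num-chunks 10 \
       --cycles-resumable-current-chunk 3

The blend file, output path, frame, and chunk numbers are placeholders; Cycles then renders only the samples belonging to chunk 3 of 10, and the per-chunk results can be merged afterwards.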
That way Cycles takes care of setting up the sample counts for you. More importantly, if there are other parameters that need to be taken into account, Cycles does this internally as well.
You can see more about how this was done in Flamenco v2 in the worker implementation of the commands, most importantly the BlenderRenderProgressiveCommand and MergeProgressiveRendersCommand commands.
{ key: "chunk_size", type: "int32", default: 128, propargs: {min: 1}, description: "Number of samples to render in one Blender render task",
visible: "submission" },
{ key: "denoising", type: "bool", required: true, default: false,
description: "Toggles OpenImageDenoise" },
I think this can be moved into the hidden settings, with expr: "bpy.context.scene.cycles.use_denoising". That way the denoising is managed via the normal setting, and not yet again by Flamenco.
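For example, something along these lines in the automatically evaluated settings (just a sketch, assuming the same eval mechanism as the format and uses_compositing settings below; the exact key name and expression would need checking):

{ key: "use_denoising", type: "bool", required: true, eval: "C.scene.cycles.use_denoising", visible: "web" },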
// Automatically evaluated settings:
{ key: "blendfile", type: "string", required: true, description: "Path of the Blend file to render", visible: "web" },
{ key: "format", type: "string", required: true, eval: "C.scene.render.image_settings.file_format", visible: "web" },
{ key: "uses_compositing", type: "bool", required: true, eval: "C.scene.use_nodes and C.scene.render.use_compositing", visible: "web" },
The properties are typically named as 'imperative' ("do X", "use Y") and not 'descriptive' ("does X", "uses Y"). So this could be use_compositing.

Should I change denoising to be use_denoising, or use_compositing to be just compositing? Just to keep things consistent.

Use use_denoising and use_compositing.

Ohhh, that makes sense. Now my way of using Python for the chunking feels pretty hacky. I'll change it as soon as possible.
Are you sure these options are still available?
I dug around a bit, but I couldn't find any reference to them in any documentation or anything.
I did find the commit where the options were introduced, though.
Digging a little more, it looks like these options were removed with Cycles X:
0803119725
Wait, how the heck did this happen? I didn't mean to post this or close the PR 🤦
Can you reopen it from your side or should I start a new one?
Gitea says: "This pull request cannot be reopened because the branch was deleted."
So that's why it got closed. I guess the only thing that's left is to just create a new PR.
Good find. I'll ask the Cycles developers how to approach this with modern Blender versions.
This is what Sergey and I discussed in #blender-coders:
With the added disclaimer from Sergey that he may have missed something ;-)
k8ie referenced this pull request 2023-03-06 15:07:34 +01:00
Right. I'll list out the pros and cons of both approaches.
Tile-based splitting:
Pros:
Cons:
Sample-based splitting:
Pros:
Cons:
I'm not sure about path guiding with sample-based splitting. My guess is it would probably still work. At least I can't think of a reason why it shouldn't.
I'm honestly not sure what the best answer is here. I'm leaning a bit towards tiles, though.
I do have an idea, though: maybe we should split it into two job types (or just add an option to choose between tiles or samples) and give the user the choice instead of choosing for them.
We could also decide which method to use based on whether GPU rendering is enabled.
Although doing both approaches would add quite a bit of complexity to what is already a more complex task than I initially expected.
I think the tile-based one is the most attractive option right now. When blender/blender#92569 gets implemented we can look at sample-based approaches again.
That task is very interesting for other purposes as well, like running Flamenco on cheap cloud computers that can be shut down on very short notice (Flamenco could then signal Blender to pause the render & save its state, then shut down the machine and re-queue the task for resuming somewhere else). That's for a different PR though ;-)
blender/blender#92569 does sound really nice.
Alright, I'll start rewriting with tile-based splitting. Do you have any tips for how to approach this? Any functions or parameters Blender has that I could use?
Sergey said that it is likely to not matter much what the size is. Going for somewhat-square-ish is probably best. So for 6 chunks, 'regular' rectangular frames could be divided into 3x2 or 2x3 tiles, but for thinner ones it might be better to just do 1x6 or 6x1.
The render size should be 16 pixels bigger in each direction (top, bottom, left, and right) to make the adaptive sampling work well. These pixels can then be cropped off before stitching the result together.
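A rough sketch of how that tile maths could look (a hypothetical helper, only to illustrate the square-ish split and the 16-pixel overscan; none of these names come from the PR):

import math

def tile_layout(width, height, num_chunks, overscan=16):
    # Pick the grid whose tiles are closest to square, e.g. 3x2 for six
    # chunks on a landscape frame, or 6x1 for a very wide, thin one.
    # Only exact divisors of num_chunks are considered in this sketch.
    best = None
    for cols in range(1, num_chunks + 1):
        if num_chunks % cols:
            continue
        rows = num_chunks // cols
        squareness = abs(math.log((width / cols) / (height / rows)))
        if best is None or squareness < best[0]:
            best = (squareness, cols, rows)
    _, cols, rows = best

    # One (min_x, min_y, max_x, max_y) rectangle per tile, grown by the
    # overscan on every side (clamped to the frame) so adaptive sampling
    # has context; the overscan is cropped off again before stitching.
    tiles = []
    for row in range(rows):
        for col in range(cols):
            tiles.append((
                max(col * width // cols - overscan, 0),
                max(row * height // rows - overscan, 0),
                min((col + 1) * width // cols + overscan, width),
                min((row + 1) * height // rows + overscan, height),
            ))
    return cols, rows, tiles

# tile_layout(1920, 1080, 6) gives a 3x2 grid; tile_layout(4096, 512, 6) gives 6x1.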
The denoising has to happen all the way at the end, after the stitching. That might also need some extra data as separate layers in the EXR files, not sure about that though.
Blender can do 'border renders' where you can set a rectangle and it'll only render that. Not sure how to set that, but if there are no CLI arguments for it, I'm confident you can figure out the Python code!
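For example, from Python something along these lines should work (an untested sketch using the standard render border properties; the coordinates are just an example tile, and it could be injected via --python-expr when building the render task):

import bpy

render = bpy.context.scene.render

# Render only a sub-rectangle of the frame ("Render Region").
render.use_border = True
# Keep the full-size canvas so the tiles stay at their original positions
# for stitching; set to True instead to get a cropped output image.
render.use_crop_to_border = False

# Border coordinates are fractions of the frame (0.0 - 1.0); this would be
# the top-left tile of a 2x2 split.
render.border_min_x = 0.0
render.border_max_x = 0.5
render.border_min_y = 0.5
render.border_max_y = 1.0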
Ideally I'd like to make the tile size user-configurable.
Otherwise thanks for the tips, I'll start working on it as soon as I can ;)
Pull request closed