New PMJ does not converge to the same result as Sobol-Burley #101356

Closed
opened 2022-09-25 11:00:45 +02:00 by Alaska · 38 comments
Member

System Information
Operating system: Linux-5.18.0-4-amd64-x86_64-with-glibc2.34 64 Bits
Graphics card: NVIDIA GeForce RTX 3060/PCIe/SSE2 NVIDIA Corporation 4.5.0 NVIDIA 515.65.01

Blender Version
Broken: version: 3.4.0 Alpha, branch: master, commit date: 2022-09-25 05:34, hash: c9e35c2ced
Worked: Prior to the new PMJ, 50df9caef0

Short description of error
In Cycles, when rendering at high sample counts in specific scenes, the new PMJ sampler does not converge to the same result as Sobol-Burley.
From my understanding, all sampling patterns should converge to the same result given enough samples, which is why I am reporting this as an issue.

MLS OFF PMJ.png MLS OFF Sobol.png MLS_OFF_DIFF.png
New PMJ Sobol-Burley Difference between the two images x 25

Exact steps for others to reproduce the error
Render a complex scene at an extremely high sample count with the PMJ sampler in Cycles and compare it to Sobol-Burley.

The scene I personally used was this one at 500,000 samples. I'm sorry, I haven't found a simpler scene than this one.
[Bistro.blend](https://archive.blender.org/developer/F13568852/Bistro.blend)
This file uses assets from the "Amazon Lumberyard Bistro" scene: https://developer.nvidia.com/orca/amazon-lumberyard-bistro

```
title = Amazon Lumberyard Bistro, Open Research Content Archive (ORCA)
author = Amazon Lumberyard
year = 2017
month = July
url = http://developer.nvidia.com/orca/amazon-lumberyard-bistro
```

Author
Member

Added subscriber: @Alaska

Author
Member

I am primarily creating this report due to the artifacts I noticed in the Many Lights Sampling branch caused by the new Progressive Multi-Jittered sampling pattern.
If you check out the Many Lights Sampling branch (soc-2022-many-lights-sampling), you will find there are artifacts when using the new PMJ.

With a bit of work, you can create a version of the Many Lights Sampling branch with all the latest fixes, so you can test with and without the PMJ changes. Here are the steps:

  1. Checkout 50df9caef0
  2. Apply [P3212](https://archive.blender.org/developer/P3212.txt) on top. You now have a build of Many Lights Sampling on top of the commit that introduced the new PMJ sampler.
  3. Revert commit 50df9caef0 (the commit that introduced the new PMJ) and you will now be using the old PMJ sampler.
| ![MLS ON Sobol.png](https://archive.blender.org/developer/F13569214/MLS_ON_Sobol.png) | ![MLS ON PMJ.png](https://archive.blender.org/developer/F13569215/MLS_ON_PMJ.png) | ![MLS_ON_DIFF.png](https://archive.blender.org/developer/F13583872/MLS_ON_DIFF.png) |
| -- | -- | -- |
| Many Lights Sampling - Sobol-Burley | Many Lights Sampling - New PMJ | Difference between the two images x 25 (notice the diagonal stripes) |

Member

Added subscriber: @PratikPB2123

Member

Added subscriber: @cessen

Member

Hmm. If this works correctly with Sobol-Burley, then I think there's a good chance this is an issue in the PMJ sampler. I'll take a look at this the next chance I get.

But in the mean time, one thing you could check (if you have the time) is to set NUM_PMJ_DIVISIONS and NUM_PMJ_PATTERNS in kernel/types.h to something really large, and see if the issue still persists. For example, NUM_PMJ_DIVISIONS = 256 and NUM_PMJ_PATTERNS = 1024. (Note: this will make the sample tables take > 500MB of memory.) If the issue goes away or changes when you do that, then that suggests the issue is in how the PMJ sampler is re-using tables.
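For anyone wanting to try this, the change is just a pair of compile-time constants. A minimal sketch of what the edit might look like, assuming they are plain #defines in kernel/types.h and that the current defaults are the 32 divisions / 64 patterns implied by the numbers elsewhere in this thread:

```cpp
/* kernel/types.h -- illustrative sketch, not the literal diff.
 * Samples per pattern = NUM_PMJ_DIVISIONS^2, so the assumed defaults give
 * 32 * 32 = 1024 samples per pattern and 1024 * 64 = 65536 (2^16)
 * precomputed samples across all patterns combined. */
#define NUM_PMJ_DIVISIONS 256 /* assumed default: 32 */
#define NUM_PMJ_PATTERNS 1024 /* assumed default: 64 */

/* Rough memory cost of the enlarged tables (2D points, 2 x 4-byte floats):
 * 256^2 samples * 1024 patterns * 8 bytes = 512 MiB,
 * which lines up with the ">500MB" figure above. */
```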

> The scene I personally used was this one at 500,000 samples.

The PMJ sampler only precomputes enough samples to "properly" do 1024 samples per pixel, and depends on randomization and shuffling to decorrelate things after that. And even then, it ends up just working through all the sample patterns, and there are only 2^16 samples in all the patterns combined. So after 2^16 samples it's basically just re-doing the same samples over and over and relying entirely on shuffling to pair samples together differently. So that's definitely suggestive that the issue may be in how the shuffling is working.

Another possibility is that we're just running into the limits of how well that can work with as small/few tables as we use. Bumping up the constants I mentioned above will help us rule out if that's the issue as well.

Author
Member

> In #101356#1423084, @cessen wrote:
> But in the mean time, one thing you could check (if you have the time) is to set NUM_PMJ_DIVISIONS and NUM_PMJ_PATTERNS in kernel/types.h to something really large, and see if the issue still persists. For example, NUM_PMJ_DIVISIONS = 256 and NUM_PMJ_PATTERNS = 1024. If the issue goes away or changes when you do that, then that suggests the issue is in how the PMJ sampler is re-using tables.

I've tested it, and changing those two numbers improves/fixes the issue. Now PMJ is basically indistinguishable from Sobol-Burley at high sample counts, and the artifacts in the Many Lights Sampling test are now gone.

| | Sobol-Burley | PMJ | PMJ, `NUM_PMJ_DIVISIONS = 256` `NUM_PMJ_PATTERNS = 1024` | 25 x Difference between Sobol and PMJ `division = 256`, `patterns = 1024` - Now the difference just looks like noise, not artifacts |
| -- | -- | -- | -- | -- |
| Many Lights Sampling | ![MLS ON Sobol.png](https://archive.blender.org/developer/F13569214/MLS_ON_Sobol.png) | ![MLS ON PMJ.png](https://archive.blender.org/developer/F13569215/MLS_ON_PMJ.png) | ![MLS ON NEW PMJ.png](https://archive.blender.org/developer/F13578186/MLS_ON_NEW_PMJ.png) | ![MLS_ON_DIFF_IMPROVED_PMJ.png](https://archive.blender.org/developer/F13583888/MLS_ON_DIFF_IMPROVED_PMJ.png) |
| Master | ![MLS OFF Sobol.png](https://archive.blender.org/developer/F13568842/MLS_OFF_Sobol.png) | ![MLS OFF PMJ.png](https://archive.blender.org/developer/F13568840/MLS_OFF_PMJ.png) | ![MLS OFF NEW PMJ.png](https://archive.blender.org/developer/F13578184/MLS_OFF_NEW_PMJ.png) | ![MLS_OFF_DIFF_IMPROVED_PMJ.png](https://archive.blender.org/developer/F13583894/MLS_OFF_DIFF_IMPROVED_PMJ.png) |

Member

Changed status from 'Needs Triage' to: 'Needs Developer To Reproduce'

Added subscriber: @brecht

Changed status from 'Needs Developer To Reproduce' to: 'Confirmed'

I'll raise this to high priority, to ensure we look at this before the 3.4 release.

Member

I'll look at this some time this week.

@Alaska Thanks for trying that out! That definitely means the issue is in the PMJ sampler, and narrows down the possible causes within that as well.

Member

It looks like there isn't anything wrong with the sample/table shuffling. But bumping NUM_PMJ_DIVISIONS up to 64 appears to resolve the issue:

| sobol-burley | PMJ with `NUM_PMJ_DIVISIONS` = 64 |
| -- | -- |
| ![500000-pmj-4.png](https://archive.blender.org/developer/F13594189/500000-pmj-4.png) | ![500000-sobol-burley.png](https://archive.blender.org/developer/F13594188/500000-sobol-burley.png) |

So I think we were just hitting the limits of how small we could make the tables. But I only tested on master, not the many lights branch.

@Alaska Can you try just setting NUM_PMJ_DIVISIONS to 64 and test if that resolves the artifacts in the many lights branch as well?

Author
Member

> In #101356#1425420, @cessen wrote:
> @Alaska Can you try just setting NUM_PMJ_DIVISIONS to 64 and test if that resolves the artifacts in the many lights branch as well?

It doesn't resolve the artifacts in the Many Lights Sampling branch. It just makes the artifacts less visible.

| Sobol-Burley - MLS | PMJ - MLS, `NUM_PMJ_DIVISIONS = 64` | 25x difference between the two |
| -- | -- | -- |
| ![MLS On - Sobol.png](https://archive.blender.org/developer/F13595446/MLS_On_-_Sobol.png) | ![MLS On - PMJ 64.png](https://archive.blender.org/developer/F13595449/MLS_On_-_PMJ_64.png) | ![Difference betwenn Sobol and PMJ 64 MLS.png](https://archive.blender.org/developer/F13595455/Difference_betwenn_Sobol_and_PMJ_64_MLS.png) |

Member

@Alaska Thanks for checking! I think that means we just need to bump up the table size a bit more. Can you try both of the following settings and post the results here?

  • NUM_PMJ_DIVISIONS = 128, NUM_PMJ_PATTERNS = 64
  • NUM_PMJ_DIVISIONS = 64, NUM_PMJ_PATTERNS = 256

Both should result in the same total memory being taken up by the tables (8 MiB). But I'm curious if one or the other gives better results.
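As a sanity check on that 8 MiB figure, here is the arithmetic, assuming the tables store 2D points as two 32-bit floats and that samples per pattern = DIVISIONS squared:

```cpp
#include <cstddef>

/* Table footprint, assuming 2D points stored as two 32-bit floats. */
constexpr std::size_t table_bytes(std::size_t divisions, std::size_t patterns)
{
  return divisions * divisions /* samples per pattern */
         * patterns            /* number of patterns */
         * 2 * sizeof(float);  /* two floats per 2D point */
}

/* Both suggested configurations tabulate 1,048,576 points = 8 MiB. */
static_assert(table_bytes(128, 64) == 8u * 1024u * 1024u, "8 MiB");
static_assert(table_bytes(64, 256) == 8u * 1024u * 1024u, "8 MiB");
```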

Thanks so much for your help with this!

> It just makes the artifacts less visible.

On some level, that's just fundamentally an aspect of PMJ sampling, or really any sampling method that uses a fixed set of pre-computed tables for its samples. If you go beyond a certain number of samples it's technically always going to have issues as it starts reusing samples from the tables. It's just a matter of if the resulting issues are meaningful/visible or not.

Author
Member

@cessen Here are the tests for you.

| Render 1: Sobol-Burley - MLS | Render 2: PMJ - MLS, `DIVISIONS = 128`, `PATTERNS = 64` | Render 3: PMJ - MLS, `DIVISIONS = 64`, `PATTERNS = 256` | 25x difference between Render 1 and 2 | 25x difference between Render 1 and 3 |
| -- | -- | -- | -- | -- |
| ![MLS On - Sobol.png](https://archive.blender.org/developer/F13600427/MLS_On_-_Sobol.png) | ![MLS On - PMJ 128,64.png](https://archive.blender.org/developer/F13600422/MLS_On_-_PMJ_128_64.png) | ![MLS On - PMJ 64,256.png](https://archive.blender.org/developer/F13600442/MLS_On_-_PMJ_64_256.png) | ![Difference - PMJ 128,64 and Sobol.png](https://archive.blender.org/developer/F13600447/Difference_-_PMJ_128_64_and_Sobol.png) | ![Difference - PMJ 64,256 and Sobol.png](https://archive.blender.org/developer/F13600449/Difference_-_PMJ_64_256_and_Sobol.png) |

When running the images through idiff to get the difference images, I got these statistics:

| Difference PMJ - MLS, `DIVISIONS = 128`, `PATTERNS = 64` and Sobol-Burley | Difference PMJ - MLS, `DIVISIONS = 64`, `PATTERNS = 256` and Sobol-Burley |
| -- | -- |
| Mean error = 0.00268485 | Mean error = 0.0028188 |
| RMS error = 0.00383492 | RMS error = 0.0039951 |
| Peak SNR = 48.3249 | Peak SNR = 47.9694 |

I wanted to bring up something. Changing these values isn't really fixing the issue, it's just making it harder to come across. For example in the scene below with 1,000,000 samples, artifacts still occur. Would it be better to automatically adjust NUM_PMJ_DIVISIONS and NUM_PMJ_PATTERNS based on the sample count the user has picked? What about making Sobol-Burley the default sampler? These might be questions that need to be discussed with @brecht.

| Sobol-Burley | PMJ, `DIVISIONS = 32`, `PATTERNS = 64` | PMJ, `DIVISIONS = 64`, `PATTERNS = 64` | PMJ, `DIVISIONS = 128`, `PATTERNS = 64` |
| -- | -- | -- | -- |
| ![Spotlight - Sobol.png](https://archive.blender.org/developer/F13600474/Spotlight_-_Sobol.png) | ![Spotlight - PMJ 32,64.png](https://archive.blender.org/developer/F13600477/Spotlight_-_PMJ_32_64.png) | ![Spotlight - PMJ 64,64.png](https://archive.blender.org/developer/F13600480/Spotlight_-_PMJ_64_64.png) | ![Spotlight - PMJ 128,64.png](https://archive.blender.org/developer/F13600483/Spotlight_-_PMJ_128_64.png) |

Note about this scene:
In Many Lights Sampling, the importance heuristic can be very wrong in specific situations. And when not using splitting, this introduces artifacts that may require extremely high sample counts to get a "correct" result.
The scene above is one of these scenes. I've disabled splitting and set up the scene so the importance heuristic is very wrong in a certain area of the render, so that I need an extremely high sample count to get a correct result, just to make a point.
This scene is extremely unrealistic. In these cases, the user would be using splitting and this wouldn't be an issue. But I was using this scene as an example of how, in complex scenes that need really high sample counts (caustics rendering? scenes with lots of lights?), we will need to keep increasing the PMJ table size to resolve errors.

Member

@Alaska Thanks for testing! Also thanks for the error metrics.

It looks to me like DIVISIONS = 128 and PATTERNS = 64 probably makes the most sense.

> I wanted to bring up something. Changing these values isn't really fixing the issue, it's just making it harder to come across.

Right, that's what I was saying in my previous post. It's one of the drawbacks of using a pre-tabulated sampling method. There's always the possibility that I've still missed something wrong/non-optimal in the current PMJ implementation, and that we shouldn't be running into issues at these table sizes. But going over the code again and playing with things, I wasn't able to find anything, at least.

Another thing we could try would be DIVISIONS = 1024 and PATTERNS = 1 (which would also result in 8 MiB of tables). Basically, relying on the large sample count and sample shuffling alone to create effectively "independent" patterns between dimensions. That's more-or-less what the old implementation did, but with a far smaller sample count. Doing that does lose out on the true independence between dimensions that you get with multiple patterns, but that might not matter in practice.

(Also, for anyone reading along who's wondering why the old PMJ sampler didn't run into these issues: it's because it was doing Cranley-Patterson rotation, which turned these artifacts into just increased noise. You can verify that by rendering a scene that results in the issue, and then setting the Scrambling Distance Multiplier to 0.999 to trigger the use of Cranley-Patterson with the new PMJ sampler. Unfortunately, Cranley-Patterson rotation increases noise in general with samplers like PMJ, which is why we want to avoid it.)
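For readers who haven't run into it, Cranley-Patterson rotation is just a per-pixel (and per-dimension) random offset added to every sample and wrapped back into [0, 1). A minimal sketch, not Cycles' actual code:

```cpp
#include <cmath>

/* Cranley-Patterson rotation: shift a [0, 1) sample by a fixed random offset
 * and wrap around. Because the offset differs per pixel/dimension, correlated
 * structure between pixels is broken up and shows up as noise instead of
 * artifacts -- at the cost of degrading some of the stratification that makes
 * PMJ/Sobol converge quickly. */
static inline float cp_rotate(float sample, float offset)
{
  float s = sample + offset;
  return s - std::floor(s); /* wrap back into [0, 1) */
}
```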

> Would it be better to automatically adjust NUM_PMJ_DIVISIONS and NUM_PMJ_PATTERNS based on the sample count the user has picked?

I think it makes more sense to just find values that handle all the cases we're concerned about. And I think it's worth keeping in mind what sample counts people will practically be using when determining that. For example, even for still renders I've rarely seen anyone use sample counts much higher than 100,000 (unless you count reference images in research papers, which I don't think is relevant here). And for animations, you're usually looking at an order of magnitude or two lower than that.

I mean, if we can easily make it work with crazy high sample counts, let's go for it. But if we're bending over backwards and making sacrifices to achieve that, I don't (personally) think that makes much sense.

> What about making Sobol-Burley the default sampler?

Quality-wise, that's definitely the way to go. Sobol-Burley is the more "robust" sampler in most respects. But it's also a little slower than PMJ right now. I'm hoping to narrow the performance gap with SIMD and some other things, so we might be able to switch over at some point. But at the moment PMJ is both faster and equivalent quality at typical sample counts.

Another thing we could consider would be to automatically switch to Sobol-Burley at high sample counts. But that feels a little weird to me. Right now PMJ and Sobol-Burley are basically equivalent in their results, so it would be fine at the moment. But as I continue working on Sobol-Burley in the future I expect there to be some divergence.

Author
Member

@cessen Thank you for your insights and explanations.

> In #101356#1426288, @cessen wrote:
> Another thing we could try would be DIVISIONS = 1024 and PATTERNS = 1...

Testing with the Bistro Many Lights Sampling scene, minor artifacts re-appear when using these values.

> In #101356#1426288, @cessen wrote:
> > Would it be better to automatically adjust NUM_PMJ_DIVISIONS and NUM_PMJ_PATTERNS based on the sample count the user has picked?
>
> I think it makes more sense to just find values that handle all the cases we're concerned about. And I think it's worth keeping in mind what sample counts people will practically be using when determining that. For example, even for still renders I've rarely seen anyone use sample counts much higher than 100,000 (unless you count reference images in research papers, which I don't think is relevant here). And for animations, you're usually looking at an order of magnitude or two lower than that.

You're right. I'm just getting caught up on the idea of having PMJ correct at all sample counts due to how I personally use Blender and Cycles, not how it's generally used.

With many light sampling, we're currently using a single 2D pattern for sampling lights. First we use one dimension of the pattern for selecting the light, then a subset of the 2D pattern for sampling a position on a chosen light. With many lights, that's really stretching the sample out a lot.

It's probably better to use a 1D pattern for sampling the light tree, and a 2D pattern for sampling the light here. Ideally this would be a 3D pattern to ensure each chosen light still gets a stratified pattern. We could always use a 3D Sobol-Burley pattern just for light sampling regardless of the setting, and leave PMJ for the others.

There are some alternative solutions possible: increasing divisions along one dimension only, or splitting the pattern along both dimensions somehow. But it's probably not worth spending time on.

Member

> First we use one dimension of the pattern for selecting the light, then a subset of the 2D pattern for sampling a position on a chosen light.

Oh! Yeah, that sounds like it could be exacerbating things, for sure.

> We could always use a 3D Sobol-Burley pattern just for light sampling regardless of the setting, and leave PMJ for the others.

That's an interesting thought. In Psychopath I mix samplers as well: the Golden Ratio Sequence for wavelength (spectral rendering), and Sobol-Burley for everything else. Doing PMJ + Burley in Cycles, using each where appropriate, could work well.

Another possibility would be to pre-tabulate scrambled Sobol points in place of the PMJ points. All the other code would remain the same, but then we could get proper 3D samples without the perf hit of on-the-fly Sobol-Burley.

Either pre-tabulating Sobol-Burley or speeding it up with SIMD seems fine. I'd be happy to get rid of PMJ entirely.

Member

@brecht Okay, sounds good.

I would definitely like to take a crack at SIMD-accelerating Sobol-Burley. I've done it before for x86-64 CPUs. But when I started diving into it for Cycles, I discovered I was a bit out of my depth due to Cycles being a kind of polyglot code base. If you have the time to give me some guidance, I could start a patch with the beginnings of my horrible first attempt (that doesn't even compile), along with the questions I have so far.

On the other hand, it might make sense to start with switching from pre-tabulated PMJ to pre-tabulated Owen-scrambled Sobol (with 3+ dimensions). It should be a pretty easy change, shouldn't have any meaningful impact on performance, and will give us a chance to start using higher-dimensional samples where appropriate.

My general feeling here is that I'd love to do both, so we can compare afterwards and decide which to keep and which to rip out. Do you have a preference for which I tackle first?

It shouldn't be too hard to add SIMD support; you can use the `ssef` and `ssei` types. They work on Arm as well. We also use them for Perlin noise, for example in `kernel/svm/noise.h`.

My plan is to eventually replace those with float4 and int4, but right now it's fine to use them.

I don't have any preference for which to tackle first, whatever you prefer.
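To make the SIMD suggestion above concrete, here is a rough sketch of the kind of 4-wide integer work involved, e.g. hashing the same sample index for four dimensions at once. It's written with raw SSE4.1 intrinsics rather than Cycles' `ssef`/`ssei` wrappers, and the hash constants are illustrative Laine-Karras-style values, so treat it as a shape-of-the-idea sketch rather than the actual implementation:

```cpp
#include <smmintrin.h> /* SSE4.1, for _mm_mullo_epi32 */

/* Four lanes of an Owen-scramble-style hash in parallel. */
static inline __m128i scramble_hash_x4(__m128i x, __m128i seed)
{
  x = _mm_add_epi32(x, seed);
  x = _mm_xor_si128(x, _mm_mullo_epi32(x, _mm_set1_epi32((int)0x6c50b47cu)));
  x = _mm_xor_si128(x, _mm_mullo_epi32(x, _mm_set1_epi32((int)0xb82f1e52u)));
  x = _mm_xor_si128(x, _mm_mullo_epi32(x, _mm_set1_epi32((int)0xc7afe638u)));
  x = _mm_xor_si128(x, _mm_mullo_epi32(x, _mm_set1_epi32((int)0x8d22f6e6u)));
  return x;
}
```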

Member

Thanks for the tip! I'll look at noise.h as an example.

I think I'll start with moving from pre-tabulated PMJ to pre-tabulated scrambled Sobol. It's a pretty trivial change, so it should be fast. And that way both samplers can be at feature parity in terms of number of dimensions. Then we can see afterwards if I can get Sobol-Burley fast enough to make sense as the default sampler or not.

@cessen, do you expect to have time to look into this before the Blender 3.4 release (deadline for bugfixes like these is November 30)? Or shall I look into it?

Member

Thanks for the reminder Brecht! Yes, I'd be happy to take this on, and I should be available to tackle this before November 30th. Specifically, I will at least have a good chunk of time starting around November 16th. I'll bump up the priority of this on my todo list. And at the very least, I'll report back here well before the deadline if it starts to look like I won't have time.

Member

Last week I received a surprising and welcome email from Andrew Helmer, one of the authors of the most recent PMJ paper ("Stochastic Generation of (t, s) Sample Sequences"). He confirmed my suspicion that shuffled PMJ02 and shuffled-scrambled Sobol are, in fact, the same sequence.

So this means we won't be missing anything by switching over to pretabulated Owen-scrambled Sobol: the first two dimensions will be equivalent. The only difference is that with Sobol we have a natural way to extend to 3 and 4 dimensions.
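For anyone following along, the "scrambled" part both samplers share is Owen scrambling, which in practice is usually done with a hash applied in bit-reversed order so that each digit of a sample is flipped based only on the more significant digits. A hedged scalar sketch (the constants are illustrative Laine-Karras-style values, not necessarily what Cycles uses):

```cpp
#include <cstdint>

/* Reverse the bits of a 32-bit integer. */
static inline uint32_t reverse_bits(uint32_t x)
{
  x = (x << 16) | (x >> 16);
  x = ((x & 0x00ff00ffu) << 8) | ((x & 0xff00ff00u) >> 8);
  x = ((x & 0x0f0f0f0fu) << 4) | ((x & 0xf0f0f0f0u) >> 4);
  x = ((x & 0x33333333u) << 2) | ((x & 0xccccccccu) >> 2);
  x = ((x & 0x55555555u) << 1) | ((x & 0xaaaaaaaau) >> 1);
  return x;
}

/* Hash-based Owen scrambling of a sample's fixed-point bits: in bit-reversed
 * space, the add/xor-multiply steps only propagate information from low bits
 * to high bits, so after reversing back each bit is flipped based only on the
 * more significant bits -- the nested structure Owen scrambling requires. */
static inline uint32_t owen_scramble(uint32_t x, uint32_t seed)
{
  x = reverse_bits(x);
  x += seed;
  x ^= x * 0x6c50b47cu;
  x ^= x * 0xb82f1e52u;
  x ^= x * 0xc7afe638u;
  x ^= x * 0x8d22f6e6u;
  return reverse_bits(x);
}
```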

Member

I've implemented the 2D PMJ -> 4D scrambled Sobol change in [D16443](https://archive.blender.org/developer/D16443), also switching light sampling over to use 3D samples for light selection + light sampling.

@Alaska I'm not sure how much trouble it would be, but if you could test out that patch with your light tree branch to see if it actually fixes things, that would be amazing! If you don't have the time, no worries. Just let me know and I'll take a crack myself. But I figured since you already know how you're testing things, etc. it might be quicker for you to do it.

Author
Member

@cessen I've tested the patch on the Many Lights Sampling Branch and you can find my results below. But first I just wanted to share some useful information.

  1. For testing I used 0d76d746b8 as my base.
  2. After that I applied [D16443](https://archive.blender.org/developer/D16443); however, due to differences in how some things are structured, I had to manually remake some of the changes, specifically the switch to using 3D samples for light selection + light sampling. You can find the full patch here (it's mostly your patch with some tweaks): [D16462](https://archive.blender.org/developer/D16462). And you can find only the changes relevant to using three dimensions for light tree traversal and picking a light here: [D16461](https://archive.blender.org/developer/D16461)

If you take a look at [D16461](https://archive.blender.org/developer/D16461), you'll notice I am using `randw` (this is `rand_light.z`, the third dimension) for traversing down the light tree, and I'm using `randu` and `randv` like before to sample the light. I believe this is how "use 3D samples for light selection + light sampling" is meant to work for Many Lights Sampling via a light tree, but I might have gotten it wrong.

Anyway, here are the results (150,000 samples)

| PMJ with Many Lights Sampling | Sobol Tabulated using three dimensions in Many Lights Sampling (one for traversing the light tree, two for sampling the light) | Sobol Burley using three dimensions in Many Lights Sampling (one for traversing the light tree, two for sampling the light) |
| -- | -- | -- |
| ![Normal MLS.png](https://archive.blender.org/developer/F13890000/Normal_MLS.png) | ![Sobol Tabulated Using third dimension.png](https://archive.blender.org/developer/F13889162/Sobol_Tabulated_Using_third_dimension.png) | ![Sobol Burley.png](https://archive.blender.org/developer/F13889178/Sobol_Burley.png) |

The issue wasn't resolved. I'm not sure if that's an issue with how I implemented your patch or something else.

Member

@Alaska Thanks for testing!

Looking through your patch, as far as I can tell it looks like you applied it correctly. So I think this just means that the way the 2D samples are used for light selection isn't actually a contributor to the issue.

I'll play around with the light tree branch and see if I can figure out if there are any other contributing factors. But it may just come down to the size of the tables after all.

Member

I've been playing around with various different ways of shuffling/randomizing the tables, and no matter what I do the artifacts still persist. It's of course still possible that I'm missing something. But I'm increasingly suspicious that that's not the case. From the original PMJ paper:

> Although the pmj02 sequences are theoretically infinite, in the RenderMan renderer, we truncate them at 4096 samples in pre-generated tables (and start over in a different table in the rare case that more samples are required per pixel per dimension). We have a few hundred different tables, and each pixel index and ray depth is hashed to a table number.

Assuming by "a few hundred" they mean around 300, that would be roughly equivalent to DIVISIONS = 64 and PATTERNS = 256 in our implementation. And it seems likely they didn't hit upon those numbers by accident. So that suggests 64 patterns just isn't enough to guard against artifacts in complex scenes.

Bumping up to DIVISIONS = 64 and PATTERNS = 256 also isn't prohibitive with our new table generation code. We're now using the technique from "Stochastic Generation of (t, s) Sample Sequences", and they report speeds of a little over 200 million samples / second, scaling linearly with the number of samples. 4096 * 256 = 1048576, so we're talking around 1/100th of a second to generate our tables, even with DIVISIONS = 64, PATTERNS = 256. Moreover, our code is already multi-threaded to generate each pattern on a different thread, so it's actually even faster with that.

The only thing to watch out for is the size of the tables in memory. But even with the new 4D sampling patch, DIVISIONS = 64, PATTERNS = 256 works out to ~16 MB. Which I don't imagine is an issue even on fairly old hardware.
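As a quick check of that figure, assuming 4D points stored as four 32-bit floats:

```cpp
/* 64^2 samples per pattern * 256 patterns = 1,048,576 points.
 * At 16 bytes per 4D point (4 x float):
 * 1,048,576 * 16 = 16,777,216 bytes = 16 MiB. */
constexpr unsigned long long pmj_table_bytes_4d = 64ull * 64 * 256 * 4 * sizeof(float);
static_assert(pmj_table_bytes_4d == 16ull * 1024 * 1024, "16 MiB");
```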

So this is all to say, I think it's turning out that the fix is indeed to just bump up the size of our tables after all.

Maybe we can use the bigger pattern only if the (max) samples are set to a high number, to avoid the memory usage increase for the more common case?

Member

I think that's a reasonable idea. However, in my own testing it doesn't actually require the huge sample count that @Alaska was using for the artifacts to manifest. Even at around 8000 samples you can make out the artifacts. It's just not as obvious because it's competing with the render noise.

Because of that, I'm suspecting the issue is more about light transport complexity (maybe number of bounces + occlusion, but I'm not sure) exhausting the dimensions (patterns). Taking a lot of samples certainly manifests the issue more obviously, especially when you exceed the number of samples in a single pattern. But I'm suspecting that the issue is always kind of there without enough patterns.

So I think what makes the most sense is to fix the number of patterns at something high (e.g. 256), and adapt the number of samples per pattern. We could have a minimum of 256, and a max of maybe 8192. Or something like that. And increment the generated samples in powers of two to always remain >= the specified samples per pixel (up to the max).
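A minimal sketch of that sizing rule, using the bounds floated above (names and numbers are illustrative, nothing here is final):

```cpp
/* Keep the pattern count fixed (e.g. 256) and round the per-pattern sample
 * count up to the next power of two that covers the requested samples per
 * pixel, clamped to [256, 8192]. */
static int pmj_samples_per_pattern(int samples_per_pixel)
{
  const int min_samples = 256;
  const int max_samples = 8192;
  int n = min_samples;
  while (n < samples_per_pixel && n < max_samples) {
    n *= 2;
  }
  return n;
}
```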

We can also ditch the DIVISIONS constant at this point in favor of directly specifying the number of samples, because the new generation code can generate arbitrary numbers of samples. We'll still want to keep it to powers of two for the sample shuffling code, but that's about it.

Does that sound good? If so, I can start on a patch right away.

That sounds good.

Member

Awesome! However, I goofed a little in terms of my schedule--I won't be able to start right away. I'm working as the VFX supervisor on a film shoot this week (Mon-Fri). I thought I was going to be available in the evenings to work on Cycles, but that doesn't seem to be the case.

I can start work on this on Saturday. If that's cutting it too close, you're more than welcome to do the patch yourself. But if Saturday is fine, I'll get started then.

Member

@brecht I've implemented the pattern count increase and dynamic pattern sizing in [D16561](https://archive.blender.org/developer/D16561).

I've done that separately from [D16443](https://archive.blender.org/developer/D16443) because the latter doesn't actually fix the issue; it's more of a feature, so it should probably(?) wait for the next release at this point.

This issue was referenced by 03b5be4e3c

Changed status from 'Confirmed' to: 'Resolved'

Brecht Van Lommel self-assigned this 2022-11-21 19:20:29 +01:00

Thanks for the patches. Indeed I think [D16443](https://archive.blender.org/developer/D16443) should land in master instead.
