Speedup classic Kuwahara filter by summed area table #111150

Merged
Habib Gahbiche merged 30 commits from zazizizou/blender:com-kuwahara-sat into main 2023-11-01 10:49:18 +01:00
Member

Implemented SAT for CPU.
Pros:

  • Filter runtime becomes independent of filter size
  • Up to 30x faster for 4k images

Cons:

  • Loss of precision, because the sums over the whole image are stored as floats. The following two images show the effect of this loss of precision on the results. Multi-threading yields results comparable to the 2Sum method for filter size > 4.

picture: https://i.ibb.co/s1Lj9j5/Naive-vs-sat.jpg
The following image shows the different methods divided by the naive implementation (black is perfect; color/white is deviation from perfect):
picture: https://i.ibb.co/qFqS8f0/naive-vs-sat-div.jpg

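For context, a summed area table (SAT) stores at each pixel the sum over the rectangle from the origin to that pixel, so the sum over any axis-aligned window reduces to four lookups — which is why the runtime becomes independent of the filter size. A minimal single-threaded sketch (illustrative standalone helpers, not the patch code; double is used here for clarity, whereas the patch stores float, which is exactly where the precision loss above comes from):

```cpp
#include <cstddef>
#include <vector>

/* Build a SAT where sat[y * width + x] holds the sum over the inclusive
 * rectangle [0, x] x [0, y]. */
static std::vector<double> build_sat(const std::vector<float> &image, int width, int height)
{
  std::vector<double> sat(std::size_t(width) * height, 0.0);
  for (int y = 0; y < height; y++) {
    double row_sum = 0.0;
    for (int x = 0; x < width; x++) {
      row_sum += image[y * width + x];
      sat[y * width + x] = row_sum + (y > 0 ? sat[(y - 1) * width + x] : 0.0);
    }
  }
  return sat;
}

/* Sum over the inclusive rectangle [x0, x1] x [y0, y1]: four lookups, so the
 * cost per window is O(1) no matter how large the filter size is. */
static double rect_sum(const std::vector<double> &sat, int width, int x0, int y0, int x1, int y1)
{
  double sum = sat[y1 * width + x1];
  if (x0 > 0) sum -= sat[y1 * width + (x0 - 1)];
  if (y0 > 0) sum -= sat[(y0 - 1) * width + x1];
  if (x0 > 0 && y0 > 0) sum += sat[(y0 - 1) * width + (x0 - 1)];
  return sum;
}
```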
Habib Gahbiche added 2 commits 2023-08-15 22:09:48 +02:00
Habib Gahbiche added 3 commits 2023-08-20 10:45:50 +02:00
Habib Gahbiche reviewed 2023-08-22 13:01:36 +02:00
@ -0,0 +207,4 @@
ASSERT_EQ(sum[0], 4);
}
TEST_F(SummedTableAreaSumTest, RightLine)
Author
Member

@OmarEmaraDev such cases (an area of one line or column at the image border) yield wrong results with the SAT implementation ported from the GPU.

I don't think it makes sense to support this for the Kuwahara filter, because the filter size must be at least 4. Do you agree? If so, I will add a BLI_assert to only allow areas larger than 2x2.

Member

I feel like this should work. Is it wrong in the GPU implementation as well?

Author
Member

I looked into the GPU implementation. The problem was the definition of "area" for the SAT. In my implementation I assumed the last element of the area is not inclusive (the same way image[height] is out of bounds).

This is just a matter of definition though, so I will keep the implementation and update the tests.

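To illustrate the definitional point (an illustrative standalone helper, not the patch code):

```cpp
/* Sum over the half-open area [x0, x1) x [y0, y1). Under this convention
 * x1 == width is valid, the same way image[height] is one past the last row;
 * the GPU code instead treats the upper corner as the last *included* pixel,
 * so the two conventions agree only after shifting the corner by one. */
static float sat_sum_exclusive(const float *sat, int width, int x0, int y0, int x1, int y1)
{
  const int xi = x1 - 1; /* last column actually included */
  const int yi = y1 - 1; /* last row actually included */
  float sum = sat[yi * width + xi];
  if (x0 > 0) sum -= sat[yi * width + (x0 - 1)];
  if (y0 > 0) sum -= sat[(y0 - 1) * width + xi];
  if (x0 > 0 && y0 > 0) sum += sat[(y0 - 1) * width + (x0 - 1)];
  return sum;
}
```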
zazizizou marked this conversation as resolved
Habib Gahbiche added 1 commit 2023-09-03 12:52:41 +02:00
Habib Gahbiche added 3 commits 2023-09-03 23:17:14 +02:00
Author
Member

@blender-bot build

Habib Gahbiche added 3 commits 2023-09-05 19:01:45 +02:00
Author
Member

@blender-bot build

Habib Gahbiche added 2 commits 2023-09-05 20:39:49 +02:00
Author
Member

@blender-bot build

Habib Gahbiche added 1 commit 2023-09-09 11:32:18 +02:00
Habib Gahbiche added 1 commit 2023-09-09 12:25:00 +02:00
Habib Gahbiche added 1 commit 2023-09-09 13:34:35 +02:00
Author
Member

@blender-bot build

Habib Gahbiche requested review from Sergey Sharybin 2023-09-09 13:50:01 +02:00
Habib Gahbiche requested review from Omar Emara 2023-09-09 13:50:02 +02:00
Habib Gahbiche changed title from WIP: Speedup classic Kuwahara filter by summed area table to Speedup classic Kuwahara filter by summed area table 2023-09-09 13:50:13 +02:00
Habib Gahbiche added 1 commit 2023-09-09 21:16:35 +02:00
Author
Member

@blender-bot build

Omar Emara requested changes 2023-09-11 09:57:15 +02:00
Omar Emara left a comment
Member

The High Precision option should be used in the early return of the ConvertKuwaharaOperation::execute_classic method.

@ -32,0 +58,4 @@
* Note: best results are achieved using this optimization as well as the running error
* compensation in SummedAreaTableOperation.
*/
CalculateMeanOperation *mean = new CalculateMeanOperation();
Member

I feel like we can just subtract 0.5 here and not the actual mean, since it will be faster and I suspect it will achieve the desired functionality, if not do it better.

If we have an image with mostly values in the [0, 1] range and a region of very large values as highlights, the mean will be skewed to the higher end. And subtracting the mean from this higher-value region will not be beneficial anyway.

Author
Member

As discussed, I tried this and it didn't work. The reason is that small differences of 0.2-0.3 (2x-3x the actual mean) cause very large differences in the squared SAT, making it asymmetric and therefore cancelling all benefits.

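For context on why the shift helps at all (a summary of the math, not text from the patch): variance is shift-invariant, so subtracting any constant c before building the two SATs leaves the result unchanged on paper:

```latex
\operatorname{Var}(x) \;=\; \mathbb{E}\!\left[(x - c)^2\right] - \left(\mathbb{E}[x] - c\right)^2
\qquad \text{for any constant } c
```

In float arithmetic, though, choosing c close to the true mean minimizes the magnitude of the squared values accumulated in the SAT, which is where the rounding error builds up; a fixed c = 0.5 that lands at 2x-3x the actual mean makes the squared values large again, which is why the benefit disappears.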
@ -172,0 +223,4 @@
int xx = x + dx;
int yy = y + dy;
if (xx >= 0 && yy >= 0 && xx < image->get_width() && yy < image->get_height()) {
Member

Reverse condition and continue to reduce indentation.

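The suggested refactor, sketched with the names from the quoted hunk:

```cpp
const int xx = x + dx;
const int yy = y + dy;
/* Reversed condition: reject out-of-bounds pixels early and continue, so the
 * loop body below no longer needs an extra indentation level. */
if (xx < 0 || yy < 0 || xx >= image->get_width() || yy >= image->get_height()) {
  continue;
}
/* ... accumulate pixel (xx, yy) as before ... */
```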
zazizizou marked this conversation as resolved
@ -0,0 +123,4 @@
/* Track floating point error. See below. */
float4 running_compensation = {0.0f, 0.0f, 0.0f, 0.0f};
for (BuffersIterator<float> it = result->iterate_with({}, *rect); !it.is_end(); ++it) {
Member

I think we should attempt to multithread the SAT computation. Not sure if there is anything stopping us from doing that, but a two-pass prefix sum should be easy to implement and efficient to parallelize on the CPU.

| Image | -> Prefix sum from left to right -> | Horizontal Pass Result | -> Prefix sum from bottom to top -> | Desired SAT |

Each of the prefix sums can simply be a parallel loop over rows/columns.

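A minimal sketch of that two-pass scheme, reusing the threading::parallel_for / IndexRange / get_elem idioms that appear in the quoted snippets (width, height, and the image/output buffers are assumed from the surrounding operation; the real implementation additionally carries the Kahan-style running compensation shown in the hunk above):

```cpp
/* Horizontal pass: every row independently becomes its own left-to-right
 * prefix sum, so the outer loop parallelizes trivially. Copying from the
 * input (and squaring, for the squared SAT) can be fused into this pass. */
threading::parallel_for(IndexRange(0, height), 1, [&](const IndexRange sub_y_range) {
  for (const int y : sub_y_range) {
    float4 accumulated = float4(0.0f);
    for (const int x : IndexRange(0, width)) {
      accumulated += float4(image->get_elem(x, y));
      copy_v4_v4(output->get_elem(x, y), accumulated);
    }
  }
});

/* Vertical pass: every column independently becomes a bottom-to-top prefix
 * sum of the horizontal result, parallelized over columns the same way. */
threading::parallel_for(IndexRange(0, width), 1, [&](const IndexRange sub_x_range) {
  for (const int x : sub_x_range) {
    float4 accumulated = float4(0.0f);
    for (const int y : IndexRange(0, height)) {
      accumulated += float4(output->get_elem(x, y));
      copy_v4_v4(output->get_elem(x, y), accumulated);
    }
  }
});
```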
Author
Member

As discussed in the meeting, my concern was using SingleThreadedOperation for a multi-threaded execution. I will upload a patch using TBB.

zazizizou marked this conversation as resolved
@ -8631,2 +8631,4 @@
"Controls how directional the filter is. 0 means the filter is completely omnidirectional "
"while 2 means it is maximally directed along the edges of the image");
prop = RNA_def_property(srna, "fast", PROP_BOOLEAN, PROP_NONE);
Member

I think this should be called High Precision, as it conveys the meaning better to the user.

Author
Member

Option will be removed.

zazizizou marked this conversation as resolved
@ -8633,0 +8634,4 @@
prop = RNA_def_property(srna, "fast", PROP_BOOLEAN, PROP_NONE);
RNA_def_property_boolean_sdna(prop, nullptr, "fast", 1);
RNA_def_property_ui_text(
prop, "Fast", "Use faster computation. Might produce artefacts for large images.");
Member

Extra period at the end of description.

zazizizou marked this conversation as resolved

I am not entirely sold on the idea of having it as an option, let alone disabled by default.

From my understanding, the bigger the resolution, the less accurate the result is. Testing with 4K images here, there is surely a difference compared with the ground truth, but I do not think those are deal breakers. And if this does become a deal breaker for bigger images, I don't think a 30x slowdown will be an acceptable trade-off for artists. So from the user perspective it does not seem to be a practical option.

For development purposes it could be interesting to have a ground-truth implementation, but there are better ways of doing so:

  • The simplest one is to enable the option by default
  • We can also follow what Cycles does and only expose and honor it when developer options are enabled (this is how Cycles exposes all the fine-tuning knobs in its Debug panel).

Thoughts?

P.S. On a functional level the fast method is so much more fun now :)

Member

There are a number of points I would like to note:

  • I believe we can solve the precision issues of the SAT implementation while maintaining a similar level of performance. Therefore, I suspect the option would only be temporary until we implement that improvement.
  • The 30x slowdown is only for the single-threaded version. Compared to a multithreaded version, the slowdown would be orders of magnitude higher. :)
  • The nature of the filter lends itself well to a downsample-filter-upsample strategy, so artists can use that if precision is still an issue.

While I initially proposed the option to resolve the SAT artifacts, it seems the offset SAT implementation is now satisfactory enough, so the option might not be strictly necessary. Either way, I think the SAT should definitely be the default if we decide to have the option.

Habib Gahbiche added 4 commits 2023-09-23 14:35:23 +02:00
Author
Member

@blender-bot build

Author
Member

Using TBB was not much faster than OpenMP, but multithreading in general helped reduce the error (see updated images in the description).

As agreed, I removed the fast option. The filter still uses the naive implementation for kernel size < 4, which is around where SAT becomes accurate and fast enough.

Omar Emara requested changes 2023-09-25 12:36:15 +02:00
@ -32,0 +35,4 @@
kuwahara_classic->set_use_sat(false);
}
else {
SummedAreaTableOperation *sat = new SummedAreaTableOperation();
Member

Add a kuwahara_classic->set_use_sat(true); just for clarity.

zazizizou marked this conversation as resolved
@ -0,0 +64,4 @@
MemoryBuffer *image = inputs[0];
/* First pass: copy values from input to output and square values if necessary. */
threading::parallel_for(IndexRange(area.ymin, area.ymax), 1, [&](const IndexRange range_y) {
Member

It is sufficient to have a single parallel loop over rows and a serial loop over columns; too much parallelism will hurt performance.

This copy loop can be fused with the horizontal pass.

zazizizou marked this conversation as resolved
@ -0,0 +102,4 @@
threading::parallel_for(IndexRange(area.ymin, area.ymax), 1, [&](const IndexRange range_y) {
for (int64_t y = *range_y.begin(); y < *range_y.end(); y++) {
/* Track floating point error. See below. */
float4 running_compensation = {0.0f, 0.0f, 0.0f, 0.0f};
Member

This can be more compact.

  • Use a for-each loop over ranges: for (const int y : sub_y_range) {
  • Accumulate a color instead of reading the previous output.
  • Use the get_elem function.
  • Use a copy function.
  threading::parallel_for(IndexRange(area.ymin, area.ymax), 1, [&](const IndexRange sub_y_range) {
    for (const int y : sub_y_range) {
      float4 accumulated_color = float4(0.0f);
      for (const int x : IndexRange(area.xmin, area.xmax)) {
        const float4 color = float4(image->get_elem(x, y));
        accumulated_color += color * color;
        copy_v4_v4(output->get_elem(x, y), accumulated_color);
      }
    }
  });
Author
Member

Will do, thanks for the tip :)

zazizizou marked this conversation as resolved
Habib Gahbiche added 3 commits 2023-10-22 19:43:42 +02:00
689f5fde28 Merge remote-tracking branch 'origin/main' into com-kuwahara-sat
Conflicts:
	source/blender/compositor/nodes/COM_KuwaharaNode.cc
	source/blender/compositor/operations/COM_KuwaharaClassicOperation.cc
	source/blender/compositor/operations/COM_KuwaharaClassicOperation.h
Author
Member

Some findings from profiling:

  • The multithreaded SAT implementation is 30-40% faster than the single-threaded SAT. On my machine, full-frame is now 3-4x slower than the GPU
  • The SAT operation is the bottleneck (as expected)
  • CPU usage is at around 80%. I experimented with different grain sizes but didn't observe any significant differences.
Sergey Sharybin reviewed 2023-10-23 09:52:58 +02:00
Sergey Sharybin left a comment
Owner

Are there locks down the road in get_elem, perhaps, which get in the way of better parallelism?

If the patch is algorithmically correct, then I would suggest moving forward with it. That makes it easier to look deeper into performance while bringing artists the benefits early on.

@ -0,0 +130,4 @@
area.ymin = 1;
area.ymax = 3;
float4 sum = summed_area_table_sum(sat_.get(), area);
ASSERT_EQ(sum[0], 9);

Any specific reason to use ASSERT_EQ instead of EXPECT_EQ? The ASSERT will stop the test. It is typically used for cases where continuing the rest of the test would be impossible, for example when you expect a function to give you a pointer to an object and you check that it is non-nullptr before looking into its properties.

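A quick illustration of the difference, as a hypothetical test body (not one from the patch), in the context of the existing test file:

```cpp
TEST(SummedAreaTableSumTest, AssertVsExpect)
{
  const float sum[2] = {9.0f, 9.0f};
  EXPECT_EQ(sum[0], 9.0f);           /* On failure: recorded, test keeps running. */
  ASSERT_EQ(sum[1], 9.0f);           /* On failure: recorded, test aborts here. */
  EXPECT_EQ(sum[0] + sum[1], 18.0f); /* Never reached if the ASSERT above failed. */
}
```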
Author
Member

No specific reason, but it doesn't make a difference here because there is a single assert per test. I can update it in a later patch for clarity.

zazizizou marked this conversation as resolved
Author
Member

@blender-bot build

@blender-bot build
Author
Member

Are there locks down the road in get_elem, perhaps, which get in the way of better parallelism?

It looks like memory access is the bottleneck. read_elem_checked is the function causing the most waits, especially in summed_area_table_sum().

Omar Emara requested changes 2023-10-27 16:34:40 +02:00
@ -7,11 +7,13 @@
#include "COM_KuwaharaNode.h"
#include "COM_CalculateMeanOperation.h"
Member

Unnecessary include.

zazizizou marked this conversation as resolved
@ -0,0 +13,4 @@
SummedAreaTableOperation::SummedAreaTableOperation()
{
this->add_input_socket(DataType::Color);
this->add_input_socket(DataType::Value);
Member

What is this Value input?

Author
Member

This was needed to subtract the mean from image. Not needed anymore, will remove.

OmarEmaraDev marked this conversation as resolved
@ -0,0 +79,4 @@
threading::parallel_for(IndexRange(area.xmin, area.xmax), 1, [&](const IndexRange range_x) {
for (const int x : range_x) {
for (const int y : IndexRange(area.ymin, area.ymax)) {
float4 color;
Member

Use a temporary accumulated_color variable and avoid reading the buffer again just like the above loop. Then, use get_elem instead of read_elem_checked.

zazizizou marked this conversation as resolved
@ -0,0 +101,4 @@
for (const int x : IndexRange(area->xmin, area->xmax)) {
float4 color;
image_reader_->read_sampled(&color.x, x, y, sampler);
Member

Use read instead of read_sampled. Same applies for all read_sampled calls below.

zazizizou marked this conversation as resolved
@ -0,0 +113,4 @@
for (const int x : range_x) {
for (const int y : IndexRange(area->ymin, area->ymax)) {
float4 color;
output->read_elem_checked(x, y - 1, &color.x);
Member

Same as above.

zazizizou marked this conversation as resolved
@ -0,0 +163,4 @@
float4 a, b, c, d, addend, substrahend;
buffer->read_sampled(
&a.x, corrected_upper_bound[0], corrected_upper_bound[1], PixelSampler::Nearest);
Member

Use UNPACK2.

zazizizou marked this conversation as resolved
@ -0,0 +9,4 @@
namespace blender::compositor {
/**
* \brief base class of CalculateMean, implementing the simple CalculateMean
Member

Update comment.

zazizizou marked this conversation as resolved
Habib Gahbiche added 2 commits 2023-10-29 12:12:41 +01:00
Habib Gahbiche added this to the Compositing project 2023-10-29 12:13:09 +01:00
Omar Emara approved these changes 2023-10-30 09:58:03 +01:00
Sergey Sharybin requested changes 2023-10-30 11:32:50 +01:00
Sergey Sharybin left a comment
Owner

The latest update Improve performance by 5-10% by using fewer buffer reads seems to have broken the filter. The result is all empty, and the following tests are failing:

         51 - bf_compositor_tests (Failed)
        145 - compositor_filter_cpu_test (Failed)

Nailed it down a bit more. The issue is in the change:

-  buffer->read_sampled(
-      &b.x, corrected_lower_bound[0], corrected_upper_bound[1], PixelSampler::Nearest);
-  buffer->read_sampled(
-      &c.x, corrected_upper_bound[0], corrected_lower_bound[1], PixelSampler::Nearest);
+  buffer->read_sampled(&b.x, UNPACK2(corrected_lower_bound), PixelSampler::Nearest);
+  buffer->read_sampled(&c.x, UNPACK2(corrected_upper_bound), PixelSampler::Nearest);

Note how corrected_lower_bound[0], corrected_upper_bound[1] is replaced (after macro expansion) with corrected_lower_bound[0], corrected_lower_bound[1]. There seems to be another chunk in the patch which leads to the same issue.

On a related topic, we should not be using UNPACK2 in new code. Instead, overload read_sampled to accept a const int2 &pos and do the expansion to the individual x, y there (explicitly, without use of the macro).

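A sketch of the suggested overload, assuming MemoryBuffer's existing read_sampled(float *, int, int, PixelSampler) signature from the quoted hunks (not actual committed code):

```cpp
/* Overload accepting a packed position. Expanding to the individual x and y
 * components happens here, explicitly, so call sites cannot silently mix
 * components from two different vectors the way the UNPACK2 macro allowed. */
void MemoryBuffer::read_sampled(float *result, const int2 &pos, PixelSampler sampler)
{
  this->read_sampled(result, pos.x, pos.y, sampler);
}
```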
Author
Member

Thanks @Sergey, really need more discipline to run tests for every single commit...

Habib Gahbiche added 1 commit 2023-10-30 18:46:16 +01:00
buildbot/vexp-code-patch-coordinator Build done. Details
2afa0f2041
Revert using UNPACK2
Author
Member

@blender-bot build

Habib Gahbiche added 2 commits 2023-10-31 13:03:20 +01:00
Author
Member

@blender-bot build

Sergey Sharybin approved these changes 2023-11-01 10:28:17 +01:00
Habib Gahbiche merged commit 021109e633 into main 2023-11-01 10:49:18 +01:00
Habib Gahbiche deleted branch com-kuwahara-sat 2023-11-01 10:49:19 +01:00
Author
Member

@OmarEmaraDev @Sergey thank you for the quick and helpful review!

Sergey Sharybin removed this from the Compositing project 2023-11-01 10:51:34 +01:00
Reference: blender/blender#111150