BLI: refactor IndexMask for better performance and memory usage #104629

Merged
Jacques Lucke merged 254 commits from JacquesLucke/blender:index-mask-refactor into main 2023-05-24 18:11:47 +02:00
Member

Goals of this refactor:

  • Reduce memory consumption of IndexMask. The old IndexMask uses an int64_t for each index which is more than necessary in pretty much all practical cases currently. I still wouldn't want to simply reduce the size to int32_t because that could become limiting in the future in case we use this to index e.g. byte buffers larger than a few gigabytes. I also don't want to template IndexMask, because that would cause a split in the "ecosystem", or everything would have to be implemented twice or templated.
  • Allow for more multi-threading. The old IndexMask contains a single array. This is generally good but has the problem that it is hard to fill from multiple threads when the final size is not known from the beginning. This is commonly the case when e.g. converting an array of bool to an index mask. Currently, this kind of code only runs on a single thread.
  • Allow for efficient set operations like join, intersect and difference. It should be possible to multi-thread those operations.
  • It should be possible to iterate over an IndexMask very efficiently. The most important part of that is to avoid all memory access when iterating over continuous ranges. For some core nodes (e.g. math nodes), we generate optimized code for the cases of irregular index masks and simple index ranges.

To achieve these goals, a few compromises had to be made:

  • Slicing of the mask (at specific indices) and random element access is O(log #indices) now, but with a low constant factor. It should be possible to split a mask into n approximately equally sized parts in O(n) though, making the time per split O(1).
  • Using range-based for loops does not work well when iterating over a nested data structure like the new IndexMask. Therefore, foreach_* functions with callbacks have to be used. To avoid extra code complexity at the call site, the foreach_* methods support multi-threading out of the box.

The new data structure splits an IndexMask into an arbitrary number of ordered IndexMaskSegments. Each segment can contain at most 2^14 = 16384 indices. The indices within a segment are stored as int16_t. Each segment has an additional int64_t offset which allows storing arbitrary int64_t indices. The main benefit of this approach is that segments can be processed and constructed individually on multiple threads without a serial bottleneck. It also reduces the memory requirements significantly.

For more details see comments in BLI_index_mask.hh.

I did a few tests to verify that the data structure generally improves performance and does not cause regressions:

  • Our field evaluation benchmarks take about as much time as before. This is to be expected because we already made sure that e.g. add node evaluation is vectorized. The important thing here is to check that the changes to the way we iterate over the indices still allow for auto-vectorization.
  • Memory usage by a mask is about 1/4 of what it was before in the average case. That's mainly caused by the switch from int64_t to int16_t for indices. In the worst case, the memory requirements can be larger when the indices are very far apart. However, when they are far apart, that indicates that there aren't many indices in total. In common cases, memory usage can be far lower than 1/4 of before, because sub-ranges use static memory.
  • I measured possible performance improvements by benchmarking IndexMask::from_bools in index_mask_from_selection on 10,000,000 elements at various probabilities for true at every index:
    Probability      Old        New
    0              4.6 ms     0.8 ms 
    0.001          5.1 ms     1.3 ms
    0.2            8.4 ms     1.8 ms
    0.5           15.3 ms     3.0 ms
    0.8           20.1 ms     3.0 ms
    0.999         25.1 ms     1.7 ms
    1             13.5 ms     1.1 ms
    
Jacques Lucke added 28 commits 2023-02-11 20:06:31 +01:00
Jacques Lucke added 1 commit 2023-02-11 20:31:41 +01:00
Jacques Lucke added 12 commits 2023-02-12 18:02:14 +01:00
Jacques Lucke added 2 commits 2023-02-17 00:35:13 +01:00
Jacques Lucke added 2 commits 2023-02-17 00:41:24 +01:00
Jacques Lucke added 4 commits 2023-02-17 01:05:45 +01:00
Jacques Lucke added 4 commits 2023-02-17 01:30:03 +01:00
Jacques Lucke added 12 commits 2023-02-17 12:39:57 +01:00
Jacques Lucke added 2 commits 2023-02-26 16:34:22 +01:00
Jacques Lucke added 3 commits 2023-02-26 17:49:56 +01:00
Jacques Lucke added 2 commits 2023-02-26 19:12:47 +01:00
Jacques Lucke added 1 commit 2023-02-26 19:57:49 +01:00
Jacques Lucke added 1 commit 2023-02-26 20:24:58 +01:00
Jacques Lucke added 1 commit 2023-02-26 20:28:27 +01:00
buildbot/vexp-code-patch-coordinator Build done. Details
e734450226
remove type alias
it didn't always make things shorter but made things more obscure
Author
Member

@blender-bot build

Jacques Lucke added 1 commit 2023-02-26 20:55:35 +01:00
buildbot/vexp-code-patch-coordinator Build done. Details
777f30a148
cleanup
Author
Member

@blender-bot build

Jacques Lucke added 3 commits 2023-02-27 14:39:17 +01:00
Jacques Lucke added 1 commit 2023-02-27 14:55:00 +01:00
Jacques Lucke added 1 commit 2023-02-28 19:39:39 +01:00
Jacques Lucke added 3 commits 2023-03-05 11:41:06 +01:00
Jacques Lucke added 1 commit 2023-03-05 11:52:17 +01:00
Jacques Lucke added 1 commit 2023-03-05 12:21:02 +01:00
Jacques Lucke added 8 commits 2023-03-19 08:10:26 +01:00
Jacques Lucke added 31 commits 2023-03-19 08:44:43 +01:00
Jacques Lucke added 1 commit 2023-03-19 09:05:58 +01:00
Jacques Lucke added 3 commits 2023-03-20 18:24:43 +01:00
Jacques Lucke added 3 commits 2023-03-20 19:49:07 +01:00
Hans Goudey added 2 commits 2023-03-21 12:14:40 +01:00
Jacques Lucke added 8 commits 2023-03-21 12:20:06 +01:00
Jacques Lucke added 1 commit 2023-03-21 12:51:02 +01:00
Jacques Lucke added 1 commit 2023-03-22 11:40:16 +01:00
Jacques Lucke added this to the Nodes & Physics project 2023-03-22 12:35:08 +01:00
Hans Goudey added 2 commits 2023-03-22 23:10:58 +01:00
Hans Goudey added 4 commits 2023-03-25 18:31:40 +01:00
Hans Goudey added 1 commit 2023-03-29 19:55:17 +02:00
Hans Goudey added 1 commit 2023-03-31 23:19:59 +02:00
Hans Goudey added 1 commit 2023-04-01 03:47:05 +02:00
Hans Goudey added 2 commits 2023-04-25 16:39:25 +02:00
Jacques Lucke added 12 commits 2023-05-12 18:06:58 +02:00
Jacques Lucke added 1 commit 2023-05-12 19:04:41 +02:00
Jacques Lucke added 1 commit 2023-05-14 12:25:54 +02:00
Jacques Lucke added 1 commit 2023-05-19 10:07:43 +02:00
Jacques Lucke added 8 commits 2023-05-19 12:10:43 +02:00
Jacques Lucke added 12 commits 2023-05-19 22:29:46 +02:00
Jacques Lucke added 27 commits 2023-05-21 02:23:32 +02:00
Jacques Lucke added 1 commit 2023-05-21 02:36:00 +02:00
Jacques Lucke added 1 commit 2023-05-21 02:40:55 +02:00
Jacques Lucke added 7 commits 2023-05-21 15:09:48 +02:00
Jacques Lucke added 1 commit 2023-05-21 15:24:43 +02:00
Jacques Lucke added 1 commit 2023-05-22 09:05:26 +02:00
Jacques Lucke added 5 commits 2023-05-22 11:34:02 +02:00
Jacques Lucke changed title from WIP: BLI: refactor IndexMask for better performance and memory usage to BLI: refactor IndexMask for better performance and memory usage 2023-05-22 11:34:48 +02:00
Jacques Lucke added 1 commit 2023-05-22 11:44:12 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
79b7967854
Merge branch 'main' into index-mask-refactor
Jacques Lucke requested review from Hans Goudey 2023-05-22 11:44:48 +02:00
Author
Member

@blender-bot build

Jacques Lucke requested review from Lukas Tönne 2023-05-22 11:48:05 +02:00
Jacques Lucke added 1 commit 2023-05-22 12:54:02 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
e0784f4fd1
fix compile error
Author
Member

@blender-bot build

Hans Goudey added 3 commits 2023-05-23 22:54:38 +02:00
Hans Goudey approved these changes 2023-05-23 22:56:26 +02:00
Hans Goudey left a comment
Member

Looks quite good! It's satisfying to see how this has come together and been simplified. Committing it now/soon will make it easier to track future improvements like improving from_bits or complement.

I think the API documentation is missing a bit of description of how to choose between foreach_segment and foreach_index (and the corresponding optimized variants). The choice seems a bit arbitrary right now in the various users.

@ -31,0 +26,4 @@
* - The most-significant-bit is not used so that signed integers can be used which avoids common
* issues when mixing signed and unsigned ints.
* - The second most-significant bit is not used for indices so that #max_segment_size itself can
* be stored in the #int16_t.
Member

Might be helpful to mention why it's helpful that max_segment_size fits in int16_t

JacquesLucke marked this conversation as resolved
@ -73,0 +77,4 @@
/**
* Encodes the size of each segment. The size of a specific segment can be computed by
* subtracting consecutive elements (also see #OffsetIndices). The size of this array is one
* larger than #segments_num_. Note that the first elements is _not_ necessarily zero.
Member

Hmm, why would the first element not be 0 always?

Author
Member

The first element is often not 0 when the IndexMask is a slice of another mask.

@ -257,3 +294,1 @@
*
* All the indices in the sub-mask are shifted by 3 towards zero,
* so that the first index in the output is zero.
* Same as above but may generate more code at compile time because it optimizes for the case
Member

"optimizes for" might be more helpful if it said "generates a separate case for "

JacquesLucke marked this conversation as resolved
@ -291,0 +353,4 @@
* The class has to be constructed once. Afterwards, `update` has to be called to fill the mask
* with the provided segment.
*/
class IndexMaskFromSegment : NonCopyable, NonMovable {
Member

IndexMaskFromSegment could probably get a more "private" API, with only public mask() and update() methods and everything else private. That might make it more obvious how it's supposed to be used.

JacquesLucke marked this conversation as resolved
@ -0,0 +7,4 @@
namespace blender {
/**
* An #OffsetSpan where a constant offset is added to every value when accessed. This allows e.g.
Member

An #OffsetSpan where a constant -> An #OffsetSpan is a #Span with a constant

JacquesLucke marked this conversation as resolved
@ -0,0 +66,4 @@
* [3, 4, 5, 6, 8, 9, 10]
* ^ Range ends here because 6 and 8 are not consecutive.
*/
template<typename T> inline int64_t find_size_of_next_range(const Span<T> indices)
Member

It's pretty clear from the implementations, but it might be nice to put the average and worst case big-O runtime for find_size_until_next_range and find_size_of_next_range

JacquesLucke marked this conversation as resolved
@ -5,0 +28,4 @@
const IndexMask &get_static_index_mask_for_min_size(const int64_t min_size)
{
static constexpr int64_t size_shift = 30;
static constexpr int64_t max_size = (1 << size_shift);
Member

Might as well add /* 1'073'741'824 */ in a comment after this. Maybe 2 billion or so is safer?

It's a bit confusing that min_size is passed to this function in general, but I guess it's the only good way to have that assert.

JacquesLucke marked this conversation as resolved
@ -44,3 +149,1 @@
}
if (indices_.is_empty()) {
return full_range;
/* TODO: Implement more efficient solution. */
Member

I guess this might come if we work on the operations you wrote earlier using bit masks? Worth doing soon I guess, since it's not great to have this with a very different algorithmic complexity than the proper solution.

Author
Member

It's probably faster to implement this without bit masks. One could just generate a new segment for each old non-range segment and add extra segments for the gaps between old segments.

@ -54,0 +168,4 @@
int64_t group_start_segment_i = 0;
int64_t group_first = segments[0][0];
int64_t group_last = segments[0].last();
bool group_as_range = unique_sorted_indices::non_empty_is_range(segments[0].base_span());
Member

const bool?

JacquesLucke marked this conversation as resolved
@ -90,0 +257,4 @@
LinearAllocator<> &allocator,
Vector<IndexMaskSegment> &r_segments)
{
Vector<std::variant<IndexRange, Span<T>>> segments;
Member

With the 1/2^14 constant factor, a larger inline buffer here could probably eliminate most allocations. Same below with Vector<IndexMaskSegment> segments

JacquesLucke marked this conversation as resolved
@ -90,0 +277,4 @@
segment_indices.size());
while (!segment_indices.is_empty()) {
const int64_t offset = segment_indices[0];
const int64_t next_segment_size = binary_search::find_predicate_begin(
Member

Maybe not worth it (also just want to test my understanding)-- couldn't this limit the maximum size of the span argument to find_predicate_begin with something like find_predicate_begin(segment_indices.take_front(max_segment_size + offset),...?

Author
Member

Not sure why the + offset, but taking at most max_segment_size makes sense.
In practice it likely doesn't make a difference right now, because the span passed to segments_from_indices is already sliced (for multi-threading).

@ -135,2 +130,2 @@
return curves_to_delete[curve_i];
});
IndexMaskMemory mask_memory;
const IndexMask &mask_to_delete = IndexMask::from_bools(curves_to_delete, mask_memory);
Member

Probably shouldn't be a reference here

JacquesLucke marked this conversation as resolved
@ -165,2 +164,4 @@
}
}
IndexMaskMemory memory;
const IndexMask changed_curves_mask = IndexMask::from_indices<int64_t>(changed_curves_indices,
Member

It looks like this whole changed_curves_indices thing could be replaced with from_predicate?

Author
Member

Might not be entirely trivial right now, but generally I agree. Will leave that for later.

@ -323,3 +321,3 @@
MutableSpan<T> dst = attributes.dst[i_attribute].typed<T>();
for (const int i_curve : sliced_selection) {
for (const int i_curve : selection_segment) {
Member

This changes int to int64_t in some places but not others, probably best to be consistent.

JacquesLucke marked this conversation as resolved
@ -527,1 +514,3 @@
for (const int i : selection.slice(range)) {
selection.foreach_segment(GrainSize(512), [&](const IndexMaskSegment segment) {
for (const int i : segment) {
nurbs_order[i] = 4;
Member

Any particular reason not to just use masked_fill here, rather than writing a for loop? Seems like it would be simpler that way.

@ -1068,3 +1051,3 @@
/* Only trimmed curves are no longer cyclic. */
if (bke::SpanAttributeWriter cyclic = dst_attributes.lookup_for_write_span<bool>("cyclic")) {
cyclic.span.fill_indices(selection.indices(), false);
selection.foreach_index(GrainSize(4096), [&](const int64_t i) { cyclic.span[i] = false; });
Member

Might as well use masked_fill here

Author
Member

Don't mind much either way, but performance wise it's likely better the way it is now, even though it also doesn't matter too much. Might be good to have a version of masked_fill that takes IndexMaskSegment.

Member

I think I'd rather keep the code a bit simpler here, and leave changing from fill_indices to something besides masked_fill for a separate commit, where it can be more easily evaluated separately from this larger change.

@ -415,2 +408,4 @@
});
});
IndexMaskMemory memory;
Member

This memory will fill up while processing in the for loop, since from_indices doesn't clear the existing memory. Maybe better to declare it inside the loop? Or maybe not, hrmm...

Actually, maybe I'll just try to replace this with the same from_groups thing from elsewhere.

Author
Member

Yeah, difficult, will also leave that for later for now. It's probably good to refactor this a bit more like you mentioned.

Hans Goudey requested changes 2023-05-23 22:57:58 +02:00
Hans Goudey left a comment
Member

Oops. Meant to request changes.

Also, it would be nice to see at least a few performance measurements (or at least some memory usage example case).

Hans Goudey added 1 commit 2023-05-24 02:04:38 +02:00
Jacques Lucke added 3 commits 2023-05-24 10:07:09 +02:00
Jacques Lucke added 8 commits 2023-05-24 11:25:20 +02:00
Jacques Lucke added 1 commit 2023-05-24 12:02:43 +02:00
Hans Goudey approved these changes 2023-05-24 14:30:55 +02:00
Hans Goudey left a comment
Member

The new foreach documentation is great, thanks for that!

I just left one comment about the code replacing masked_fill in two places. Other than that this looks ready to me.

Jacques Lucke added 2 commits 2023-05-24 14:35:50 +02:00
Hans Goudey added 1 commit 2023-05-24 14:36:09 +02:00
buildbot/vexp-code-patch-coordinator Build done. Details
fee01c1943
Spelling: over head -> overhead
Author
Member

@blender-bot build

Author
Member

@blender-bot build

Jacques Lucke merged commit 2cfcb8b0b8 into main 2023-05-24 18:11:47 +02:00
Jacques Lucke deleted branch index-mask-refactor 2023-05-24 18:11:48 +02:00
Howard Trickey referenced this issue from a commit 2023-05-29 02:51:37 +02:00