Supporting HDR workflows in Blender #105714

Open
opened 2023-03-13 12:50:47 +01:00 by Sergey Sharybin · 18 comments

Supporting HDR workflows in Blender

This is a write-up of the current state and possible ways forward for supporting high dynamic range (HDR) workflows in Blender.

Current state

Internally the whole render pipeline operates on HDR buffers, including Cycles, Compositor, EEVEE, and so on.

However, the configuration of the drawing context which actually displays the rendered image on the display is limited to an 8-bit range.

Image formats which support HDR already make use of it, but this is not the case for video IO.

Moving towards HDR

This section briefly summarizes the changes that need to happen in various areas of Blender to allow artists to deliver HDR content.

Operating system window graphics context

Blender's GHOST library takes care of creating OS windows and the graphics contexts for them, which are used to visualize the whole Blender interface.

The window context needs to be given hints to use more than 8 bits per channel, and the window itself might need to hint to the operating system about the type of content (possibly the color space of the window's content).

Details are to be figured out on a per-platform basis, but generally the files of interest are intern/ghost/intern/GHOST_Window<Platform> and intern/ghost/intern/GHOST_Context<Backend>.

OpenColorIO configuration

Emulation of different displays

OpenColorIO has a built-in ability to emulate the look of the final image on different displays. One issue with this is that it conflicts with macOS, which offers a system-wide option for the same functionality.

One idea to mitigate this confusion could be to introduce an "Auto" display configuration, which would leave the final display transform to the OS.

There are some open topics with this proposal:

  • What should be the way of checking how the render result would look on a non-HDR screen: should it be done via the system-wide settings in the OS, or by adding/ensuring clamping in the sRGB display OCIO configuration (possibly presented as a display separate from the current sRGB)?

  • How should the "Auto" display transform be applied to saved images? Auto could be a boolean too.

  • Look at virtual displays as a potential solution: https://github.com/AcademySoftwareFoundation/OpenColorIO/issues/1082

  • Should Extended sRGB be its own colorspace, or a boolean toggle?

DCI-P3 display configuration

The title is self-explanatory: there needs to be a configuration for displays which use the DCI-P3 color space.

One of the challenges is to preserve the Filmic View, which is currently designed for sRGB screens. Arguably, it should be a look, as it is often used for artistic reasons. Nevertheless, this is a topic of investigation for the P3 display configuration.

ACES support

ACES is what a lot of studios and hardware manufacturers are aligning on, and this configuration seems useful to have regardless of the HDR story.

An argument for ACES in the context of HDR is that it seems to simplify the color grading process, ensuring that the final frame will look as close as possible to what the artists had in mind when viewed on various projectors and displays.

Draw manager & UI

The following points seem important for decision making:

  • On macOS it seems to be very easy to provide window content in a specific color space (which is communicated to the NSWindow), and leave the display transform to the OS.
  • Blender needs to become usable on XYZ projects.

This leads to the following proposal: the entire Blender UI needs to be covered by color management of some sort. Possibly, render the whole Blender UI in a wide enough color space (Rec.2020?). The final display color space transform is then done either by the OS or by a window-wide shader.

Image IO

For images it is already possible to override the display transform, so technically it is possible to apply proper tone mapping to make non-HDR image formats look good.

A possible quality-of-life improvement could be to streamline previewing of saved images. Currently this is largely invisible, and the only way to know the final look is to open the file.

Video IO

Blender uses FFmpeg, which seems to support HDR. However, in Blender the integration is historically hard-coded to 8 bits per channel.

Both the FFmpeg reader and writer need to support higher bit depths.

On the implementation side, the interesting functions for the writer are BKE_ffmpeg_append and BKE_ffmpeg_start in writeffmpeg.c (this trickles up to the bMovieHandle abstraction used by the render pipeline). The frame API needs to be changed to accept ImBuf instead of int*, and IMB_colormanagement_imbuf_for_write needs to be checked to make sure it does not enforce clamping.
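
For orientation, here is a rough sketch (Python, not Blender code) of the kind of encode the writer ultimately needs to be able to produce: 10-bit output with BT.2020 primaries and an HDR transfer function signalled in the stream. The file names are hypothetical, the HLG/PQ choice is illustrative, and the exact settings should follow the ASWF encoding guidelines referenced in the comments below.

```python
# Sketch only: the FFmpeg parameters an HDR-capable writer needs to end up
# passing, shown here as a command invocation. Frame file names are hypothetical
# and assumed to be already display-referred (view/display transform applied).
import subprocess

cmd = [
    "ffmpeg",
    "-i", "frames_%04d.png",          # hypothetical 16-bit, HLG-encoded frames
    "-c:v", "libx265",
    "-pix_fmt", "yuv420p10le",        # 10 bits per channel instead of 8
    "-color_primaries", "bt2020",
    "-color_trc", "arib-std-b67",     # HLG transfer; "smpte2084" for PQ
    "-colorspace", "bt2020nc",
    "output_hlg.mp4",
]
subprocess.run(cmd, check=True)
```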

TODO task dedicated to the video output: #118493

Sergey Sharybin added this to the Render & Cycles project 2023-03-13 12:50:48 +01:00
Member

Just some notes:

  • I agree with using rec2020 (or similar) for the UI.
  • The Sequencer might still clamp values internally (didn't test lately).
  • For color management we also need the ability to determine when a value will be clamped (for a colorspace or display).

An argument for ACES in the context of HDR is that it seems to simplify the color grading process, ensuring that the final frame will look as close as possible to what the artists had in mind when viewed on various projectors and displays.

It does not do this. ACES does not, and never has, worked.

Contributor

One of the challenges is to preserve the Filmic View, which is currently designed for sRGB screens. Arguably, it should be a look, as it is often used for artistic reasons. Nevertheless, this is a topic of investigation for the P3 display configuration.

I would like to make sure we are aware of the currently-in-development Filmic v2 (AgX), which already has support for several display devices: BT.1886, P3, sRGB, and BT.2020. The project is not finished yet, but we are making progress.

Troy's current python script:
https://github.com/sobotka/SB2383-Configuration-Generation

My version designed with the Spectral Cycles branch in mind:
https://github.com/EaryChow/AgX

[image attachment]

HDR BT.2100 uses the same primaries as BT.2020, so I think we can work from there.

We are currently trying to deal with camera footage with negative luminance input (real-world camera-converted EXRs with virtual primaries causing problems for the view transform/image formation), but I personally believe we are pretty close to its final form.

I hope the design for future color management can be aware of our work to replace Filmic and the OCIO config.

There are a lot of design decisions we need to get through. I have a feeling that after we officially finish AgX, we will need to get some of these points discussed. For example, the use of XYZ I-E (I-E is short for illuminant E) and Linear BT.709 I-E instead of I-D65 in my version, as preparation for Spectral Cycles; the working space choice of BT.2020 instead of ACEScg; whether the HDR image should replicate the SDR version (should a light saber core be white or chromatic); what nit level our HDR implementation should use (the HDR displays on the market don't seem to agree on a number); etc. We still have a ton of matters to go through here.

Member

As a resource, here are the ASWF's HDR encoding guidelines for FFmpeg.

https://academysoftwarefoundation.github.io/EncodingGuidelines/enctests/HDR_Encoding.html


What should be the way of checking how the render result would look on a non-HDR screen: should it be done via the system-wide settings in the OS, or by adding/ensuring clamping in the sRGB display OCIO configuration (possibly presented as a display separate from the current sRGB)?

It's not only what the result looks like on a non-HDR monitor, but also a monitor with a smaller gamut. So in the general case you need two display color spaces, for the monitor being used and the monitor being emulated. The latter is a more advanced option though and seems too complex if you just want to test with HDR on/off.

Should Extended sRGB be its own colorspace, or a boolean toggle?

To me it seems better to have as an option handled outside of the OCIO config, as a boolean toggle. Having multiple displays for this in the OCIO config is not standard as far as I know, and would require some kind of special metadata for Blender to figure out which is which and appropriately configure the framebuffer.

How should the "Auto" display transform be applied to saved images? Auto could be a boolean too.

An "Auto" display transform for file saving could be good, but I think it would have a different meaning. For display the auto transform would match the monitor, for file saving it would pick a good default color space appropriate for the file format. I don't think file saving in a particular .blend should ever behave differently depending on which monitor is being used.

Perhaps there could be different names in the UI for these cases to clarify it.

ACES is what a lot of studios and hardware manufacturers are aligning on, and this configuration seems useful to have regardless of the HDR story.

I don't think the design choices to make here are specific to Filmic / AgX / ACES RRT, or whatever view transform someone wants to use. We should try to design it in such a way that any of them can fit with this.

In OCIO v2 there is the concept of an intermediate color space between the view transform and the final display transform, ideally those can be completely decoupled but it's not necessarily so clean in practice.

This leads to the following proposal: the entire Blender UI needs to be covered by color management of some sort. Possibly, render the whole Blender UI in a wide enough color space (Rec.2020?). The final display color space transform is then done either by the OS or by a window-wide shader.

I don't think this has to be one specific colorspace necessarily, it could be whatever colorspace the operating system can consume directly. Mainly I think we need to have functions like theme_to_ui_color_space and scene_linear_to_ui_color_space throughout the UI drawing code so that it's flexible.
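
To make those helpers concrete, here is a minimal sketch, assuming the UI color space is chosen to be linear Rec.2020 and the scene space is Blender's default linear Rec.709. The function names come from the suggestion above and are hypothetical, not existing Blender API.

```python
# Minimal sketch of the suggested helpers, assuming linear Rec.2020 as the UI
# color space. The names theme_to_ui_color_space / scene_linear_to_ui_color_space
# are hypothetical, taken from the comment above.
import numpy as np

# Standard linear BT.709 -> BT.2020 primaries matrix (D65 white point).
BT709_TO_BT2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def srgb_to_linear(c):
    """Per-component sRGB EOTF (decode display sRGB to linear)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def theme_to_ui_color_space(theme_rgb):
    """Theme colors are authored as display sRGB; decode, then change primaries."""
    return BT709_TO_BT2020 @ srgb_to_linear(theme_rgb)

def scene_linear_to_ui_color_space(rgb):
    """Assuming the default linear Rec.709 scene space, only the primaries change."""
    return BT709_TO_BT2020 @ np.asarray(rgb, dtype=float)

print(theme_to_ui_color_space([0.8, 0.5, 0.2]))
```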


I think HDR and wide gamut can be thought of as being somewhat orthogonal, at least on the UI level and for much of the implementation. I think the tasks could be as follows.

For HDR:

  • Boolean HDR on/off option in scene color space settings
  • This only applies to display, not file saving
  • Change framebuffer allocation to match
  • Adjustments to avoid clamping and using byte instead of float buffers in various places

For wide gamut:

  • Refactor UI drawing code to convert all colors to an arbitrary UI color space rather than sRGB.
  • Change OCIO config so view transforms go to an intermediate color space (cie_xyz_d65_interchange) rather than directly to the final color space.
  • Add conversion from intermediate color space to monitor/system native color space, using features like virtual display in the OCIO config and/or using operating system functionality.
  • Allocate framebuffers in monitor/system native color space.
  • Add "Auto" display device, as default in new .blend files and maybe existing .blend files set to "sRGB".
Contributor
  • Change OCIO config so view transforms go to an intermediate color space (cie_xyz_d65_interchange) rather than directly to the final color space.

I thought about doing this when making the AgX config, but chose not to for two reasons:

  1. By doing this, view transform spaces will not show up in the color space list UI, making users unable to apply the view transform in the compositor with the Convert Colorspace node.

  2. The design of the intermediate space was that one single view transform can be automatically converted to multiple display encodings. But as I tested, it doesn't handle out-of-display-range values gracefully. Meaning we would still need multiple versions of the view transform, just converting them all to CIE-XYZ I-D65 at the end and then letting OCIO convert back to the display encoding, which feels unnecessary. The entire intermediate space design doesn't make sense unless OCIO comes with a standardized guard rail during the automatic conversion from the intermediate space.

BTW, on the HDR side, have we decided which nits standard we should support? Again, the actual displays on the market don't agree on a number.

EDIT: Looking at other protocols, 1000 nits seems to be the most common implementation; HLG also seems to standardize at 1000 nits. Using 1000 nits might be the choice?


Some thoughts about how to handle displays, after reading discussions in #106355.

The new Windows ACM system is similar to how macOS handles color management: Blender would output to a fixed color space in float, and the operating system takes it from there and applies its own display transforms. On Windows that fixed color space is scRGB, which is Linear Rec.709 but explicitly allowing values outside of [0, 1] for wider gamut and HDR.

In both cases it's not a great fit for the OpenColorIO design, where we really need a distinction between the display being targeted and the display being used, with the latter being detected automatically most of the time.

If we have a standard display transform for each display, we could do something like this:

  1. Convert from scene linear to display space X, based on the user's choice of view transform and display.
  2. Apply the inverse standard transform from display space X to cie_xyz_d65_interchange.
  3. Convert cie_xyz_d65_interchange to the operating system's fixed color space.

Not sure if this would be too hacky. But this would be a way to allow targeting a particular display space and letting each view transform do its own gamut and HDR clipping/mapping for it, while still letting the operating system do the final display transform.

The alternative would be to have a bunch of view transforms and look variations for different displays, which doesn't seem like a great UI. A user would need to carefully match things to get the correct results.
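
A hedged sketch of that three-step chain using the OpenColorIO v2 Python bindings. The color space, display, and view names ("Display X", "AgX", "scRGB") are placeholders for whatever a real config would define; nothing here is an existing Blender code path.

```python
# Sketch of the proposed chain with PyOpenColorIO (OCIO v2). All names are
# placeholders; a real config would define its own displays and color spaces.
import numpy as np
import PyOpenColorIO as ocio

config = ocio.GetCurrentConfig()
chain = ocio.GroupTransform()

# 1) Scene linear -> display space X, via the user's chosen view transform.
chain.appendTransform(ocio.DisplayViewTransform(
    src="scene_linear", display="Display X", view="AgX"))

# 2) Inverse of the standard display transform: display space X -> interchange.
chain.appendTransform(ocio.ColorSpaceTransform(
    src="Display X", dst="cie_xyz_d65_interchange"))

# 3) Interchange -> the fixed color space the OS compositor consumes
#    (e.g. scRGB when Windows ACM is enabled).
chain.appendTransform(ocio.ColorSpaceTransform(
    src="cie_xyz_d65_interchange", dst="scRGB"))

cpu = config.getProcessor(chain).getDefaultCPUProcessor()
pixel = np.array([1.0, 0.5, 0.25], dtype=np.float32)
cpu.applyRGB(pixel)  # applied in place
print(pixel)
```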


Some notes about HDR brightness. Note that I am ignoring wide gamut, it’s a related problem but clearer to think about it separately.

There are two ways to handle brightness:

  • Absolute, where a pixel value X is meant to be displayed as exactly Y nits.
  • Relative, where a user configures the display brightness for their environment or comfort, and SDR and HDR content brightness both scale along with that configuration.

Absolute brightness makes most sense in a cinema, where the environment is dark and the display is fully controlled. For TVs or computer monitors a cinema-like setup can be made, but often these are used in daylight, or users may adjust for the display to their own preference. Also note that on computers, HDR content is often displayed next to SDR content and a big difference is jarring.

For absolute brightness, reference white is the pixel value that corresponds to a nominal diffuse white color. In the PQ standard this is commonly 203 nits. For relative brightness, reference white corresponds to 1.0 in scene linear, and in for example the HLG standard would be encoded as 0.5 after a non-linear transfer function.
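
As a small numeric check of that 0.5 figure, here is the BT.2100 HLG OETF with scene values normalized so that 12.0 is the nominal peak (i.e. reference white 1.0 corresponds to E = 1/12 in the standard's formulation):

```python
# BT.2100 HLG OETF, with scene-linear values normalized so that 12.0 is the
# nominal peak. Reference white (scene-linear 1.0) then encodes to 0.5 and the
# peak to 1.0, matching the text above.
import math

A = 0.17883277
B = 1.0 - 4.0 * A
C = 0.5 - A * math.log(4.0 * A)

def hlg_oetf(scene_linear):
    """Map scene-linear 0..12 (1.0 = reference white) to an HLG signal in 0..1."""
    e = scene_linear / 12.0
    return math.sqrt(3.0 * e) if e <= 1.0 / 12.0 else A * math.log(12.0 * e - B) + C

print(hlg_oetf(1.0))   # 0.5   (reference white)
print(hlg_oetf(12.0))  # ~1.0  (12x reference white, HLG nominal peak)
```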

Now when it comes to making a view transform for HDR content, it’s important to know how much headroom there is above that reference white. When using absolute brightness, then for a given display the amount of headroom will be fixed. File encoding may also use a fixed maximum. For relative brightness, the amount of headroom depends on the display brightness set by the user, as it is set darker there is more headroom.


Similar to creating SDR content, we want to avoid hard clipping above the maximum value. With SDR content, avoiding this hard clipping is fully the responsibility of the view transform. View transforms like Filmic and AgX assume that 1.0 is the maximum and fit HDR values into the 0..1 range with a gradual roll-off.

When creating HDR content however, the problem is that the headroom above the reference white is often not known in advance.

We can know the headroom for the current display on the computer that is running Blender, and potentially use that information automatically or display it in Blender so the user can configure their view transform.

But in general we want to save files that can be viewed on other displays as well. And here the PQ (absolute) and HLG (relative) standards come in. See https://www.lightillusion.com/what_is_hdr.html for a detailed overview.

My understanding is that the idea is that for both these standards, the responsibility of adapting HDR values shifts partially from the view transform to the display transform. There are maximums: 10,000 nits for PQ and 12x brighter than reference white for HLG. These are quite high. There can be good artistic or comfort reasons for view transforms to lower values below these maximums. But it should not be necessary to do so to avoid hard clipping artifacts on displays that only support e.g. 4x above reference white. Instead, the display transform (potentially performed by the operating system) should take care of that.

HLG defines a set of transfer functions that should automatically make HDR content look similar / good across displays with different headroom, as well as adapting to the environment. For display and file export in Blender, this seems the most practical to support.
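
To illustrate the "similar across displays with different headroom" part: BT.2100 ties the HLG OOTF system gamma to the display's nominal peak luminance, so the same signal is rendered with more or less contrast depending on the headroom actually available. A minimal sketch of just that gamma term (the formula is specified for roughly 400 to 2000 cd/m² peaks):

```python
# Sketch of the HLG display adaptation mentioned above: the BT.2100 extended
# system gamma depends on the display's nominal peak luminance, so a 500-nit
# and a 2000-nit display render the same HLG signal differently.
import math

def hlg_system_gamma(peak_nits):
    """BT.2100 extended system gamma, nominally for 400..2000 cd/m2 peaks."""
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

for nits in (500, 1000, 2000):
    print(nits, round(hlg_system_gamma(nits), 3))  # ~1.074, 1.2, ~1.326
```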

PQ is more complicated, with different formats that require non-trivial metadata (that may even vary frame-to-frame). Writing out PQ files with appropriate metadata may be best left to specialized grading applications. Or at least, giving full control over such metadata should be left to them; writing out fixed metadata may be possible without making the implementation and UI too complex. Still, it seems best to get HLG working first.


Concretely for Blender, the following steps could be taken.

  • Add a film-like HDR view transform to the OpenColorIO config that outputs values in the 0..12 range, matching the limits of HLG.
    • The HDR view transform should not just be the SDR view transform but brighter. Rather some values should become darker, and some brighter, to keep a similar average brightness for the image as a whole. This way the user does not have to change display brightness depending on which content they are viewing.
  • Add a Rec.2100 HLG display to the OpenColorIO config and Blender implementation.
    • The best way to implement this is not clear to me yet, but the goal would be to adapt HDR values following the HLG standard. To what extent this is done by the operating system needs to be investigated.
    • This display transform will map values from the 0..12 range into the 0..1 range, following the Rec.2100 standard. The (extended) sRGB display transform will continue to map 0..1 to 0..1 without clipping.
  • Modify the Standard view transform for the Rec.2100 HLG display to preserve average brightness, or alternatively add a new neutral view transform that does so.
  • Add support for saving Rec.2100 HLG encoded files in common file formats that support it: AVIF, JPEG XL, WebP, H.264, H.265.
    • For still images we don’t yet support saving AVIF and JPEG XL, these would need to be added.
    • Writing the appropriate metadata to these files may require hardcoded assumptions about display names in the OpenColorIO config.
  • In the color management panel, information about absolute nits and headroom could be displayed, to help users understand what they are seeing and how they might want to modify their system settings to make headroom available.

Note that nothing here requires changes to existing displays and view transforms in the OpenColorIO config.

Contributor

View transforms like Filmic and AgX assume that 1.0 is the maximum and fit HDR values into the 0..1 range with a gradual roll-off.

I would like to point out that a properly done HDR version of AgX should work exclusively within 0 to 1 range within Rec.2100 HLG/PQ encoding. View transforms should always work in 0 to 1 range of the display's native encoding.

In other words, HDR version of AgX should have "roll-off" in 0 to 1 range as well, just with Rec.2100 encoding. Therefore the focus should not be an "unclipped view transform outputting sRGB", but rather, a view transform that natively encodes in Rec.2100, and then make sure the formed image is being respected as such encoding, before they are transformed to the OS specific exchange encoding.


I would like to point out that a properly done HDR version of AgX should work exclusively within 0 to 1 range within Rec.2100 HLG/PQ encoding. View transforms should always work in 0 to 1 range of the display's native encoding.

In other words, HDR version of AgX should have "roll-off" in 0 to 1 range as well, just with Rec.2100 encoding.

If the most convenient way to implement the view transform is to go directly to the display space, without any intermediate 0..12 space, then fine. That's not a requirement, but a detail of how the OpenColorIO config is written.

What is a requirement is the ability for Blender to pry apart the view transform and display transform. Particularly to decode an image from Rec.2100 HLG space in the 0..1 range, to an intermediate linear space in the 0..12 range. For example to composite overlays over the 3D viewport, or to convert an image to scRGB for display on Windows.

Therefore the focus should not be an "unclipped view transform outputting sRGB", but rather, a view transform that natively encodes in Rec.2100, and then make sure the formed image is being respected as such encoding, before they are transformed to the OS specific exchange encoding.

The meaning of wording like "work exclusively within", "natively encodes", "respected as such encoding" is unclear to me. I can't translate those into concrete changes to what I proposed.


What is a requirement is the ability for Blender to pry apart the view transform and display transform. Particularly to decode an image from Rec.2100 HLG space in the 0..1 range, to an intermediate linear space in the 0..12 range. For example to composite overlays over the 3D viewport, or to convert an image to scRGB for display on Windows.

I suspect this would likely be sub-optimal, as it doesn’t permit the picture formation to properly be tested in advance, and exist standalone via an OpenColorIO configuration.

What might be worth thinking about is to consider two cases:

  1. When HDR is not enabled.
  2. When HDR is enabled.

When enabled, use a general tagging system to identify whether a transform is intended for one of either BT.2100 encodings with respect to transfer functions:

  1. HLG.
  2. ST.2084

That would allow Blender to set the API to either mode, and properly tag the already encoded value out for display.

From within the user interface then, it would be business as usual for folks who select displays, such as ST.2084 with 1000 nit, 1500 nit, etc. limits, which permits the configuration author to properly test the cases and avoid artificial clips. I don’t think this would adversely impact compositing unless I am missing something; Blender could establish some generic “glue” layer that pegs values at an intermediate and internal intensity magnitude, which is then determined to be a specific encoded value based on which of the two above modes are set.

For example, given the UI is already sRGB throughout, a basic “encoding scale” can be applied which converts the values / buffer, based on one of the above toggles, to a uniform encoded signal range for compositing. In the case of HLG, the sRGB peak values would be gained uniformly to land somewhere around code value 0.5 in HLG “linear”, and in ST.2084, the gain is adjusted to place sRGB peak somewhere around code value 0.58.
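
The ~0.58 figure can be checked directly, assuming SDR reference white is placed at 203 cd/m² (the PQ convention mentioned earlier in the thread); the HLG 0.5 value is verified in the sketch further up.

```python
# ST.2084 (PQ) inverse EOTF: encode an absolute luminance in cd/m2 (0..10000)
# into a 0..1 signal. Reference white at 203 cd/m2 lands at ~0.58, matching
# the code value mentioned above.
def pq_inverse_eotf(nits):
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(round(pq_inverse_eotf(203.0), 3))  # 0.581
```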

scRGB is likely a horrible idea given that the encoding range used to clip BT.2020, which means yet more transforms and complexity to formulate a reasonable picture on the display.


I suspect this would likely be sub-optimal, as it doesn’t permit the picture formation to properly be tested in advance, and exist standalone via an OpenColorIO configuration.

I don't see why giving Blender the ability to do the view transform without the display transform should prevent using the OpenColorIO config outside of Blender.

That would allow Blender to set the API to either mode, and properly tag the already encoded value out for display.

It may indeed be helpful for Blender to specifically recognize certain displays like Rec.2100 HLG/PQ and pass them on directly if the operating system supports it. It may even be required to get the operating system to properly adapt the content to the available headroom and ambient light, though having this affect the UI as well may be problematic. That's why I wrote that the best way to do this is still not clear to me.

For example, given the UI is already sRGB throughout, a basic “encoding scale” can be applied which converts the values / buffer, based on one of the above toggles, to a uniform encoded signal range for compositing. In the case of HLG, the sRGB peak values would be gained uniformly to land somewhere around code value 0.5 in HLG “linear”, and in ST.2084, the gain is adjusted to place sRGB peak somewhere around code value 0.58.

Compositing should happen in a linear space rather than the display space for correct results. And we'd need to change the gamut for UI colors from sRGB to Rec.2020 as well. This doesn't present any additional design challenges, but the implementation is not trivial.

scRGB is likely a horrible idea given that the encoding range used to clip BT.2020, which means yet more transforms and complexity to formulate a reasonable picture on the display.

In Windows ACM, the scRGB buffer is unclipped half float. There should be no clipping or range problems with that.


Compositing should happen in a linear space rather than the display space for correct results.

There is no such thing as a “display space”. There’s simply colourimetry, and it is always uniform with respect to photometric luminance. I suspect you are referring to a non-uniform display encoding?

We can easily composite using a closed domain representation, as long as the colourimetry is uniform. Following that, it is very easy to composite HLG or PQ, and won’t end up with the “energy” over darkening due to a discrepancy between the colourimetric coordinate and the display encoding. We can even customize the range to gain as required by the UI. Should be pretty straightforward.

The API side less so, but I think that both Windows and macOS have requests for both HLG and ST.2084, although I suspect it might require a simple test. I’m happy to test this if someone can frame the buffer accordingly?

In Windows ACM, the scRGB buffer is unclipped half float. There should be no clipping or range problems with that.

Last time I looked, scRGB has a valid domain that excludes BT.2020 (https://developer.android.com/reference/android/graphics/ColorSpace.Named). If a redefinition exists, I haven’t seen it?

[image attachment]


There is no such thing as a “display space”. There’s simply colourimetry, and it is always uniform with respect to photometric luminance. I suspect you are referring to a non-uniform display encoding?

We can easily composite using a closed domain representation, as long as the colourimetry is uniform. Following that, it is very easy to composite HLG or PQ, and won’t end up with the “energy” over darkening due to a discrepancy between the colourimetric coordinate and the display encoding. We can even customize the range to gain as required by the UI. Should be pretty straightforward.

Compositing operations should happen in a color space that is linear with respect to photometric luminance. Not sure what "uniform" means in this context. The HLG OETF non-linearly maps 0..12 to 0..1, and the resulting color space is therefore not suitable for compositing.

On macOS you can provide a linear colorspace to the operating system, which in the case of HLG would be in the 0..12 range. That color space is suitable for compositing.
https://developer.apple.com/documentation/quartzcore/caedrmetadata/3194384-hlgmetadata

Last time I looked, scRGB has a valid domain that excludes BT.2020. If a redefinition exists, I haven’t seen it?

If it was clipped yes, but it's explicitly not.
https://learn.microsoft.com/en-us/windows/win32/direct3darticles/high-dynamic-range#system-composition-using-a-high-bit-depth-canonical-color-space

The BT.2100 ST.2084 color space is an efficient standard for encoding HDR colors, but it's not well suited for many rendering and composition (blending) operations. We also want to future proof the OS to support technologies and color spaces well beyond BT.2100 ...
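
To make the "unclipped" point concrete, a small sketch, assuming the standard linear BT.2020 → BT.709 primaries matrix: a saturated Rec.2020 color maps to scRGB (linear Rec.709 primaries) values outside [0, 1], which an unclipped half-float scRGB buffer can carry as-is and a clipped one would lose.

```python
# Sketch: a saturated linear Rec.2020 green expressed in scRGB (linear Rec.709
# primaries, values allowed outside [0, 1]). The out-of-range components are
# exactly what an unclipped half-float scRGB swapchain preserves.
import numpy as np

# Standard linear BT.2020 -> BT.709 primaries matrix (D65 white point).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

rec2020_green = np.array([0.0, 1.0, 0.0])
print(BT2020_TO_BT709 @ rec2020_green)  # ~[-0.588, 1.133, -0.101]
```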


Compositing operations should happen in a color space that is linear with respect to photometric luminance. Not sure what "uniform" means in this context. The HLG OETF non-linearly maps 0..12 to 0..1, and the resulting color space is therefore not suitable for compositing.

Uniform being colourimetry. Everything else is not colourimetry.

HLG and the 12 thing is a bit of a distraction and it can be normalized perfectly fine 0.0-1.0 and composited on. But it’s easier to provide a demonstration at this point, which I’ll try to do when I get a window.

“Linear” is, at our current historical juncture, so well overloaded that it carries about the same utility of meaning as “gamma”.


Uniform being colourimetry. Everything else is not colourimetry.

I still don't understand what you mean by that. Googling uniform colourimetry gives results about perceptually uniform color spaces, which are non-linear with respect to photometric luminance and not suitable for compositing.

HLG and the 12 thing is a bit of a distraction and it can be normalized perfectly fine 0.0-1.0 and composited on.

Normalization to 0..1 does not matter for compositing, it's about linearity.

“Linear” is, at our current historical juncture, so well overloaded that it carries about the same utility of meaning as “gamma”.

"Linear colorspace" without any qualifiers is a shorthand for linear with respect to photometric and radiometric units. I think it's well defined.


"Linear colorspace" without any qualifiers is a shorthand for linear with respect to photometric and radiometric units. I think it's well defined.

The point I was trying to make is that it is not well defined given that the frame of reference of the “linear” is utterly variable.

A film print is uniform colourimetry, for example, yet folks get absolutely confused when discussing it.

Anyways, if I can muster up a demonstration, I think it will be more clear.

> "Linear colorspace" without any qualifiers is a shorthand for linear with respect to photometric and radiometric units. I think it's well defined. The point I was trying to make is that it is *not* well defined given that the frame of reference of the “linear” is utterly variable. A film print is uniform colourimetry, for example, yet folks get absolutely confused when discussing it. Anyways, if I can muster up a demonstration, I think it will be more clear.