Compositor: Enable GPU denoising for OpenImageDenoise #115242

Closed
Stefan Werner wants to merge 1 commits from Stefan_Werner/blender:compositor_gpu_denoising into main

Member

When using OpenImageDenoise 2, the denoiser now gets to decide what device to use. Device buffers will be created where necessary. If the device supports system RAM access, it will be used directly with zero copying.

Ref #115045

Stefan Werner added 1 commit 2023-11-21 16:27:48 +01:00
When using OpenImageDenoise 2, the denoiser now gets to decide
what device to use. Device buffers will be created where necessary.
If the device supports system RAM access, it will be used directly
with zero copying.
Stefan Werner added this to the 4.1 milestone 2023-11-21 16:28:06 +01:00
Stefan Werner added the
Interest
Compositing
label 2023-11-21 16:28:30 +01:00
Stefan Werner added this to the Compositing project 2023-11-21 16:28:38 +01:00
Brecht Van Lommel requested review from Brecht Van Lommel 2023-11-22 18:43:47 +01:00
Brecht Van Lommel requested review from Sergey Sharybin 2023-11-22 18:43:47 +01:00
Brecht Van Lommel requested review from Omar Emara 2023-11-22 18:43:47 +01:00

I don't think that we should be using GPU devices for compositing by default, when everything else runs on the CPU? To me this seems like it should be tied to using the realtime GPU compositor.


When `systemMemorySupported` is available, does that mean this will use little GPU memory, so that there is very little chance for this to fail? Or does it mean it may still need to copy and run out of GPU memory?
Author
Member

`systemMemorySupported` means that the selected device can use input/output buffers in host memory, allocated by `malloc`, directly, without copying. Looking at the OIDN implementation, this is currently only true for the CPU device; I could imagine that in the future this could also return true on systems with a unified memory architecture.

For what it's worth, OIDN's automatic selection is intended to use the best device it can find - so on systems with a fast CPU and a slow integrated GPU, it should pick the iGPU. CPU denoising still takes 100s of ms, which can add up when running the compositor on a sequence of already rendered frames.

If there is a way to detect and connect to the real time compositor, sure, I'll be happy to look into that.

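For reference, a minimal sketch of how this zero-copy decision can look with the OIDN 2 C++ API; the helper name `denoise_rgb` and its parameters are illustrative, not code from this PR:

```cpp
#include <OpenImageDenoise/oidn.hpp>

#include <cstddef>

/* Hypothetical helper: denoise a Float3 image, passing host memory directly
 * when the device supports it and staging through device buffers otherwise. */
static void denoise_rgb(float *color, float *output, size_t width, size_t height)
{
  oidn::DeviceRef device = oidn::newDevice(); /* Let OIDN pick the device. */
  device.commit();

  const bool system_memory_supported = device.get<bool>("systemMemorySupported");
  const size_t byte_size = sizeof(float) * 3 * width * height;

  oidn::FilterRef filter = device.newFilter("RT");
  oidn::BufferRef in_buf, out_buf;

  if (system_memory_supported) {
    /* The device can read/write malloc'ed host memory directly: zero copies. */
    filter.setImage("color", color, oidn::Format::Float3, width, height);
    filter.setImage("output", output, oidn::Format::Float3, width, height);
  }
  else {
    /* Stage the data through device-side buffers. */
    in_buf = device.newBuffer(byte_size);
    out_buf = device.newBuffer(byte_size);
    in_buf.write(0, byte_size, color);
    filter.setImage("color", in_buf, oidn::Format::Float3, width, height);
    filter.setImage("output", out_buf, oidn::Format::Float3, width, height);
  }

  filter.commit();
  filter.execute();

  if (!system_memory_supported) {
    out_buf.read(0, byte_size, output); /* Copy the result back to host memory. */
  }
}
```
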
Sergey Sharybin requested changes 2023-11-23 12:31:37 +01:00
Sergey Sharybin left a comment
Owner

The automatic detection of the device is something I am not sure about:

  • Render farms might not be happy with a GPU being selected without explicit "permission".
  • Render farms might also run multiple Blender instances in parallel. If the denoiser picks the same GPU for all instances, this is less than ideal.
  • Artists might have a secondary (non-display) GPU and be rendering a shot in the background while tweaking the compositor setup. That is another example where automatic selection of the fastest device is not desired.

When running the compositor on the CPU with an actual node graph, even the current OIDN node is not often a bottleneck. So I'd follow the same rule as for Cycles rendering: when compositing on the GPU, use the GPU denoiser; when compositing on the CPU, use the CPU denoiser.

@ -33,6 +33,10 @@ class DenoiseFilter {
oidn::DeviceRef device_;
oidn::FilterRef filter_;
bool initialized_ = false;
bool system_memory_supported_ = true;

`can_use_host_memory_` seems to be a better name which indicates the intent more clearly.

Contributor

I don't think so. "Host memory" in a GPU compute context often means host *pinned* memory which is always supported. That's not what this flag indicates. CUDA calls it "system allocated memory", which is where the naming of this flag comes from. Some exotic CUDA systems (with HMM) also support this, not just CPUs, but I haven't encountered such a system so far.

This is a bit different from the conventions we follow in Cycles, but I see your point.

How about `can_use_system_memory_` with a comment explaining that, when true, OIDN will use Blender-side allocated memory as-is, without copying to a temporary buffer? Or, if you prefer the current naming, add a comment nevertheless.

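For illustration, a minimal sketch of the suggested member plus comment, assuming the rename to `can_use_system_memory_` is adopted (the surrounding members follow the quoted diff):

```cpp
#include <OpenImageDenoise/oidn.hpp>

class DenoiseFilter {
 private:
  oidn::DeviceRef device_;
  oidn::FilterRef filter_;
  bool initialized_ = false;
  /* When true, OIDN can use Blender-side allocated (malloc'ed) buffers as-is,
   * without copying them into a temporary device buffer first. */
  bool can_use_system_memory_ = true;
};
```
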
Contributor

Sure! I agree that such a comment should be added.

@ -50,3 +54,3 @@
BLI_mutex_lock(&oidn_lock);
device_ = oidn::newDevice(oidn::DeviceType::CPU);
device_ = oidn::newDevice();

Does OIDN 1 always use the CPU device?

Author
Member

Yes. OIDN version 1 does not support any other devices. Explicitly requesting the CPU device here was only added when the library was upgraded to 2.0.0 to prevent automatic selection of other devices.

@ -53,1 +58,4 @@
system_memory_supported_ = device_.get<bool>("systemMemorySupported");
if (device_.get<int>("type") == (int)oidn::DeviceType::CPU)
#endif
device_.set("setAffinity", false);

I am not really happy with code formatted like this. Such a preprocessor block in the middle of control flow is really not readable.

The more readable code, with fewer functional changes and fewer changes later on when we completely remove the OIDN 1 code paths, would be:

#  if OIDN_VERSION_MAJOR >= 2
    device_ = oidn::newDevice();
    system_memory_supported_ = device_.get<bool>("systemMemorySupported");
    if (device_.get<int>("type") == (int)oidn::DeviceType::CPU) {
      device_.set("setAffinity", false);
    }
#  else
    device_ = oidn::newDevice(oidn::DeviceType::CPU);
#  endif
@ -68,2 +76,4 @@
BLI_assert(initialized_);
BLI_assert(!buffer->is_a_single_elem());
oidn::BufferRef oidn_buffer;
size_t buffer_len = buffer->get_elem_bytes_len() * buffer->get_width() * buffer->get_height();

`size_t(buffer->get_elem_bytes_len()) * ...`

Otherwise it is only up-cast to `int`, which will not be enough to composite really high-res images.

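For illustration, the full line with the suggested cast would read as below; the variable and accessors follow the quoted code, the rest is a sketch:

```cpp
/* Promote to size_t before multiplying, so very high-resolution images do not
 * overflow the int arithmetic. */
size_t buffer_len = size_t(buffer->get_elem_bytes_len()) * buffer->get_width() *
                    buffer->get_height();
```
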
Member

> If there is a way to detect and connect to the real time compositor, sure, I'll be happy to look into that.

@Stefan_Werner See the `execute` method in the `node_composite_denoise.cc` file. There is no GPU texture interoperability at the moment as far as I can see, so we just denoise in host memory, just like your existing implementation.


@Stefan_Werner Hey. What is the status here? Are you working on it, or are you waiting for some review to happen?

Brecht Van Lommel modified the milestone from 4.1 to 4.2 LTS 2024-03-14 17:47:27 +01:00
Omar Emara removed this from the 4.2 LTS milestone 2024-05-28 08:12:33 +02:00
First-time contributor

Any updates on this?


@Emi_Martinez Not really. As soon as developers have an update, it'll be reflected here.
Currently this is on the list of the Compositor module to pick up, but there has been no time available for it yet.

First-time contributor

I'm just testing this out, and can confirm it works AMAZINGLY well. Huge thanks to @Stefan_Werner.

I did notice however that the compositor denoise nodes use the GPU if the compositor is set to CPU mode, and use the CPU if the compositor is set to GPU. I think it's supposed to be the other way around?

I think it's line 70 of COM_DenoiseOperations.cc:

#if OIDN_VERSION_MAJOR >= 2
    system_memory_supported_ = device_.get<bool>("systemMemorySupported");
    if (device_.get<int>("type") == (int)oidn::DeviceType::CPU)
#endif

Ideally it should use the GPU denoise regardless of whether the compositor is set to CPU or GPU, because you still want the fastest denoising in CPU mode, especially as it's still necessary to use CPU compositor mode when using nodes that don't yet work correctly when the compositor is in GPU mode (displace node for example).

First-time contributor

@Stefan_Werner to save you a bit of time, I've combined this pull with ff25980b78 and modified it enough so the diff can be applied after the latest commit (ce82d4434f).

I did modify it slightly, so that the CPU isn't set as the OIDN device when the compositor is in CPU mode; it creates a new device instead, so that the GPU will always be used if possible.

I also noticed that the GPU denoising in the compositor is around 50% faster if the compositor is set to CPU mode vs GPU mode.


> Ideally it should use the GPU denoise regardless of whether the compositor is set to CPU or GPU

For the interface, sure. For command line renders, the denoiser should respect the setting. We shouldn't be forcing GPU usage on render farms.

Contributor

Always using GPU denoising is definitely not a good idea.
VRAM is way more limited than RAM.
Currently, if the VRAM runs out, Blender returns a black frame and doesn't abort.

And yes, I do have projects that need that much V/RAM to denoise.

First-time contributor

@Raimund58 one of the outstanding tasks is to fall back to the CPU when there is insufficient VRAM, so that's not an issue. My hope is that, rather than immediately falling back to CPU denoising, it first checks whether it's possible to temporarily move enough non-denoising data from VRAM to system RAM (such as a portion of persistent data). I say this because multi-pass denoising a 4K image can take up to 4 minutes on the CPU, whereas on the GPU it takes several seconds, so the render time will be less impacted by moving some data to system RAM than it would be by falling back to the CPU.

@Sergey You're right, but rather than always using the CPU as the denoising device when the compositor is in CPU mode, it would be better to have an option (either in the preferences or the compositor options) to use GPU denoising when the compositor is in CPU mode. It's important because some compositor nodes don't produce the correct result in the GPU compositor (the displace node, for example); @OmarEmaraDev is looking into that. Render farms and command line renders can set that option as necessary.

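For illustration, a rough sketch of the CPU-fallback idea discussed above, using the OIDN 2 C++ API; this is not part of the patch, and `try_denoise`/`denoise_with_fallback` are hypothetical names. Data is staged through OIDN device buffers so the same path works for GPU and CPU devices:

```cpp
#include <OpenImageDenoise/oidn.hpp>

#include <cstddef>

static bool try_denoise(oidn::DeviceRef device, const float *color, float *output,
                        size_t width, size_t height)
{
  device.commit();
  const size_t byte_size = sizeof(float) * 3 * width * height;

  oidn::BufferRef in_buf = device.newBuffer(byte_size);
  oidn::BufferRef out_buf = device.newBuffer(byte_size);
  const char *message = nullptr;
  if (device.getError(message) != oidn::Error::None) {
    return false; /* Allocation already failed, e.g. oidn::Error::OutOfMemory. */
  }
  in_buf.write(0, byte_size, color);

  oidn::FilterRef filter = device.newFilter("RT");
  filter.setImage("color", in_buf, oidn::Format::Float3, width, height);
  filter.setImage("output", out_buf, oidn::Format::Float3, width, height);
  filter.commit();
  filter.execute();

  if (device.getError(message) != oidn::Error::None) {
    return false; /* e.g. ran out of VRAM during execution. */
  }
  out_buf.read(0, byte_size, output);
  return true;
}

static void denoise_with_fallback(const float *color, float *output, size_t width, size_t height)
{
  oidn::DeviceRef device = oidn::newDevice(); /* Best device OIDN can find. */
  if (!try_denoise(device, color, output, width, height)) {
    /* Fall back to the CPU device instead of returning a black frame. */
    try_denoise(oidn::newDevice(oidn::DeviceType::CPU), color, output, width, height);
  }
}
```
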
First-time contributor

Hi, any news on that?
I use the build Michael Campbell provided and it's such an improvement in speed. It would be lovely to have that in the regular build.

Member

> Hi, any news on that?

If there is progress, you will likely see it as an update in this pull request, or an update in the Cycles or compositing bi-weekly meetings.


I removed some off-topic comments. Please do not ask for updates, builds, etc. Once there is news, it will be shared here.

Author
Member

I created a new PR that I think follows the usual control flow more closely. It respects the selection in Performance > Compositor > Device: selecting CPU will force the CPU device, selecting GPU will leave the decision to OIDN.

#130217

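For illustration, a minimal sketch of the device selection described above; `use_gpu` stands in for however the compositor exposes its Performance > Compositor > Device setting and is not an identifier from the Blender code base:

```cpp
#include <OpenImageDenoise/oidn.hpp>

static oidn::DeviceRef create_denoiser_device(const bool use_gpu)
{
  /* CPU compositing forces the CPU device; GPU compositing leaves the choice
   * to OIDN, which picks the best device it can find. */
  oidn::DeviceRef device = use_gpu ? oidn::newDevice() :
                                     oidn::newDevice(oidn::DeviceType::CPU);
  device.commit();
  return device;
}
```
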
Member

As a new PR is available, I will close this one.

Alaska closed this pull request 2024-11-13 12:24:03 +01:00

Pull request closed
