Cycles: GPU Performance #87836
Memory usage
- Auto detect good integrator state size depending on GPU (hardcoded to 1 million now)
- Reduce size of IntegratorState
  - SoA (see the layout sketch after this list)
  - Reduce size of ShaderData
- Reduce kernel local memory
  - Check whether `shade_surface` or `shade_surface_raytrace` should be used for reserving memory

Kernel Size
- Make `svm_eval_nodes` a templated function and specialize it for kernels like `shade_background` (seems not)
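As a rough illustration of the SoA item above, here is a minimal sketch of the layout change, assuming hypothetical path state fields (these are not the actual Cycles structures): with struct-of-arrays, a kernel reads only the arrays it needs, and neighbouring threads touch contiguous memory.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical path state fields, for illustration only.
// AoS: one struct per path; a kernel that only needs `throughput`
// still pulls whole structs through the cache.
struct PathStateAoS {
  float ray_origin[3];
  float ray_dir[3];
  float throughput[3];
  uint32_t flags;
};

// SoA: one array per field; a kernel reads only the arrays it uses,
// and consecutive paths access consecutive addresses (coalescing).
struct PathStateSoA {
  std::vector<float> ray_origin_x, ray_origin_y, ray_origin_z;
  std::vector<float> ray_dir_x, ray_dir_y, ray_dir_z;
  std::vector<float> throughput_r, throughput_g, throughput_b;
  std::vector<uint32_t> flags;

  explicit PathStateSoA(size_t num_paths)
      : ray_origin_x(num_paths), ray_origin_y(num_paths), ray_origin_z(num_paths),
        ray_dir_x(num_paths), ray_dir_y(num_paths), ray_dir_z(num_paths),
        throughput_r(num_paths), throughput_g(num_paths), throughput_b(num_paths),
        flags(num_paths) {}
};
```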
Scheduling
Display
Render Algorithms
Tuning
In a simple queue-based system with a fixed queue size for each kernel, memory usage increases as the number of kernels goes up, and much of the queue memory remains unused. However, it does have the benefit that memory reads/writes may be faster due to coalescing.
Using more paths and the associated memory can significantly improve performance though. The question is whether there is an implementation that can get us both coalescing and little memory waste. Some brainstorming follows.
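To make the trade-off above concrete, here is a back-of-the-envelope sketch; the kernel count, queue size and entry size are placeholders, not measured Cycles values.

```cpp
#include <cstdio>

int main() {
  // Placeholder numbers for illustration; not actual Cycles values.
  const long num_kernels = 12;        // queued kernel types
  const long queue_size = 1000000;    // fixed entries per kernel queue
  const long bytes_per_entry = 4;     // e.g. one path index per entry

  const long total = num_kernels * queue_size * bytes_per_entry;
  std::printf("fixed per-kernel queues: %.1f MB\n", total / (1024.0 * 1024.0));

  // A single shared pool sized for the maximum number of in-flight paths
  // would only need one queue's worth of memory, but loses the guarantee
  // that each kernel reads a densely packed, coalesced array.
  std::printf("single shared pool:      %.1f MB\n",
              (queue_size * bytes_per_entry) / (1024.0 * 1024.0));
  return 0;
}
```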
Filling Gaps
The current initialization of main paths works by filling gaps in the state array. Before `init_from_camera` is called, an array is constructed with unused path indices, which is then filled in.

The same mechanism could be extended to the shadow queue, constructing an array of empty entries and using that in `shade_surface`, rather than forcing the shadow queue to be emptied before `shade_surface` can be executed.

This still leaves gaps until additional paths can be scheduled, and the fragmentation may cause incoherent paths to be grouped together.
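A minimal serial sketch of this gap-filling idea, with illustrative names (`path_active` and `gather_free_indices` are not the actual Cycles identifiers); the real implementation would build the index array on the GPU:

```cpp
#include <cstddef>
#include <vector>

// Collect indices of unused slots in the path state array, so that a kernel
// like init_from_camera can write new paths into the gaps.
std::vector<int> gather_free_indices(const std::vector<bool> &path_active) {
  std::vector<int> free_indices;
  for (size_t i = 0; i < path_active.size(); i++) {
    if (!path_active[i]) {
      free_indices.push_back(static_cast<int>(i));
    }
  }
  return free_indices;
}

// Schedule up to num_new_paths new paths into the gaps.
// The same idea could be reused for shadow queue entries.
int schedule_new_paths(std::vector<bool> &path_active,
                       const std::vector<int> &free_indices,
                       int num_new_paths) {
  int scheduled = 0;
  for (int index : free_indices) {
    if (scheduled == num_new_paths) {
      break;
    }
    path_active[index] = true;  // a real kernel would initialize the path state here
    scheduled++;
  }
  return scheduled;
}
```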
Compaction
Rather than trying to write densely packed arrays, we could compact the kernel state to remove any empty gaps, using a dedicated kernel. This kernel adds memory reads and writes of its own, which will hopefully pay off when multiple subsequent kernels can read memory more efficiently. It's not obvious that this is a win: in many cases there may only be 1 or 2 kernel executions before the next path iteration, and the total cost of memory access may increase.
We can do an experiment with a slow coalescing kernel before every kernel execution, to see how much kernel execution is impacted by non-coalesced memory access.
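For reference, the compaction pass amounts to stream compaction; a minimal serial sketch follows (a GPU version would typically use a parallel prefix sum over the active flags, and the `PathState` struct here is a stand-in, not the real integrator state):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-path payload, standing in for the integrator state of one path.
struct PathState {
  float data[8];
};

// Remove gaps: move every active path to the front of the array so that
// subsequent kernels iterate over a dense, contiguous range [0, count).
// Returns the number of active paths after compaction.
int compact_paths(std::vector<PathState> &state, std::vector<bool> &active) {
  int write = 0;
  for (size_t read = 0; read < state.size(); read++) {
    if (active[read]) {
      if (static_cast<size_t>(write) != read) {
        state[write] = state[read];  // the extra read/write traffic discussed above
      }
      active[write] = true;
      write++;
    }
  }
  for (size_t i = write; i < active.size(); i++) {
    active[i] = false;
  }
  return write;
}
```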
Ring Buffers
In simple cases where we ping-pong between two kernels, like `intersect_shadow` and `shade_shadow`, an idea would be to share a single queue and fill consecutive empty items. With the idea of a ring buffer this can be made to work for two kernels.

In the single-threaded case this is straightforward; however, synchronization to avoid overwriting items from the input queue is not so obvious with many threads. In practice we'd likely need to allocate additional memory based on the number of blocks that are executed in parallel, and that can be significant.
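A single-threaded sketch of the shared ring buffer idea for two ping-ponging kernels; as noted above, the hard part is the multi-threaded synchronization, which this sketch deliberately ignores:

```cpp
#include <cstdint>
#include <vector>

// One queue shared by intersect_shadow and shade_shadow. The producer kernel
// appends at `tail`, the consumer kernel reads from `head`; indices wrap around.
struct RingQueue {
  std::vector<uint32_t> items;  // path indices
  uint32_t head = 0;            // next item to consume
  uint32_t tail = 0;            // next free slot
  uint32_t count = 0;           // items currently in the queue

  explicit RingQueue(uint32_t capacity) : items(capacity) {}

  bool push(uint32_t path_index) {
    if (count == items.size()) {
      return false;  // full; the producer would have to stall or flush
    }
    items[tail] = path_index;
    tail = (tail + 1) % items.size();
    count++;
    return true;
  }

  bool pop(uint32_t &path_index) {
    if (count == 0) {
      return false;
    }
    path_index = items[head];
    head = (head + 1) % items.size();
    count--;
    return true;
  }
};
```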
Chunks
An idea to avoid that would be to use a chunked allocator for queues, with the size of the chunks being equal to the block size of kernels.
When a kernel is about to write out queue items, it would allocate an additional chunk for the queue if the current chunk does not have enough space for all items. Queue items would then be written into 1 or 2 chunks. When executing a kernel, memory reads would be coalesced, since each chunk matches the block size for that kernel and contains all the queue items in order. Sorting by shader would break coalescing for the `shade_surface` kernel though, and a separate queue per shader would likely waste too much memory.

Significant memory could still be unused, for two reasons:
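For illustration, a rough sketch of the chunked queue described above, assuming a chunk size equal to a hypothetical kernel block size of 256 (not the actual Cycles implementation):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A queue built from fixed-size chunks. Each chunk has the same size as a
// kernel block, so a block that consumes one chunk reads contiguous,
// coalesced queue entries.
struct ChunkedQueue {
  static constexpr uint32_t chunk_size = 256;  // assumed kernel block size

  std::vector<std::vector<uint32_t>> chunks;

  // Append an item, allocating a new chunk when the current one is full.
  // A kernel writing up to block_size items therefore touches at most 2 chunks.
  void push(uint32_t path_index) {
    if (chunks.empty() || chunks.back().size() == chunk_size) {
      chunks.emplace_back();
      chunks.back().reserve(chunk_size);
    }
    chunks.back().push_back(path_index);
  }

  // Number of blocks to launch: one per chunk (the last may be partially full).
  size_t num_blocks() const {
    return chunks.size();
  }
};
```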
The megakernel was a clear win with CUDA when we added it to speed up the last 2% of paths. But benchmarking this with OptiX now, it's a mixed bag:

A possible explanation is that `pvt_flat.blend` uses adaptive sampling, where batch sizes are smaller and the megakernel helps more. Also, this does not benchmark viewport rendering, where the megakernel helps more.

There's still room for optimization when we have few paths active, without using a megakernel. Given these numbers, that seems worth looking into.
Current progress on trying to eliminate the megakernel is in P2111, still not where I want it to be.
Compacting the state array seems not all that helpful.
What I did notice while working on that is that in the `pvt_flat` scene, the number of active paths often drops to a low number but is not refilled quickly. Reducing the tile size to avoid that not only avoids the performance regression, but actually speeds up the rendering. However, this slows down other scenes.

There must be something that can be done to get closer to the best of both.
Looking at the kernel execution times of bmw27, it's clear that optimizing `init_from_camera` for multiple work tiles would help, but it's only part of the performance gap. There's something else going on here that is harder to pin down.

Note about differentials: PBRT-v4 is not even passing differentials along with rays, but simply computing them using the camera information. This gives incorrect results through reflections and refractions, but may be close enough in practice.
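To illustrate that note, here is a minimal sketch of deriving a pixel footprint from camera information alone, in the spirit of the approach described (an assumed approximation for illustration, not PBRT-v4 or Cycles code; it ignores reflections and refractions, as mentioned above):

```cpp
#include <cmath>

// Approximate the world-space footprint of one pixel at hit distance t,
// using only camera parameters (vertical field of view and image height).
// This ignores how reflections and refractions stretch the footprint,
// which is the inaccuracy mentioned above.
float pixel_footprint_radius(float fov_radians, int image_height, float hit_distance)
{
  // Extent of one pixel on a plane at unit distance from a perspective camera.
  const float pixel_extent = 2.0f * std::tan(0.5f * fov_radians) /
                             static_cast<float>(image_height);
  // The footprint grows roughly linearly with the hit distance.
  return hit_distance * pixel_extent;
}
```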
This issue was referenced by blender/cycles@5db8d93df3
This issue was referenced by 0803119725
This issue was referenced by blender/cycles@579813f4f8
This issue was referenced by 4d4113adc2
Trying to figure out which parts of the `shade_surface_raytrace` kernel are using most local memory:

So roughly:
This issue was referenced by blender/cycles@6f7dd81db8
This issue was referenced by 001f548227
This issue was referenced by blender/cycles@9026179d2e
This issue was referenced by 0c52eed863
Changed title from "Cycles X - GPU Performance" to "Cycles: GPU Performance"