EEVEE Next tests hang on macOS buildbot #120038

Open
opened 2024-03-28 21:31:13 +01:00 by Brecht Van Lommel · 12 comments

The GPU tests hang and time out, which causes the build to get cancelled. Example:
https://builder.blender.org/admin/#/builders/31/builds/15362/steps/6/logs/stdio

Backtrace:
https://projects.blender.org/brecht/paste/src/branch/main/buildbot_macos_eevee_next_hang.txt

There is no significant CPU activity while it's hanging; it just looks stuck.

In some cases the machine restarted. `ps ax | grep Blender` shows a bunch of processes marked with status `?E`, which means they are exiting, but they never seem to finish and I can't kill them. It might just be something about the way the buildbot cancels processes, though I haven't seen this before.

I bisected the cause down to this commit: ecffea86b1. Reverting it on main makes the tests work again.

I can't reproduce this on a MacBook Pro laptop. The buildbot GPU tests always run on the same tlab-mpm-macos-arm64-06 machine, which is a Mac Mini M1.

Brecht Van Lommel added this to the 4.2 LTS milestone 2024-03-28 21:31:13 +01:00
Brecht Van Lommel added the
Interest
EEVEE
Platform
macOS
Module
EEVEE & Viewport
Type
To Do
labels 2024-03-28 21:31:14 +01:00
Brecht Van Lommel changed title from EEVEE Next tests hange on macOS buildbot to EEVEE Next tests hang on macOS buildbot 2024-03-28 21:47:35 +01:00
Author
Owner

CC @fclem @Jeroen-Bakker @pragma37 @Michael-Parkin-White-Apple, would be good to fix this somehow.

@Sergey If you're curious about why this machine has been going down randomly in the past weeks.

Member

Quickly looking at the 'wait' patterns around `GPU_finish`, it might indicate that there are too many command buffers in play. We can create a PR that enables detailed logging to check whether this is the case.

In the last months I saw several changes that add new `GPU_flush`/`GPU_finish` calls to the Metal backend, but the advice is to reduce these calls (GHOST_ContextCGL.mm#29-40). I couldn't find a paper trail explaining why adding a flush/finish is the correct way to solve these issues. Perhaps the root cause isn't known.

Storage buffers are read back inside the shadow pipeline (to inform users) and inside the irradiance cache. This is safeguarded with a `GPU_memory_barrier`, but the Metal/Vulkan backends ignore these barriers. As a work-around we could make the Metal backend invoke `GPU_finish` for the `GPU_BARRIER_BUFFER_UPDATE` flag instead.
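For illustration, that work-around could look roughly like the sketch below. The class and method names are hypothetical and do not match Blender's actual Metal backend sources; it only shows the idea of mapping `GPU_BARRIER_BUFFER_UPDATE` to a full `GPU_finish`.

```
/* Hypothetical sketch only: treat GPU_BARRIER_BUFFER_UPDATE as a full GPU_finish
 * on Metal, since the backend otherwise ignores memory barriers. Class and
 * method names are illustrative, not Blender's actual code. */
void MTLContextSketch::memory_barrier(eGPUBarrier barrier_bits)
{
  if (barrier_bits & GPU_BARRIER_BUFFER_UPDATE) {
    /* The CPU is about to read back a storage buffer: submit all pending
     * command buffers and block until the GPU has finished executing them. */
    GPU_finish();
    return;
  }
  /* Other barrier bits can keep relying on Metal's implicit hazard tracking. */
}
```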

Member

#120048 uses async readback for these buffers, to test whether that solves the issue.
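For reference, an asynchronous readback on Metal typically looks something like the sketch below. This is illustrative only, not the actual change in #120048, and the function and parameter names are made up.

```
#import <Metal/Metal.h>

/* Illustrative sketch of async storage-buffer readback on Metal: copy the GPU
 * data into a CPU-visible staging buffer and consume it from a completion
 * handler instead of blocking in GPU_finish. */
static void async_readback_sketch(id<MTLDevice> device,
                                  id<MTLCommandQueue> queue,
                                  id<MTLBuffer> gpu_ssbo,
                                  NSUInteger size,
                                  void (^on_done)(const void *data, NSUInteger len))
{
  id<MTLBuffer> staging = [device newBufferWithLength:size
                                              options:MTLResourceStorageModeShared];
  id<MTLCommandBuffer> cmd = [queue commandBuffer];
  id<MTLBlitCommandEncoder> blit = [cmd blitCommandEncoder];
  [blit copyFromBuffer:gpu_ssbo sourceOffset:0
              toBuffer:staging destinationOffset:0
                  size:size];
  [blit endEncoding];
  [cmd addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    /* Runs on a Metal callback thread once the GPU has finished the copy. */
    on_done(staging.contents, size);
  }];
  [cmd commit]; /* No waitUntilCompleted: blocking here is what was stalling. */
}
```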

Member

#120050 moves the GPU finish into the `GPU_memory_barrier` calls where CPU synchronization is needed.
This also seems to fix some render tests.

Author
Owner

It seems that unfortunately neither of the PRs resolved the issue.


Looking into this now, I do recall a separate fix for this, which should have resolved it. There was a small edge case where, if an event signal was the only command in a command buffer, the command buffer was not being issued.

Although I'm unsure whether the patch landed.
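The edge case being described would look roughly like the following hypothetical sketch (not the actual fix):

```
#import <Metal/Metal.h>

/* Hypothetical sketch of the edge case: a command buffer whose only content is
 * an event signal. If submission is skipped because "no other work was
 * encoded", anything waiting on that event stalls forever. */
static void flush_with_signal_sketch(id<MTLCommandQueue> queue,
                                     id<MTLEvent> event,
                                     uint64_t value)
{
  id<MTLCommandBuffer> cmd = [queue commandBuffer];
  [cmd encodeSignalEvent:event value:value];
  /* A buggy early-out here ("nothing was encoded, skip the submit") would drop
   * the signal. The fix is to commit unconditionally so it is actually issued. */
  [cmd commit];
}
```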


So looking, the change is correctly in, but I'm unsure why this is still hanging. Investigating now; I'll see if I can reproduce it easily.

Edit:

From the log, what I believe is happening is that we are hitting the upper bound for number of command buffers in flight, due to the compute debug flag for splitting encoders (which I think we should remove).

I see that one system thread is stalled in:

```
thread #18, queue = 'com.Metal.CommandQueueDispatch'
    frame #0: 0x00000001a11a7f54 libsystem_kernel.dylib`mach_msg2_trap + 8
    frame #1: 0x00000001a11ba280 libsystem_kernel.dylib`mach_msg2_internal + 80
    frame #2: 0x00000001a43752a4 IOKit`io_connect_method + 516
    frame #3: 0x00000001a4375038 IOKit`IOConnectCallMethod + 176
    frame #4: 0x00000001bdf51ccc IOGPU`IOGPUCommandQueueSubmitCommandBuffers + 184
    frame #5: 0x00000001bdf4210c IOGPU`-[IOGPUMetalCommandQueue _submitCommandBuffers:count:] + 352
    frame #6: 0x00000001bdf41f84 IOGPU`-[IOGPUMetalCommandQueue submitCommandBuffers:count:] + 72
```

This would imply it is waiting to submit a command buffer, as others are likely in flight. However, the app is also waiting on the latest submitted command buffer to complete, which seems to be a deadlock. Usually the system queue should be separate, but it may only be manifesting in some cases.

In this case, removing all `MTL_DEBUG_SINGLE_DISPATCH_PER_ENCODER` cases should resolve this failing test, as we should never otherwise have that many command buffers in flight. I'll also double-check the logic for handling command buffer limits, as we should naturally not end up in a deadlock situation, regardless of what happens.
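As an illustration of what such command-buffer-limit handling can look like, here is a hedged sketch. The names and the cap value are made up; this is not Blender's actual command buffer manager.

```
#import <Metal/Metal.h>
#import <dispatch/dispatch.h>

/* Hypothetical sketch: cap the number of in-flight command buffers with a
 * semaphore released from a completion handler, and never wait on a buffer
 * that was not actually committed. Keeping the cap well below Metal's own
 * queue limit means the system submission thread should not stall. */
static dispatch_semaphore_t in_flight_sema; /* e.g. dispatch_semaphore_create(8) at startup */

static void submit_sketch(id<MTLCommandBuffer> cmd)
{
  /* Acquire an in-flight slot before committing; the handler releases it. */
  dispatch_semaphore_wait(in_flight_sema, DISPATCH_TIME_FOREVER);
  [cmd addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    dispatch_semaphore_signal(in_flight_sema);
  }];
  [cmd commit];
}

static void wait_for_sketch(id<MTLCommandBuffer> cmd)
{
  /* Guard against the deadlock seen here: never wait on a command buffer that
   * was never actually submitted. */
  if (cmd.status == MTLCommandBufferStatusNotEnqueued) {
    submit_sketch(cmd);
  }
  [cmd waitUntilCompleted];
}
```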


As an additional note, I've now been able to consistently reproduce this on my local machine using the Tree Creature asset in EEVEE Next, instead of via the render tests. The stack trace is slightly different, but the manifestation, stalling while waiting for command buffer completion, is the same.

From the looks of it, a command buffer is being scheduled but never executed.

The patch that appeared to introduce this issue is a red herring. The issue was occurring before, but the app was not explicitly waiting for this command buffer to complete, so it did not manifest as an app-side problem, despite the underlying cause still being present.

I am currently investigating this, but we can track the progress here.

Blender Bot added the
Status
Resolved
label 2024-04-16 14:57:22 +02:00
Member

Reopening the issue. It was automatically closed by a commit that said `and could possibly resolve {this issue}`.

It's only a possibility that it's resolved and we should wait for confirmation that it's fixed.

Alaska reopened this issue 2024-04-16 16:09:21 +02:00
Blender Bot added
Status
Confirmed
and removed
Status
Resolved
labels 2024-04-16 16:09:24 +02:00
Author
Owner

It still appears to be timing out in main:
https://builder.blender.org/admin/#/builders/31/builds/15677


I fixed two missing resources inside the baking pipeline today (see 998d0796e4). Maybe this changes the current state of this issue (since the offending commit is related to lightprobe baking).

Author
Owner
No luck unfortunately: https://builder.blender.org/admin/#/builders/31/builds/15734