Blend files with a lot of Shape Keys aren't compressed efficiently #120901

Open
opened 2024-04-22 01:25:50 +02:00 by Jonnyawsom3 · 8 comments

When saving a file with lots of shape keys (in the hundreds), the file size bloats massively. Enabling compression reduces the file size, but compressing the file externally (via zstd on the command line) reduces it much further.
I first discovered this on a 200 MB file that shrank to 12 MB when re-compressed externally, lowering save time, load time and storage cost.

Attached are two files: a mesh with no shape keys, and the externally compressed file with 64 shape keys (83 MB uncompressed).
Opening the compressed file and then re-saving, the file size quadruples, and it gets disproportionately worse as the number of shape keys increases.

I know Blender splits the compression into chunks for faster seeking, but the boundaries of those chunks could clearly be placed far better when compression is enabled in the save dialogue.
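
To illustrate the effect (not part of the original report): a minimal C sketch against the public zstd API, using made-up test data, that compresses the same buffer once as independent ~1 MB frames and once as a single frame. Because zstd can't match data across frame boundaries, the per-chunk version misses the redundancy between near-identical shape keys and comes out far larger.

```
/* Sketch only: compare compressing one large buffer as independent ~1 MB
 * frames vs. as a single frame. Frame boundaries prevent zstd from matching
 * redundancy between chunks (e.g. between similar shape keys).
 * Error checks omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

#define CHUNK ((size_t)1 << 20) /* ~1 MiB per frame */

static size_t compress_chunked(const char *src, size_t len, int level)
{
  char *dst = malloc(ZSTD_compressBound(CHUNK));
  size_t total = 0;
  for (size_t off = 0; off < len; off += CHUNK) {
    size_t n = (len - off < CHUNK) ? len - off : CHUNK;
    total += ZSTD_compress(dst, ZSTD_compressBound(CHUNK), src + off, n, level);
  }
  free(dst);
  return total;
}

static size_t compress_whole(const char *src, size_t len, int level)
{
  char *dst = malloc(ZSTD_compressBound(len));
  size_t total = ZSTD_compress(dst, ZSTD_compressBound(len), src, len, level);
  free(dst);
  return total;
}

int main(void)
{
  /* Fake "shape key" data: 64 blocks of ~1.3 MB each, incompressible within
   * a block but almost identical across blocks. */
  const size_t key_size = 1300000, keys = 64;
  char *buf = malloc(key_size * keys);
  srand(0);
  for (size_t j = 0; j < key_size; j++) {
    buf[j] = (char)rand();
  }
  for (size_t i = 1; i < keys; i++) {
    memcpy(buf + i * key_size, buf, key_size);
    buf[i * key_size] ^= (char)i; /* tiny per-key difference */
  }
  printf("chunked frames: %zu bytes\n", compress_chunked(buf, key_size * keys, 3));
  printf("single frame:   %zu bytes\n", compress_whole(buf, key_size * keys, 3));
  free(buf);
  return 0;
}
```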

Jonnyawsom3 added the Type/Report, Status/Needs Triage, Priority/Normal labels 2024-04-22 01:25:51 +02:00
YimingWu changed title from "Shape Keys aren't compressed efficiently" to "Blend files with a lot of Shape Keys aren't compressed efficiently" 2024-04-22 04:12:23 +02:00
Member

Currently Blender uses a zstd compression level of 3; it doesn't compress as tightly, but it's quick enough. zstd can go up to level 22, but that requires more memory and is much slower, and of course it would also take longer to preview/link contents from these blend files.

Maybe a compression level option could be useful in such cases. cc @LukasStockner
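
For reference, a rough sketch of what exposing that option could look like against the zstd API itself (not Blender's actual wrapper; the wrapper function name here is made up): the level is just a compression-context parameter.

```
/* Sketch only, not Blender code: a user-selectable compression level maps
 * directly onto a zstd compression-context parameter. */
#include <zstd.h>

static size_t compress_with_level(void *dst, size_t dst_cap,
                                  const void *src, size_t src_len,
                                  int level) /* e.g. 3 today, up to ZSTD_maxCLevel() */
{
  ZSTD_CCtx *cctx = ZSTD_createCCtx();
  ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
  size_t written = ZSTD_compress2(cctx, dst, dst_cap, src, src_len);
  ZSTD_freeCCtx(cctx);
  return written; /* check with ZSTD_isError() in real code */
}
```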

YimingWu added the Module/Pipeline, Assets & IO and Status/Needs Info from Developers labels and removed the Status/Needs Triage label 2024-04-22 04:19:47 +02:00
Author

Decompression speed actually stays roughly constant across compression levels, but that's unrelated for now.

The blend file provided is 83 MB uncompressed, 45 MB with current Blender compression, and 26 MB with external zstd compression at the same level 3. Being able to set different compression levels would be nice, but it's clear something else is happening during Blender's saving too, causing almost a 2x size increase.

I have a 330 MB blend file that Blender can only get down to 202 MB. With external zstd compression at level 3, it shrinks to under 60 MB, showing there's a lot of room for improvement.

```
zstd "Uncompressed.blend" -o "ZSTD.blend" -3
Uncompressed.blend   : 17.83%   (   328 MiB =>   58.6 MiB, ZSTD.blend)

Uncompressed.blend   344,450,206
Blender ZSTD.blend   212,350,327
ZSTD.blend            61,401,667
```
Member

Huh, then that's something to look at, yes...

Looks like Blender only compresses and writes individual chunks of data, rather than collecting everything and compressing it all at the end. That's my understanding of `mywrite` and `write_file_handle`. @LukasStockner
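
A schematic of that understanding in C (this is not the real `mywrite`/`write_file_handle` code; the buffer type and flush callback are invented for illustration): data is buffered until roughly 1 MB has accumulated, then that chunk is compressed and written as its own independent frame.

```
/* Schematic only, NOT Blender's writer code: buffer data until ~1 MB has
 * accumulated, then hand that chunk off to be compressed as its own frame. */
#include <stddef.h>
#include <string.h>

#define WRITE_CHUNK ((size_t)1 << 20) /* ~1 MB */

typedef struct {
  unsigned char buf[WRITE_CHUNK];
  size_t used;
  /* Invented callback standing in for "compress this chunk and write it". */
  void (*flush_frame)(const void *data, size_t len);
} SketchWriter;

static void sketch_write(SketchWriter *w, const void *data, size_t len)
{
  const unsigned char *src = data;
  while (len > 0) {
    size_t space = WRITE_CHUNK - w->used;
    size_t n = (len < space) ? len : space;
    memcpy(w->buf + w->used, src, n);
    w->used += n;
    src += n;
    len -= n;
    if (w->used == WRITE_CHUNK) {
      /* Chunk full: compressed as one frame. Redundancy with the next chunk
       * (e.g. the next shape key) is invisible to the compressor. */
      w->flush_frame(w->buf, w->used);
      w->used = 0;
    }
  }
}
```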

Lukas Stockner self-assigned this 2024-04-23 01:26:43 +02:00
Member

I believe I found what's going on here.

In the compression code, there's a tradeoff on chunk size: On the one hand, you want chunks to be as large as possible so the compressor can find redundant data and get a large compression ratio. On the other hand, for loading from linked files, it's useful if Blender can read files partially - and that requires chunks to be smallish, otherwise you end up basically reading the whole file anyways.

Currently, the code writes a frame for every full MB of data. However, in the example file, this works out to roughly one shapekey per chunk - which is just too small for the compressor to detect the similarities between different shapekeys.

The fix for this seems simple - never split a single ID datablock into multiple chunks, since you won't be reading it partially anyways. That way, all shapekeys for one mesh get packed into one chunk and compressed effectively.

There are some implementation details to be worked out though; I'll update this when I have a PR.
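
(My reading of that proposal in sketch form, not the actual patch: the frame may only be cut on an ID datablock boundary, so the 1 MB target becomes a lower bound rather than a hard limit.)

```
/* Sketch of the proposed fix, not the actual patch: only close a compression
 * frame on an ID datablock boundary, so a single datablock (e.g. a mesh with
 * all its shape keys) never gets split across frames. */
#include <stdbool.h>
#include <stddef.h>

#define TARGET_CHUNK ((size_t)1 << 20) /* still aim for ~1 MB frames */

typedef struct {
  size_t buffered;  /* bytes collected for the current frame */
  bool in_id_block; /* currently inside an ID datablock? */
} FrameState;

/* Called after each write; decides whether to close the current frame. */
static bool should_flush_frame(const FrameState *st)
{
  /* Never split a single ID datablock: wait for its end, even if the frame
   * grows well past the 1 MB target. */
  if (st->in_id_block) {
    return false;
  }
  return st->buffered >= TARGET_CHUNK;
}
```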

Author

Sounds like exactly what I was thinking, thanks for looking into it.

Member

Okay, some more info: I've got an implementation of this working, and it indeed delivers the expected 25MB file size.

However, all implementations that I've thought of so far are a regression in time and/or peak memory use.
The problem here is that Blender currently waits until it has ~1MB of data, and then fires off a background thread to compress and write it. However, with this tweak for the provided sample scene, we need to collect up to 60MB of data and compress it in a single frame.

Therefore, there are three options:

  • Give up on threading, and just use the streaming compression API of zstd to write it as it comes in. Works, but is slower (330ms vs. 90ms originally for the file here, 3589ms vs. 745ms for a larger file).
  • Use zstd's builtin multithreading, and use the streaming compression API as above. Slightly better, but still not ideal (170ms vs. 90ms and 2903ms vs. 745ms).
  • Use the old approach, but support queueing up multiple chunks of data for each thread. Better for the large file, but bad for the Shape Key file (since most of the file is one chunk, which is then again single-threaded), and it also has considerable memory overhead, since we need to keep potentially large chunks in memory until we're done: 338ms vs. 90ms, 2270ms vs. 745ms.

So, yeah, in summary: The better compression would be nice, but to be honest, the 4x slowdown isn't worth it. This might be something to include as part of a stronger compression option (e.g. for preparing files for uploads), but that's another todo.
The other option would be to change how shapekeys are stored in the on-disk format, but that would be a compatibility-breaking change.
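
For reference, option 2 above corresponds roughly to the following use of zstd's streaming API with its builtin worker threads (a generic sketch, not Blender's writer code; it also assumes a zstd build with multithreading enabled):

```
/* Generic sketch of streaming compression with zstd's builtin worker threads
 * (option 2 above); not Blender's writer code. */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

static void stream_compress(FILE *out, const void *src, size_t len,
                            int level, int workers)
{
  ZSTD_CCtx *cctx = ZSTD_createCCtx();
  ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
  /* 0 = single-threaded; >0 requires a zstd built with multithreading. */
  ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, workers);

  size_t out_cap = ZSTD_CStreamOutSize();
  void *out_buf = malloc(out_cap);

  ZSTD_inBuffer in = {src, len, 0};
  size_t remaining;
  do {
    ZSTD_outBuffer ob = {out_buf, out_cap, 0};
    /* ZSTD_e_end: consume the remaining input and finish the frame. */
    remaining = ZSTD_compressStream2(cctx, &ob, &in, ZSTD_e_end);
    fwrite(out_buf, 1, ob.pos, out);
  } while (remaining != 0); /* ZSTD_isError() checks omitted for brevity */

  free(out_buf);
  ZSTD_freeCCtx(cctx);
}
```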


long term - could / should shape keys be converted into generic mesh attributes?

Author

I would've thought the smaller compressed size would offset the time spent writing to disk, for a small net win on save time, but obviously if you're on an NVMe drive that advantage quickly goes away.

I almost wonder if larger block sizes could scale with the size of the data type. So if there's 20 MB of geometry nodes it still uses 1 MB blocks, but 200 MB of shape key data uses 10 MB blocks instead, etc.
Likely overcomplicated, but worth throwing ideas around.

I suppose I'll stick to using the zstd command-line tool on suspiciously large files until either a sharing option or more compression options in general are available. Hopefully there isn't much difference in performance compared to Blender segmenting by data type.
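
For what it's worth, that block-size scaling idea could be as simple as a heuristic like the following (purely hypothetical, not anything proposed above; the 1/16 ratio and the cap are invented numbers):

```
/* Hypothetical heuristic only: scale the target frame size with the size of
 * the datablock being written, clamped between 1 MiB and an arbitrary cap. */
#include <stddef.h>

static size_t target_frame_size(size_t datablock_size)
{
  const size_t min_chunk = (size_t)1 << 20;  /* 1 MiB, the current chunk size */
  const size_t max_chunk = (size_t)64 << 20; /* arbitrary 64 MiB cap */
  size_t target = datablock_size / 16;       /* ~12 MiB frames for ~200 MB of shape keys */
  if (target < min_chunk) target = min_chunk;
  if (target > max_chunk) target = max_chunk;
  return target;
}
```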
