FBX IO: Speed up parsing by multithreading array decompression #104739

Merged
Thomas Barlow merged 7 commits from Mysteryem/blender-addons:fbx_parse_multithread_pr into main 2024-01-12 21:32:36 +01:00

7 Commits

Author SHA1 Message Date
726ddf588e Increase FBX IO version 2024-01-12 20:30:29 +00:00
0e4e3e4337 Merge remote-tracking branch 'upstream/main' into fbx_parse_multithread_pr 2023-12-18 01:44:04 +00:00
f5e286a94e Update for new multithreading utils 2023-12-03 00:45:59 +00:00
2c1a816491 Merge branch 'main' into fbx_parse_multithread_pr 2023-12-03 00:17:56 +00:00
0fdf09a75b Comment updates 2023-09-18 00:13:11 +01:00
93b00cdb76 Minor speedup to array decompression
Since the expected decompressed size is known ahead of time, the output
buffer can be created at the full size in advance, reducing the number
of memory allocations while decompressing.
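
A minimal sketch of how this can be done with the standard `zlib` module (illustrative only, not necessarily the exact change made here): the known uncompressed size is passed as `bufsize`, which sets the initial size of the output buffer.

```python
import zlib

def decompress_array_data(compressed: bytes, uncompressed_size: int) -> bytes:
    # Starting the output buffer at the known final size avoids repeated
    # reallocations while zlib decompresses the array bytes.
    return zlib.decompress(compressed, bufsize=uncompressed_size)
```
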
2023-09-02 12:43:08 +01:00
00c58e0d91 FBX Parsing: Multithread array decompression
Because `zlib.decompress` releases the GIL, the arrays are now
decompressed on separate threads.
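
A simplified sketch of the idea (the helper below is hypothetical, not the addon's actual code): because the GIL is released inside `zlib.decompress`, work submitted to a thread pool genuinely runs in parallel with the main thread's parsing.

```python
import zlib
from concurrent.futures import Future, ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=2)

def decompress_in_background(compressed: bytes, uncompressed_size: int) -> Future:
    # zlib.decompress releases the GIL, so the worker thread makes real
    # progress while the main thread keeps parsing; call .result() on the
    # returned future when the array bytes are actually needed.
    return _executor.submit(zlib.decompress, compressed,
                            bufsize=uncompressed_size)
```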

Given enough logical CPUs on the current system, decompressing arrays
and parsing the rest of the file now happen simultaneously.

All the functions for managing the multithreading are encapsulated in
a helper class that is exposed through a context manager.
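
One way such a helper could look (class name, method names and pool size are invented for illustration):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

class ArrayDecompressionHelper:
    """Illustrative sketch: owns the worker pool, used as a context manager."""

    def __enter__(self):
        self._executor = ThreadPoolExecutor(max_workers=2)
        return self

    def schedule(self, compressed, uncompressed_size):
        # Returns a future that the parser resolves once it needs the array.
        return self._executor.submit(
            zlib.decompress, compressed, bufsize=uncompressed_size)

    def __exit__(self, exc_type, exc_value, traceback):
        # Wait for any still-running decompressions before leaving the
        # parsing code.
        self._executor.shutdown(wait=True)
        return False
```

Scoping the pool to a `with ArrayDecompressionHelper() as helper:` block keeps thread startup, scheduling and shutdown in one place instead of spreading them through the parser.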

If the current platform does not support multithreading
(wasm32-emscripten/wasm32-wasi), then the code falls back to being
single-threaded.
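
How the unsupported-platform check might look (an assumption about the approach, not a quote of the actual code): probe whether a thread can be started and fall back to synchronous decompression if it cannot.

```python
def threads_are_supported() -> bool:
    # On wasm32-emscripten/wasm32-wasi builds of CPython, the threading
    # machinery is either unavailable or starting a thread raises, so probe
    # once and let callers take the single-threaded path when this fails.
    try:
        from threading import Thread
        probe = Thread(target=lambda: None)
        probe.start()
        probe.join()
        return True
    except (ImportError, RuntimeError):
        return False
```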

Aside from .fbx files without any compressed arrays, array decompression
usually takes just under 50% of the parsing duration on average, though it
commonly varies between 40% and 60% depending on the contents of the file.

In practice I only got an average reduction of about 35% in parsing
duration, because the main thread now reads from the file more often
and appears to spend more time waiting on IO than before. This is
likely to vary depending on the file system being read from, and IO is
expected to take longer in real use cases because the file being read
won't have been accessed recently.

For the smallest files, e.g. a single cube mesh, this can be slightly
slower, because starting a new thread costs more time than running the
decompression on that thread saves.

Because the main thread spends some time waiting on IO, even systems
with a single CPU can see a small speedup from this patch. I get about a
6% reduction in parsing duration in this case.

Parsing .fbx files takes about 16% of the total import duration on
average, so a 35% faster parse would be expected to reduce the overall
import duration by about 5.6% (0.35 × 16%). However, from timing imports
before and after this patch, I get an actual average reduction of 3.5%.
2023-08-19 02:57:01 +01:00