Point Cache replacement based on Alembic #37578

Closed
opened 2013-11-22 11:32:01 +01:00 by Lukas Tönne · 67 comments
Member

The goal of this project is to implement a new Point Cache system based on the Alembic library and unify the various caching methods used in Blender.

This is a prerequisite for further work on particle systems. Having a working Alembic exporter/importer will also greatly benefit integration of Blender into larger pipelines.

Current Issues

  • Point cache is used for most physics simulations, but does not include mesh caching. For this there is a separate Mesh Cache modifier, using MDD or PC2 file formats, with its own limitations. Using a unified point cache system would help scene management.
  • Alembic, unlike the current point cache binary format, supports arbitrary data attributes. This is very important for flexible simulations and user-defined data in particles and mesh data (vertex groups, custom data layers, etc.).
  • Point cache is limited to fixed-size data sets. This is sufficient for many simulations which only deform geometry (cloth, hair), but for particles and constructive modifier caching variable point data sets are important.
  • Compression in the point cache file format leaves a lot to be desired. Alembic is specifically designed to have a small memory/disk space footprint and make good use of threading for fast export/import.
  • Point cache binary format is unique to Blender. While it is fairly simple to interpret, it does not facilitate import/export and integration into larger pipeline workflows. Alembic on the other hand is a commonly used tool and can leverage a large developer community.
  • Settings for caches are object-centric, each simulation writes its own separate cache files. Baking all caches is simply a loop over all objects, which is not very efficient. Having scene-level settings and operators would make Alembic export/import easier.

Proposal

Keep basic cache settings on Scene level:

  • "Disk Cache" vs. "Memory Cache": This could be reframed as having cache file output by default and use ''packing'' in .blend files like we do with images.
  • Output directory setting for cache files (with nice defaults)
  • Frame ranges
  • Automatic cache recording settings (see below)

Optional overrides on object/sim level

These are used if deviating settings are needed for individual simulation caches. By default it should not be necessary to tweak each cache individually.

Question: How do we handle automatic caching?

The current approach is to automatically record the point cache when stepping through frames, unless the cache is explicitly baked.
This breaks easily when jumping over frames or scrubbing in the timeline, because simulations need a contiguous, ordered frame range for correct results.
Does this work well enough to justify keeping it? Are there alternatives to this behavior?
One suggestion was to make simulations independent from the animation playback by default, but this makes it difficult to sync simulation results to keyframe-animated objects and to record valid caches on-the-fly.

Question: Do we really need multiple caches for each object? This feature adds a list of point caches to every object; only the active cache is used, and the others basically form a version control system. Could this be moved to the scene level? The problem with this way of "versioning" is that there is no real way to roll back to a previous version: the results can be compared, but the settings cannot be reverted.

Directory and File Names
How should directories and file naming work?
Current naming scheme works per object and uses a combination of output directory + filename:

  • Directory is chosen in this order:
    • explicit path, if specified by the user
    • or constructed as //blendcache_<blend file name>/ (if the .blend is saved and thus relative paths work)
    • or a temp folder like <tmp folder>/blendcache/

  • File name is generated as <basename>_<frame number>_<cache index>.bphys
    • <basename> can be specified by the user or otherwise the ID datablock name (usually Object) is used
    • <cache index> is generated automatically by default (-1), but needed if multiple caches exist in the same ID block.
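As a rough illustration of this resolution order (function names and the exact format string are hypothetical, not the actual Blender implementation):

```
/* Minimal sketch of the directory/filename resolution described above.
 * Function names and the exact format string are hypothetical. */
#include <cstdio>
#include <string>

std::string cache_directory(const std::string &user_path,
                            const std::string &blend_name,
                            const std::string &tmp_dir)
{
	if (!user_path.empty())
		return user_path;                           /* explicit path set by the user */
	if (!blend_name.empty())
		return "//blendcache_" + blend_name + "/";  /* relative to the saved .blend */
	return tmp_dir + "/blendcache/";                /* fallback: temp folder */
}

std::string cache_filename(const std::string &basename, int frame, int cache_index)
{
	char buf[256];
	/* <basename>_<frame number>_<cache index>.bphys */
	std::snprintf(buf, sizeof(buf), "%s_%06d_%02d.bphys", basename.c_str(), frame, cache_index);
	return std::string(buf);
}
```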

Frames, Times and Sample Indices
There are 3 different modes of interpreting "time" that need to be managed in a clean way:

  • Blender usually defines data in terms of frames.
Any notion of "time" is secondary, defined by the frames-per-second (fps) setting in the scene and the start/end frames.
Some simulations also add their own interpretation of time on top of this, using time scale factors (e.g. the smoke sim time scale options, http://wiki.blender.org/index.php/Doc:2.6/Manual/Physics/Smoke/Domain#Generic_options), although these factors mostly affect the internal simulation time step and don't have much consequence for mapping sample times.
  • Alembic samples can be accessed either directly by an index (0..N), or by a time value which is mapped to a sample index based on a time sampling (see the Time Sampling overview, https://code.google.com/p/alembic/wiki/AlembicPoint9UsersGuide#Time_Sampling, and the ISampleSelector class).

While it would be possible to largely ignore time sampling for Blender cache files and simply read each cached sample as a frame, it is probably more reliable and flexible to calculate time values as defined by the scene render range and fps settings. This would allow using externally generated Alembic files which use a different sampling rate than the current scene and map them into the scene's frame range.
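As a sketch of how this could look with Alembic's uniform time sampling (the fps value, start frame and the Ogawa backend are illustrative assumptions, the Alembic classes are real):

```
/* Sketch: mapping scene frames to Alembic sample times with a uniform
 * TimeSampling. */
#include <Alembic/Abc/All.h>
#include <Alembic/AbcCoreOgawa/All.h>
#include <cstdint>

using namespace Alembic::Abc;

int main()
{
	const double fps = 24.0;
	const int frame_start = 1;

	/* One sample per frame, starting at the scene start frame. */
	TimeSampling ts(1.0 / fps, frame_start / fps);

	OArchive archive(Alembic::AbcCoreOgawa::WriteArchive(), "cache.abc");
	const uint32_t ts_index = archive.addTimeSampling(ts);
	(void)ts_index;  /* object schemas reference this index when writing samples */

	/* Reading back: a frame of the current scene maps to a time value, even if
	 * the file was written with a different sampling rate. */
	const double frame = 37.0;
	ISampleSelector by_time(frame / fps);        /* select the sample nearest to a time */
	/* ISampleSelector by_index((index_t)36); */ /* ...or select directly by sample index */
	(void)by_time;
	return 0;
}
```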

Ultimately it could be very useful to integrate cached data into the NLA editor as a track. At that point the direct sample<->frame relationship that might still exist for cached data from the same scene is useless anyway.

Proposed Changes

  • Output directory is by default specified on scene level, using the same rules as before.
  • Alembic cache files don't store a file for each individual frame; instead, one file covers the whole cached frame range.
  • Two possible behaviors:
    • Each cached simulation/modifier creates its own file in the cache folder.
    • All caches are combined in a single scene cache file.
      This is more in line with standard Alembic use, but makes it more difficult to cache objects separately.
  • In both cases identifier names for cache data are a problem.
This is already the case with current point caches: renaming an object severs the mapping to file names, leaving unused files on disk and making a re-bake necessary.
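A minimal sketch of the single-scene-file variant, assuming the ID datablock name is used as the object path inside the archive (which is exactly where renaming becomes a problem):

```
/* Sketch: one scene-level archive in which each cached simulation becomes a
 * named child object. The ID name -> object path mapping is the identifier
 * that breaks when a datablock is renamed. */
#include <Alembic/Abc/All.h>
#include <Alembic/AbcCoreOgawa/All.h>
#include <Alembic/AbcGeom/All.h>
#include <cstdint>
#include <vector>

using namespace Alembic::Abc;
using namespace Alembic::AbcGeom;

int main()
{
	OArchive archive(Alembic::AbcCoreOgawa::WriteArchive(), "scene_cache.abc");
	OObject top = archive.getTop();

	/* Hypothetical mapping: object name and particle system name as the path. */
	OXform cube(top, "Cube");
	OPoints particles(cube, "ParticleSystem");

	/* Write one sample; the Points schema requires positions and ids. */
	std::vector<V3f> positions(10, V3f(0.0f, 0.0f, 0.0f));
	std::vector<uint64_t> ids(positions.size());
	for (size_t i = 0; i < ids.size(); ++i)
		ids[i] = i;

	OPointsSchema::Sample sample(P3fArraySample(positions), UInt64ArraySample(ids));
	particles.getSchema().set(sample);
	return 0;
}
```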

Modifiers

Almost all cacheable data is located somewhere in modifier stacks, so these require some special attention. While caching the final result of mesh modifier evaluation (final_dm) is obviously useful, we also want to support partial caching and layered evaluation of mesh stages and simulation.

Proposal

  • Add a "Point Cache" modifier for defining intermediate stages that can be cached. It would have almost no settings and just let users define fixed points, rather than caching every modifier (too much data) or relying on some heuristic of modifier "cost" to determine suitable stages.
  • By default there is one virtual Cache modifier instance shown at the end of the stack. This does not have to be a real modifier, but just reflect caching of the final_dm result. Caching can be disabled entirely by disabling this modifier. [Is it preferable to leave this completely to the user and not do any modifier caching at all by default?]
[Figures: base_modifiers.png (Base modifier stack with cache, https://archive.blender.org/developer/F121707/base_modifiers.png), cached_modifiers.png (Modifier stack with intermediate caching, https://archive.blender.org/developer/F121709/cached_modifiers.png)]
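To illustrate the idea, a purely hypothetical sketch of stack evaluation with such cache points (none of these names exist in Blender):

```
/* Hypothetical sketch of evaluating a modifier stack with cache points.
 * It only illustrates that a cache modifier either replaces the current
 * result with a stored one or stores the current result. */
struct DerivedMesh;

struct Modifier {
	bool is_cache_point;  /* the proposed "Point Cache" modifier */
	DerivedMesh *(*apply)(Modifier *md, DerivedMesh *dm);
};

/* Hypothetical helpers, standing in for the cache API. */
DerivedMesh *cache_read(Modifier *md, int frame);
void cache_write(Modifier *md, int frame, DerivedMesh *dm);

DerivedMesh *eval_stack(Modifier *mods, int totmod, DerivedMesh *dm,
                        int frame, bool cache_valid)
{
	for (int i = 0; i < totmod; i++) {
		Modifier *md = &mods[i];
		if (md->is_cache_point) {
			if (i == 0)
				continue;  /* a cache at the start of the stack is useless */
			if (cache_valid)
				dm = cache_read(md, frame);  /* replace the result with the stored one;
				                              * a real implementation would skip the
				                              * upstream modifiers entirely */
			else
				cache_write(md, frame, dm);  /* store the intermediate result */
		}
		else {
			dm = md->apply(md, dm);
		}
	}
	/* The "virtual" cache modifier at the end of the stack would handle the
	 * final_dm result here. */
	return dm;
}
```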

Open Questions

  • How to treat multiple successive cache modifiers? Can/Should this be prevented, or just ignore redundant instances? Similarly for caches at the beginning of the stack where they are useless.
  • Should these caches also handle internal data of simulation modifiers (smoke/fluid voxels, particles, hair, ...), or should they only handle DerivedMesh caching and leave sim data to explicit cache instances elsewhere (physics tab)? [I would prefer the latter, to keep the design clean and avoid nasty corner cases.]

Caching vs. Export

These two cases may need to have somewhat different requirements in terms of necessary data attributes, sampling, etc. (while running the same backend input/output functions).

  • Caching is an internal procedure. The data being written only has to be interpreted by Blender itself (although the files would largely follow Alembic schemas where applicable). Some data attributes (e.g. normals) might be omitted here, since reconstructing them is fairly cheap. [confirm?]
  • Export is a one-way procedure to describe Blender data in a generic format for other software. All possible data attributes should be included to ensure compatible results.
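A sketch of how one writer code path could serve both cases, with normals written only for export; the for_export flag and the surrounding structure are assumptions, while the Alembic schema calls themselves are real:

```
/* Sketch: same writer for caching and export, where optional attributes
 * (normals here) are only written for export. */
#include <Alembic/AbcGeom/All.h>
#include <cstdint>
#include <vector>

using namespace Alembic::AbcGeom;

void write_mesh_sample(OPolyMesh &mesh_obj,
                       const std::vector<V3f> &positions,
                       const std::vector<int32_t> &face_indices,
                       const std::vector<int32_t> &face_counts,
                       const std::vector<N3f> &normals,
                       bool for_export)
{
	OPolyMeshSchema::Sample sample(
	    P3fArraySample(positions),
	    Int32ArraySample(face_indices),
	    Int32ArraySample(face_counts));

	if (for_export && !normals.empty()) {
		/* Normals are cheap to rebuild in Blender, so a pure cache could omit them. */
		ON3fGeomParam::Sample nsamp(N3fArraySample(normals), kFacevaryingScope);
		sample.setNormals(nsamp);
	}
	mesh_obj.getSchema().set(sample);
}
```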

Implementation

(somewhat outdated)

[Figure: point_cache.png (https://archive.blender.org/developer/F29943/point_cache.png)]

  • Scene stores global settings for caching
  • Simulations and modifiers can have own settings if needed
  • Cache data is separate from other DNA by default, but can be packed into blend files like images
  • Point Cache is a thin API, using Alembic as a backend
  • Different schema implementations (writer/reader) are created for the various DNA types and to handle interpolation (comparable to the current PTCacheID)
  • Scene writer/reader ties everything together for exporting the scene as a whole, possibly using object hierarchy (flattening scene structure for export would also be possible)
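A purely illustrative sketch of that writer/reader split (all class and method names are invented):

```
/* Hypothetical sketch of the thin cache API: one writer/reader pair per DNA
 * type, with the Alembic archive hidden behind the interface. */
#include <Alembic/Abc/All.h>
#include <memory>
#include <vector>

class CacheWriter {
public:
	virtual ~CacheWriter() {}
	/* Write one sample for the given frame into the archive. */
	virtual void write_sample(Alembic::Abc::OArchive &archive, int frame) = 0;
};

class CacheReader {
public:
	virtual ~CacheReader() {}
	/* Read (and interpolate, if needed) the sample for the given frame. */
	virtual void read_sample(Alembic::Abc::IArchive &archive, float frame) = 0;
};

/* Concrete schema implementations would exist per DNA type, e.g. particles,
 * cloth, smoke voxels, DerivedMesh results: */
class ParticleCacheWriter : public CacheWriter {
	void write_sample(Alembic::Abc::OArchive &, int) override { /* ... */ }
};

/* A scene-level writer ties the individual writers together. */
class SceneCacheWriter {
	std::vector<std::unique_ptr<CacheWriter>> writers_;
public:
	void write_frame(Alembic::Abc::OArchive &archive, int frame) {
		for (auto &w : writers_)
			w->write_sample(archive, frame);
	}
};
```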

Notes

  • Build instructions for Alembic and the development branch:
http://wiki.blender.org/index.php/User:Phonybone/PointCache/BuildNotes
  • Unsorted list of thoughts on Alembic implementation details:
http://wiki.blender.org/index.php/User:Phonybone/PointCache/InitialThoughts
  • Original dia file for the diagram: point_cache.dia (https://archive.blender.org/developer/F29958/point_cache.dia)
Author
Member

Changed status to: 'Open'
Lukas Tönne self-assigned this 2013-11-22 11:32:01 +01:00
Member

Added subscriber: @plasmasolutions

Added subscriber: @jasperge-2

Added subscriber: @MartinLindelof

Added subscriber: @JayrajKharvadi
Author
Member

In my development branch (https://github.com/lukastoenne/blender/tree/pointcache) I have now removed the point cache ListBases. As noted in the task description, this feature does not add much value and it made the code unnecessarily complicated.

Older cache data is only useful for comparing to newer results, but one cannot actually revert to the previous settings just from the cache - for this you need to actually save a different .blend file version, which has its own cache anyway ... Such versioning functionality needs to be handled as part of the asset management or simply by making cache data backups outside of Blender.

The RNA code for accessing the point_caches collection was also causing recursion issues, basically the point cache contained a reference to itself, which is bad. Without the need to handle potential lists everywhere the code as well as the workflow becomes much simpler.

Member

Added subscriber: @lichtwerk

Added subscriber: @JonathanWilliamson

Added subscriber: @Brachi

Added subscriber: @mont29

Added subscriber: @krupa-2

Added subscriber: @SayantanChaudhuri

Removed subscriber: @SayantanChaudhuri

Added subscriber: @italic
Author
Member

I'd like point caches to work as unobtrusively as possible. However, in order to store a valid state of the simulation/modifier results and keep in sync with the user settings, some rules need to be established.

The current scheme works like this:

  • When the cache is "baked" the simulation is calculated explicitly for the whole frame range. After that the user settings are locked superficially in the UI to indicate that they won't have any effect until "free bake", i.e. the cache is released.
  • Without a baked cache, the results are still calculated if the frame range is walked through (animated with alt+a) in a valid order from the start frame onward. But any change to settings or a dependency (e.g. a controller object) will invalidate the cache.
  • Starting caching on a later frame than the start frame doesn't work very well. The cache will store results from the current frame onward if invalidated, but then the results in earlier frames are useless. Sometimes the cache will not be properly invalidated once going back, and I can't quite figure out when this happens. So it becomes a hit-and-miss affair, which is not good. Better to have a limited system with reliable results than a sophisticated machine that you cannot trust ...

I'd like to know if and how this system could be improved.

For a start, it's not really necessary to perform a full simulation over the whole frame range if the cache is already valid. Then we could have an option "lock/unlock" instead of the "bake"/"free bake" operators. What this means is that, when locked, a simulation/modifier will never perform calculations but try to read results from the cache. If no valid cache result is available for a frame this should be clearly indicated in both the property buttons and the 3D viewport (instead of misplaced particles/etc. like it often happens now).

Notice how "lock" refers to the simulation settings rather than the cache! It can be seen as a 3rd option in addition to completely enabling/disabling a simulation - expensive calculations are avoided and the simulation is treated as read-only data.

In addition to, and separate from this, there can be a "bake"/"full update" operator. What this does is simply run through the simulation frame range once to calculate correct results for every frame and cache them. This can be a scene-wide operator, since generally a simulation is calculated in relation to other objects, so there is no need to re-run this for every simulation on its own. Essentially this is a slightly modified "run animation" (alt+a) operator, which just runs through the frame range once instead of looping.

After performing this "full update" there will be valid results for every frame, so it makes sense to lock the simulation settings. But it's not necessary and could be left to the user. A clear indicator for "all frames cached" should help.
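A minimal sketch of that locking rule, with hypothetical struct and flag names:

```
/* Minimal sketch of the locking rule described above: a locked cache simply
 * ignores write attempts, so callers don't need to check the flag themselves.
 * Struct, flag and function names are hypothetical. */
#include <cstddef>

enum { CACHE_FLAG_LOCKED = (1 << 0) };

struct Cache {
	int flag;
	/* ... */
};

bool cache_write_sample(Cache *cache, int frame, const void *data, size_t size)
{
	if (cache->flag & CACHE_FLAG_LOCKED)
		return false;  /* silently ignored; "bake" temporarily clears the flag */
	/* ... actually write the sample for this frame ... */
	(void)frame; (void)data; (void)size;
	return true;
}
```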

Member

Added subscriber: @zanqdo
Member

Hi Lukas, I'm thinking this more from the character animation workflow than the simulation workflow. I think there can be 2 types of point cache: the manual persistent cache and the automatic disposable cache. This could create a solid difference between actual simulation cache and just playback stuff like frame-scrubbing a sim that can be easily recreated.

The disposable cache could be set to automatically cache even general animation data (from modifier stack) so that for example consecutive playbacks with a few heavy characters are greatly accelerated, only recalculating the needed parts or characters and completely skipping calculation of the rest.

The persistent cache is more like when you decide to bake and lock a character so that it becomes uneditable unless you decide to release the persistent cache, make some edits and maybe bake again.

The persistent cache can have the added ability of remapping the timing and other nice tricks.

Edit: I just read your last comment @LukasTonne, you're already planning some kind of locking behavior. Great, just hoping we can extend this concept to anything worth caching and not only to simulations.


Added subscriber: @AditiaA.Pratama
Member

Added subscriber: @BassamKurdali
Member

Thinking from a project pipeline perspective: It would be invaluable at render time to render only caches, not rigged characters. Several advantages accrue:

  • Scrubbing for lighters is real time.
  • No matter the complexity of dependencies, links, rigs, lighting files are super reliable.

  • No issue with render farms with linking, Python scripts, drivers, etc. that might require 'unsafe' settings to render correctly

Secondary thing that would be really cool is sculpting on caches, keying caches etc.


I'm curious how this would play into final rendering, such as motion blur and sub-frame animation. I know it's possible to export a cache in sub-frame increments, but how motion blur will work is beyond me. Does Alembic have any provisions for this or will it be entirely up to Blender (or other software of preference) to interpret and interpolate between cache steps?

Author
Member

@zanqdo: Yes, a form of locking could certainly be helpful to prevent accidentally losing cached data, combined with operators that calculate the whole cache to ensure it is valid in every frame. I just think that we need to keep the cache away from the simulations for the sake of design.

  • When the cache is "locked" it means it will not accept overwriting. The "full bake" operator can temporarily disable locking as part of an explicit user action. This is almost what the current "baked" flag means, the subtle difference being that we don't expect the cache to contain everything per se, although the operator would usually do a full calculation to ensure a valid cache state before locking it. It makes the code a lot simpler if we just view locked as a mechanism to prevent cache writes instead of making assumptions about validity in every frame. It would also be an internal cache feature, cache write attempts would just have no effect and locked does not have to be checked by callers everywhere.
  • To indicate the locked cache state in the UI we can disable buttons that would invalidate the cache, as it happens now when the cache is baked. Just be aware that it is difficult to make this absolutely watertight: Generally a simulation/modifier can depend on a lot of other things outside of its own settings scope, like controller objects, other mesh surfaces etc. So this should be seen more as an independent feature of the sim (in self-contained character rigs it could work fairly well). The "bake cache" button could be made available in locked state as well, so you can re-bake the cache explicitly if you know external influences have changed and require a cache update.
  • Remapping time is again a feature of the cache user (sim/modifier/rig/...) and the cache itself should not need to know about it. I'll have to look into this closer to figure out how current NLA tools could do that. Perhaps such uses of cache data should be entirely separate from the original "cache creator", so you'd have one object that defines the setup and writes to the cache, and then any number of instances which can use the cache data read-only, do time remapping and so on. Otherwise I can see this becoming quite complicated when you have to always take into account what sort of timing a rig uses.

@BassamKurdali: I think this goes in the same direction as the last point above: Keep rig/mod/sim settings (collectively "cache writers") separate from instances ("cache readers"), which can add further modifications on top of the cached data (lighting, timing, materials, ...). I'll try to make a coherent design proposal for this.

Sculpting on cached data is a cool idea, which is only partially realized in "particle edit mode" so far. There are some tricky challenges though, like how to visualize the time domain in a meaningful way. When you compare it to regular sculpting: there you have a brush with some spatial extent whose effect can be shown all at once (all mesh vertices visible at the same time). But if you modify cache data at one point in time it's not so easy to show how this changes in the time dimension. "Ghost frames" are one approach, but it's most suitable for point data (like particles); I think it becomes difficult for use with surface geometry. Maybe there are proven solutions ...

@italic: Alembic does not do interpolation itself, but it is quite flexible in how it stores sub-frame samples. An Alembic file can have multiple different "time samplings", which usually relates to the frames. The standard would be 1-sample-per-frame, but we can add more fine-grained sampling next to it for fast moving objects that require detailed motion blur.

How this data is then interpreted for rendering etc. is up to Blender. IIRC our current renderers use spline segments to model the subframe motion of objects, which is good enough in most cases.
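As a sketch of how such mixed samplings could be declared in one archive (the 24 fps and 4 sub-samples per frame are just example values):

```
/* Sketch: a single archive holding both a per-frame sampling and a finer
 * sub-frame sampling for fast-moving objects. */
#include <Alembic/Abc/All.h>
#include <Alembic/AbcCoreOgawa/All.h>
#include <cstdint>

using namespace Alembic::Abc;

int main()
{
	const double fps = 24.0;

	OArchive archive(Alembic::AbcCoreOgawa::WriteArchive(), "blur_cache.abc");

	/* Standard sampling: one sample per frame. */
	const uint32_t per_frame = archive.addTimeSampling(TimeSampling(1.0 / fps, 0.0));

	/* Finer sampling: four samples per frame, for detailed motion blur. */
	const uint32_t sub_frame = archive.addTimeSampling(TimeSampling(1.0 / (4.0 * fps), 0.0));

	(void)per_frame;
	(void)sub_frame;  /* each object picks whichever sampling index suits it */
	return 0;
}
```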


Added subscriber: @BeornLeonard

Removed subscriber: @MartinLindelof

Added subscriber: @sarazinjean-francois

Hello Lukas! When you ask: Do we really need multiple caches for each object? There is an important consideration: when we work with teammates, animators often overwrite the point cache for only one character, and the render team sees the updates in its scene. Would it be possible to do this if there is only one big cache file for the scene? Would it be possible to link a character from one file, and a set from another file? And would it be possible to overwrite a file while someone else is reading it?

Author
Member

@sarazinjean-francois: The "multiple caches for each object" refers to the list feature in every single object. This has very limited use and causes a number of complications in the code.

That being said, I like the concept of instancing cached data. I have outlined this a bit above, but it needs a more precise design. Basically the current caches are very closely attached to the objects that generate them; making an object that works only as a cache instance is really cumbersome at the moment (it requires using "external" caches, and mismatching settings can easily break it).

Reading/writing cache files simultaneously should be as smooth as possible. I have to investigate the details of how Alembic files can be accessed more-or-less simultaneously, but generally it should follow a one writer / multiple readers model. Using the cached result of a writer in the same frame would be nice, the depgraph should enable us to let the writer finish writing the result in that frame before readers access it for instancing.

@sarazinjean-francois: The "multiple caches for each object" refers to the list feature in every single object. This has very limited use and causes a number of complications in the code. That being said, i like the concept of instancing cached data. I have outlined this a bit above, but needs to get a more precise design. Basically the current caches are very closely attached to the objects that generate them, making an object that works only as a cache instance is really cumbersome atm (requires using "external" caches and mismatching settings can easily break it). Reading/writing cache files simultaneously should be as smooth as possible. I have to investigate the details of how Alembic files can be accessed more-or-less simultaneously, but generally it should follow a one writer / multiple readers model. Using the cached result of a writer in the same frame would be nice, the depgraph should enable us to let the writer finish writing the result in that frame before readers access it for instancing.

This writer reader thing seems great.

Instancing caches could be like image management. It sounds good to me. By the way, I think the cache should be written with absolute frames, not seconds; time management should be the software's task. I'd rather keep things simple, i.e. for compositing it is easier to work with image sequences; as soon as you work with movie files you get framerate issues between projects.


Added subscriber: @ErickNyanduKabongo
Author
Member

@sarazinjean-francois: The "time" values in Alembic are not per-se seconds. We can also interpret the time values as frames (1 frame = time interval of 1.0) instead of scaling by the fps (1 frame = time interval of 1/fps). This can be changed fairly easily though, so we can have a look at what would work best for interchangeability (the ominous "industry standard").

@sarazinjean-francois: The "time" values in Alembic are not per-se seconds. We can also interpret the time values as frames (1 frame = time interval of 1.0) instead of scaling by the fps (1 frame = time interval of 1/fps). This can be changed fairly easily though, so we can have a look at what would work best for interchangeability (the ominous "industry standard").

Added subscriber: @sreich

Added subscriber: @WarrenBahler

I just discovered the point animation features in messiah studio.

Would the task here bring Blender closer to implementing functionality like this?:
http://www.usefulslug.com/messiah4/PointAnimationSmall.mov

This would make animation extremely powerful, especially for lip sync, muscle animation, small cloth effects and so on.

Member

@WarrenBahler Just so you know you can do this with the AnimAll addon


@zanqdo Wow, I didn't realize you can actually edit the mesh with Animall, very cool work, thanks

Member

Added subscriber: @z0r

Added subscriber: @BartekMoniewski

Added subscriber: @nmitchell
Member

@WarrenBahler @zanqdo Actually it's a good question: is there anything in the design above that would prevent sculpting directly on the cache (not on the underlying mesh)? The diagrams don't show this but don't explicitly prevent it either.


Added subscriber: @kevindietrich

Added subscriber: @k9crunch

Added subscriber: @mrt3d

Added subscriber: @FlorianRichter

Added subscriber: @obadonke

Added subscriber: @andreas.atteneder
pyrochlore commented 2014-04-01 12:36:58 +02:00

Added subscriber: @pyrochlore

Added subscriber: @RobertIves

Added subscriber: @boby

Added subscriber: @MaxHammond

Added subscriber: @karja

Added subscriber: @g-lul

Added subscriber: @ideasman42

Generally agree scene level baking is good, so not many comments on what is written in the proposal.

Some feedback:

re: simulations independent from the animation playback by default - This seems fine to me. If you want the real result you need to play from the start or bake it.

re: Do we really need multiple caches for each object? - I think not, but it's worth considering how/if we might use multiple Alembic files per scene.


Other comments: ... questions in fact!

  • Question: How to handle different resolutions for render/viewport - if you have, for example, a subsurf set at different resolutions before the cache modifier, and bake for viewport display but want to use the cache for render.
  • Question: How to handle linked library data? (you make an animation on a character in another scene and want to link it in) - issue raised by Ton today.
  • Question: How to handle garbage collection (you have some data in an alembic file and the object gets removed).
  • Question: (relates to depsgraph and may not need to solve now), but what might be the workflow for using multiple alembic files on one mesh. maybe link-duplicate the mesh & rename.
  • Question: You already covered it mostly - but per simulation frame range vs global alembic frame range. (assume there would be some way to toggle between them).

Added subscriber: @MobyMotion

Added subscriber: @MikePan

Added subscriber: @Wei-ChengSun
Member

Added subscriber: @xuekepei

Added subscriber: @ManuelGrad

Added subscriber: @dr.sybren

In #37578#189565, @LukasTonne wrote:
Reading/writing cache files simultaneously should be as smooth as possible. I have to investigate the details of how Alembic files can be accessed more-or-less simultaneously, but generally it should follow a one writer / multiple readers model.

Alembic explicitly does not allow changing an existing file (https://groups.google.com/forum/#!msg/alembic-discussion/X-2ue86pw5g/Z2fZpW1eAgAJ), and from my experience does not even allow opening a file for reading unless it has been properly closed for writing (my hunch is that it writes some index only on closing). There are no provisions for simultaneous access, and writing to a file that's opened for reading will actually cause an exception (#51141). As a result, I have serious doubts about the usability of Alembic to replace the current Point Cache.


Added subscriber: @lemenicier_julien

Added subscriber: @MaciejJutrzenka

Can we also have the option of adding an instancing modifier? It would let us choose an Alembic file, read attributes like orient and scale, and also let us choose the instance object, which could be a collection or an object. That way we could have instancing based on a point cloud, which would be very useful.

Summary:

  • ability to use attributes like orient/scale to drive instances.

Changed status from 'Confirmed' to: 'Archived'

@MaciejJutrzenka This is not the place to ask for new features.

I'm closing this task, as my concern about using Alembic for such caches hasn't been replied to in 1½ years.

Since Lukas did think about things and wrote down some designs, I do feel that this is still useful information, so I've added a link to my personal Alembic page on the wiki (https://wiki.blender.org/wiki/User:Sybren/Alembic) so that it's not forgotten.

Member

Removed subscriber: @xuekepei

Added subscriber: @jeremybep