Regression: animated scene audio volume not changing the audible volume on playback #120726

Open
opened 2024-04-17 03:10:34 +02:00 by Campbell Barton · 25 comments

System Information
Operating system: Linux-6.8.5-arch1-1-x86_64-with-glibc2.39 64 Bits, WAYLAND UI
Graphics card: AMD Radeon RX 5700 XT (radeonsi, navi10, LLVM 17.0.6, DRM 3.57, 6.8.5-arch1-1) AMD 4.6 (Core Profile) Mesa 24.0.5-arch1.1

Blender Version
Broken: version: 4.1.0, branch: makepkg (modified), commit date: 2024-03-25 20:42, hash: 40a5e739e270
Worked: 2.79b

Short description of error
Animating the volume of the scene audio doesn't change the audible volume during playback.

Exact steps for others to reproduce the error

  • Change the timeline view to the sequence editor.
  • Add a sound file (attached noise.mp3).
  • Add a key-frame to the scene's audio volume (1.0).
  • Change the frame to match the end of the audio.
  • Change the volume to zero.
  • Add a key-frame to the scene's audio volume.
  • Space to play the animation (a scripted version of these steps is sketched below).
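
For convenience, the setup can also be scripted. A minimal sketch (untested; assumes the attached noise.mp3 sits next to the saved .blend file):

```python
import bpy

scene = bpy.context.scene
scene.sequence_editor_create()
# The path to the attached sound file is a placeholder; adjust as needed.
scene.sequence_editor.sequences.new_sound(
    "noise", "//noise.mp3", channel=1, frame_start=1)

# Key the scene's audio volume: 1.0 at the start, 0.0 at the end.
scene.audio_volume = 1.0
scene.keyframe_insert(data_path="audio_volume", frame=scene.frame_start)
scene.audio_volume = 0.0
scene.keyframe_insert(data_path="audio_volume", frame=scene.frame_end)
```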

Notice:

  • The audio volume (as you hear it through the speakers) doesn't change.
  • When you move the mouse cursor over the volume button, it does change (only on refreshing).
  • When you scrub time, the audio volume does change at the point the play-head is moved to, but doesn't animate after that.

Attached is an example with animated volume, which you can play immediately to see the error.


#70569 may be related, although this seems like a simpler case.

Campbell Barton added the Type/Report, Severity/Normal, and Status/Needs Triage labels 2024-04-17 03:10:34 +02:00
Campbell Barton changed title from Animated volume not working to Regression: animated volume doesn't change audible volume on playback 2024-04-17 03:14:47 +02:00
Campbell Barton changed title from Regression: animated volume doesn't change audible volume on playback to Regression: animated scene audio volume not changing the audible volume on playback 2024-04-17 03:15:22 +02:00
Member

Hi, can confirm. #70569 does look related.
cc @iss @neXyon

Member

This is a bug in the UI, not sure where it comes from. This value should not be animatable, since `PROP_ANIMATABLE` is cleared in `rna_scene.cc`.

Author
Owner

> This is a bug in the UI, not sure where it comes from. This value should not be animatable, since `PROP_ANIMATABLE` is cleared in `rna_scene.cc`.

There is an `audio_volume` as part of `FFmpegSettings` which is cleared; I double-checked, and `Scene`'s `audio_volume` can be animated.
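
This is easy to verify from the Python console; a quick check of the RNA animatable flags (sketch, assuming the stock 4.1 RNA definitions):

```python
import bpy

# Scene.audio_volume is animatable; FFmpegSettings.audio_volume is not.
print(bpy.types.Scene.bl_rna.properties["audio_volume"].is_animatable)
print(bpy.types.FFmpegSettings.bl_rna.properties["audio_volume"].is_animatable)
```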

Member

You're right, my mistake, sorry! I investigated a little bit. It's apparently an issue of the depsgraph, since all the functions that should be called to update the volume are basically never called. First, `sound_update_animation_flags` in `sound_ops.cc` is never called, so `scene->audio.flag |= AUDIO_VOLUME_ANIMATED` never happens. And even if I press the "Update Animation Cache" button and this gets called, `BKE_scene_update_sound` still does not get called. Maybe @Sergey could have a look?

Member

There was #75686 / 6adb254bb046 already for this?

Member

File from #75686 seems to work fine in 4.2...

Member
CC @dr.sybren
Member

For the file from #75686, `BKE_scene_update_tag_audio_volume` is constantly called; however, for the file from this report, it is not...

Member

Hm, even `SOUND_OT_update_animation_flags` -- which actually reaches `scene->audio.flag |= AUDIO_VOLUME_ANIMATED` -- does not have the consequence of `BKE_scene_update_tag_audio_volume` being called constantly.

(EDIT: ah, I see @neXyon already checked this...)

Sybren A. Stüvel self-assigned this 2024-04-25 13:01:07 +02:00

I've done some digging, and these are my findings:

  • The culprit is indeed the scene flag `AUDIO_VOLUME_ANIMATED`. This is not actually set when creating an F-Curve that animates the scene volume.
  • The dependency graph relations builder uses this flag to figure out whether to call `BKE_scene_update_tag_audio_volume()` during evaluation or not. If `AUDIO_VOLUME_ANIMATED` is not set, the relation is not added, and thus the function doesn't get called.
  • VSE only calls `sound_update_animation_flags(scene)` (which sets that flag) when there is a scene strip in the VSE, which in this demo file is not the case.
  • You can force an update of the scene flag with the "Update Animation" (`bpy.ops.sound.update_animation_flags`) operator, but that does not trigger a rebuild of the depsgraph relations. As a result, it seems to have no effect until something else triggers that rebuild. Once those conditions are met, the flags are set correctly, and the volume is animated.

So basically we have to find a way to set that flag to the appropriate value when the `scene.audio_volume` RNA property is animated (or its animation is cleared). The depsgraph is already tagged for rebuilding relations whenever something gets keyed (in a way that creates a new F-Curve), so once `AUDIO_VOLUME_ANIMATED` is set correctly before the depsgraph relations are actually rebuilt, we should be good to go.

The issue is now: where do we put that code, in a way that doesn't add weirdly specific code in otherwise generic functions? @neXyon do you have ideas?
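
In the meantime, the findings above suggest a scripted workaround (an untested sketch only: it recomputes the flag via the operator, then forces a relations rebuild by keying, and immediately un-keying, an unrelated property so that a new F-Curve is created):

```python
import bpy

# Recompute the AUDIO_VOLUME_ANIMATED scene flag from the existing F-Curves.
bpy.ops.sound.update_animation_flags()

# The flag alone does not rebuild the depsgraph relations; creating (and then
# removing) a new F-Curve is one way to trigger that rebuild.
scene = bpy.context.scene
scene.keyframe_insert(data_path="use_gravity")
scene.keyframe_delete(data_path="use_gravity")
```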


This does work as a proof of concept (albeit wrong and ugly as it always sets the flag and does so very specifically in otherwise generic code):

```diff
diff --git a/source/blender/animrig/intern/keyframing.cc b/source/blender/animrig/intern/keyframing.cc
index 2944d109571..c56dc01f5cb 100644
--- a/source/blender/animrig/intern/keyframing.cc
+++ b/source/blender/animrig/intern/keyframing.cc
@@ -754,6 +754,10 @@ CombinedKeyingResult insert_keyframe(Main *bmain,
     if (adt != nullptr && adt->action != nullptr && adt->action != act) {
       DEG_id_tag_update(&adt->action->id, ID_RECALC_ANIMATION_NO_FLUSH);
     }
+    if (GS(id.name) == ID_SCE) {
+      Scene *scene = (Scene *)(&id);
+      scene->audio.flag |= AUDIO_VOLUME_ANIMATED;
+    }
   }
 
   return combined_result;
```

Compile with this change, then set any key that creates a new F-Curve (for example, hover over the Gravity checkbox and press `I`). That'll trigger this code, set the flag, and when the depsgraph is rebuilt, relations are correct.

So basically the bug is that this flag is used to track a generic audio property on the Scene (which should be set correctly regardless of whether the VSE is used or not), but it's only set by the VSE.

Sybren A. Stüvel removed their assignment 2024-04-25 15:55:30 +02:00
Member

Thanks for the investigation, @dr.sybren! Unfortunately, I don't know what the ideal place would be to implement this. Maybe @Sergey or @iss?


> VSE only calls `sound_update_animation_flags(scene)` (which sets that flag) when there is a scene strip in the VSE, which in this demo file is not the case.

This function seems to be called only from the `bpy.ops.sound.update_animation_flags` operator. I don't see it being called from VSE code.

Not sure if I have any good insight here. The dependency graph seems to know whether sound strip volume is animated, and in such a case it calls `seq_update_sound_properties()` for every frame. There is a `SEQ_AUDIO_VOLUME_ANIMATED` flag, but it is never set (it is only set by `bpy.ops.sound.update_animation_flags`). This relation seems to be created by `build_animdata_fcurve_target()`, but other than that, the code seems arcane to me.

That said, if this works for strips without the flag, I don't see a reason why this shouldn't work for the scene without the flag. As a bonus, perhaps these flags can be removed. @Sergey may have a better idea still, so I'd like to know his opinion.


> > VSE only calls `sound_update_animation_flags(scene)` (which sets that flag) when there is a scene strip in the VSE, which in this demo file is not the case.
>
> This function seems to be called only from the `bpy.ops.sound.update_animation_flags` operator. Don't see it being called from VSE code.

You're right, I must have mixed up some things.


Do we even need this flag? Can we always assume volume/pitch is animated, and get rid of the complexity of maintaining this flag?

Member

Could be possible, might introduce some audible issues though: The audio library distinguishes between animated and non-animated properties (i.e., constant value). If it's animated, you have to set the values for all frames, not just a single one.

So if you change the actually non-animated volume in Blender (but, since you removed the flag, you always tell the audio library it's animated), you have to update all frames:

  1. Either this happens immediately, which means you would need to know that there is no animation, thus you better use the flag instead.
  2. Or you do it whenever the frame is changed with the potential issue that if you are playing back and it is updated because of that, you might hear the old volume, because the value hasn't been updated on time.

> Could be possible, might introduce some audible issues though: The audio library distinguishes between animated and non-animated properties (i.e., constant value). If it's animated, you have to set the values for all frames, not just a single one.
>
> So if you change the actually non-animated volume in Blender (but, since you removed the flag, you always tell the audio library it's animated), you have to update all frames:
>
> 1. Either this happens immediately, which means you would need to know that there is no animation, thus you better use the flag instead.
> 2. Or you do it whenever the frame is changed, with the potential issue that if you are playing back and it is updated because of that, you might hear the old volume, because the value hasn't been updated on time.

If we assume that properties are animated, then this won't be an issue, as each frame will be updated before it is played back. I agree with that approach. We just need to ensure that the animation is evaluated before updating the volume, but this should be set up correctly already.


A bit of analysis:

Since this is audio and not video, it runs at a much higher rate, so I can imagine that only updating the volume on the frame itself can cause issues (24 FPS volume changes instead of gradual ones).

Currently Blender also behaves a bit strangely here. Look at the attached blend file (in which I already called the 'update animation' operator, so the flags are set correctly for volume animation). The scene frame rate is 1 FPS to make sub-frame volume changes obvious.

(attached image)

If you start playback at frame 2, I'd expect it to start at a low volume and ramp up until frame 4. What actually happens is that the volume starts loud, and ramps down until frame 3. Continuing playback, it becomes clear that the volume is lagging behind the animated value by one frame. Basically, setting `volume = X` at frame `N` just fades the volume to that value over the duration of that frame.

So this is a great way to avoid stepping volume changes, at the cost of a one frame lag.

Since the volume could be changed not only by F-Curves but also by drivers, Python code, etc., there is no general way to look into the future and see what the volume should be at the next frame (except by paying the cost of evaluating the depsgraph ahead of time).


> Could be possible, might introduce some audible issues though: The audio library distinguishes between animated and non-animated properties (i.e., constant value). If it's animated, you have to set the values for all frames, not just a single one.

I don't quite understand the implications of this. The example blend file shows that there is an off-by-one issue with the audio, which wouldn't exist if the correct volumes were given for all frames.

Member

This is all related to the detailed discussion I posted 3 years ago, where I mostly quote a long post from 6 years ago: https://devtalk.blender.org/t/better-audio-integration/17810

I would love to have a better integration of the animation system and audio. Maybe we should think about a better solution that could help with all the issues? Would it be possible to have a separate evaluation state of the dependency graph that the audio system controls in as far as setting the current evaluation time and then being able to query all those properties that might be animated, like volume, pan, position and orientation of the camera and speakers, etc? Then I could probably even drop the audio system's animation cache and the code in Blender could become less complex (i.e., these flags can be dropped).


Thanks for the link, @neXyon, good info there.

I tried running the "Update Animation Cache" operator (`bpy.ops.sound.bake_animation()`), but that didn't seem to fix the off-by-one-frame issue.

Let's split this up into two parts:

  1. This specific issue, where volume animation is completely ignored in certain cases.
  2. Improvements to the audio integration in Blender (which I would very much welcome, but don't have the time to work on right now).

This seems to work around this particular issue, i.e. simply always treating the scene volume as animated:

```diff
diff --git a/source/blender/blenkernel/intern/sound.cc b/source/blender/blenkernel/intern/sound.cc
index 55b9840a5ff..b90091dc03c 100644
--- a/source/blender/blenkernel/intern/sound.cc
+++ b/source/blender/blenkernel/intern/sound.cc
@@ -811,11 +811,7 @@ void BKE_sound_set_scene_volume(Scene *scene, float volume)
   if (scene->sound_scene == nullptr) {
     return;
   }
-  AUD_Sequence_setAnimationData(scene->sound_scene,
-                                AUD_AP_VOLUME,
-                                scene->r.cfra,
-                                &volume,
-                                (scene->audio.flag & AUDIO_VOLUME_ANIMATED) != 0);
+  AUD_Sequence_setAnimationData(scene->sound_scene, AUD_AP_VOLUME, scene->r.cfra, &volume, true);
 }
 
 void BKE_sound_set_scene_sound_volume_at_frame(void *handle,
diff --git a/source/blender/depsgraph/intern/builder/deg_builder_relations.cc b/source/blender/depsgraph/intern/builder/deg_builder_relations.cc
index bcbb82e5d29..3f2af70f8e4 100644
--- a/source/blender/depsgraph/intern/builder/deg_builder_relations.cc
+++ b/source/blender/depsgraph/intern/builder/deg_builder_relations.cc
@@ -3426,8 +3426,8 @@ void DepsgraphRelationBuilder::build_scene_audio(Scene *scene)
   add_relation(scene_audio_entry_key, scene_audio_volume_key, "Audio Entry -> Volume");
   add_relation(scene_audio_volume_key, scene_sound_eval_key, "Audio Volume -> Sound");
 
-  if (scene->audio.flag & AUDIO_VOLUME_ANIMATED) {
-    ComponentKey scene_anim_key(&scene->id, NodeType::ANIMATION);
+  ComponentKey scene_anim_key(&scene->id, NodeType::ANIMATION);
+  if (this->has_node(scene_anim_key)) {
     add_relation(scene_anim_key, scene_audio_volume_key, "Animation -> Audio Volume");
   }
 }
```

Of course, to really get rid of the scene flag, more changes are necessary. This particular change causes hard on-the-frame changes in volume on the first playback. After that, the animation cache is up to date, and volume changes are smooth (albeit still off by one frame).

Member

The off-by-one-frame issue you are seeing (or better, hearing) is not a bug, but a conscious implementation choice. The per-frame animation values are at a low resolution in comparison to the resolution that audio has, e.g. 800 samples per frame (assuming 48 kHz audio vs 60 FPS). There are thus multiple possible choices for where the animated volume has to be reached:

  1. At the beginning of the frame, in the first audio sample.
  2. In the middle of the frame at the 400th sample in our example.
  3. At the end of the frame, in the last sample.
  4. Any other of the 797 remaining samples.

While still debatable, number 2 probably makes the most sense. You seem to expect number 1. The reason number 3 is implemented, though, is the fact that Blender only updates the audio animation cache values when switching to that frame, so when you change something it's out of date and you can hear the jumps when the update is too slow. Number 3 reduces this effect as much as possible, while doing number 1 would strongly worsen the issue. Besides, as long as you don't do huge changes (which usually you shouldn't for volume anyway) and at 30 FPS, the "delay" caused by number 3 is hardly noticeable. On the other hand, of course, if you stress-test this with a huge change at 1 FPS, it will be clearly noticeable. Changing this to 1 or 2 should not be difficult, if you want to do so.


As for your suggested change: as I wrote before, always having it animated works, of course. The issue is for the case where the scene volume is not animated (which I would consider the most common case, without any usage statistics to prove it): if users change the volume (not animated, i.e., a constant value for the whole animation), they will experience the issue just mentioned above, where the cache is out of date until playback of the whole scene has completed once (or the update animation cache button is pressed). This might be quite confusing for users, since the value is not animated. That's why I'm not a big fan of this solution. If you are aware of this issue, have considered the consequences, and decided that this is still the fix to go with (for now), it's OK for me.


I'd still like a better solution than this animation cache in the first place. So I would still like to know this:

> Would it be possible to have a separate evaluation state of the dependency graph that the audio system controls in as far as setting the current evaluation time and then being able to query all those properties that might be animated, like volume, pan, position and orientation of the camera and speakers, etc?


> While still debatable, number 2 probably makes the most sense. You seem to expect number 1.

I do, because that's what the graph editor is showing me.

> The reason number 3 is implemented, though, is the fact that Blender only updates the audio animation cache values when switching to that frame, so when you change something it's out of date and you can hear the jumps when the update is too slow. Number 3 reduces this effect as much as possible, while doing number 1 would strongly worsen the issue. Besides, as long as you don't do huge changes (which usually you shouldn't for volume anyway) and at 30 FPS, the "delay" caused by number 3 is hardly noticeable. On the other hand, of course, if you stress-test this with a huge change at 1 FPS, it will be clearly noticeable. Changing this to 1 or 2 should not be difficult, if you want to do so.

Changing this to 1 would be quite hard, I think, unless I'm missing something. Or better said, it's easy to do, but it would lose the gradual fade over the course of that frame. And I personally wouldn't find that very acceptable.

> As for your suggested change: as I wrote before, always having it animated works, of course. The issue is for the case where the scene volume is not animated (which I would consider the most common case, without any usage statistics to prove it): if users change the volume (not animated, i.e., a constant value for the whole animation), they will experience the issue just mentioned above, where the cache is out of date until playback of the whole scene has completed once (or the update animation cache button is pressed). This might be quite confusing for users, since the value is not animated. That's why I'm not a big fan of this solution. If you are aware of this issue, have considered the consequences, and decided that this is still the fix to go with (for now), it's OK for me.

Yeah, I'm not a big fan either, so I'm definitely open for other solutions.

Is there a way to visualise the animation cache, similar to the physics cache? Even though that wouldn't actually be a solution, it might help to reduce confusion.

> Would it be possible to have a separate evaluation state of the dependency graph that the audio system controls in as far as setting the current evaluation time and then being able to query all those properties that might be animated, like volume, pan, position and orientation of the camera and speakers, etc?

I don't think the depsgraph can have multiple evaluation states at the moment. So it would basically have to be a separate depsgraph altogether, that only contains nodes relevant for audio.

#121019 might help here, in the longer run.

Member

> I do, because that's what the graph editor is showing me.

Right, that's understandable.

> Changing this to 1 would be quite hard, I think, unless I'm missing something. Or better said, it's easy to do, but it would lose the gradual fade over the course of that frame. And I personally wouldn't find that very acceptable.

Why do you think giving the audio library a shifted timing, offset by one frame, would suddenly drop any interpolation features? There's cubic spline interpolation for all animated properties in the animation cache, which is sampled at the mixing buffer size (that's a setting of the audio device in the settings, or you can adapt it during mixdown), and additionally, in between, the volume is linearly interpolated for every sample.

> Yeah, I'm not a big fan either, so I'm definitely open for other solutions.

I've made several suggestions in all of my postings, but I understand that none of them are really easy, and someone has to implement it. As a "simple" solution, I'd still prefer the flag to work. In my mind (not knowing how the depsgraph works), it should be possible to query whether the scene volume depends on anything (be it a direct F-Curve animation or some driver or so), and in that case I would set it to animated, otherwise not.

> Is there a way to visualise the animation cache, similar to the physics cache? Even though that wouldn't actually be a solution, it might help to reduce confusion.

The cache is pretty simple: it's a number of floats (1 for volume/pitch/pan, 3 for location, 4 for orientation stored as a quaternion) with one entry per frame. When read from, it uses cubic spline interpolation as mentioned before. There's just no API to access it directly, but I could add that if you want to implement a visualization?

> I don't think the depsgraph can have multiple evaluation states at the moment. So it would basically have to be a separate depsgraph altogether, that only contains nodes relevant for audio.
>
> #121019 might help here, in the longer run.

That's unfortunate. How do you think #121019 could help? I didn't see a point there that would be especially useful?


> > Changing this to 1 would be quite hard, I think, unless I'm missing something. Or better said, it's easy to do, but it would lose the gradual fade over the course of that frame. And I personally wouldn't find that very acceptable.
> >
> > Why do you think giving the audio library a shifted timing, offset by one frame, would suddenly drop any interpolation features?

Because without that offset, when the frame is evaluated, Blender can say "go to volume `N`" and then there is an entire frame's time to do that smoothly. If, at that same time, Blender says "the volume has to be `N` NOW", that period of time in which the interpolation can happen is lost.

Of course, if the parameter changes are known beforehand, a 1-frame offset is simple enough and wouldn't, as you point out, cause these issues.

> I've made several suggestions in all of my postings, but I understand that none of them are really easy, and someone has to implement it.

Uhu, that's why I'm discussing this here & now, to see what practical (albeit limited) solution we can find that I can actually help develop.

> As a "simple" solution, I'd still prefer the flag to work. In my mind (not knowing how the depsgraph works), it should be possible to query whether the scene volume depends on anything (be it a direct F-Curve animation or some driver or so), and in that case I would set it to animated, otherwise not.

There is a bit of a dependency cycle here, as the construction of the depsgraph depends on the `AUDIO_VOLUME_ANIMATED` flag. So that flag cannot be cleared/set based on querying the depsgraph.

At depsgraph construction time we could, instead of using that flag, query the animation system to see if the scene audio parameters are animated / driven. This wouldn't account for things like 'animating' the volume via Python and on-frame-change handlers, but I think that's not considered currently anyway.
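
To illustrate the kind of query that would be, here is a Python-level sketch of the check (the real thing would live in the C++ relations builder; the helper name here is hypothetical):

```python
import bpy

def scene_volume_is_animated(scene):
    """Hypothetical helper: does Scene.audio_volume have an F-Curve or driver?"""
    adt = scene.animation_data
    if adt is None:
        return False
    if adt.action is not None and any(
            fcu.data_path == "audio_volume" for fcu in adt.action.fcurves):
        return True  # direct F-Curve animation
    return any(drv.data_path == "audio_volume" for drv in adt.drivers)

print(scene_volume_is_animated(bpy.context.scene))
```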

> > Is there a way to visualise the animation cache, similar to the physics cache? Even though that wouldn't actually be a solution, it might help to reduce confusion.
>
> The cache is pretty simple: it's a number of floats (1 for volume/pitch/pan, 3 for location, 4 for orientation stored as a quaternion) with one entry per frame. When read from, it uses cubic spline interpolation as mentioned before. There's just no API to access it directly, but I could add that if you want to implement a visualization?

Hmmm on second thought, let's leave that for now and be economic with our time.

> > #121019 might help here, in the longer run.
>
> That's unfortunate. How do you think #121019 could help? I didn't see a point there that would be especially useful?

Making it simpler & more efficient to code a custom depsgraph builder, that could help to make an audio-parameters-only tiny depsgraph that runs one (or a few more) frames ahead of the scene time.

Member

> > > Changing this to 1 would be quite hard, I think, unless I'm missing something. Or better said, it's easy to do, but it would lose the gradual fade over the course of that frame. And I personally wouldn't find that very acceptable.
> >
> > Why do you think giving the audio library a shifted timing, offset by one frame, would suddenly drop any interpolation features?
>
> Because without that offset, when the frame is evaluated, Blender can say "go to volume `N`" and then there is an entire frame's time to do that smoothly. If, at that same time, Blender says "the volume has to be `N` NOW", that period of time in which the interpolation can happen is lost.
>
> Of course, if the parameter changes are known beforehand, a 1-frame offset is simple enough and wouldn't, as you point out, cause these issues.

The sound is played back with the audio system's cache, so if that is correct, it will work. Of course, it's more likely to be out of date the first time you play back, as I tried to explain in this comment: https://projects.blender.org/blender/blender/issues/120726#issuecomment-1184352

> > As a "simple" solution, I'd still prefer the flag to work. In my mind (not knowing how the depsgraph works), it should be possible to query whether the scene volume depends on anything (be it a direct F-Curve animation or some driver or so), and in that case I would set it to animated, otherwise not.
>
> There is a bit of a dependency cycle here, as the construction of the depsgraph depends on the `AUDIO_VOLUME_ANIMATED` flag. So that flag cannot be cleared/set based on querying the depsgraph.
>
> At depsgraph construction time we could, instead of using that flag, query the animation system to see if the scene audio parameters are animated / driven. This wouldn't account for things like 'animating' the volume via Python and on-frame-change handlers, but I think that's not considered currently anyway.

You mean it is checked in `DepsgraphRelationBuilder::build_scene_audio()`, and that's why the dependency graph depends on it during construction?

Yeah, it would be great to check it like that during construction and set the flag depending on that, rather than expecting the flag to already be set or not.

> > > Is there a way to visualise the animation cache, similar to the physics cache? Even though that wouldn't actually be a solution, it might help to reduce confusion.
> >
> > The cache is pretty simple: it's a number of floats (1 for volume/pitch/pan, 3 for location, 4 for orientation stored as a quaternion) with one entry per frame. When read from, it uses cubic spline interpolation as mentioned before. There's just no API to access it directly, but I could add that if you want to implement a visualization?
>
> Hmmm on second thought, let's leave that for now and be economic with our time.

Ok.

> > > #121019 might help here, in the longer run.
> >
> > That's unfortunate. How do you think #121019 could help? I didn't see a point there that would be especially useful?
>
> Making it simpler & more efficient to code a custom depsgraph builder, that could help to make an audio-parameters-only tiny depsgraph that runs one (or a few more) frames ahead of the scene time.

I see. But it would still have to include everything the audio animation depends on, which basically could be anything, right? Using drivers, e.g., any property can be used?!

Bart van der Braak added the Type/Bug label and removed the Type/Report label 2024-08-14 13:02:15 +02:00