Tracking: unify data storage for camera intrinsics #82645

Open
opened 2020-11-12 11:33:28 +01:00 by Ivan Perevala · 43 comments

Status: Task review


Team
Commissioner: @Sergey @Jeroen-Bakker
Project leader: @ivpe
Project members: -
Description
Big picture: Improve re-usability of camera intrinsics and allow them to be animated

Design:

Re-usability of camera intrinsics is currently limited because they are stored on the movie clip itself, so it would be good to have a general-purpose place to store this data. For example, a distortion solved by the Movie Tracking Solver could then be used by another movie clip or a compositing node.

Animating lens distortion is common practice for VFX artists, and it would be good to have that possibility in the "Movie Distortion" compositing node, which is currently glued to the non-animated movie clip camera. A movie clip can be captured with a zoom lens, or with optical or sensor stabilization, which means the lens distortion can change per frame.


Camera Intrinsics

Camera intrinsics should become part of the Camera ID data-block. It should contain calibration / lens distortion data, be animatable, and be represented in the Camera data UI (an illustrative sketch follows the list):

  • {icon check-circle color=green} Focal Length - the Camera's lens value in millimeters; since different movie clips can have different pixel dimensions, the pixel equivalent is evaluated per movie clip. (existing camera data).

  • {icon check-circle color=green} Sensor width - the Camera's sensor width. For "Vertical" sensor fit, sensor Y should be used as the sensor width. (existing camera data).

  • {icon check-circle color=yellow} Principal Point (X, Y) - a lens calibration parameter stored in millimeters, but it may be shown to the user in pixels relative to the pixel dimensions of the specific input source.

  • {icon check-circle color=yellow} Pixel Aspect Ratio - a factor value; should be clamped to 0.0001-1.9999 to keep the distort / undistort workflow working. (existing scene render settings data).

  • {icon check-circle color=yellow} Units - a display-only choice between millimeters and pixels relative to the current distortion source dimensions. Affects only the UI (enumerator value, not animatable).

  • {icon check-circle color=yellow} Distortion Model - all models from Libmv: Polynomial, Division, Nuke, Brown-Conrady (enumerator value). As before, it selects which math algorithm and coefficients are used.

  • {icon check-circle color=yellow} Radial coefficients - should stay in correspondence with the particular Libmv lens distortion models (k1 - k4 parameters).

  • {icon check-circle color=yellow} Tangential coefficients - should stay in correspondence with the particular Libmv lens distortion models. At the moment, present only in the Brown-Conrady lens model (p1, p2).

  • {icon times color=red} Skew / affinity correction - these parameters are not used anywhere else in Blender, so they are not part of the current task design.
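
To make the proposed parameter set concrete, here is a minimal illustrative sketch in Python: a plain dataclass standing in for the eventual DNA in the Camera ID. All names are assumptions rather than a final API, and the distortion function shows only the radial part of the Polynomial model.

```python
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    """Illustrative stand-in for the proposed Camera ID intrinsics."""
    focal_length_mm: float = 35.0            # existing Camera "lens"
    sensor_width_mm: float = 36.0            # existing Camera sensor
    principal_point_mm: tuple = (0.0, 0.0)   # stored in mm, shown in mm or px
    pixel_aspect: float = 1.0                # clamped to [0.0001, 1.9999]
    distortion_model: str = "POLYNOMIAL"     # or DIVISION, NUKE, BROWN
    k: tuple = (0.0, 0.0, 0.0, 0.0)          # radial k1..k4 (model-dependent)
    p: tuple = (0.0, 0.0)                    # tangential p1, p2 (Brown-Conrady)

def polynomial_radial_distort(xn: float, yn: float, intr: CameraIntrinsics):
    """Radial part of the Polynomial model on normalized coordinates:
    scale = 1 + k1*r^2 + k2*r^4 + k3*r^6 (tangential terms omitted)."""
    k1, k2, k3, _ = intr.k
    r2 = xn * xn + yn * yn
    scale = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3))
    return xn * scale, yn * scale
```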


Affected data-blocks:

Movie Clip
This ID should always contain a pointer to a specific Camera ID and use that camera's lens distortion parameters for distortion evaluation. The Movie Tracking Solver should interact with this data as well.
From the user's perspective it should look like this:

  • Open a movie clip. At this point a new "Camera" data-block is created and attached to this specific movie.

  • If the user has multiple clips to open, they can decide at opening time whether a new Camera should be created, to avoid a lot of confusing garbage camera data. If the user is sure the clip was shot with a camera already present in the Blender file, they can select it while opening.

  • The regular workflow should not change much, but the user should always have the choice of which Camera is used for a specific movie clip. (See the scripting sketch below.)
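
A hedged sketch of how this could look from scripting, assuming a hypothetical camera pointer on the Movie Clip ID. The clip.camera property does not exist today; the path and names here are purely illustrative.

```python
import bpy

# Opening a clip would, under this proposal, create and attach
# a new Camera data-block automatically.
clip = bpy.data.movieclips.load("//footage/shot_010.mp4")

# Re-use an existing camera instead, e.g. when several clips were
# shot with the same physical camera (hypothetical pointer).
existing = bpy.data.cameras.get("DroneCam")
if existing is not None:
    clip.camera = existing  # hypothetical property proposed in this task
```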

Camera background

It is assumed that the user treats Blender's Camera as a representation of a real-world camera ("this movie clip was shot on this camera"):

  • Users should be able to add cameras with their reference footage and, possibly, solved motion to the scene, regardless of whether the footage was all shot on exactly the same camera. This is how one can assemble a "digital" set by combining footage from multiple cameras.

  • Moving forward, it would be good to have lens distortion for images as well as for movie clips, with the same "Render Undistorted" option.

New Compositor's "Camera Distortion" node

  • A specific Camera ID should be used as the source of camera intrinsics.

  • The user can change coefficients and other parameters in the Camera data UI, so the only options needed in the node are "Camera" and "Mode" (distort/undistort).


Looking Forward

  • #81662. These changes could be useful for camera projection painting as well. Imagine for a moment that camera intrinsics are already implemented as part of the Camera ID, and the user has a scene with an active camera. When they use some texture with the "Draw" brush (for example, a movie clip from a drone), they only need to select a new "Camera" texture mapping mode (not implemented yet ;)) and use the active scene camera as the projection source. The camera intrinsics just solved in the tracker can then be used to paint directly on VFX scene prop objects, and there it is - the result. This workflow would also be useful for photo-scan refinement, with a nice visual representation (background images).

  • Distortion rendering is possible in Cycles, which opens up a lot of opportunities for VFX with what is already in Blender, without the need to manage anything extra.

  • EEVEE might also support distortion rendering, but via a slightly different technique.

  • Will be covered by the asset manager.

How it should look in the Movie Tracker:
![F9262097](https://archive.blender.org/developer/F9262097/image.png)

Engineering plan:

Work plan

{icon check-circle color=yellow} Milestone 1

    • Implement DNA storage for the Camera ID

Milestone 2

    • Distortion / Undistortion API migration to BKE_camera.h
    • Full MovieTrackingCamera replacement by a Camera pointer (tracker itself, camera background image/movie, compositing node)
    • File versioning support

Open questions:

  • Is the described change in the workflow and behavior something supported by the current module team? @sebastian_k @SeanKennedy ?

  • Versioning. The design is lacking a description of this, and I am not really sure how to do it, especially if the clip is linked.

Ivan Perevala self-assigned this 2020-11-12 11:33:28 +01:00
Author

Added subscribers: @ivpe, @Sergey, @Jeroen-Bakker, @ssh4


What is it that you are intending to achieve with this process?

There are a few issues I see with making the lens distortion an ID.

Movie clips often come from different cameras/lenses, each with their own specific parameters. Within this design proposal that means a pretty much 1:1 relation between a movie clip and its distortion ID, meaning there is no real benefit to moving distortion out of the clip in this case.

From the user's perspective, it will either be confusing that a distortion ID has to be created for each movie clip, or the system will do it automatically. So it is either an inconvenience or, again, behavior as if MovieClip and distortion IDs were strongly coupled.

You mention animation of the distortion model and the solver. The solver does not support animated parameters. While it could support animation in theory, that is quite a challenging project on its own. If distortion model parameters become animatable in the clip editor, you'll introduce big confusion about what the interface allows versus what the underlying solver actually supports.

For the compositor, the proper solution would be to introduce a node which acts as standalone Libmv distortion, without strong coupling to a movie clip (i.e., a node with distortion model selection, parameters and so on). This is something that will indisputably bring value to the compositor.

For background images, in 2.79 it was possible to use a movie clip, which carries all the information needed to properly visualize the clip.

If you are looking into some sort of library management for lenses, to build a library of calibrated cameras, the motivational part of this design seems off.
If you are looking into adding more flexibility to the compositor, or into visualizing a movie clip as a background image, the solution is something different.


Continuation of thoughts.

Depending on the specific workflow you want to unlock, there are some possibilities here.

First, as I've mentioned already: have a proper Distortion node in the compositor which supports the same models and parameters used by MovieClip, but without a MovieClip associated with it. Just a node with a handful of parameters. It could be as crazy as making the numeric parameters sockets, so that the node graph can control them. This makes it impossible to cache the distortion grid, so performance will be worse than with constant parameters, but it gives a lot of flexibility.

Second, for seeing movie clips in the viewport with proper distortion, background images should become aware of movie clips and of the clip's (un)distortion. This is something worth making sure is possible (not sure whether this possibility, or some aspects of it, got lost in 2.8).

Third, if there is some crazy workflow you're using (which I'd be curious to know), I do see that you might want to be able to do something like this:

  • Open the clip in the Movie Clip Editor.
  • Use the clip as a background image, in undistorted mode.
  • Use the clip in the compositor, and undistort it there.
  • Be able to change the distortion in a single place, and see updates in both the viewport and the compositor.

Let's ignore the camera solver for a moment. To achieve such a workflow you don't need a new ID. It is possible to make the parameters animatable without introducing a new entity (by marking the RNA property as animatable and ensuring there is an RNA path for the property). If background images fully support the clip and its distortion, your workflow comes "for free".
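
At the scripting level the result would be ordinary keyframing of the clip's distortion parameters. A sketch assuming the existing MovieTrackingCamera properties were made animatable, which they are not today; the clip name and frame values are illustrative:

```python
import bpy

cam = bpy.data.movieclips["shot_010.mp4"].tracking.camera

# Keyframe a polynomial radial coefficient across a zoom;
# this only works once the RNA property is flagged as animatable.
cam.k1 = 0.01
cam.keyframe_insert(data_path="k1", frame=1)
cam.k1 = 0.05
cam.keyframe_insert(data_path="k1", frame=48)
```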

Supporting animation in the solver is a tricky project. For the time being, there can simply be a warning during solving that an animated distortion model is not supported and that the parameters from the active frame are used.

The issue with adding a new entity (such as a Distortion ID) is that artists then need to keep track of more things, which increases the chance of mistakes. And it seems the flexibility you need can be achieved in different ways. It is not impossible to decouple distortion from the clip, but there should be a huge argument for doing that, and it should be something that cannot be solved otherwise.

Author

@Sergey For sure you are right in most cases. But for some of them, here are my thoughts:

I agree: for the things that currently exist in Blender's motion tracking / compositing there is no strong need to create a separate ID data-block for lens distortions; it's just one of many options.
Most of the current workflow limitations really relate to the "Movie Distortion" compositing node. Yes, I see a way to make it more separate from the movie clip: give it its own intrinsics block and make it animatable. That really gives users much more flexibility in the compositor. To avoid heavy versioning, might it be better to change the "Lens Distortion" node rather than "Movie Distortion"? But then another issue arises in terms of usability (and also code maintenance, but that's another question): users may be confused about why there are two nodes doing the same thing, one animatable and the other not.

When I started on this idea, I looked at the Object data-block as an example. It holds actual "data" (which can be a camera, mesh, curve, etc.). As far as I know, users do not get confused about why an object (a camera, for example) can have different camera data (but never None). So here it can be interpreted by the user the same way: "I have a movie clip - it has lens distortion." Of course, just as with objects, the user expects that a new lens distortion data-block is created when a new movie clip is added. So the ability to change the lens distortion data-block is only an addition to existing functionality. It can be useful too, I think. For example, you know that all the videos in your project were shot with exactly the same camera, so you do not need to run the solver for all of them or copy-paste a lot of parameters. You can just solve one of them and select / copy the lens distortion data-block for all the others. True, this can already be achieved in the compositor without any patches, by simply selecting the solved movie clip in the "Movie Distortion" node; but if you use a camera background movie clip and check the "Render Undistorted" checkbox, you expect to see the same (un)distortion visualization as for the video you solved in the tracker. At the moment you would need to solve all the videos again or copy-paste the distortion parameters. So I think it would be good in this case to have a data-block that stores distortion parameters and makes them selectable.

You asked what I am intending to achieve with this process. My main target is still #81662, and the current task is one way to achieve it. This way (though only for movie clips), the user can solve distortion in the tracker and reuse the solved data for painting. For images, the user can create new lens distortions. It's the harder way to code, but I think it's the most user-friendly way to use lens distortion not only for movies but for images as well, without any code / data duplication. Currently the only way to (un)distort images is pretty awkward: use the compositor, render them, and save them as junk files.

Of course, for now the solver does not support animated distortion, but there is a lot of (free and paid) software which does. It would be good to have this possibility to improve compatibility with third-party software. I hope one day Blender's solver itself can do it, but that is outside the scope of the current task / patch.

Also, as mentioned before, one of the issues is that currently the movie clip is the only data-block which has lens distortion (and it is hardcoded too deeply into it). What about images? They can be distorted as well, but the user cannot undistort them and cannot use undistorted images as a camera background image.

Compositor story

This is a great topic about Movie Distortion vs. Lens Distortion. I think it is best to leave Movie Distortion as it is: limit it to always have a strong coupling to the clip, without manual sliders other than the distortion "direction". Basically, its mental model is to "do (un)distortion" exactly the way the movie clip behaves at the corresponding frame.

The Lens Distortion node can have a drop-down menu with the distortion model. It would include the current model for compatibility, with all of Libmv's distortion models available from there as well. Depending on whether you go the sliders or the sockets route for parameters, it might be a bit tricky to hide "unwanted" sockets. If you go the sockets approach, you can glance at how it is done for the Image node or Render Layers node, which handle socket visibility based on image or scene settings. I'm not sure the sockets route is worth the complexity, though.

Animation of Lens Distortion will come "for free", as all parameters in node trees are already animatable. I would not worry too much about the fact that Movie Distortion cannot be animated. Its mental model is to follow what the movie clip is doing. Eventually, when we add animation support to the solver, it will become animatable as well. (See the example below.)
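
As a concrete illustration with the existing API: the current Lens Distortion node already exposes its parameters as input sockets, which can be keyframed like any other node input (the frame values here are arbitrary):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
node = scene.node_tree.nodes.new("CompositorNodeLensdist")

# Node inputs are sockets, hence animatable "for free".
sock = node.inputs["Distort"]
sock.default_value = 0.0
sock.keyframe_insert("default_value", frame=1)
sock.default_value = 0.2
sock.keyframe_insert("default_value", frame=48)
```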

From the underlying data structures point of view, we most likely want to reuse the same DNA structure in MovieClip and the distortion node, but this structure will not be an ID.

Re-usability of the Distortion model

This is an interesting and valid topic. To me it seems to go deeper than just a DistortionModel. To fully unlock the use case you're describing, where multiple movie clips are shot with the same camera, you also need to share parameters like pixel aspect. For operators like Solve and Setup Tracking Scene you would also need to share settings like Sensor Width.

So it is almost like we want the entire camera settings to be reusable across movie clips somehow?
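
Today that sharing is manual. A sketch of the copy-paste this proposal would remove, using the MovieTrackingCamera properties as exposed in the 2.9x Python API (the clip names are illustrative):

```python
import bpy

src = bpy.data.movieclips["shot_010.mp4"].tracking.camera
dst = bpy.data.movieclips["shot_020.mp4"].tracking.camera

# Hand-copy the full intrinsics between clips shot on the same camera.
dst.distortion_model = src.distortion_model
dst.focal_length = src.focal_length
dst.sensor_width = src.sensor_width
dst.pixel_aspect = src.pixel_aspect
dst.principal = src.principal[:]
dst.k1, dst.k2, dst.k3 = src.k1, src.k2, src.k3
```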

Animated distortion parameters in MovieClip

I think we can do that without opening the can of worms of whether we need a new ID type or not. While the solver will not support animation, making the parameters animatable will help with interoperability and artistic tweaks to some extent.

Images

Historically there was a separation between Image and MovieClip. Image is something simple for frame/video I/O, allowing things like interlacing and fields; MovieClip was designed around the idea of being more tightly coupled with a real camera, tracking information and things like that.

The issue with distortion in images is that there are no tools and no editor where distortion can be estimated. So I would say: if you need distortion, use MovieClip instead of Image.

If MovieClip can't be used as a camera background, that should be fixed.

Moving forward

I think it is important to break this task, which has moved to a discussion of complex topics like linking/re-usability, into smaller and achievable steps which still move you forward.

One of the immediate possible little projects is to extend the Lens Distortion node with the distortion models from Libmv.

Second could be to make distortion parameters animatable in MovieClip. As I said, we can detect animation before running the solver, and supporting animation will ease interoperability.

Third, make sure MovieClip can be used as a reference in all sorts of background and camera backdrop reference images, things like that, in both distorted and undistorted manner.

I feel like making the camera reusable across multiple clips is an interesting topic, but it needs more careful consideration, thought and design. And there is no need to hold up other improvements while that design is being made.

Author

@Sergey Yes, I have a few questions.
In terms of re-usability, yes, indeed, a general-purpose structure should contain all camera intrinsics:

Calibration:

(in millimeters, as far as the movie clip / node input can have different pixel dimensions)

  • Focal Length
  • CCD sensor width
  • Principal Point

(factor values)

  • Pixel Aspect Ratio (actually, should be clamped to 0.0001-1.9999 to keep the distort / undistort workflow working)
  • Skew (currently does not exist in the tracker - why?)

(enumerator values)

  • Units (for display only, in millimeters or in pixels relative to the current distortion source dimensions)

Lens Distortion:

(enumerator values)

  • Lens Distortion Model (all from Libmv)

(factor values)

  • Radial, for each distortion model: k1-k4
  • Tangential, for supported models: p1, p2 (Brown-Conrady, at the moment)

That is all the data required to transfer between different data-blocks (a movie clip and a node, for example).

But if it is not an ID, where should this be stored? From the user's side, if it is an ID, the user has a single "database" of camera intrinsics and can select any of them wherever wanted, for a movie clip or for a node. Can you suggest the right place to store these structures that provides the same possibility for the end user?

Contributor

Added subscriber: @Raimund58


I do not think having an ID with distortion/camera details should be a requirement for the node. It should be possible to have full artistic freedom with (un)distortion without creating a lot of IDs.

To cover re-usability of the camera/distortion settings, it almost seems the proper place for this is the existing Blender Camera ID data-block. This streamlines things like:

  • No need for duplicated camera presets for the tracking camera and the Blender camera
  • Distortion rendering is possible in Cycles, which opens up a lot of opportunities for VFX with what is already in Blender, without the need to manage anything extra.
  • EEVEE might also support distortion rendering, but via a slightly different technique.
  • Will be covered by the asset manager.

There are still open topics about how it all links together within the following workflows:

  • Single clip solver.
  • Multiple clips used to create a preview of a larger set scene. This divides into two cases: (1) all clips are shot with exactly the same camera, without zoom level changes; (2) some clips have different lens settings.
  • Photogrammetry from unordered set of pictures.
  • Photogrammetry from set of video clips.
  • Projection painting within the photogrammetry context.
Author

@Sergey, some time ago I saw a similar suggestion by Brecht, and yes, it seems logical that "Intrinsics" should be part of the actual "Camera". This covers both animation support for camera intrinsics and the re-usability case I mentioned before (which is important for projection painting as well).

Let's imagine we put it in the Camera ID.
Here questions appear:

  • How to deal with the circular dependency: a Camera uses background images, background images use movie clips, but at the same time movie clips would use Cameras? It looks like we need to break some of these elements apart (for example, create a new "Camera Intrinsics" ID ;))

  • How to deal with orthographic and panoramic camera types? For example, the user selects a perspective camera and then changes the type to orthographic. Of course, we could just keep using the lens distortion parameters used before, but that would be completely incorrect.

  • Focal length: should the Camera ID data-block's lens be used, or should it be a separate parameter? This touches the sensor width parameter too. My own opinion is that lens should be used; otherwise the actual lens distortion will not be correct. For now, I see a difference between the tracking camera's focal length and the actual Camera's "lens" (in how they are stored and evaluated), but it's not a big deal to resolve this to Blender's Camera "millimeters" unit.


> How to deal with circular dependency

Sorry for the crudity of the model, didn't have time to build it to scale or paint it.

![PXL_20201113_104508914.jpg](https://archive.blender.org/developer/F9274153/PXL_20201113_104508914.jpg)

> How to deal with orthographic and panoramic camera types?

Don't worry about them within VFX workflows.
For artistic purposes, distortion can be supported for orthographic cameras.

For panoramas we can actually support distortion in Cycles, with the end goal of being able to properly render footage similar to what comes from 360 cameras. Surely, the distortion models would need to be extended for that.

> Focal lens

Focal length comes from the Blender Camera ID, I'd say.
The Blender camera already has a sensor size, so it is possible to convert mm <-> pixels. I don't currently see why NOT to use Blender's Camera focal length. I might be missing something, though.
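
For reference, that mm <-> pixel conversion needs only the sensor width and the footage width in pixels (the standard pinhole relation, written out for clarity):

```python
def focal_mm_to_px(focal_mm: float, image_width_px: int, sensor_width_mm: float) -> float:
    """Convert focal length from millimeters to pixels."""
    return focal_mm * image_width_px / sensor_width_mm

def focal_px_to_mm(focal_px: float, image_width_px: int, sensor_width_mm: float) -> float:
    """Convert focal length from pixels back to millimeters."""
    return focal_px * sensor_width_mm / image_width_px
```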

Author

@Sergey So, let me summarize the discussion above. If you do not agree with something, please let me know; I would like to move on with this project:

Camera Intrinsics
Camera intrinsics should become part of the Camera ID data-block. It should contain calibration / lens distortion data, be animatable, and be represented in the Camera data UI:

  • Focal Length - the Camera's lens value in millimeters; since different movie clips can have different pixel dimensions, the pixel equivalent is evaluated per movie clip.

  • Sensor width - the Camera's sensor width. For "Vertical" sensor fit, sensor Y should be used as the sensor width.

  • Principal Point (x, y) - a lens calibration parameter stored in millimeters, but it may be shown to the user in pixels relative to the pixel dimensions of the specific input source.

  • Pixel Aspect Ratio - a factor value; should be clamped to 0.0001-1.9999 to keep the distort / undistort workflow working.

  • Skew - skew correction factor; should be added as a new feature (factor value).

  • Units - a display-only choice between millimeters and pixels relative to the current distortion source dimensions. Affects only the UI (enumerator value, not animatable).

  • Distortion Model - all models from Libmv: Polynomial, Division, Nuke, Brown-Conrady (enumerator value). As before, it selects which math algorithm and coefficients are used.

  • Radial coefficients, for each distortion model: k1-k4.

  • Tangential coefficients, for supported models: p1, p2 (Brown-Conrady only, at the moment).

Once we know where this data is stored, we can think about:

Affected data-blocks

Movie Clip
This ID should always contain a pointer to a specific Camera ID and use that camera's lens distortion parameters for distortion evaluation. The Movie Tracking Solver should interact with this data as well.
From the user's perspective it should look like this:

  • Open a movie clip. At this point a new "Camera" data-block is created and attached to this specific movie.
  • If the user has multiple clips to open, they can decide at opening time whether a new Camera should be created, to avoid a lot of confusing garbage data. If the user is sure the clip was shot with a camera already present in the Blender file, they can select it while opening.
  • The regular workflow should not change much, but the user should always have the choice of which Camera is used for a specific movie clip.

Camera background image / movie clip
It is assumed that the user treats Blender's Camera as a representation of a real-world camera: "this movie clip was shot on this camera".

  • All background movie clips should be (un)distorted (if the "Render Undistorted" option is enabled, of course) using the camera intrinsics of the current Camera ID. If a movie clip has another camera selected as its intrinsics source, a message should be shown, like "Hey, you are using a movie clip you shot on another camera!"
  • Moving forward, it would be good to have lens distortion for images as well as for movie clips, with the same "Render Undistorted" option.

Compositor's "Movie Distortion" node

  • A specific Camera ID should be used as the source of camera intrinsics, replacing the Movie Clip ID.
  • The user can change coefficients and other parameters in the Camera data UI, so the only options needed in the node are still "Camera" and "Mode" (distort/undistort).

Moving Forward

As I said before, "my main target is still #81662". These changes could be useful for camera projection painting as well. Imagine for a moment that camera intrinsics are already implemented as part of the Camera ID, and the user has a scene with an active camera. When they use some texture with the "Draw" brush (for example, a movie clip from a drone), they only need to select a new "Camera" texture mapping mode (not implemented yet ;)) and use the active scene camera as the projection source. The camera intrinsics just solved in the tracker can then be used to paint directly on VFX scene prop objects, and there it is - the result. This workflow would also be useful for photo-scan refinement, with a nice visual representation (background images).

Ivan Perevala changed title from Tracking: unify data storage for lens distortions to Tracking: unify data storage for camera intrinsics 2020-11-16 13:46:28 +01:00
Author

@Sergey @Jeroen-Bakker , I updated task description and waiting for your feedback.


@ivpe, catching up with other tasks in the module. I'll give feedback as soon as time allows :)


Added subscribers: @SeanKennedy, @sebastian_k


Various feedback on design sections

> Camera Intrinsics

Some of the parameters in your list already exist. I assume the plan is to re-use them.
The Skew seems to need more explanation; I don't think any area of Blender supports it.

> All background movie clips should be (un)distorted (if the "Render Undistorted" option is enabled, of course) using the camera intrinsics of the current Camera ID data-block. If a movie clip has another camera selected as its intrinsics source, a message should be shown, like "Hey, you are using a movie clip you shot on another camera!"

By "current camera", do you mean the "active camera"? If you really mean the active camera, then I don't think the described behavior is correct. One should be able to add cameras with their reference footage and, possibly, solved motion to the scene, regardless of whether the footage was all shot on exactly the same camera. This is how one can assemble a "digital" set by combining footage from multiple cameras.

> Compositor's "Lens Distortion" node

To me the Lens Distortion node should allow any sort of distortion specified directly in the node itself, without requiring a camera. Distortion might be applied as an artistic choice, outside of a VFX workflow.

There would need to be a new node, Camera Distortion, so that an image could be distorted exactly according to a specific camera.

Movie Distortion can probably stay. It would ensure that the image is distorted according to the movie clip, regardless of what the user does with its camera settings.

Open questions

Is the described change in the workflow and behavior something supported by the current module team? @sebastian_k @SeanKennedy?

Versioning. The design is lacking a description of this, and I am not really sure how to do it, especially if the clip is linked.

Author

@Sergey, yes. The "Camera Intrinsics" section is about the whole set of parameters required by the user. For sure, the existing focal length and sensor dimensions should be re-used, in terms of usability as well as file versioning. As described, sensor width and height should be used for the CCD sensor width evaluation.

In my own opinion, such parameters as "skew" (and maybe "affinity", but that is another story) should be present, at least to provide compatibility with third-party software. A further question: the "pixel aspect" parameter is not part of the standard intrinsic matrix of a pinhole camera, yet it is present in the current tracking implementation. At the same time, the "skew" parameter is part of the standard matrix, but exists neither in tracking nor in Libmv. For sure, it is rarely used, but it is used.
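For reference, this is the standard pinhole intrinsic matrix being discussed. A small numpy sketch (all values are placeholders, and the fy/pixel-aspect relation shown is one common convention) makes explicit where skew sits and that Blender/Libmv currently assume it is zero:

```python
import numpy as np

f_mm, sensor_w_mm, width_px, height_px = 35.0, 36.0, 1920, 1080
pixel_aspect = 1.0

fx = f_mm * width_px / sensor_w_mm        # focal length in x, in pixels
fy = fx * pixel_aspect                    # focal length in y (one convention)
cx, cy = width_px / 2.0, height_px / 2.0  # principal point, in pixels
s = 0.0                                   # skew: part of the standard K,
                                          # but unused in tracking / Libmv

K = np.array([[fx,  s, cx],
              [ 0, fy, cy],
              [ 0,  0,  1]])
```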


"Pixel aspect" is a part of camera intrinsics, in a way: some cameras do not have 1:1 mapping from "sensor" pixels to "footage" pixels, causing requirement to stretch or shrink the footage horizontally.

A confusing aspect of making aspect ratio a camera property, which I forgot to mention: there are already aspect correction sliders in the render panel. Having two ways of controlling the same thing is not desirable.

The skew is not used anywhere in Blender to my knowledge. We should only expose settings which we actually use. If a property is only needed for compatibility, use ID properties instead.
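For illustration, a minimal sketch of the ID-property route mentioned here, assuming a camera data-block named "Camera" exists; the property names are only examples:

```python
import bpy

cam = bpy.data.cameras['Camera']  # assumed existing camera data-block
cam["skew"] = 0.0                 # kept only for round-tripping with
cam["affinity"] = 1.0             # third-party software; Blender itself
                                  # would ignore these custom properties
```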

"Pixel aspect" is a part of camera intrinsics, in a way: some cameras do not have 1:1 mapping from "sensor" pixels to "footage" pixels, causing requirement to stretch or shrink the footage horizontally. Confusing aspect with making aspect ratio a camera property which i forgot to mention: there is already aspect correction sliders in the render panel. Having two ways controlling same thing is not something desirable. The scew is not used anywhere in Blender to my knowledge. We should only expose settings which we actually use. If property is only needed for compatibility, use ID properties instead.
Author

@Sergey Yes, I agree. I updated the task description according to your suggestions. I think it would be nice to hear from @sebastian_k and @SeanKennedy.

Member

My knowledge of the details and intricacies of matchmoving and camera solving is not nearly as deep as yours, so please excuse any misunderstandings or gaps in my knowledge.

I've never had to animate a lens distortion, and I'm still not entirely clear why anyone would do that. If the camera does a zoom, does the lens distortion change, and is that what you're trying to solve? From a VFX point of view, are there any instances of animated lens distortion in any movies or TV shows that anyone can point to? I've certainly never had to deal with one in my work.

Having a lens distortion node in the compositor that has actual parameters instead of just pointing to the camera solve would be amazing, so I like Sergey's proposal for the Lens Distortion node, giving it a drop-down menu with the distortion models and parameter inputs.

Ivan mentioned Skew not existing in the current tracker. What would that be used for? Some kind of rolling shutter calculation? It doesn't sound like it, but I'm curious what skew is used for in solving.

> Sorry for the crudity of the model, didn't have time to build it to scale or paint it.

You're really filling me with a lot of confidence, Doc!

The potential of all this for painting on geometry projected directly from a matchmoved camera is great and exciting!

> To me the Lens Distortion node should allow any sort of distortion specified directly in the node itself, without requiring a camera. Distortion might be done as an artistic choice, outside of a VFX workflow.

I agree with this, although I'm still not entirely clear on what the differences would be between an updated Lens Distortion node and a new Camera Distortion node. Would they both have the ability to pick a distortion model and adjust parameter values?

Again, please have patience with me if I've misunderstood anything!


In #82645#1071387, @SeanKennedy wrote:

> I've never had to animate a lens distortion, and I'm still not entirely clear why anyone would do that. If the camera does a zoom, does the lens distortion change, and is that what you're trying to solve? From a VFX point of view, are there any instances of animated lens distortion in any movies or TV shows that anyone can point to? I've certainly never had to deal with one in my work.

Animated lens distortion is more to cover corner cases that can come from external motion trackers (probably not) or from photogrammetry software. For the latter, frames with different focal lengths automatically require recalculating the distortion coefficients. That can produce a situation with the same camera and the "same" lens but different coefficients. Add to that lens or sensor stabilization, which in a rough implementation means per-frame changes to the principal point along with fine-tuned tangential coefficients.
As for real examples in VFX, I am not sure whether anyone has bothered to track a zoom, but letting the user animate lens distortion can enable interesting VFX effects that someone will find a use for.
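To make the per-frame idea concrete, here is a hedged sketch of what animating a distortion coefficient could look like if the proposed animatable intrinsics existed on the Camera data-block. The camera name and the `intrinsics.k1` data path are hypothetical, used only to illustrate the idea; no such property exists in Blender today.

```python
import bpy

cam = bpy.data.cameras['TrackedCamera']  # assumed camera data-block

# e.g. coefficients recalculated per frame by photogrammetry software
per_frame_k1 = {1: -0.091, 2: -0.089, 3: -0.087}

for frame, k1 in per_frame_k1.items():
    cam.intrinsics.k1 = k1                                   # hypothetical
    cam.keyframe_insert(data_path="intrinsics.k1", frame=frame)
```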

> Ivan mentioned Skew not existing in the current tracker. What would that be used for? Some kind of rolling shutter calculation? It doesn't sound like it, but I'm curious what skew is used for in solving.

Again, this is more to allow a smoother connection with external photogrammetry software, and definitely to resolve situations with rolling shutter.
Nowadays everyone has a smartphone with a pretty good camera, and they produce perfect examples of skewed footage when you shoot while moving fast.
Skew allows more "real" videos of UFOs captured from a car ;)

Member

> definitely to resolve situations with rolling shutter

Being able to correct rolling shutter and/or track shots that have rolling shutter would be amazing. Not many professional cameras have this problem, but many consumer ones do.

> Skew allows more "real" videos of UFOs captured from a car

Hahahahaha, this will definitely get used for that purpose if Skew ends up as a node in the compositor!


So @Sergey, what do you think? Do we still need more feedback about this design, or is it already possible to start working on the implementation?


I am catching up with several design docs; this is one of them.

> Skew allows more "real" videos of UFOs captured from a car

I would use a corner pin for this, same idea but no need to wait for Blender to implement new features prior to capturing UFOs ;)


> I am catching up with several design docs; this is one of them.

Sorry. Any chance of getting your feedback/ideas so this design and the features related to it can land in v3.0?


The datablock layout seems fine to me.

I'm afraid I am missing a description of the workflow.

As you've mentioned in the versioning question, data might be linked. This is exactly how we've split tasks across different "departments" for Tears of Steel, for example. Surely, things don't need to be done exactly the same way, but from the current state of the design it is not clear to me how the workflow would need to be adapted.

As for the landing timeline, I do not want to make any estimates until there is a rock-solid design.

Author

This comment was removed by @ivpe


@ivpe, sure, I can see how that's helpful and which other benefits are possible with the datablock change. However, I'd call those use cases. It is also important to have those stated, but that's different from workflow. Workflow is more about how to perform a task, rather than what the tasks are.

  • Jessie wants to track footage and is working alone. I think this one is the clearest, and I can see how it works with the current design.
  • Jessie is a motion tracking artist at a studio, working in a team. How are the .blend files organized from a linking point of view? Is it the camera which is linked? Are there any caveats with the datablock change apart from the versioning?

For the versioning, I think the versioning function would need to create a Camera datablock in the same library's bmain as the linked movie clip. But it also depends a bit on how we expect such a workflow to be organized.


@Sergey
I still have a feeling that we will stay at the same point forever if we only think about usage and not about what kind of data this can be.

At the moment Blender has only Camera and Movie Clip, and the properties are locked to them.
Camera has extrinsics; Movie Clip stores the lens intrinsics.

In the real world:
A camera is really Camera + Lens and has both extrinsics and intrinsics:
Camera - position and orientation in 3D space, sensor size
Lens - intrinsics: lens distortion (Brown, Division, other), principal point, etc.
Movie Clip - only resolution and aspect (and color space)
Image - only resolution and aspect (and color space)

Until now, camera tracking was limited to Camera/Lens and the clip.

To create complete support for camera projection painting in Blender, with lens distortion support that does not exist in any other app, we need a way to define and store all camera, lens, and clip extrinsics and intrinsics separately.
Camera projection does not have a clip, but it can have a camera and a lens; there may be no image or clip at that moment.
Camera projection or compositing may have no clip but have third-party data with all camera and lens intrinsics, and those intrinsics can vary per camera as well as per frame, including lens distortion, principal point, and focal length.
Compositing may have no camera at all in the case of external movie clips, but sensor size is an important value that dramatically changes how the distortion looks.

How about some sort of override data-block for storing extrinsics and intrinsics in addition to the existing Camera/Clip storage?
In situations where this new data needs to be independent from the clip or from the camera, it could "override" Camera or Clip properties, or even be used without them.
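Read as a rough data model, this split might look like the following Python dataclasses. These types and field names are purely illustrative; none of them exist in Blender:

```python
from dataclasses import dataclass

@dataclass
class LensIntrinsics:          # per-lens calibration
    model: str = 'POLYNOMIAL'  # Brown, Division, ...
    principal_point: tuple = (0.0, 0.0)
    radial: tuple = (0.0, 0.0, 0.0, 0.0)  # k1..k4
    tangential: tuple = (0.0, 0.0)        # p1, p2

@dataclass
class CameraBody:              # per-camera
    sensor_width_mm: float = 36.0
    sensor_height_mm: float = 24.0
    # extrinsics (position/orientation) live on the object transform

@dataclass
class Footage:                 # per movie clip or image
    width_px: int = 1920
    height_px: int = 1080
    pixel_aspect: float = 1.0
    colorspace: str = 'Rec.709'
```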


I do see how projection painting is important to you, and the changes to how information is stored and reused sound good to me. However, the changes in how those things work should not harm existing use cases and workflows.

I am just going by the open questions in this design, and those are the big ones. The versioning is important, but I cannot give suggestions about how to approach that issue without knowing how exactly the workflow is to be organized.

About the override idea you're mentioning, I'm not sure. "Override" in Blender means something else, and it sounds like a more complex solution than it should be.

Now, to move forward with the open questions in the design:

  • The workflow within a single file I think is all clear; it sounds powerful and opens up new possibilities. Maybe some trick is possible so that if there is a single clip open, it shares the same camera datablock as the scene's camera?
  • It is unclear what the suggested or correct way is to organize linking for studios doing VFX. The important thing here is to keep the datablocks clear for artists.
  • The versioning might depend on the previous point. Maybe it doesn't; it just bothers me a bit that there is unclarity in a related field.
  • How to avoid extra configuration steps for artists. The proper paradigm here is: simple things should be simple, complex things possible.

For the versioning topic, points to be explored (a rough sketch follows the list):

  • For a local movie clip, set its camera to the same one which has the Camera Solver constraint. This way it feels that the versioning will bring the state closer to how the setup would have been created from scratch.
  • If there is no camera object in the file with the local movie clip, create a "stray" camera datablock and use that for the clip.
  • For linked movie clips, if there is a usable camera datablock in the library file, pull it in and use it for the clip. Otherwise, somehow create a "stray" camera datablock in the library. Not sure how the local camera is to be modified, though. Maybe use the linked camera datablock in local cameras which have the Camera Solver constraint?
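The points above could be prototyped roughly as follows. This is written in Python for readability only (real versioning code lives in Blender's C sources), the `camera_for_clip` helper is a name invented here, and the `clip.camera` pointer is the proposed, not-yet-existing link:

```python
import bpy

def camera_for_clip(clip):
    # 1) Prefer the camera whose Camera Solver constraint uses this clip.
    for ob in bpy.data.objects:
        if ob.type != 'CAMERA':
            continue
        for con in ob.constraints:
            # con.clip can be None when the constraint uses the active clip
            if con.type == 'CAMERA_SOLVER' and con.clip == clip:
                return ob.data
    # 2) Otherwise create a "stray" camera datablock for the clip.
    return bpy.data.cameras.new(name=clip.name + "_intrinsics")

for clip in bpy.data.movieclips:
    if clip.library is not None:
        continue  # linked clips need the camera created in the library bmain
    clip.camera = camera_for_clip(clip)  # hypothetical pointer
```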

Added subscriber: @Eliot-Mack


Added subscriber: @greg-12


Added subscriber: @RobN


Getting this implemented would be extremely useful for us, so I'd like to help if there are any remaining 'decision roadblocks.'

For a studio that is doing VFX, it is likely that they would want to be able to track multiple clips into a given scene, with each clip associated with a separate tracked camera with its own camera extrinsics + intrinsics.

Sean is correct that animated camera intrinsic parameters are rare in VFX shots unless you have real time lens encoding data (very rare), as solving zooming shots optically is very difficult.

Since the 'solve' generated by either Libmv or an external photogrammetry application is closely coupled to the camera sensor that shot the images, it seems that those parameters should be directly linked to the Blender camera. For example, the focal length for a given monocular solve can only be determined if you know the pixels-per-mm of the camera sensor that took the images.

One would intuitively think that the 'lens' data could be separated from the 'camera' data, but it turns out that trying to keep those two pieces separate is quite prone to human error. As an individual camera is switched to different resolutions, its sensor back size effectively changes, so a given solve is tightly linked to the camera setting used at the time of capture.

Fortunately, starting with the field of view (which is the actual raw value calculated by photogrammetry or Libmv), manually entering the sensor back size, and then calculating the focal length as a function of FOV will give the correct results.
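A short sketch of that conversion, deriving focal length from a solved horizontal FOV and a manually entered sensor back width under the pinhole model:

```python
import math

def focal_length_mm(fov_deg: float, sensor_width_mm: float) -> float:
    """Focal length from horizontal FOV and sensor width (pinhole model)."""
    return (sensor_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# e.g. a 63.4 degree solve on a 36 mm back gives roughly a 29 mm lens
print(focal_length_mm(63.4, 36.0))
```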

Several shots in a row from the same camera/lens combo can use the same camera intrinsics, but that feels like it could be handled by copying and pasting the intrinsic (FOV, k1, k2, etc.) values.

For the photogrammetry application, Ivan is correct that each still image will have its own associated camera intrinsics, which should be part of the camera associated with that image.

In a studio, tracking multiple shots might be done by a 'tracking team', with the resulting solved camera data handed off to the next department.

In terms of linking, each individual tracked camera could end up in its own Collection that is linked or appended to a 'master scene', so that (in general) only one tracked camera is enabled at a time.

Once a clip is solved enough to move out of tracking, the solve generally doesn't get updated much, so I'm not sure how much of a 'live link update' is necessary for this. The workflow on camera tracking is different from asset construction, as continuous updates aren't needed.

I note that @RobN's requests in #81662 of a "modifier/node to have a camera input and then an image (or sequence) input" and Sergey's 'Camera Input' node mentioned above look identical. Both share the (IMHO correct) approach where the local movie clip has a camera set to one with a track constraint, and the lens distortion data is then contained inside the camera, instead of inside the movie clip.

Ivan is exactly correct that the current workflow requires rendering out an intermediate image sequence of undistorted files in order to be able to use them interactively, which is a real workflow pain. It's far better to be able to load the original footage, and have it display undistorted without the intermediate step.

This and the https://developer.blender.org/T81662 projection painting piece would be extraordinarily useful for groups that start to deal with a high volume of CGI/live action integrated shots.

What other pieces need to be decided before implementing this?


@Eliot-Mack yes, it would be nice to have the lens distortion data, etc. live inside the camera, and then this node/modifier would read that data and use it. Many times we undistort images prior to any projection/painting being done, but it's always great to have the option in there. Using tracked cameras (with distorted lenses) and then being able to create projected environments from that, purely in Blender, should be doable if this node/modifier and workflow existed.

Member

Changed status from 'Needs Triage' to: 'Confirmed'


Added subscriber: @Garek

Author

Changed status from 'Confirmed' to: 'Needs Triage'

Author

Added subscriber: @PratikPB2123

Author

@PratikPB2123 The current design is still not confirmed by the module owners because we need to find the best solution from the user experience side. I wish one day I would see such a task status change authored by one of them, but before that it still needs to be discussed. Anyway, the latest commit you saw has no reviewer because, first things first, I need to summarize all the previous discussion. I think the best way to do that is to implement it in my own local branch; then a lot of hidden questions will be resolved faster.

Author

@Sergey, after such a long time, I would like to continue what I started.

I carefully re-read the entire thread; the last thing we stopped at was linking of the movie clip data-block. This question is actually not so simple.

On the one hand, if the linking comes from a file that was saved with a camera data-block already attached to the linked movie clip (i.e. saved by a Blender build that already knows how to do this), then everything is simple: the existing camera data-block is linked as well.

On the other hand, the end user can have a project consisting of several files that were saved in a Blender version without the corresponding functionality. The problem here is not only that no camera is attached to the movie clip, but that the file itself, which acts as a library for linking, may not contain any cameras at all.

Of course, I am against breaking backward compatibility, but as far as I understand the processes of transferring data between files, only an append is possible here. In that case, the corresponding camera data-block must be created (transferring the existing camera values from previous versions of the movie clips) and attached to the transferred movie clip. This is one of the possibilities and, in my opinion, the most correct one.

The second option I see is somewhat less useful: make it a valid option for a movie clip not to have a camera attached. But in that case, when linking, only the movie clip itself will be transferred, and its lens data will simply be lost (even if you notify the user about it, I think it's still not a good idea, right?)

Therefore, I'm still more inclined towards the first option: in case of an attempt to link the data-block, simply notify the user that instead of linking, the file must either be appended or re-saved in the current version of Blender (in which case the problem is solved, since the corresponding data-block will already be present).

You may have noticed the presence of [D13708](https://archive.blender.org/developer/D13708). In the process of using this patch, I also ran into a question:

What should happen to the movie clip data-block when the camera data-block is deleted? For example, when the camera is completely removed through the Outliner. I think in this case the clip should be removed too, by analogy with the Camera Object. Or should we still allow a movie clip not to have a camera? Again, I am against this because, in addition to what was described earlier, the solver then loses some of its meaning.

Member

Added subscriber: @lichtwerk

Member

Changed status from 'Needs Triage' to: 'Needs Developer To Reproduce'

Member

Good to see this getting active again (sort of).
Since there is not much the #triaging team can do, I think it is up to the #motion_tracking devs to decide on the status here.

Philipp Oeser removed the Interest: VFX & Video label 2023-02-10 09:31:58 +01:00