Camera projection mapping for brush texture #81662

Open
opened 2020-10-12 22:17:31 +02:00 by Ivan Perevala · 55 comments

Status: Task design and engineering plan review


Team
Code Review: @Jeroen-Bakker @ideasman42
Project leader: @ivpe
Project members: @ssh4

Preamble

The goal of this project is to add code support and tools for Camera Projection Painting to Blender.
Such workflows have been used in 3D CG since its early days for matte painting, 3D/video integration, etc.
In all of these workflows the most logical UX is a projection preview as seen from the camera, plus a projection preview overlaid on the 3D model or scene that is visible from any viewpoint.

While matte painting is one of the oldest of these workflows, recent years have seen huge progress in 3D scanning/photogrammetry methods, which are now one of the main parts of 3D asset and scene creation. At the same time, many artists use photo-modeling workflows with photo or video data as the source for textures.
Again, the most logical and fastest way to work on textures for such models is to use a 3D paint feature directly in the 3D application, instead of using external 3D painters or 2D editors to work on UV-unwrapped textures.


Who can use it

This project looks at UX improvements from the point of view of 3D asset creators, photogrammetrists (3D scanning), and VFX/matte painters.
For each group we list the current limitations and propose improvements for a wide range of 3D CG users.

  • 3D assets creators:

Probably one of the biggest groups. The standard photo-modeling workflow uses photos or images as backgrounds. The final model is UV-unwrapped in a human-readable form, and textures are made with standard 2D editor or texture painting workflows using layers, masks, etc.

A paint tool that can project an image or photo from a specific camera view can speed up mock-up iterations with clients and art/creative directors, thanks to a real-time preview of the result in 3D. At the same time, using real photos, especially for architectural models, limits the possible image sources: because every photo has lens distortion, extra care is needed to use only high-quality professional photo equipment, or to undistort all sources before using them for texturing. When video is used, this also requires a lot of additional disk space to store that temporary data.

If the 3D assets are low-polygon, this also requires creating “proxy” objects with a higher polygon count, because the current Blender UV Project gives visible texture distortions on low-poly meshes or meshes with n-gons.

  • 3D sculptors:

Another large group of 3D content creators. In the early CG days these were mostly artists working on organic shapes or characters; today many artists also do hard-surface modeling in 3D sculpting editors.
This group also needs a way to project photo data onto textures, and using a paint editor in the same application speeds up sketching and work-in-progress iterations.

Models made in these workflows can have complicated, non-optimized UVs while work is in progress, so external 2D editors or texture painters are not usable until the model is retopologized and UV-unwrapped in a human-readable form.
The current Blender UV projection has fewer issues with such meshes, but polygon count already has a big impact on the CPU time required: computing a new UV projection from a new camera/view for a 5-10 million polygon mesh is already far from real time.

This user group also needs to work with photo or video sources, and lens distortion in the source data requires the use of undistorted images.

  • Photogrammetry and 3D scanning:

One of the fastest-growing groups of 3D content creators in recent years. This group works with very high polygon count meshes and, in terms of the workflows used to post-process scan data, is closest to 3D sculpting, with one important difference: 3D scans are often monolithic meshes, or meshes split into chunks. In the middle of the process these meshes have completely non-optimized UVs with thousands of UV islands, produced by UV parametrization algorithms that are optimized for speed and for meshes with thousands or even hundreds of millions of polygons.

Photogrammetry, for all its power, is limited by its algorithms because of the huge amount of data used while creating and texturing 3D meshes. The user does not always have an easy way to control which source is chosen for the texture in a specific part of the model, and the resulting texture can end up using source data from an image (camera) with worse depth of field or changed environment lighting.

Many clients of 3D scan studios prefer to work with raw meshes and original textures, to be sure that every possible detail from the 3D scan is used in the most efficient way for their project.
But the photogrammetrist still needs a fast way to quickly fix the raw mesh and raw texture from a specific image (specific camera) with better detail, more correct lighting, etc.

As with 3D sculptors, the mesh polygon count is more than enough to avoid UV projection deformations on textures, but a raw mesh of around 10-20 million polygons is not a rare case for a high-quality, high-resolution 3D scan.

One of the biggest differences from the other groups is the number of images (cameras) used to create the 3D model and texture. 50-60 images/cameras is an average small photogrammetry scan.
Big or precise scans can easily have 1,000 or even 10,000 images at 8K or higher resolution.
The good news is that using all of these images/cameras for the final texture is not mandatory, but a workflow that allows quickly switching between images/cameras is a big time saver.

And since the source for the 3D model and texture is real photography, it always has lens distortion. Other groups can probably work with some small distortion, but photogrammetry, due to its high precision requirements, needs mathematically correct lens distortion models, which allow a precise copy of the source texture from scanned objects at sub-millimeter resolution.

The number and size of the images require a lot of disk space for any additional temporary files such as undistorted images.
So support for lens undistortion during projection painting is a must-have feature.

  • Matte painting/VFX:
    This group can combine the workflows and issues of all the groups mentioned above, but in addition it often works with video sources. That can mean hundreds or thousands of frames, which may also have real lens distortion or even rolling-shutter issues (visible as a skew deformation of the image).
    3D models can be a combination of low-polygon models, blocks, or n-gons used to quickly build planes at different distances in the scene.
    All of this requires quick integration with projection-mapping painting, both in production and during sketching.

Current Blender workflow limitations and issues

Summarizing the above limitations of Blender:

Blender currently has a way to do camera projection painting, using the "Clone" brush together with the "UV Project" modifier. This method is fairly labor-intensive for the user: it requires setting many options even just to change the camera from which the projection is made, which slows the workflow down. Its main disadvantage is the requirement on geometry density: low-poly models (for example, after retopology) show unwanted distortions while painting. Another extra step is exporting undistorted images/movie clips; since datasets can be large, this creates many unnecessary files. It would also be desirable to have control over brightness/contrast, etc., which this method does not allow.
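
For reference, a minimal sketch of how the current manual setup looks through the Python API (the camera name, UV layer name, and aspect values are assumptions for illustration); this is what has to be reconfigured for every camera change:

```python
import bpy

obj = bpy.context.active_object            # the mesh being textured
cam = bpy.data.objects.get("Camera.001")   # projection camera (assumed name)

# Add a "UV Project" modifier driven by the camera; the Clone brush then
# paints through the UV layer this modifier overwrites.
mod = obj.modifiers.new(name="CamProject", type='UV_PROJECT')
mod.uv_layer = "UVMap"                     # UV layer receiving the projection (assumed name)
mod.projector_count = 1
mod.projectors[0].object = cam
# The aspect has to be matched to the source image by hand as well,
# e.g. 1.5 for a 6000x4000 photo:
mod.aspect_x = 1.5
mod.aspect_y = 1.0
```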


User workflow expectations

The ultimate goal of this project is to build a workflow that eliminates these shortcomings:

Scene setup
It is assumed that the user has a scene with mesh objects, created by themselves or imported from third-party software, along with a number of cameras and the original images/movie clips.

  • Automatically attach images/movie clips to specific cameras; they have similar names in most software. It should also be possible to do this manually (see the sketch after this list).
  • Distortion parameters can be obtained through the Movie Tracking solver, but it should also be possible to set the parameters separately.
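
A minimal sketch of the name-matching automation mentioned above, assuming the images are already loaded into the .blend file; the matching rule (image file name equals camera object name) is an assumption for illustration:

```python
import os
import bpy

scene = bpy.context.scene
for cam_obj in (o for o in scene.objects if o.type == 'CAMERA'):
    for img in bpy.data.images:
        stem = os.path.splitext(os.path.basename(img.filepath))[0]
        if stem == cam_obj.name:
            # Attach the matching image as a camera background image.
            bg = cam_obj.data.background_images.new()
            bg.image = img
            bg.show_background_image = True
            cam_obj.data.show_background_images = True
            break
```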

Canvas texture correction/painting process
It is assumed that the user is in Texture Paint mode.

  • Select a camera. When a camera is selected, the corresponding image/movie clip should also be selected for painting.
  • Paint. Texture projection occurs from the camera, with distortion correction according to the distortion parameters.

More globally, a major refactoring of the Texture Paint module is needed (like [#73935](https://developer.blender.org/T73935)): all users expect a smooth workflow with high-polygon objects and high-resolution images/movie clips.
An important part is also giving the user a visual representation in the viewport of exactly how the texture will project onto the object and from which camera the projection is made (the most obvious choice is the active camera of the scene).


Suggestions

Only suggestions are described here; each of them can be an open question.

  • For both the "Movie Tracking" and "Texture Paint" modules, the image/movie clip as well as the distortion parameters should be part of a particular camera (this needs to be coordinated with the Movie Tracking module). Regarding the Brush Management project: the described workflows have a tight relationship with a specific scene and the objects in it, so from the point of view of a photogrammetrist or VFX artist, the work done at the scene setup stage is expected to be saved for that specific scene and its objects. It is therefore important that the camera used for projection and its distortion parameters are not tied to a specific brush. Since the dataset of images/movie clips can be large and is associated with specific cameras of a specific scene, it also cannot be tied to the brush.
  • Automate the process of attaching images/movie clips to specific cameras. This is most often possible by matching the names of the camera and the image/movie clip file.
  • Add a "Camera" brush texture projection mode. It should use the data (image/movie clip) from the active scene camera for the brush texture and also to determine the current distortion parameters (a minimal sketch of the underlying projection follows this list).
  • Refactor the brush preview. At the moment it is not informative and gives a limited presentation of the end result (for example, the "3D" texture mapping mode has had no preview for many years). In the context of this project the preview should be an overlay that takes the object's geometry into account and gives an accurate representation from any view.
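
To make the proposed "Camera" texture mapping mode concrete, here is a minimal sketch, not the implementation, of the underlying mapping via the existing bpy_extras helper: a point on the painted surface maps into the camera image that the brush would sample. Lens distortion would be applied on top of the resulting normalized coordinates.

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
# Any surface point in world space; here simply the active object's origin.
point = bpy.context.active_object.matrix_world.translation

co = world_to_camera_view(scene, cam, point)
u, v, depth = co.x, co.y, co.z   # u, v in 0..1 inside the camera frame; depth > 0 means in front
if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 and depth > 0.0:
    print("point is covered by this camera; sample the image at", u, v)
```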

All of this, but more visually
Previously, we (@ssh4, @ivpe) implemented this as an add-on; it was more photogrammetry-oriented, so the UI/UX should differ in some cases. This short video shows the basic principles of a workflow with a large dataset of raw (distorted) images and the viewport preview. A non-commercial scene is used.
[Camera Projection Mapping.mov](https://archive.blender.org/developer/F9139802/Camera_Projection_Mapping.mov)


Work plan

This section will be filled in only after the final design is ready:

  • ...

Open questions
Open questions relating to the suggestions above are listed below. Everyone can have their own opinion, so we need a general solution approved by everyone. Therefore, I leave a link to each original post, as well as its summary. After the design document update, some suggestions were included in it and some are still open:

  • Original posts by @brecht and @Sergey (https://developer.blender.org/T81662#1038426, https://developer.blender.org/T81662#1037146): Where should the distortion parameters of images/movie clips be stored, as well as the relationship between a specific camera and a specific image, given the current Movie Tracking design, or in the case of modifying it into something more unified?

  • Original post by @brecht (https://developer.blender.org/T81662#1037214): If the camera stores the image or movie clip as a background image, should there be an option, enabled by default, that automatically switches the brush texture image/movie? Or should that be the only way?

  • Original post by @sebastian_k (https://developer.blender.org/T81662#1046815): Is the camera background image enough for a visual preview, or should it be shader based?

  • Original post by @sebastian_k (https://developer.blender.org/T81662#1046815): VFX scenes already have background movie clips set on cameras. I think this can be used for any workflow as well; in terms of the current project, it means storing the image or movie clip relative to a specific camera. But what if we have more than one background image on a single camera? In that case the background images/movies are blended. The old principle "you get what you see" should work here too, so how do we handle this? In my opinion the simple way is to remove the list of background images and leave only one. As I remember, in older releases (< 2.8x) this was a list so that images could be used from different view angles (left, right, top, camera), so I think this is also a broader Blender design question.

  • Original post by @ssh4 (https://developer.blender.org/T81662#1047507): Should camera intrinsics be animatable, to support videos captured with zoom lenses or with optical/sensor stabilization?

  • Refactoring of the Texture Paint mode: is it planned, and is it worth counting on help with its implementation?

  • ...

Relevant links:
A small YouTube survey of the current limitations:
[CGMatter: Blender 2.8 Panorama stitching projection painting (part 2)](https://youtu.be/s_x62z1T9tk?t=467)
[CGMatter: 2D To 3D](https://youtu.be/nWCWtojeM14)
[CGMatter: Realistic 3D? EASY](https://youtu.be/uWkKV0HxwFQ?t=919)
[IanHubert: Wild Tricks for Greenscreen in Blender](https://youtu.be/RxD6H3ri8RI?t=131)

Previous implementation using the UV projection method in a separate boost module:
[Camera projection painter addon](https://www.youtube.com/watch?v=lasdJIIAv70&t=13s)
Industry-compatible lens distortion models:
Brown-Conrady and Nuke distortion models (https://developer.blender.org/D7484, https://developer.blender.org/D9037)
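
For clarity, a minimal sketch of the Brown-Conrady model referenced above (x, y are normalized, principal-point-centered coordinates; k1..k3, p1, p2 are per-lens calibration values; note that coefficient conventions vary slightly between packages):

```python
def brown_conrady_distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1..k3) and tangential (p1, p2) distortion to a normalized point."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```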

Ivan Perevala self-assigned this 2020-10-12 22:17:31 +02:00
Author

Added subscribers: @ivpe, @ssh4, @ideasman42


Added subscriber: @SteffenD


Added subscriber: @jvanko

Member

Added subscriber: @lichtwerk

Ivan Perevala removed their assignment 2020-10-13 12:16:12 +02:00
Vlad Kuzmin was assigned by Ivan Perevala 2020-10-13 12:16:12 +02:00

Added subscriber: @slumber


Added subscriber: @Jeroen-Bakker

Vlad Kuzmin was unassigned by Ivan Perevala 2020-10-17 00:04:48 +02:00
Ivan Perevala self-assigned this 2020-10-17 00:04:48 +02:00
Author

A clear description of the basic principles of camera projection implementation using the simplest examples:
https://youtu.be/s93T72LeRv0


Added subscriber: @weirdmonkey2807


Added subscribers: @Sergey, @brecht


In terms of usability, from what I can tell from the video there is no way to preview the projection before you paint, which makes it hard to set it up correctly. And users need to manually sync between brush data and scene or motion tracking data.

I also don't think all this data really belongs in a brush datablock. Especially as we will do more changes so that brushes becomes part of asset management and shared between .blend files, it makes little sense to store more scene specific camera parameters in them.

Have you considered using camera background images instead? These already come with a preview before you paint, and provide a way to pull in motion tracked clips and their distortion parameters.

Besides cameras, even image objects could be considered, as users will often already set those up as reference images in the right position.

And besides manually switching between the appropriate projector, it could support automatically picking the camera (or image object) depending on what aligns closest to the viewpoint in the viewport.

CC @sergey.

Author

@brecht

> In terms of usability, from what I can tell from the video there is no way to preview the projection before you paint, which makes it hard to set it up correctly. And users need to manually sync between brush data and scene or motion tracking data.

Yes, I think the camera background image would be a good preview option (ideally, of course, it would be implemented as in the add-on that now serves as a prototype for this development; I will keep referring to it. There, the brush preview was implemented with shaders and exactly matched the result, not only when viewed from the camera. However, I have looked at the current brush preview implementation, and getting the same result needs serious refactoring.)

> I also don't think all this data really belongs in a brush datablock. Especially as we will do more changes so that brushes becomes part of asset management and shared between .blend files, it makes little sense to store more scene specific camera parameters in them.

I can't say with certainty which datablock this data should belong to, taking the proposed changes into account. However, it is important information. Perhaps I should keep a list of these parameters in the scene tool settings?

> Have you considered using camera background images instead? These already come with a preview before you paint, and provide a way to pull in motion tracked clips and their distortion parameters.

Yes, I am considering adding the same Clip field as in the "Movie Distortion" node, in addition to the existing block of distortion parameters. This idea is good because it creates a small ecosystem: you can, for example, calculate the distortion parameters with the solver in tracking and then use them to distort the image/video of the brush texture.

> And besides manually switching between the appropriate projector, it could support automatically picking the camera (or image object) depending on what aligns closest to the viewpoint in the viewport.

This feature was in the add-on at one time, from the earliest versions. The direction of the viewer was compared with the direction of the cameras, and the closest of the co-directional ones was chosen. However, this idea had to be abandoned because, in fact, no one used it. It was replaced with another: the list was simply sorted by the direction of the cameras in space, but that method is quite narrowly focused on photogrammetrists who use a camera rig.
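
For reference, a minimal sketch of the direction comparison described here (the names and the viewport access are assumptions for illustration):

```python
import bpy
from mathutils import Vector

def closest_aligned_camera(view_rotation, cameras):
    """Return the camera whose forward (-Z) axis best matches the view direction."""
    view_dir = view_rotation @ Vector((0.0, 0.0, -1.0))
    best, best_dot = None, -1.0
    for cam in cameras:
        cam_dir = cam.matrix_world.to_quaternion() @ Vector((0.0, 0.0, -1.0))
        d = view_dir.dot(cam_dir)
        if d > best_dot:
            best, best_dot = cam, d
    return best

# Usage from a 3D viewport context:
# cams = [o for o in bpy.context.scene.objects if o.type == 'CAMERA']
# cam = closest_aligned_camera(bpy.context.region_data.view_rotation, cams)
```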


@brecht
Definitely, at this moment camera projection is not very usable without a preview.
And @ivpe has this as Milestone 3, Preview.
The only point is that camera projection can be used without a preview, while having a preview without camera projection implemented makes no sense. So this is only a matter of priority.
Now that the code for camera projection is working, we can brush up the UX for this tool.

Regarding a deeper connection with the tracker:
I expect at least three main use cases for this feature:
• Image-based asset texture painting (all those video tutorials on painting a texture from a photo onto a modeled 3D object)
• Photogrammetry or 3D-scanned object texture painting (from facial and full-body scans to environment or architecture scans)
• Matte painting (a variation of point 1, but mainly from video sources)
Based on this, only video sources have a reason to use the tracker.
Point 1 can involve 1, 2, or 5 images of the desired object (as few as the artist can find), and there is a big chance they will be images from the internet with completely different scale, size, and resolution. The tracker as a "source" for such painting data looks confusingly illogical.
The same applies to "perfect" datasets from photogrammetry multi-camera rigs.
From the photogrammetry point of view, the source is a camera image/camera projection.
And any camera/lens has intrinsics that define an exhaustive model of this projection.
And we have three types of users:
• 3D artist
• 3D scanner
• Video producer
A 3D artist may not even know how important lens distortion/undistortion is in camera projection; the results can be good enough for their tasks.
Photogrammetrists (3D scanners) understand perfectly that every real camera and lens has distortion, and that it is impossible to make a precise scan without removing lens distortion.
A video producer has video; they care more about a good camera track and know they need to apply good lens undistortion before painting a texture onto the default cube.

To be honest, I have an idea for the Blender compositor: an additional Lens Distortion node that does everything the lens distortion module in the tracker does, but lets users choose the desired lens distortion model and input distortion coefficients right in the node. The current undistortion node would then become a tracker node that can be plugged into this Lens Distortion node as a source of distortion coefficients estimated by the Blender tracker.
Or even further: since Blender has a Camera object and this object has lens parameters, it should have lens distortion there, in the camera. But I am not sure such deep code refactoring is easy for a single coder :(

Regarding the idea of automatically switching the projector depending on the view:
When we just started making our Camera Projection Painter add-on, @ivpe had the same idea. But imagine a use case where the user loads a 3D model with 240 cameras (from a multi-camera rig) and all they really need is to paint the existing texture from one or two specific cameras.
A lot of rigs have dedicated cameras looking at the face, but these can be cameras mounted quite far from the center, using narrow lenses. Automatic switching looks attractive at first, but everyone will disable it as soon as they find that it is not only CPU-intensive but also quite annoying when you need to control your process.
Plus, the main idea of projection painting is to allow painting from any orientation, not only from the camera through the background image.

I have an idea for a workflow using the sculpting Pose brush and camera projection to paint textures of human body parts, using images taken with any camera the 3D artist has.
For example, an iPhone camera.
That is one camera and one lens, and it has distortion.
First, the user pre-calibrates the camera/lens in any photogrammetry software to calculate the lens distortion specific to their camera. These settings can then be used for all images taken with this camera.
Next, they load a couple of photos, for example of a hand.
Using the camera projection preview, they rotate the object and pose the palm to match the 3D hand with the UVs and the photo.
Then they paint some part of the texture, attach another image, rotate, orient the fingers to the second image, paint, and so on.
But the sculpting Pose brush has limitations, and there is no lens distortion preview yet, so my first attempt was not that great. Only one camera was used here.
![image.png](https://archive.blender.org/developer/F9016830/image.png)
But this workflow clearly shows that the camera, the image, and the lens distortion are independent entities, and they should stay independent to give more freedom to 3D artists.

> CC @sergey.


> In #81662#1037188, @ivpe wrote:
> This feature was in the add-on at one time, from the earliest versions. The direction of the viewer was compared with the direction of the cameras, and the closest of the co-directional ones was chosen. However, this idea had to be abandoned because, in fact, no one used it. It was replaced with another: the list was simply sorted by the direction of the cameras in space, but that method is quite narrowly focused on photogrammetrists who use a camera rig.

Did it show a preview of the image that would be painted? If you can't easily tell which image you're about to paint, then I imagine manually switching is preferable.

> In #81662#1037189, @ssh4 wrote:
> Regarding a deeper connection with the tracker:
> I expect at least three main use cases for this feature:
> • Image-based asset texture painting (all those video tutorials on painting a texture from a photo onto a modeled 3D object)
> • Photogrammetry or 3D-scanned object texture painting (from facial and full-body scans to environment or architecture scans)
> • Matte painting (a variation of point 1, but mainly from video sources)

This assumes the only part of photogrammetry done in Blender is about texture painting, but if add-ons or future builtin features do more it will be limiting to make it texture painting specific.

> Or even further: since Blender has a Camera object and this object has lens parameters, it should have lens distortion there, in the camera. But I am not sure such deep code refactoring is easy for a single coder :(

For native Blender features, we don't use the easiest solution, but try to think more long term.

> Regarding the idea of automatically switching the projector depending on the view:
> When we just started making our Camera Projection Painter add-on, @ivpe had the same idea. But imagine a use case where the user loads a 3D model with 240 cameras (from a multi-camera rig) and all they really need is to paint the existing texture from one or two specific cameras.

Then you would select just those 2 cameras for texture painting (perhaps by specifying a collection)? It doesn't have to use all cameras in the scene.

> Automatic switching looks attractive at first, but everyone will disable it as soon as they find that it is not only CPU-intensive but also quite annoying when you need to control your process.

It's not CPU intensive.

> Plus, the main idea of projection painting is to allow painting from any orientation, not only from the camera through the background image.

Automatic does not have to be the only way of doing things, but it could be a reasonable default.

Author

@brecht I got your point; now I will do some refactoring of the code and update the task description (looks like from scratch :) )


Removed subscriber: @weirdmonkey2807

Author

@brecht,
@ssh4 and I have thoroughly discussed the possibilities you proposed.

About the use of camera background images.
To display the brush texture in the viewport there is the "Cursor > Texture Override Overlay" option. Since this project is essentially an addition to the existing "3D" texture mapping mode, it would be logical to see the preview there and not in the camera background image. In addition, the camera background is a list, which can give a false idea of how the painting will actually happen (for example, when there are several images). Also, looking at the code, you can see that background images only support distortion for movie clips, not for images.

  • Maybe a better way is to implement the display in roughly the same way as for the STENCIL mode, but with the preview visible only when viewed from the camera and fitted to the camera frame?

About the storage location of the distortion parameters

Storing camera intrinsics (principal point, aspect ratio, skew, distortion model (enum), and k1, k2, k3, k4, p1, p2 per distortion model) in struct Camera would seem logical at first glance, but this data is used only by this project, so from the point of view of other projects it looks more like garbage data. The idea of moving the list from Brush to ToolSettings is great, both in view of the proposed brush management changes and in terms of user experience, since that way the list of blocks is available for any brush.

  • Maybe a better way is to move the list of blocks to toolsettings->imapaint? (A hypothetical sketch follows below.)
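
A hypothetical Python sketch (all names and the grouping are assumptions, not existing Blender API or DNA) of what such a per-camera entry in the image paint tool settings could look like, written as a property group for prototyping:

```python
import bpy

class CameraProjectionEntry(bpy.types.PropertyGroup):
    """One projector entry: a camera, its source image, and its lens distortion."""
    camera: bpy.props.PointerProperty(type=bpy.types.Object, name="Camera")
    image: bpy.props.PointerProperty(type=bpy.types.Image, name="Image")
    distortion_model: bpy.props.EnumProperty(
        name="Distortion Model",
        items=[('POLYNOMIAL', "Polynomial", ""),
               ('BROWN', "Brown-Conrady", ""),
               ('NUKE', "Nuke", "")],
    )
    principal_point: bpy.props.FloatVectorProperty(name="Principal Point", size=2)
    skew: bpy.props.FloatProperty(name="Skew")
    aspect_ratio: bpy.props.FloatProperty(name="Aspect Ratio", default=1.0)
    k_coeffs: bpy.props.FloatVectorProperty(name="k1..k4", size=4)
    p_coeffs: bpy.props.FloatVectorProperty(name="p1, p2", size=2)

# For prototyping the list would live on the scene (ToolSettings is not
# extensible from Python):
# bpy.utils.register_class(CameraProjectionEntry)
# bpy.types.Scene.camera_projection_entries = \
#     bpy.props.CollectionProperty(type=CameraProjectionEntry)
```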

About using movie clip distortion
This is a good idea. As I said, it is quite possible to just add a field to select a movie clip; the distortion parameters would then be taken from it. However, certain limitations arise here: the focal length and sensor width of the movie clip are separate parameters, so the distortion will differ even with identical distortion coefficients, because of the mismatch with the camera datablock's parameters.
I think @Sergey can clarify the feasibility of implementing this idea.

  • Maybe a better way is to leave this idea as a milestone; it is quite possible that it will be implemented in the near future?

About auto-switching cameras
Yes, indeed, when I first started developing this as an add-on, it seemed like a great idea to me. In itself, comparing directions is not an expensive operation, but in most cases it is assumed that when the camera changes, the image is also replaced, and that is not so simple: when working with large images (in our particular case, only with such images), loading them into memory took quite a long time.

  • Maybe a better way is to abandon this idea as part of the current project?

About other operators
The "Fill from Cameras" and "Bind Images" operators provide a production-ready level without any add-ons, since they automate filling the list and attaching images/videos.

(I have not yet updated the task description for this; I would be interested to hear the opinions of other developers as well.)

Author

I think it is worth showing right away the advantages and purpose of the current implementation, so far without the intended changes:
[Desktop 2020.10.20 - 15.26.32.03_Trim.mp4](https://archive.blender.org/developer/F9022279/Desktop_2020.10.20_-_15.26.32.03_Trim.mp4)


Be aware that design decisions need to be approved by reviewers and module owners. So you can of course implement it however you want, but it won't necessarily be accepted that way.

Also to be clear, @Jeroen-Bakker and @ideasman42 are listed as reviewers. But do they have time and did they agree to review the design and code? If they did then they should be involved in the design discussion. If not, they should be removed from the task description and then this project is still waiting for a committer to get involved.

In #81662#1037854, @brecht wrote:
Be aware that design decisions need to be approved by reviewers and module owners. So you can of course implement it however you want, but it won't necessarily be accepted that way.

Can you clarify this point? @ivpe definitely can't make his own decision on the final design. And if you know of any planned upcoming changes in this module, please share them with Ivan.
If someone is already working on refactoring this area of Blender and this code could interfere with that refactoring, we definitely need to bear those upcoming changes in mind so that this work remains usable with them too.
But if there are no plans for the near future, and no Blender Foundation coders assigned to these tasks, Ivan as an independent coder can only add the new feature with maximal compatibility with the existing code and the current Blender user experience.
Also, sorry, English is not native for us; the word "decision" just means an opinion that such a way is more usable. :)

"It was decided" sounded rather definitive, I wanted to clarify that reviewers might disagree with that decision, but you're aware of that so it's fine.

"It was decided" sounded rather definitive, I wanted to clarify that reviewers might disagree with that decision, but you're aware of that so it's fine.
Author

@brecht Yes, you are right. I slightly corrected the previous comment, as it was meant as a question of whether this would be a better solution than what exists now / what you suggested. Sometimes my knowledge of English is ...
I also invite @Jeroen-Bakker and @ideasman42 to discuss, of course, when you have free time.

Author

Added subscriber: @PabloDobarro

Author

Removed subscriber: @PabloDobarro

Added subscriber: @alisealy

Added subscribers: @SeanKennedy, @sebastian_k

I think the proper approach here would be to ensure a solid workflow is laid down. The issue with what is being discussed here comes down not only to the lack of a preview, but also to a rather cumbersome interaction between entities.

For example, a distortion model does not exist on its own. It is highly coupled to a specific camera and the configuration used when shooting the footage. Currently such coupling exists in the Movie Clip datablock, with the tools to estimate the distortion model in the Movie Clip Editor.

From this point of view, it seems logical to build a workflow based on the use of movie clip datablocks. This solves the problem of non-reusable parameters stored in the brush datablock, and the problem of adding parameters to the camera which are only usable for painting.

I am not sure why Libmv project has been tagged here. The proposal here is all about improving painting tools in Blender, which can and should be done without touching Libmv library.

Also, please be in contact with the VFX/tracking module artists, such as @sebastian_k and @hype. They can help build a nice solid workflow. And at this time I do not see a reason to poke developers in a specific area of code.

> For example, a distortion model does not exist on its own. It is highly coupled to a specific camera and the configuration used when shooting the footage. Currently such coupling exists in the Movie Clip datablock, with the tools to estimate the distortion model in the Movie Clip Editor.
>
> From this point of view, it seems logical to build a workflow based on the use of movie clip datablocks. This solves the problem of non-reusable parameters stored in the brush datablock, and the problem of adding parameters to the camera which are only usable for painting.

From the point of view of artists, matte painters, or asset creators, how are images that use a camera view as a projection (we could call this a Projector Projection, which is more like a Light with a color gel on it) related to a Movie Clip? Or are you talking only about the datablocks, not about the UX?

Author

@Sergey ,

> From this point of view, it seems logical to build a workflow based on the use of movie clip datablocks. This solves the problem of non-reusable parameters stored in the brush datablock, and the problem of adding parameters to the camera which are only usable for painting.

The MovieTrackingCamera settings store the pixel_aspect and sensor_width options. I think this is not entirely correct in the current context, as these parameters affect distortion. As you said, the distortion parameters are tightly tied to a specific camera, so I believe they belong either in a separate block (the previous implementation) or directly in the camera datablock (as suggested by @brecht ). I see good changes for estimating camera intrinsics in [D9294](https://developer.blender.org/D9294). But I also think it may be worth giving the user a choice (in terms of UX): use the distortion parameters stored in the camera datablock, or take them from the movie clip. What do you think about this?

Also Libmv was tagged because it is an important part of this project. And it is possible that it will also need some changes (for example, adding parameters for aspect ratio correction and skew).

In general, I followed the advice of @brecht, on the basis of which I made adjustments to the task design and updated the revision. The previous task design was meant to intersect as little as possible with other projects; now it seems to me that everything is in place (in short, see the updated "Engineer plan" and [D9291](https://developer.blender.org/D9291)). Here I would like to hear the opinion of @brecht.
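
To illustrate the point about focal length and sensor width: the distortion coefficients are applied to normalized coordinates, and the normalization depends on exactly those parameters, so identical coefficients can give different pixel displacements. A tiny standalone sketch with made-up numbers (illustrative only, not Blender code):

```c
#include <stdio.h>

/* Same radial coefficient, two different sensor widths: the distortion in
 * pixels differs because the normalization (focal length in pixels) differs. */
int main(void)
{
  const float image_width_px = 1920.0f;
  const float x_px = 1800.0f;  /* sample pixel */
  const float cx_px = 960.0f;  /* principal point */
  const float k1 = -0.1f;      /* identical radial coefficient in both cases */

  /* Case A: 35 mm lens on a 36 mm wide sensor. */
  const float focal_a_px = 35.0f / 36.0f * image_width_px;
  /* Case B: same lens, but sensor width set to 32 mm. */
  const float focal_b_px = 35.0f / 32.0f * image_width_px;

  const float xa = (x_px - cx_px) / focal_a_px;
  const float xb = (x_px - cx_px) / focal_b_px;

  /* Purely radial model for illustration: x' = x * (1 + k1 * r^2), with y = 0. */
  printf("displacement A: %f px\n", (xa * (1.0f + k1 * xa * xa) - xa) * focal_a_px);
  printf("displacement B: %f px\n", (xb * (1.0f + k1 * xb * xb) - xb) * focal_b_px);
  return 0;
}
```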

Ivan Perevala removed their assignment 2020-10-31 11:51:28 +01:00
Vlad Kuzmin was assigned by Ivan Perevala 2020-10-31 11:51:28 +01:00

First of all, thanks for the very detailed and thorough description.
I am also very happy that you want to improve the projection painting workflow. It is very much needed!

As an artist I can only really comment on the VFX and motion tracking part of things. I have never worked with photogrammetry, so I have no expertise there. But your suggestions seem reasonable.
So when I describe some thoughts about workflow and UI below, I do it only from the perspective of motion tracking / cleanplate creation / mattepainting.

As you have correctly pointed out, one of the main problems when it comes to projection painting currently in Blender is indeed the need for rather dense geometry for the UV-project modifier.
But distortion is a bit problematic too.

The cases where I need projection painting in a VFX workflow are mostly when I need to create a cleanplate.
First I would track the shot and set up the scene. This usually includes setting up the camera and the Background Images overlay.
And I will insert 3D geometry that matches all necessary objects in the clip.
If the footage is heavily distorted, I have to use the undistortion workflow, which will undistort the footage on the fly (I could also use proxies to speed up playback). Depending on what end result I need, I can set up the composite so that it gives me undistorted renderings, where the CG is rendered as is but the footage will be undistorted, or so that the footage is left distorted and the CG is distorted to match.
In both cases I will have to work with undistorted footage in the viewport, so that the CG elements match the footage. And yes, the projection that is used to paint will have to be undistorted too, otherwise it will not match! As far as I know this is currently only possible by first rendering the undistorted footage to disk.

So your initiative could definitely solve some problems!
Here are some thoughts about the UI from a motion tracking perspective:

  • In the cleanplate creation workflow with a tracked shot, there is usually just one camera present, because as of now we do not have multicam solving, or a way to link multiple tracked cameras. Sure, you can manually line up several tracked cameras, so projecting from multiple tracked cameras to one object can be possible, but afaik this is not really reliable. So usually there's just 1 camera in the scene for projecting.
  • Usually this camera will have the Background Images overlay enabled with 0.5 opacity, in order to check if the track was successful. So when I start painting, I will probably already be able to see where and what I would paint. Having the brush texture overlay on top of the camera Background Images could be really nice actually. However, it would have to be undistorted!
  • Mostly I think I would paint onto the geometry from the camera view, since that would be my main focus. However, it could be quite nice to rotate out of camera view and paint while orbiting around the 3d geometry. In that case, an option to project a preview of the footage onto the mesh could be very handy, in addition to the brush preview itself.
  • Obviously the projection brush would have to use the movie clip as input, and sync the active frame to the scene. So you can for example paint out some people in a clip, by changing between frames, like here: https://www.youtube.com/watch?v=s_MXkTP73CI&feature=youtu.be&t=2099
  • It would be nice to be able to use the clip directly as brush input. This would get rid of the need to setup an image datablock and adjust all frame and offset settings so that it matches the clip. Also, a movie clip already has all distortion parameters.

So these are my thoughts. I think you already mentioned some or most of them in the discussion, but writing this down made it also a bit clearer to myself. :)
If you have questions, let me know.

Author

@sebastian_k Thank you for your response and the description of your VFX workflow!

So, as I compare and summarize your current workflow as a VFX artist with the photogrammetry workflow, we have some collisions.

  • For now, camera intrinsics (focal length, principal point, lens distortion parameters, etc.) are stored per movie clip, and only for movie clips. As a result, these parameters can be reused only in relation to a specific movie clip (for example, in the compositor node "Movie Distortion" and the camera background image). But we need a way to get more opportunities to reuse this data. In terms of the current project: for distortion of background images as well as movies.
  • VFX scenes already have background movie clips set on cameras. This can be used for any workflow as well, I think. In terms of the current project: to store an image or movie clip relative to a specific camera. But what if we have more than one background image on a single camera? In that case the background images/movies are blended. The old principle "you get what you see" should work here too. So how can we handle this? In my opinion, the simple way is to remove the list of background images and leave only one. As I remember, in older releases (< 2.8x) this was a list so it could be used from different view angles (left, right, top, camera). So I think this is also a broader Blender design question.
  • Using a movie clip as brush input is possible through the "Draw" brush texture, but currently it cannot really be used because we are missing a texture mapping mode for it.

@sebastian_k what do you think about these suggestions?

@Sergey , you are the "VFX & Video" module owner, so I want to hear your suggestion about the place where camera intrinsics are stored. In my opinion as a coder, the intrinsics data should be more Camera-related. I mean that all the lens distortion API and the actual container for that data should live in DNA_camera_types.h and BKE_camera.h. This way we would at least get the opportunity to use the same data in different IDs (in Image as well as in MovieClip). From the user experience side, it should not change anything about motion tracking. Or maybe it is possible to make more major changes and create a separate ID for the camera intrinsics block?
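
Purely as an illustration of what such a camera-level distortion API could look like (all names below are hypothetical; nothing like this exists in BKE_camera.h today):

```c
/* Hypothetical API sketch, not existing Blender code. The idea is that both
 * Image- and MovieClip-based workflows could call the same camera-level
 * distortion routines. */

struct Camera;

typedef struct CameraDistortionContext CameraDistortionContext;

/* Build a (cacheable) distortion context from the intrinsics stored on the camera. */
CameraDistortionContext *BKE_camera_distortion_context_create(const struct Camera *camera,
                                                              int image_width,
                                                              int image_height);

/* Map between undistorted and distorted pixel coordinates. */
void BKE_camera_distortion_apply(const CameraDistortionContext *ctx,
                                 const float co_undistorted[2],
                                 float r_co_distorted[2]);
void BKE_camera_distortion_invert(const CameraDistortionContext *ctx,
                                  const float co_distorted[2],
                                  float r_co_undistorted[2]);

void BKE_camera_distortion_context_free(CameraDistortionContext *ctx);
```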

@sebastian_k @ivpe @Sergey since we are touching Movie Clips: video can be captured using zoom lenses, as well as with optical or sensor stabilization, which means the lens intrinsics can change per frame. And even if the current tracker code does not allow tracking such footage, external trackers, especially photogrammetry-based ones (including open-source tools), can provide such data, which could be imported into Blender in a Movie Clip. How, in that case, can we store per-frame camera intrinsics in the future?
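
As a sketch of one possible answer (hypothetical names, not existing Blender data): intrinsics could be stored as a small per-frame table next to the camera or clip and looked up by frame, falling back to the static values when no sample exists:

```c
/* Hypothetical sketch: per-frame intrinsics samples, looked up by frame. */
typedef struct CameraIntrinsicsSample {
  int frame;
  float focal_length_mm;
  float principal[2];
  float k1, k2, k3;
} CameraIntrinsicsSample;

typedef struct CameraIntrinsicsAnim {
  CameraIntrinsicsSample *samples; /* sorted by frame */
  int samples_num;
} CameraIntrinsicsAnim;

/* Return the sample for `frame`, or the closest earlier one (NULL if none). */
static const CameraIntrinsicsSample *camera_intrinsics_sample_find(
    const CameraIntrinsicsAnim *anim, int frame)
{
  const CameraIntrinsicsSample *found = NULL;
  for (int i = 0; i < anim->samples_num; i++) {
    if (anim->samples[i].frame <= frame) {
      found = &anim->samples[i];
    }
    else {
      break;
    }
  }
  return found;
}
```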

Author

I updated open questions in task description according to @sebastian_k and @ssh4 posts.

In practice I think it is rather rare to have multiple background images in the context I described. At least I never did. And even if I would load a second background image, maybe as a reference or so, I'd probably only show one.
But either way, I am not sure if we should actually use the Background Image as paint input. As a way to preview the paint input I don't see a practical problem with Background Images, even if more than one are present.
About the technical implementation I have no idea, that would be something more for @Sergey .
Maybe @SeanKennedy has an idea as well?

Added subscriber: @GeorgiaPacific

Added subscriber: @AndyCuccaro

Added subscriber: @Gadas

Added subscriber: @ClemensRudolph

Added subscriber: @Jaydead

Added subscriber: @ckohl_art

Added subscriber: @JerBot

Added subscriber: @RobN

In #81662#1047681, @sebastian_k wrote:
In practice I think it is rather rare to have multiple background images in the context I described. At least I never did. And even if I would load a second background image, maybe as a reference or so, I'd probably only show one.
But either way, I am not sure if we should actually use the Background Image as paint input. As a way to preview the paint input I don't see a practical problem with Background Images, even if more than one are present.
About the technical implementation I have no idea, that would be something more for @Sergey .
Maybe @SeanKennedy has an idea as well?

I have used multiple images for projection. Example : Frame 1 and Frame 200 have different angles from the tracked camera. Faces visible on frame 200 are different than on frame 1. So I want to use frame 200 image to paint from the frame 200 camera. Same with frame 1 camera/image. This allows me to blend images together on to the geometry.

I am describing how I do my workflow in Nuke.

Rob

In #81662#1108669, @RobN wrote:
I have used multiple images for projection. Example : Frame 1 and Frame 200 have different angles from the tracked camera. Faces visible on frame 200 are different than on frame 1. So I want to use frame 200 image to paint from the frame 200 camera. Same with frame 1 camera/image. This allows me to blend images together on to the geometry.

Sure, I do that as well, but this is not the issue here. In terms of datablocks, technically it is still the same Background Image on frame 1 and 200, e.g. a movie or image sequence, just a different frame.

In #81662#1108797, @sebastian_k wrote:
Sure, I do that as well, but this is not the issue here. In terms of datablocks, technically it is still the same Background Image on frame 1 and 200, e.g. a movie or image sequence, just a different frame.

Totally agree.
I'd love to see this tool integrated into Blender.
Should I contribute a bit more to my Blender Cloud subscription? :)

In #81662#1108879, @RobN wrote:

Should I contribute a bit more to my Blender Cloud subscription? :)

Hehe, cool, though I think feedback and testing helps much more. Especially if it comes from persons with some rather solid experience. ;)

I’ll happily test this out as soon as an implementation is done.
I can even provide videos on the way this works in other software and then how I’d like to see the workflow improved. There are ways to make it better for sure.

Author

@RobN,

> I'll happily test this out as soon as an implementation is done.

The implementation will be completed once we finally find a consensus on its design. A lot of people want this tool, and I wish that one day I could bring it to life.
But there are still many unresolved questions (see the related task: D9291). So, if you want and can, you can always share your vision of this project :)

In #81662#1110056, @ivpe wrote:
The implementation will be completed once we finally find a consensus on its design. A lot of people want this tool, and I wish that one day I could bring it to life.
But there are still many unresolved questions (see the related task: D9291). So, if you want and can, you can always share your vision of this project :)

Would it be beneficial to see how the workflow goes in Nuke?
I can make a video about that for you if that's beneficial.

> Would it be beneficial to see how the workflow goes in Nuke?
> I can make a video about that for you if that's beneficial.

Based on Ton's last post, we should avoid mentioning commercial software in task designs. And we should be extremely careful with the software you mentioned; that company has an awful history of protecting its own software.

At the same time, Blender has solid UI and UX, and any tool implementation must follow the workflows already used in Blender. Paint mode in Blender is quite straightforward; it is just missing native camera projection support.

And technically speaking, camera projection is only a matter of correct camera placement and camera and lens intrinsics. Such data just needs to be accessible per camera, per frame, or both. We already added proper Brown-Conrady lens distortion support to Blender; now only the projection is missing.
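
To make that concrete, a minimal standalone sketch of projecting a camera-space point into distorted pixel coordinates with a Brown-Conrady model (illustrative only; a real implementation would of course reuse the existing camera/tracking code):

```c
#include <stdio.h>

/* Pinhole projection followed by Brown-Conrady distortion. The point is
 * assumed to already be in camera space (camera looking down -Z). */
static void project_point_distorted(const float pt_camera[3],
                                    float focal_px, float cx_px, float cy_px,
                                    float k1, float k2, float k3,
                                    float p1, float p2,
                                    float r_pixel[2])
{
  /* Perspective divide -> normalized image coordinates. */
  const float x = pt_camera[0] / -pt_camera[2];
  const float y = pt_camera[1] / -pt_camera[2];

  const float r2 = x * x + y * y;
  const float radial = 1.0f + r2 * (k1 + r2 * (k2 + r2 * k3));

  /* Radial plus tangential terms of the Brown-Conrady model. */
  const float xd = x * radial + 2.0f * p1 * x * y + p2 * (r2 + 2.0f * x * x);
  const float yd = y * radial + p1 * (r2 + 2.0f * y * y) + 2.0f * p2 * x * y;

  /* Back to pixel coordinates. */
  r_pixel[0] = cx_px + xd * focal_px;
  r_pixel[1] = cy_px + yd * focal_px;
}

int main(void)
{
  const float pt[3] = {0.5f, 0.25f, -2.0f};
  float pixel[2];
  project_point_distorted(pt, 1820.0f, 960.0f, 540.0f,
                          -0.1f, 0.01f, 0.0f, 0.0f, 0.0f, pixel);
  printf("projected: %.1f, %.1f px\n", pixel[0], pixel[1]);
  return 0;
}
```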

In #81662#1110823, @ssh4 wrote:

Would it be beneficial to see how the workflow goes in Nuke?

Based on Ton's last post, we should avoid mentioning commercial software in task designs. And we should be extremely careful with the software you mentioned; that company has an awful history of protecting its own software.

At the same time, Blender has solid UI and UX, and any tool implementation must follow the workflows already used in Blender. Paint mode in Blender is quite straightforward; it is just missing native camera projection support.

And technically speaking, camera projection is only a matter of correct camera placement and camera and lens intrinsics. Such data just needs to be accessible per camera, per frame, or both. We already added proper Brown-Conrady lens distortion support to Blender; now only the projection is missing.

Certainly. I would expect this modifier/node to have a camera input and then an image (or sequence) input similar to the UV Project node but it would know all the data from the camera and the user would not have to enter aspect ratio, as that would be assumed from camera input.

Ideally you could put in a sequence and specify a frame to lock the projection from so then the user could use a second modifier/node for a secondary projection, and then use paint to create a blend between the two.
There should be an option for where to project : all polys (back and front) / front facing only.

Something that would be very handy is to find a way to use the camera as a "light" of sorts and create a shadow for a mask. That mask could be used in the secondary projection modifier/node so that the user could fill in the empty areas left by a different camera angle. I realize some kind of baking step would/could be needed in this case but it would be pretty powerful.

Additionally, there should be an option so that the projection "sticks" so in case the object receiving the projection moves, the projection would move WITH it, versus THROUGH it. I'd call it sticky projection.

hope that helps
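
For the "front facing only" option, the usual test is just the sign of the dot product between the face normal and the direction towards the camera. A small sketch (illustrative only, not Blender code):

```c
/* Decide whether a face should receive the projection, based on the angle
 * between the face normal and the direction towards the camera. All vectors
 * are assumed to be in the same (e.g. world) space. */
static int face_receives_projection(const float face_normal[3],
                                    const float face_center[3],
                                    const float camera_location[3])
{
  float to_camera[3];
  for (int i = 0; i < 3; i++) {
    to_camera[i] = camera_location[i] - face_center[i];
  }
  /* Front facing when the normal points towards the camera. */
  const float d = face_normal[0] * to_camera[0] +
                  face_normal[1] * to_camera[1] +
                  face_normal[2] * to_camera[2];
  return d > 0.0f;
}
```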

Added subscriber: @eoyilmaz

Added subscriber: @Eliot-Mack

Added subscriber: @greg-12

Added subscriber: @JulianR
Julien Kaspar added this to the Sculpt, Paint & Texture project 2023-02-08 10:48:53 +01:00
Philipp Oeser removed the Interest: Sculpt, Paint & Texture label 2023-02-10 09:12:17 +01:00
Member

I am removing the Needs Triage label. This is under the general rule that Design and TODO tasks should not have a status.

If you believe this task is no longer relevant, feel free to close it.

Alaska removed the Status: Needs Triage label 2024-04-07 05:52:24 +02:00