Camera projection mapping for brush texture #81662
Open
opened 2020-10-12 22:17:31 +02:00 by Ivan Perevala
54 comments
Status: Task design and engineer plan review
Team
Code Review: @Jeroen-Bakker @ideasman42
Project leader: @ivpe
Project members: @ssh4
Preamble
The goal of this project is to add code support and tools for Camera Projection Painting to Blender.
Such workflows have been used in 3D CG since its early days for matte painting, integration of 3D with video, etc.
In all these workflows, the most logical UX is a projection preview as seen from the camera, plus a projection preview overlaid on the 3D model or scene that is visible from any viewpoint.
While matte painting is one of the oldest workflows, recent years have brought huge progress in 3D scanning/photogrammetry methods, which are now a core part of the 3D asset and scene creation process. At the same time, many artists use photo-modeling workflows with photo or video data as the source for textures.
Again, the most logical and fastest way to work on textures for such models is a 3D paint feature directly in the 3D application, instead of using external 3D painters or 2D editors to work on UV-unwrapped textures.
Who can use it
This project improves the UX for 3D asset creators, photogrammetrists (3D scanner operators) and VFX/matte painters.
For each of these groups, we compiled a list of the current limitations and propose improvements for a wide range of 3D CG users.
3D asset creators are probably the biggest group. The standard photo-modeling workflow uses photos or images as backgrounds. The final model is UV-unwrapped in a human-readable form, and textures are made with the standard 2D editor or texture painter workflows using layers, masks, etc.
A paint tool that can project an image or photo from a specific camera view can speed up mock-up iterations with clients and art/creative directors, thanks to a real-time preview of the result in 3D. At the same time, using real photos, especially for architectural models, limits the possible image sources: every photo has lens distortion, so extra attention is needed to use only high-quality professional photo equipment, or to undistort all sources before using them for texturing. When video is used, a lot of additional disk space is needed to store that temporary data.
If the 3D assets are low-poly, this additionally requires creating higher-poly "proxy" objects, because the current Blender UV Project gives visible texture distortions on low-poly meshes or meshes with n-gons.
3D sculptors are another large group of content creators. While in the early CG days these were mostly artists working on organic shapes or characters, nowadays many artists also do hard-surface modeling in 3D sculpting editors.
This group also needs a way to project photo data onto textures and to use the paint editor in the same application, speeding up sketching and work-in-progress iterations.
Models made in these workflows can have complicated, non-optimized UVs while the work is in progress, and external 2D editors or texture painters are not usable until the model is retopologized and UV-unwrapped in human-readable form.
The current Blender UV projection has fewer issues with such meshes, but the polygon count already has a big impact on the CPU time required: computing a new UV projection from a new camera/view for a 5-10 million poly mesh is already far from real-time.
This user group also needs to work with photo or video sources, and lens distortion in the source data again requires the use of undistorted images.
Photogrammetrists (3D scanner operators) are one of the fastest-growing groups of 3D content creators in recent years. They work with very high poly count meshes, and their post-processing of scan data is close to 3D sculpting, with one important difference: 3D scans are often monolithic meshes, or meshes split into chunks. In the middle of the process these meshes have completely non-optimized UVs with thousands of UV islands, produced by UV parametrization algorithms optimized for speed and for meshes with millions or even hundreds of millions of polygons.
Photogrammetry, for all its power, is limited by its algorithms due to the huge amount of data involved in creating 3D meshes and their textures. The user does not always have an easy way to control which source is used for the texture in a specific part of the model, and the resulting texture can end up using data from an image (camera) with worse depth of field or changed environment lighting.
Many clients of 3D scanning studios prefer to work with raw meshes and original textures, to be sure that every possible detail from the scan is used in the most efficient way for their project.
So photogrammetrists need a fast way to quickly fix a raw mesh and raw texture from a specific image (a specific camera) with better detail, correct lighting, etc.
As with 3D sculptors, the mesh polygon count is more than enough to avoid UV projection deformations in textures, but around 10-20 million polygons in a raw mesh is not a rare case for a high-quality, high-resolution 3D scan.
One of the biggest differences from the other groups is the number of images (cameras) used for model and texture creation: 50-60 images/cameras is an average small photogrammetry scan.
Big or precise scans can easily have 1,000 or even 10,000 images at 8K or higher resolution.
The good news is that using all of these images/cameras for the final texture is not mandatory, but a workflow that allows quickly switching between images/cameras is a big time saver.
Since the source for the 3D model and texture is real photos, lens distortion is always present. Other groups can probably live with small distortions, but photogrammetry, due to its high precision, requires mathematically correct lens distortion models, which allow a precise copy of the source texture onto scanned objects at sub-millimeter resolution.
The image count and image sizes involved would require a lot of disk space for any additional temporary files such as undistorted images.
So support for lens undistortion during projection painting is a must-have feature.
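For reference, the kind of "mathematically correct" model meant here is e.g. Brown-Conrady. A minimal sketch of applying it to normalized image coordinates (illustrative only, not Blender's actual tracking code):

```python
def brown_conrady_distort(x, y, k1, k2, k3, p1, p2):
    """Map an undistorted normalized image coordinate (relative to the
    principal point) to its distorted position."""
    r2 = x * x + y * y
    # Radial term (k1..k3) plus tangential term (p1, p2).
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```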
VFX and matte painting artists can combine the workflows and issues of all the groups above. In addition, this group often works with video sources, which can mean hundreds or thousands of frames that also have real lens distortion, or even rolling shutter issues (visible as a skew deformation of the image).
Their 3D models can be a combination of low-poly models, blocks, or n-gon meshes used to quickly build the different distance planes of a scene.
And all of this, during production or sketching, requires quick integration with projection mapping painting.
Current Blender workflow limitations and issues
Summarizing the above limitations of Blender:
Blender currently does have a way to do camera projection painting, using the Clone brush together with the UV Project modifier. This method is quite labor-intensive from a user experience point of view: it requires choosing many options even just to change the camera the projection acts from, which negatively affects workflow speed. The main disadvantage of this method is the requirement on geometry density: low-poly models (for example, after retopology) show unwanted distortions while painting. Another extra step in this process is exporting undistorted images/movie clips, since datasets can be large enough that this creates a lot of unnecessary files. It would also be desirable to have control over brightness/contrast, etc., which this method does not allow.
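For reference, a minimal bpy sketch of that current setup (object and camera names are placeholders); every value here has to be chosen by hand, which is part of the problem:

```python
import bpy

obj = bpy.data.objects["Mesh"]        # hypothetical target mesh
cam = bpy.data.objects["Camera.001"]  # camera to project from

mod = obj.modifiers.new(name="CamProject", type='UV_PROJECT')
mod.uv_layer = "UVMap"                # UV layer that receives the projection
mod.projector_count = 1
mod.projectors[0].object = cam
# The aspect must be matched to the image by hand, one of the pain points:
mod.aspect_x = 16.0
mod.aspect_y = 9.0
```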
User workflow expectations
The ultimate goal of this project is to build a workflow that eliminates these shortcomings:
Scene setup
We assume the user has a scene with mesh objects, either created by themselves or imported from third-party software, plus a number of cameras and the original images/movie clips.
Canvas texture correction/painting process
We assume the user is in Texture Paint Mode.
More globally, a major refactoring of the Texture Paint module is needed (like #73935): all users expect a smooth workflow with high-poly objects and high-resolution images/movie clips.
Another important part is giving the user a visual representation in the viewport of exactly how the texture will project onto the object and from which camera the projection acts (the most obvious choice is to project from the active camera of the scene).
Suggestions
Only suggestions are described here; each of them can be an open question.
All this but more visually
Previously, we (@ssh4, @ivpe) built this as an addon. It was more photogrammetry-oriented, so the UI/UX should differ in some cases. This short video shows the basic principles of the workflow with a large dataset of raw (distorted) images, and the viewport preview. A non-commercial scene is used.
Camera Projection Mapping.mov
Work plan
Open questions
Open questions regarding the suggestions below. Everyone can have their own opinion, so we need a general solution approved by everyone; therefore I leave a link to each original post as well as its summary. After the design document update, some suggestions were included in it and some are still open:
Original post by @brecht, @Sergey (https://developer.blender.org/T81662#1038426): Where should the distortion parameters of images/movie clips be stored, as well as the relationship between a specific camera and a specific image, given the current Movie Tracking design, and in case that design is modified into something more unified?
Original post by @brecht: If the camera stores the image or movie clip as a background image, should we have an option, enabled by default, to automatically switch the brush texture image/movie? Or should it be the only way?
Original post by @sebastian_k: Is the camera background image enough for a visual preview, or should the preview be shader-based?
Original post by @sebastian_k: VFX scenes already have background movie clips set on cameras, and I think this can be used for any workflow: in terms of the current project, store the image or movie clip relative to a specific camera. But what if there is more than one background image on a single camera? In that case the background images/movies are blended, and the old principle "you get what you see" should hold here too. So how do we handle this? In my opinion the simple way is to remove the list of background images and leave only one. As I remember, in older releases (< 2.8x) this was a list so it could be used from different view angles (left, right, top, camera). So I think this is also a broader Blender design question.
Original post by @ssh4: Should camera intrinsics be animatable, to support video captured with zoom lenses or with optical/sensor stabilization?
Refactoring of the Texture Paint mode: is it planned, and is it worth counting on help with its implementation?
...
Relevant links:
Some YouTube research on the current limitations:
CGMatter: Blender 2.8 Panorama stitching projection painting (part 2)
CGMatter: 2D To 3D
CGMatter: Realistic 3D? EASY
IanHubert: Wild Tricks for Greenscreen in Blender
Previous implementation using UV-projection method in separate boost-module:
Camera projection painter addon
Industry-compatible lens distortion models:
Brown-Conrady, Nuke (https://developer.blender.org/D7484)
Added subscribers: @ivpe, @ssh4, @ideasman42
Added subscriber: @SteffenD
Added subscriber: @jvanko
Added subscriber: @lichtwerk
Added subscriber: @slumber
Added subscriber: @Jeroen-Bakker
A clear description of the basic principles of camera projection implementation using the simplest examples:
https://youtu.be/s93T72LeRv0
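For reference alongside the video, the core projection is already exposed to Blender's Python API. A minimal sketch (camera and point are hypothetical) of where a world-space point lands in the camera frame; lens distortion is not applied here:

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view
from mathutils import Vector

scene = bpy.context.scene
cam = bpy.data.objects["Camera"]   # hypothetical camera object
point = Vector((1.0, 2.0, 0.5))    # hypothetical world-space point

# Returns normalized frame coordinates: x/y in [0, 1] inside the camera
# frame, z = distance in front of the camera.
co = world_to_camera_view(scene, cam, point)
res_x = scene.render.resolution_x
res_y = scene.render.resolution_y
print("pixel:", co.x * res_x, co.y * res_y, "depth:", co.z)
```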
Added subscriber: @weirdmonkey2807
Added subscribers: @Sergey, @brecht
In terms of usability, from what I can tell from the video there is no way to preview the projection before you paint, which makes it hard to set it up correctly. And users need to manually sync between brush data and scene or motion tracking data.
I also don't think all this data really belongs in a brush datablock. Especially as we make more changes so that brushes become part of asset management and are shared between .blend files, it makes little sense to store scene-specific camera parameters in them.
Have you considered using camera background images instead? These already come with a preview before you paint, and provide a way to pull in motion tracked clips and their distortion parameters.
Besides cameras, even image objects could be considered, as users will often already have set those up as reference images in the right position.
And besides manually switching between the appropriate projector, it could support automatically picking the camera (or image object) depending on what aligns closest to the viewpoint in the viewport.
CC @sergey.
@brecht
Yes, I think the camera's background image would be a good preview option. Ideally, of course, it would be implemented as in the addon that serves as the prototype for this development (I will keep referring to it): there the brush preview was implemented with shaders and exactly matched the result, not only when viewed from the camera. However, I looked at the current brush preview implementation, and getting the same result would need serious refactoring.
I can't say with certainty which datablock this data should belong to, taking the proposed changes into account. However, it is important information. Perhaps the list of these parameters should be kept in the scene's tool settings?
Yes, I am considering the idea of adding the same Clip field as in the Movie Distortion node, in addition to the existing block of distortion parameters. The idea is attractive because it creates a small ecosystem: for example, you can estimate the distortion parameters with the solver in tracking and then use them to distort the image/video of the brush texture.
This feature existed in the addon from the earliest versions: the viewer direction was compared with the camera directions and the closest co-directional camera was chosen. However, we had to abandon the idea because, in practice, nobody used it. It was replaced by simply sorting the list by camera direction in space, but that method is quite narrowly targeted at photogrammetrists who use a camera rig.
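For illustration, the co-directional comparison is only a few lines with today's Python API (a sketch, not the addon's actual code):

```python
import bpy
from mathutils import Vector

def closest_codirectional_camera(view_dir):
    """Pick the scene camera whose viewing direction best matches view_dir."""
    best, best_dot = None, -1.0
    for ob in bpy.context.scene.objects:
        if ob.type != 'CAMERA':
            continue
        # A camera looks down its local -Z axis.
        cam_dir = ob.matrix_world.to_quaternion() @ Vector((0.0, 0.0, -1.0))
        d = cam_dir.normalized().dot(view_dir.normalized())
        if d > best_dot:
            best, best_dot = ob, d
    return best
```

The viewport direction itself would come from the 3D view's `region_3d.view_rotation @ Vector((0, 0, -1))`.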
@brecht
For now, camera projection is definitely not very usable without a preview.
And @ivpe has this as Milestone 3: Preview.
The only point is that camera projection can be used without a preview, while a preview without camera projection makes no sense, so this is purely a matter of priority.
And now that the code for camera projection is working, we can polish the UX for this tool.
Regarding a deeper connection with the tracker:
I expect at least three main use cases for this feature:
• Image-based asset texture painting (all those video tutorials on painting a texture from a photo onto a modeled 3D object)
• Texture painting of photogrammetry / 3D-scanned objects (from facial and full-body scans to environment or architecture scans)
• Matte painting (a variation of point 1, but mainly from video sources)
Based on this, only the video sources give a reason to use the tracker.
Case 1 may involve 1, 2, or 5 images of the desired object (as few as the artist can find), quite possibly images from the internet that differ completely in scale, size and resolution. The tracker as a "source" for such painting data looks confusingly illogical.
The situation is the same with "perfect" datasets from photogrammetry multi-camera rigs.
From a photogrammetry point of view, the source is a camera image/camera projection.
And any camera/lens has intrinsics that define an exhaustive model of this projection.
And we have 3 types of users:
• 3D artist
• 3D scanner
• Video producer
A 3D artist may not even know how important lens distortion/undistortion is in camera projection; the results can still be good enough for their tasks.
Photogrammetrists (3D scanner operators) understand perfectly that every real camera and lens has distortion, and that a precise scan is impossible without removing lens distortion.
A video producer has video; they care more about a good camera track, and they know they need to apply good lens undistortion before painting a texture onto the default cube.
To be honest, I have an idea for the Blender compositor: an additional Lens Distortion node that does everything the Lens Distortion module in the tracker does, but lets users choose the desired lens distortion model and input the distortion coefficients right in the node. The current undistortion node would then become a tracker node that can be plugged into this Lens Distortion node as a source of distortion coefficients estimated by the Blender tracker.
Or go even further: since Blender has a Camera object and this object has lens parameters, it should also have the lens distortions there, in the camera. But I am not sure such deep code refactoring is easy for a single coder :(
Regarding the idea of automatically switching the projector depending on the view:
When we just started making our Camera Projection Painter addon, @ivpe had the same idea. But imagine the use case where a user loads a 3D model with 240 cameras (from a multi-camera rig) and all they really need is to paint the existing texture from one or two specific cameras.
A lot of rigs have dedicated cameras looking at the face, but those can be cameras mounted quite far from the center using narrow lenses. Automatic switching looks attractive at first, but everyone will disable it as soon as they find that it is not only CPU-intensive but also quite annoying when you need to control your process.
Plus, the main idea of projection painting is to allow painting from any orientation, not only from the camera through the background image.
I have an idea for a workflow using the Sculpt Pose brush and camera projection to paint the texture of human body parts using images made with any camera the 3D artist has.

For example, an iPhone camera.
This is one camera and one lens, and it has distortions.
First, the user pre-calibrates the camera/lens in any photogrammetry software to compute the lens distortions specific to their camera. This single set of parameters can then be used for all images made with that camera.
Next, they load a couple of photos, for example of a hand.
Then, using the camera projection preview, they rotate the object and pose the palm to match the 3D hand with the UVs and the photo.
Next, they paint part of the texture, attach another image, rotate, orient the fingers to the second image, paint, and so on.
But the Sculpt Pose brush has limitations, and there is no lens distortion preview yet, so my first attempt was not that great. And only one camera was used here.
Still, this workflow clearly shows that the camera, the image and the lens distortions are independent entities, and they should stay independent to give 3D artists more freedom.
Did it show a preview of the image that would be painted? If you can't easily tell which image you're about to paint, then I imagine manual switching is preferable.
This assumes the only part of photogrammetry done in Blender is about texture painting, but if add-ons or future builtin features do more it will be limiting to make it texture painting specific.
For native Blender features, we don't use the easiest solution, but try to think more long term.
Then you would select just those 2 cameras for texture painting (perhaps by specifying a collection)? It doesn't have to use all cameras in the scene.
It's not CPU intensive.
Automatic does not have to be the only way of doing things, but it could be a reasonable default.
@brecht I got your point; I will now do some refactoring of the code and update the task description (looks like from scratch))
Removed subscriber: @weirdmonkey2807
@brecht ,
@ssh4 and I have thoroughly discussed the possibilities you proposed.
About the use of background camera images.
To display the brush texture in the viewport there is already the "Cursor > Texture Override Overlay" option. Since this project is essentially an addition to the existing (3D) texture painting mode, it would be logical to see the preview there rather than in the camera's background image. In addition, background images are a list, which can give a false idea of how painting will happen (for example, when there are several images). Also, looking at the code, background images do not support image distortion, only movie clips.
About the storage location of the distortion parameters
Storing the camera intrinsics (Principal Point, Aspect Ratio, Skew, Distortion Model (enum), k1, k2, k3, k4, p1, p2, per distortion model) in struct Camera would at first glance seem logical, but this data would only be used by this project, so from the point of view of other projects it looks more like garbage data. The idea of moving the list from Brush to ToolSettings, however, is great, both in view of the proposed changes in brush management and in terms of user experience, since this way the list of blocks becomes available to any brush.
About using movie clip distortion
This is a good idea. As I said, it's quite possible to just add a field to select a movie clip, and the distortion parameters would be taken from it. However, certain limitations arise here: the focal distance and sensor width of a movie clip are separate parameters, so the distortion will differ even with identical distortion coefficients, due to the discrepancy with the camera datablock's parameters.
I think @Sergey can clarify whether this idea can be implemented.
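To make the ToolSettings idea above concrete, here is a minimal addon-style sketch (all names hypothetical). Python add-ons cannot extend ToolSettings directly, so this prototype hangs the collection off the Scene; in C the data could live in ToolSettings as proposed:

```python
import bpy

class CameraIntrinsics(bpy.types.PropertyGroup):
    # Per-camera lens parameters, mirroring the list discussed above.
    camera: bpy.props.PointerProperty(type=bpy.types.Object, name="Camera")
    principal_point: bpy.props.FloatVectorProperty(size=2, default=(0.5, 0.5))
    skew: bpy.props.FloatProperty(default=0.0)
    k1: bpy.props.FloatProperty(default=0.0)
    k2: bpy.props.FloatProperty(default=0.0)
    k3: bpy.props.FloatProperty(default=0.0)
    k4: bpy.props.FloatProperty(default=0.0)
    p1: bpy.props.FloatProperty(default=0.0)
    p2: bpy.props.FloatProperty(default=0.0)

def register():
    bpy.utils.register_class(CameraIntrinsics)
    # Python add-ons cannot add properties to ToolSettings, so the
    # prototype stores the list on the Scene instead.
    bpy.types.Scene.cpp_intrinsics = bpy.props.CollectionProperty(
        type=CameraIntrinsics)
```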
About auto-switching cameras
Yes, indeed, when I first started developing this as an add-on, it seemed like a great idea to me. In itself the direction-comparison operation is not expensive, but in most cases switching the camera also means replacing the image, and that is not so simple: when working with large images (and in our particular case, only with such images), loading one into memory took quite a long time.
About other operators
"Fill from Cameras" and "Bind Images" ops provide a production ready level without any add-ons, since they allow you to automate filling the list and attaching images / videos
(I have not yet updated the description of the task for this, I would be interested to hear also the opinion of other developers)
I think it is worth immediately showing the pros and purpose of the current implementation, so far without the intended changes
Desktop 2020.10.20 - 15.26.32.03_Trim.mp4
Be aware that design decisions need to be approved by reviewers and module owners. You can of course implement it however you want, but it won't necessarily be accepted that way.
Also to be clear, @Jeroen-Bakker and @ideasman42 are listed as reviewers. But do they have time and did they agree to review the design and code? If they did then they should be involved in the design discussion. If not, they should be removed from the task description and then this project is still waiting for a committer to get involved.
Can you clarify this point? @ivpe definitely can't make the final design decision on his own. And if you know of any upcoming changes planned for this module, please share them with Ivan.
If someone is already working on refactoring this area of Blender and this code could interfere with that refactoring, we definitely need to keep those upcoming changes in mind so that this work remains usable within them too.
But if there are no plans for the near future and no Blender Foundation coder is assigned to those tasks, Ivan, as an independent coder, can only add the new feature with maximal compatibility with the existing code and the current Blender user experience.
Also, sorry, English is not our native language; the rather final-sounding word "decision" just meant an opinion that one way is more usable. :)
"It was decided" sounded rather definitive, I wanted to clarify that reviewers might disagree with that decision, but you're aware of that so it's fine.
@brecht Yes, you are right. I slightly corrected the previous comment, since it was meant as the question of whether this would be a better solution than what exists now / what you suggested. Sometimes my knowledge of English is ...
I also invite @Jeroen-Bakker and @ideasman42 to the discussion, of course whenever you have free time.
Added subscriber: @PabloDobarro
Removed subscriber: @PabloDobarro
Added subscriber: @alisealy
Added subscribers: @SeanKennedy, @sebastian_k
I think the proper approach here would be to ensure there is a solid workflow laid down. The issue with what is being discussed here comes down not only to the lack of a preview, but also to a rather cumbersome interaction between entities.
For example, a distortion model does not exist on its own: it is highly coupled to a specific camera and the configuration used when shooting the footage. Currently such coupling exists in the Movie Clip datablock, with the tools to estimate the distortion model in the Movie Clip Editor.
From this point of view, it seems logical to build the workflow around movie clip datablocks. This solves the problem of non-reusable parameters stored in the brush datablock, and the problem of adding parameters to the camera which are only usable by painting.
I am not sure why the Libmv project has been tagged here. The proposal is all about improving painting tools in Blender, which can and should be done without touching the Libmv library.
Also, please be in contact with the VFX/tracking module artists, such as @sebastian_k and @hype. They can help build a nice, solid workflow. And at this point I do not see a reason to poke developers of a specific area of the code.
From the point of view of artists, matte painters, or asset creators: how are images that use a camera view as a projection (we could call it Projector Projection; it is more like a light with a color gel on it) related to a Movie Clip? Or are you talking only about datablocks, not about UX?
@Sergey ,
The MovieTrackingCamera settings store the pixel_aspect and sensor_width options. I think this is not entirely correct in the current context, as these parameters affect distortion. As you said, the distortion parameters are tightly tied to a specific camera; I believe their place is either in a separate block (the previous implementation) or directly in the camera's datablock (as suggested by @brecht). I see good changes for estimating camera intrinsics in D9294 (https://developer.blender.org/D9294). But I also mean that it may be worth giving the user a choice (in terms of UX): use the distortion parameters stored in the camera datablock, or take them from the movie clip. What do you think about this?
Also, Libmv was tagged because it is an important part of this project, and it is possible that it will also need some changes (for example, adding aspect ratio correction and skew parameters).
In general, I followed @brecht's advice, on the basis of which I adjusted the task design and updated the revision. The previous task design was meant to overlap as little as possible with other projects; now it seems to me that everything is in place (in short, see the updated "Engineer plan" and D9291 (https://developer.blender.org/D9291)). I would like to hear @brecht's opinion here.
First of all, thanks for the very detailed and thorough description.
I am also very happy that you want to improve the projection painting workflow. It is very much needed!
As an artist I can only really comment on the VFX and motion tracking side of things. I have never worked with photogrammetry, so I have no expertise there, but your suggestions seem reasonable.
So when I describe some thoughts about workflow and UI below, I do it only from the perspective of motion tracking / cleanplate creation / matte painting.
As you have correctly pointed out, one of the main problems when it comes to projection painting currently in Blender is indeed the need for rather dense geometry for the UV-project modifier.
But distortion is a bit problematic too.
The cases where I need projection painting in a VFX workflow are mostly when I need to create a cleanplate.
First I track the shot and set up the scene. This usually includes setting up the camera and the Background Images overlay.
Then I insert 3D geometry that matches all the necessary objects in the clip.
If the footage is heavily distorted, I have to use the undistortion workflow, which undistorts the footage on the fly (I could also use proxies to speed up playback). Depending on the end result I need, I can set up the composite so that it gives me undistorted renderings, where the CG is rendered as-is but the footage is undistorted, or so that the footage is left distorted and the CG is distorted to match.
In both cases I have to work with undistorted footage in the viewport, so that the CG elements match the footage. And yes, the projection used to paint has to be undistorted too, otherwise it will not match! As far as I know this is currently only possible by first rendering the undistorted footage to disk.
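For reference, the on-the-fly undistortion described here can already be scripted with existing compositor nodes; a minimal sketch, assuming a tracked clip named "shot.mp4" with solved distortion:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Feed the tracked clip into a Movie Distortion node set to undistort.
clip_node = tree.nodes.new(type='CompositorNodeMovieClip')
clip_node.clip = bpy.data.movieclips["shot.mp4"]   # hypothetical clip name

undistort = tree.nodes.new(type='CompositorNodeMovieDistortion')
undistort.clip = clip_node.clip
undistort.distortion_type = 'UNDISTORT'

tree.links.new(clip_node.outputs["Image"], undistort.inputs["Image"])
```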
So your initiative could definitely solve some problems!
Here are some thoughts about the UI from a motion tracking perspective:
So these are my thoughts. I think you already mentioned some or most of them in the discussion, but writing this down made it a bit clearer to myself too. :)
If you have questions, let me know.
@sebastian_k Thank you for your response and description of your VFX workflow!
Comparing and summarizing your current workflow as a VFX artist with the photogrammetrist workflow, we have some collisions.
@sebastian_k what do you think about these suggestions?
@Sergey, you are the "VFX & Video" module owner; I want to hear your suggestion for where the camera intrinsics should be stored. In my opinion as a coder, the intrinsics data should be more camera-relative: the whole lens distortion API, and the actual container for that data, should live in DNA_camera_types.h and BKE_camera.h. That way we would at least have the opportunity to use the same data in different IDs (in an image as well as in a movie clip). From the user-experience side, it should not change anything about movie tracking. Or maybe it is possible to make more major changes and create a separate ID for the camera intrinsics block?
@sebastian_k @ivpe @Sergey, as soon as we touch movie clips: video can be captured with zoom lenses as well as with optical or sensor stabilization, which means the lens intrinsics can change per frame. Even if the current tracker code cannot track such footage, external trackers, especially photogrammetry-based ones (including open-source tools), can produce such data, which could be imported into Blender in a movie clip. How, in that case, can we store per-frame camera intrinsics in the future?
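For what it's worth, the built-in camera intrinsics can already be animated with F-Curves, which hints at one possible answer: per-frame keys on (future) distortion properties. A minimal sketch with hypothetical solved values:

```python
import bpy

cam_data = bpy.data.objects["Camera"].data  # hypothetical camera object

# frame -> focal length in mm, e.g. imported from an external solver
solved_focals = {1: 24.0, 2: 24.1, 3: 24.3}

for frame, focal in solved_focals.items():
    cam_data.lens = focal
    cam_data.keyframe_insert(data_path="lens", frame=frame)
# Custom distortion coefficients could be keyframed the same way,
# provided they are exposed as animatable RNA properties.
```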
I updated the open questions in the task description according to @sebastian_k's and @ssh4's posts.
In practice I think it is rather rare to have multiple background images in the context I described. At least I never did. And even if I loaded a second background image, maybe as a reference, I'd probably only show one.
But either way, I am not sure if we should actually use the background image as the paint input. As a way to preview the paint input, I don't see a practical problem with background images, even if more than one is present.
About the technical implementation I have no idea; that would be something more for @Sergey.
Maybe @SeanKennedy has an idea as well?
Added subscriber: @GeorgiaPacific
Added subscriber: @AndyCuccaro
Added subscriber: @Gadas
Added subscriber: @ClemensRudolph
Added subscriber: @Jaydead
Added subscriber: @ckohl_art
Added subscriber: @JerBot
Added subscriber: @RobN
I have used multiple images for projection. Example: frame 1 and frame 200 have different angles from the tracked camera. The faces visible on frame 200 are different from those on frame 1, so I want to use the frame 200 image to paint from the frame 200 camera, and likewise for the frame 1 camera/image. This lets me blend images together onto the geometry.
I am describing how I do my workflow in Nuke.
Rob
Sure, I do that as well, but this is not the issue here. In terms of datablocks, technically it is still the same Background Image on frame 1 and 200, e.g. a movie or image sequence, just a different frame.
Totally agree.
I'd love to see this tool be integrated in blender.
Should I contribute a bit more to my Blender Cloud subscription? :)
Hehe, cool, though I think feedback and testing helps much more. Especially if it comes from persons with some rather solid experience. ;)
I’ll happily test this out as soon as an implementation is done.
I can even provide videos on the way this works in other software and then how I’d like to see the workflow improved. There are ways to make it better for sure.
@RobN,
The implementation won't be completed until we finally reach a consensus on its design. A lot of people want this tool, and I hope that one day I can bring it to life.
But there are still many unresolved questions (as in the related task: D9291). So, if you want and can, you can always share your vision of this project.)
Would it be beneficial to see how the workflow goes in Nuke?
I can make a video of that for you if it would help.
Based on Ton's last post, we should avoid mentioning commercial software in task designs, and we should be extremely careful with the software you mentioned: that company has an awful history regarding protection of its own software.
At the same time, Blender has a solid UI and UX, and any tool implementation must follow the workflows already used in Blender. The paint mode in Blender is quite straightforward; it is just missing native camera projection support.
Technically speaking, camera projection is only a matter of correct camera placement plus camera and lens intrinsics, and that data just needs to be accessible per camera, per frame, or both. We already added proper Brown-Conrady lens distortion support to Blender; now only the projection is missing.
Certainly. I would expect this modifier/node to have a camera input and then an image (or sequence) input, similar to the UV Project node, but it would get all the data from the camera, so the user would not have to enter the aspect ratio, as that would be taken from the camera input.
Ideally you could put in a sequence and specify a frame to lock the projection from; the user could then use a second modifier/node for a secondary projection and use painting to blend between the two.
There should be an option for where to project: all polys (back and front) / front-facing only.
Something that would be very handy is a way to use the camera as a "light" of sorts and create a shadow for a mask. That mask could be used in the secondary projection modifier/node so that the user could fill in the empty areas left by a different camera angle. I realize some kind of baking step would/could be needed in this case, but it would be pretty powerful.
Additionally, there should be an option to make the projection "stick", so that if the object receiving the projection moves, the projection moves WITH it rather than THROUGH it. I'd call it sticky projection.
Hope that helps.
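For illustration, the "front-facing only" option above boils down to a simple visibility test (a sketch, names hypothetical):

```python
from mathutils import Vector

def face_receives_projection(face_center, face_normal, cam_location):
    """True if the face points back toward the projecting camera."""
    to_camera = (cam_location - face_center).normalized()
    return face_normal.normalized().dot(to_camera) > 0.0

# Example: a face at the origin facing +Z receives a projection from a
# camera above it, but not from one below.
print(face_receives_projection(Vector((0, 0, 0)), Vector((0, 0, 1)),
                               Vector((0, 0, 5))))   # True
```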
Added subscriber: @eoyilmaz
Added subscriber: @Eliot-Mack
Added subscriber: @greg-12
Added subscriber: @JulianR