It was rather confusing from the user's point of view
and didn't bring much improvement after the new bundle
adjuster was added.
In the future we might want to switch resection to the
PPnP algorithm, which could also be a nice alternative
to the fallback option.
Jittering was caused by the homography not being estimated
accurately enough.
Before this, only algebraic estimation was used, which is
not that great. Now algebraic estimation is followed by a
refinement step using the Ceres minimizer.
The code was already there since the keyframe selection
patch; this makes such estimation a generic function in
multiview/ and changes the estimation API so that all
additional options are passed via an options structure
(the same way it's done for Ceres).
This includes changes to both homography and fundamental
estimation.
TODO:
- Need to document Ceres functors better.
- Need to support homogeneous coordinates (currently
only euclidean coords are supported).
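For illustration, the refinement step described above can look roughly
like this with Ceres. The functor, the options structure and the function
below are illustrative stand-ins, not the exact multiview/ API:

  // Minimal sketch of refining an algebraically estimated homography with
  // Ceres.  H is stored row-major in h[9]; the residual is the forward
  // transfer error of a correspondence (x1, y1) -> (x2, y2).
  #include <Eigen/Core>
  #include <ceres/ceres.h>

  struct HomographyTransferError {
    HomographyTransferError(double x1, double y1, double x2, double y2)
        : x1_(x1), y1_(y1), x2_(x2), y2_(y2) {}

    template <typename T>
    bool operator()(const T *const h, T *residuals) const {
      T w = h[6] * T(x1_) + h[7] * T(y1_) + h[8];
      residuals[0] = (h[0] * T(x1_) + h[1] * T(y1_) + h[2]) / w - T(x2_);
      residuals[1] = (h[3] * T(x1_) + h[4] * T(y1_) + h[5]) / w - T(y2_);
      return true;
    }

    double x1_, y1_, x2_, y2_;
  };

  // All additional knobs travel through an options structure, Ceres-style.
  struct RefineHomographyOptions {
    int max_num_iterations = 50;
    double parameter_tolerance = 1e-16;
  };

  // x1 and x2 are 2xN matrices of corresponding points; H holds the
  // algebraic (DLT) estimate on input and the refined homography on output.
  void RefineHomography(const Eigen::MatrixXd &x1,
                        const Eigen::MatrixXd &x2,
                        const RefineHomographyOptions &options,
                        double H[9]) {
    ceres::Problem problem;
    for (int i = 0; i < x1.cols(); ++i) {
      problem.AddResidualBlock(
          new ceres::AutoDiffCostFunction<HomographyTransferError, 2, 9>(
              new HomographyTransferError(x1(0, i), x1(1, i),
                                          x2(0, i), x2(1, i))),
          NULL, H);
    }
    ceres::Solver::Options solver_options;
    solver_options.max_num_iterations = options.max_num_iterations;
    solver_options.parameter_tolerance = options.parameter_tolerance;
    solver_options.linear_solver_type = ceres::DENSE_QR;
    ceres::Solver::Summary summary;
    ceres::Solve(solver_options, &problem, &summary);
  }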
From the math point of view there are two cases:
- Clearing a keyframe between two other ones.
In this case the tracker will first track the plane from
the left keyframe to the right one without doing any kind
of blending. This makes the plane stick to the actual
plane motion, but may lead to a jump at the right keyframe.
The second step is to track from the right keyframe
to the left one with blending. This gives a nice
transition at the second keyframe and mimics the
situation where you've been setting keyframes from
left to right. (A rough sketch of the blending step
follows this list.)
- Clearing the left-most/right-most keyframe.
In this case it's enough to re-track the plane from
the neighboring keyframe without blending.
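Illustrative sketch of the blending step: the backward-tracked corners
are blended with the forward-tracked ones using a weight that reaches 1
at the right keyframe, so the result matches that keyframe exactly and
fades back to the plain forward track near the left one. The linear
weight and the function name are assumptions, not the actual code:

  // Blend forward-tracked corners (left -> right, no blending) with
  // backward-tracked ones (right -> left) so that the transition at the
  // right keyframe is smooth.  The linear weight is an assumption.
  void blend_plane_corners(const float forward[4][2],
                           const float backward[4][2],
                           int frame, int left_keyframe, int right_keyframe,
                           float result[4][2]) {
    float fac = (float)(frame - left_keyframe) /
                (float)(right_keyframe - left_keyframe);
    for (int corner = 0; corner < 4; corner++) {
      for (int i = 0; i < 2; i++) {
        result[corner][i] = (1.0f - fac) * forward[corner][i] +
                            fac * backward[corner][i];
      }
    }
  }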
- Do plane re-evaluation only when the transform is actually done.
Before this, re-evaluation happened on every mouse move.
- Added a flag "Auto Keyframe" for the plane track, which does the following:
* If Auto Keyframe is enabled, then every manual edit of the
plane will create a new keyframe at the current frame and update
the plane motion between the current frame and the previous/next
keyframe. This also implies blending the detected motion with the
neighboring keyframes, so no jump happens.
No automatic update happens on manual point track edits.
* If Auto Keyframe is disabled, then no keyframes are added
to the plane and every plane tweak re-evaluates it on
the whole frame range.
In this case manual tweaks to point tracks and re-tracking
them imply plane re-evaluation.
- Add headers missing from CMake (own omission).
- Quiet rna_test.c unused define warnings.
- Minor style edits.
- Spelling corrections; ignore all-uppercase words in the spell checking script.
Track margin checks needed some tweaks to deal better with the fact
that normalized values for the same pixel value might differ
between the X and Y axes.
Also, non-centered patterns are expected to be handled better now.
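For example (an illustrative helper, not the actual code), converting a
margin in pixels to normalized space gives a different value on each axis
whenever the frame is not square:

  // The same pixel margin maps to different normalized values per axis,
  // since X is normalized by the frame width and Y by the frame height.
  void margin_pixels_to_normalized(int margin_px,
                                   int frame_width, int frame_height,
                                   float *margin_x, float *margin_y) {
    *margin_x = (float)margin_px / (float)frame_width;
    *margin_y = (float)margin_px / (float)frame_height;
  }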
The issue was caused by the tracks map merge re-allocating the
tracks without updating the plane tracks.
Ideally tracks_map_merge should not re-allocate tracks, but for now
just update the plane tracks. It shouldn't be too slow anyway and
could always be tweaked without affecting any artists.
This commit includes all the changes made for the plane tracker
in the tomato branch.
Movie clip editor changes:
- An artist might create a plane track out of multiple point
tracks which belong to the same plane (the minimum number of
point tracks is 4, the maximum is not limited).
When a new plane track is added, it gets "tracked"
across all point tracks, which makes it stick to the same
plane the point tracks belong to.
- After a plane track is added, it needs to be manually adjusted
so that it covers the feature one wants to mask/replace.
The general transform tools (G, R, S) or sliding corners with
the mouse can be used for this. The plane corner which
corresponds to the bottom-left image corner has the X/Y axes
drawn on it (red is for the X axis, green for Y).
- Re-adjusting the plane corners makes the plane get "re-tracked"
for the frame sequence between the current frame and the next
and previous keyframes.
- Keyframes can be removed from the plane using the Shift-X
(Marker Delete) operator. However, currently a manual
re-adjustment or "re-track" trigger is needed.
Compositor changes:
- Added a new node called Plane Track Deform.
- The user selects which plane track to use (for this they need
to select the movie clip datablock, object and track names).
- The node gets an image input, which needs to be warped into
the plane.
- Node outputs:
* Input image warped into the plane.
* Plane, rasterized to a mask.
Masking changes:
- Mask points can be parented to a plane track, which
makes the point deform as if it belonged to the
tracked plane.
Some video tutorials are available:
- Coder video: http://www.youtube.com/watch?v=vISEwqNHqe4
- Artist video: https://vimeo.com/71727578
This is mine and Keir's holiday code project :)
Clean up inconsistencies in the libmv C API:
- All type identifiers are libmv_TypeName
- All function identifiers are libmv_functionName
- Prefer libmv_nounVerb function names (e.g. libmv_featuresDestroy)
- Match Blender code formatting rather than Google
- Spelling corrections
Code review: https://codereview.appspot.com/11494044/
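Illustrative only (libmv_Features is a made-up example type here;
libmv_featuresDestroy is the example given in the list above):

  /* Type identifiers: libmv_TypeName. */
  typedef struct libmv_Features libmv_Features;

  /* Function identifiers: libmv_functionName, preferring noun-verb order. */
  void libmv_featuresDestroy(libmv_Features *features);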
Hopefully it's more readable now. It took me a while to remember
all the stuff going on here while looking into the possibility
of implementing some feature here.
Implements an automatic keyframe selection algorithm which uses a
couple of approaches to find the best keyframe candidates:
- First, a slightly modified Pollefeys criterion is used, which
limits the correspondence ratio to the range from 80% to 100%.
This allows rejecting a keyframe candidate early, without doing
heavy math, in cases where there are not many features in common
with the first keyframe.
- The second step is based on the Geometric Robust Information
Criterion (aka GRIC), which checks whether feature motion
between candidate keyframes is better described by a homography
or by a fundamental matrix.
To be a good keyframe candidate, the fundamental matrix needs to
describe the motion better than the homography (in this case
F-GRIC will be smaller than H-GRIC); a sketch of the GRIC score
follows this list.
These two criteria are well described in this paper:
http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf
- The final step is based on estimating the reconstruction error of
a full-scene solution using the candidate keyframes. This part
is based on the following paper:
ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
This step requires a reconstruction using the candidate keyframes
and obtaining the covariance matrix of the 3D point positions.
Reconstruction is done pretty much straightforwardly using the
other simple pipeline routines, and for covariance estimation the
pseudo-inverse of the Hessian is used, which in this case is
(J^T * J)^+, where ^+ denotes the pseudo-inverse.
The Jacobian matrix is estimated using the Ceres evaluate API.
It is also crucial to get rid of a possible gauge ambiguity, which
in our case is done by zeroing 7 eigenvalues (the number of gauge
freedoms) in the pseudo-inverse.
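A sketch of the GRIC score in the usual Torr formulation as used in those
papers; the exact constants and residual scaling in the real implementation
may differ:

  // GRIC = sum_i min(e_i^2 / sigma^2, lambda3 * (r - d))
  //        + lambda1 * d * n + lambda2 * k
  // where r is the data dimension (4 for two views of 2D points), d the
  // model manifold dimension (2 for H, 3 for F), k the number of model
  // parameters (8 for H, 7 for F) and n the number of correspondences.
  #include <algorithm>
  #include <cmath>
  #include <vector>

  double GRIC(const std::vector<double> &squared_errors, double sigma2,
              int r, int d, int k) {
    const int n = (int)squared_errors.size();
    const double lambda1 = std::log((double)r);
    const double lambda2 = std::log((double)r * n);
    const double lambda3 = 2.0;  // usual choice, caps each residual's weight

    double gric = lambda1 * d * n + lambda2 * k;
    for (double e2 : squared_errors) {
      gric += std::min(e2 / sigma2, lambda3 * (r - d));
    }
    return gric;
  }

  // A pair of frames is a good keyframe candidate when the fundamental
  // matrix describes the motion better than the homography, i.e. when
  //   GRIC(F errors, sigma2, 4, 3, 7) < GRIC(H errors, sigma2, 4, 2, 8).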
There's still room for improving and optimizing the code,
but we need some point to start from anyway :)
Thanks to Keir Mierle and Sameer Agarwal, who assisted a lot
in making this feature work.
- Added const modifiers where it makes sense and
helps keep the code safe.
- Reshuffled arguments to match the <inputs>,<outputs>
convention on parameters.
- Pass values to ApplyRadialDistortionCameraIntrinsics
by constant reference.
This saves lots of CPU ticks otherwise spent passing
relatively heavy jet objects to this function when running
bundle adjustment.
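A cut-down illustration of the idea (the real function takes the full set
of intrinsics; this simplified signature only shows why const references
matter when T is a ceres::Jet carrying derivative entries):

  // When T is a ceres::Jet<double, N>, every argument carries N derivative
  // entries, so passing by value copies a small vector per argument on
  // every call inside bundle adjustment; const references avoid that.
  #include <ceres/jet.h>  // only needed when instantiating with ceres::Jet

  template <typename T>
  void ApplyRadialDistortionSketch(const T &focal_length,
                                   const T &k1,
                                   const T &normalized_x,
                                   const T &normalized_y,
                                   T *image_x,
                                   T *image_y) {
    T r2 = normalized_x * normalized_x + normalized_y * normalized_y;
    T distortion = T(1.0) + k1 * r2;
    *image_x = focal_length * distortion * normalized_x;
    *image_y = focal_length * distortion * normalized_y;
  }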
This means that when you've got a reconstructed scene assigned to a
3D camera (via the Camera Solver constraint) and apply scale on
this camera from the Ctrl-A menu, the scale will be applied to the
reconstructed scene and the camera size will be reset to identity.
This is a very useful feature for scene orientation: you just
scale the camera with S in the viewport to match the bundles to
some points in space, and then you can easily give the camera
an identity scale (which is needed for motion blur and the other
things mentioned by Sebastian :) without losing the scale of the
bundles themselves.
The behavior of apply scale for cameras without a clip assigned
to them does not change at all.
Makes the code in tracking.cc much easier to understand and modify,
without worrying about breaking compilation with Libmv disabled.
It is still possible for compilation to break due to libmv-capi
changes, but that doesn't happen very often.
This operator runs the tracker from the previous
keyframe to the current frame for all selected markers.
The current marker positions are considered an initial
position guess which can be updated by the tracker
for a better match.
Useful in cases where a feature disappears from the
frame and then appears again. Usage in this case
is the following:
- When the feature point re-appears in the frame, manually
place a marker on it.
- Use the Refine Markers operation (which is in the Track
panel) to allow the tracker to find a better match.
Depending on the direction of tracking use either
Forwards or Backwards refining. It's easy: if
tracking happens forwards, use Refine Forwards,
otherwise use Refine Backwards :)
Additional changes:
- Cleaned up sources to reduce mess in some
big functions.
- Removed an unused function from the libmv c-api.
- Made function naming more consistent.
- Use bool for internal stuff in tracking.c.
There should be no functional changes :)
Made it so the reconstructed scene is always scaled in such a way
that the variance of the camera centers is unity.
This solves "issues" where different keyframes give
the same reprojection error but produce scenes
with different scale, which could easily have been
considered a bad keyframe combination.
This change is essential for the automatic keyframe
selection algorithm to work reliably for the user.
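A minimal sketch of what that normalization amounts to, assuming the camera
centers and 3D point positions are available as plain vectors (not the
actual pipeline code):

  #include <cmath>
  #include <vector>
  #include <Eigen/Core>

  // Rescale the reconstruction so that the variance of the camera centers
  // is 1.  Scaling cameras and points uniformly does not change the
  // reprojection error, only the overall scene scale.
  void ScaleToUnitCameraVariance(std::vector<Eigen::Vector3d> *camera_centers,
                                 std::vector<Eigen::Vector3d> *points) {
    // Mean of the camera centers.
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const Eigen::Vector3d &c : *camera_centers)
      mean += c;
    mean /= (double)camera_centers->size();

    // Variance of the camera centers (mean squared distance to the mean).
    double variance = 0.0;
    for (const Eigen::Vector3d &c : *camera_centers)
      variance += (c - mean).squaredNorm();
    variance /= (double)camera_centers->size();

    // Scale everything so that the variance becomes unity.
    const double scale = 1.0 / std::sqrt(variance);
    for (Eigen::Vector3d &c : *camera_centers)
      c *= scale;
    for (Eigen::Vector3d &p : *points)
      p *= scale;
  }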
Also removed unneeded image buffer scaling; it was only needed
for "early output" if there was no rotation. That is no longer
supported since it used to pixelate the result a lot, and
interpolation is always used now.
Saves quite a bit of memory and CPU cycles.
- Nearest interpolation was always used when there was
no rotation for stabilization. This was a failure of an
optimization heuristic.
- Made 2D stabilization frame acquiring threaded.
This function is only used for display and the sequencer,
which will benefit from threads here.
- Fixed a bug introduced in r48749 which led to
re-making the stabilized frame on every redraw.
This made the preview work, but it broke the internals
of tracking.
Namely, BlurredImageAndDerivativesChannels gave a
much more blurred image because it was assuming the
pixel center is at an integer position.
I guess other parts of libmv used to suffer from
this issue as well.
Now pixel centering happens on the Blender side, and
libmv assumes an integer position is a pixel center.
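Illustrative only: the two conventions differ by a constant 0.5 offset.
Under corner-based continuous coordinates the center of pixel (i, j) sits
at (i + 0.5, j + 0.5); under the convention libmv now assumes, it sits at
(i, j). The helper below assumes corner-based input on the Blender side;
the actual sign used in the glue code may differ:

  // Convert a corner-based continuous coordinate (pixel center at half
  // integers) to a center-based one (pixel center at integers).
  void corner_based_to_center_based(double x_corner, double y_corner,
                                    double *x_center, double *y_center) {
    *x_center = x_corner - 0.5;
    *y_center = y_corner - 0.5;
  }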
This commit implements multi-threaded calculation of frames
when building proxies. Both the scaling and undistortion steps
are now threaded.
Frames and proxy resolutions are still handled one by one,
saving files after every single step, so if the HDD is not
very fast this commit might not bring much benefit.
Internal changes:
- Added IMB_scaleImBuf_threaded, which scales a given image
buffer in multiple threads and uses bilinear filtering.
- libmv's camera intrinsics now have a SetThreads() method
which is used to specify how many OpenMP threads to use
for buffer distortion/undistortion.
And yeah, this code uses OpenMP for threading.
- Reshuffled the libmv-capi calls a bit and added the function
BKE_tracking_distortion_set_threads to specify the number
of threads used by the intrinsics.
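A rough sketch of how an OpenMP-threaded undistortion pass over a buffer
can look. The single-k1 radial model, nearest-neighbor sampling and the
parameter names are simplifications for illustration, not the actual
camera intrinsics code:

  #include <omp.h>

  // For every output (undistorted) pixel, apply the forward distortion
  // model to find where to sample the distorted source image, and split
  // the scanlines across the requested number of OpenMP threads.
  void UndistortBufferSketch(const float *src, float *dst,
                             int width, int height, int channels,
                             double focal, double cx, double cy, double k1,
                             int num_threads) {
  #pragma omp parallel for num_threads(num_threads) if (num_threads > 1)
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        // Normalized coordinates of the output pixel.
        double nx = (x - cx) / focal;
        double ny = (y - cy) / focal;
        // Forward radial distortion gives the source position.
        double r2 = nx * nx + ny * ny;
        double d = 1.0 + k1 * r2;
        int sx = (int)(d * nx * focal + cx);
        int sy = (int)(d * ny * focal + cy);
        float *pixel = dst + (y * width + x) * channels;
        if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
          const float *sample = src + (sy * width + sx) * channels;
          for (int c = 0; c < channels; c++) pixel[c] = sample[c];
        }
        else {
          for (int c = 0; c < channels; c++) pixel[c] = 0.0f;
        }
      }
    }
  }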
- The dopesheet needs to be updated when adding or switching
between objects.
- After removing an object it should also be tagged for update,
otherwise a crash will likely happen.