This commit extends the name length limit of IDs and objects to 64 (63 meaningful
characters plus 1 for the null terminator). CustomData layer names are also
extended.
Changed DNA structures and all places where length constants were hardcoded.
All names which are generated from an ID block should be limited by MAX_ID_NAME-2;
all non-ID names now have their own define called MAX_NAME, which should be used
everywhere for non-ID names to make further name migration easier.
All name fields in DNA now have a comment with the constant that corresponds to the
hardcoded numeric value, which should make it easier to update these limits later
or even switch to non-hardcoded values in DNA.
Special thanks to Campbell, who helped figure out some issues and helped a lot
in finding all the cases where hardcoded values were still used in the code.
Both forward and backward compatibility are preserved with Blender versions newer
than January 5, 2011. Older versions had an issue with placing the null terminator
in DNA strings on file load, which can lead to unpredictable behavior or even
crashes.
- marker_find_frame moved to MovieTrack.markers and renamed to find_frame
- Added MovieTrack.markers.insert_frame to insert a marker at the specified frame
- Added MovieTrack.markers.delete_frame to delete the marker at the specified frame
  (a small usage sketch follows this list)
- These operators used to always work with the camera's tracks
- Properly set camera and object fields on the Follow Track constraint
- TrackingObject.tracks now points to the actual list of tracks for
  camera objects.
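A minimal sketch of the new markers API from Python; the access path through
bpy.data.movieclips and tracking.tracks is an assumption here, only find_frame,
insert_frame and delete_frame come from the change described above:

    import bpy

    # Assumed way to reach a track; only the markers calls below are from this change.
    clip = bpy.data.movieclips[0]
    track = clip.tracking.tracks[0]

    marker = track.markers.find_frame(10)        # look up the marker at frame 10
    new_marker = track.markers.insert_frame(20)  # insert a marker at frame 20
    track.markers.delete_frame(20)               # delete the marker at frame 20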
Added a slider to define the scale of the object solution, which is used to define
the "depth" of the object relative to the camera position. This slider affects all
"users" of the object solution, such as bundle display, constrained objects and so on.
Added a new operator called "Set Solution Scale" to set the real scale of the object
solution based on the real distance between two bundles reconstructed for this object.
The new slider and operator can be found in the "Object" panel of the toolbox when in
reconstruction mode and the active tracking object isn't a camera.
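Roughly, the operator boils down to dividing the measured real-world distance by the
reconstructed distance between the two bundles; a minimal sketch of that computation
(the helper name is hypothetical):

    import math

    def solution_scale(bundle_a, bundle_b, real_distance):
        """Scale factor that makes the reconstructed bundle distance match the measured one."""
        reconstructed = math.dist(bundle_a, bundle_b)
        return real_distance / reconstructed

    # Two reconstructed bundle positions known to be 1.5 units apart in reality:
    scale = solution_scale((0.0, 0.0, 0.0), (0.3, 0.0, 0.4), 1.5)  # -> 3.0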
This commit implements the basic stuff needed for object tracking;
the use case isn't perfect yet, and the interface should also be cleaned up a bit.
- Added a list of objects to be tracked. By default there's only one object called
  "Camera" which is used for solving camera motion. Other objects can be added,
  and each of them has its own list of tracks. Only one object can be used
  for camera solving at this moment.
- Added a new constraint called "Object Tracking" which makes the constrained object
  move in the same way as the solved object motion.
- Scene orientation tools can be used for orienting an object to bundles.
- All tools which work with the list of tracks or reconstruction data now
  get those lists from the active editing object.
- All objects and their tracking data are available via the Python API.
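A minimal sketch of walking that data from Python; the property names
clip.tracking.objects and object.tracks are assumptions here, only the fact that
tracking objects and their tracks are exposed comes from the text above:

    import bpy

    clip = bpy.data.movieclips[0]

    # Assumed layout: each tracking object ("Camera" plus any added objects)
    # carries its own list of tracks.
    for tracking_object in clip.tracking.objects:
        print(tracking_object.name, "has", len(tracking_object.tracks), "tracks")
        for track in tracking_object.tracks:
            print("  track:", track.name)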
Comment from Keir's commit:
Add a new hybrid region tracker for motion tracking to libmv, and
add it as an option (under "Hybrid") in the tracking settings. The
region tracker is a combination of brute force tracking for coarse
alignment, then refinement with the ESM/KLT algorithm already in
libmv that gives excellent subpixel precision (typically 1/50th
of a pixel).
This also adds a new "brute force" region tracker which does a
brute force search through every pixel position in the destination
for the pattern in the first frame. It leverages SSE if available,
similar to the SAD tracker, to do this quickly. Currently it does
some unnecessary conversions to/from floating point that will get
fixed later.
The hybrid tracker glues the two trackers (brute & ESM) together
to get an overall better tracker. The algorithm is simple:
1. Track from frame 1 to frame 2 with the brute force tracker.
This tries every possible pixel position for the pattern from
frame 1 in frame 2. The position with the smallest
sum-of-absolute-differences is chosen. By definition, this
position is only accurate up to 1 pixel or so.
2. Using the result from 1, initialize a track with ESM. This does
a least-squares fit with subpixel precision.
3. If the ESM shift was more than 2 pixels, report failure.
4. If the ESM track shifted less than 2 pixels, then the track is
good and we're done. The rationale here is that if the
refinement stage shifts more than 1 pixel, then the brute force
result likely found some random position that's not a good fit.
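A rough Python sketch of this glue logic, assuming grayscale numpy patches; the
brute-force SAD search follows the description above, while the ESM/KLT refinement
is abstracted as a callable since libmv implements it in C++ (names and everything
except the 2-pixel check are illustrative):

    import numpy as np

    def brute_force_sad(pattern, search):
        """Try every integer offset of `pattern` inside `search` and return the
        offset with the smallest sum of absolute differences (pixel accuracy only)."""
        pattern = pattern.astype(np.float32)
        search = search.astype(np.float32)
        ph, pw = pattern.shape
        sh, sw = search.shape
        best, best_sad = (0, 0), np.inf
        for y in range(sh - ph + 1):
            for x in range(sw - pw + 1):
                sad = np.abs(search[y:y + ph, x:x + pw] - pattern).sum()
                if sad < best_sad:
                    best, best_sad = (y, x), sad
        return best

    def hybrid_track(pattern, search, refine):
        """Coarse brute-force search, then subpixel refinement (e.g. ESM/KLT)."""
        coarse = brute_force_sad(pattern, search)       # step 1: pixel-accurate
        fine = refine(pattern, search, coarse)          # step 2: subpixel fit
        shift = np.hypot(fine[0] - coarse[0], fine[1] - coarse[1])
        if shift > 2.0:                                 # step 3: refinement drifted too far
            return None
        return fine                                     # step 4: accept the result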
svn command used: svn merge -r 42375:42376 -r 42377:42379 ^/branches/soc-2011-tomato
This commit implements:
- Configurable settings for newly created tracks.
  Now it's possible to set the tracking algorithm and its settings for
  all newly created tracks, including manual track creation and
  track creation by the "Detect Features" operator.
- Moved margin, frames limit and adjust frames into the per-track
  settings.
  This was a request from Francois.
- Adjust Frames was replaced with a menu called Pattern Match where it's
  possible to choose between matching the pattern from the keyframe
  or from the previously tracked frame (see the sketch after this list).
  Didn't see anybody use adjust frames values other than 0 and 1,
  and this menu should make things clearer here.
- Changed some names so that people who aren't really familiar with
  motion tracking can understand what they actually mean.
- Also cleaned up and rephrased some descriptions.
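Roughly, the two Pattern Match modes differ only in which frame supplies the
reference pattern; a minimal sketch of that selection (names are illustrative,
not Blender API):

    def reference_frame(frames, keyframe_index, current_index, match_mode):
        """Pick the frame the pattern is matched against when tracking `current_index`.

        'KEYFRAME'       -- always match the pattern from the keyframe
                            (no drift accumulation, but the pattern can go stale).
        'PREVIOUS_FRAME' -- match the pattern from the previously tracked frame
                            (adapts to appearance changes, but drift can accumulate).
        """
        if match_mode == 'KEYFRAME':
            return frames[keyframe_index]
        return frames[current_index - 1]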
- Changed behavior of the operator which creates empties for 2D tracks:
  now it operates on all selected tracks rather than on the active track only
- Added checkbox to enable/disable rotation stabilization
- Move tracking-related constraints to their own section in the list.
  Currently there are only two such constraints, so it can look a bit odd,
  but there will be other constraints like "Object Solver" and so on.
- Move motion-tracking parameters from the 3D viewport's Display panel
  to their own panel.
- Get rid of "Bundle" in the 3D viewport. It's quite obvious that what is
  shown there is a 3D representation of the tracks, and it shouldn't be
  so confusing for artists now.
- Also get rid of "Bundle" in the Follow Track constraint.
  Old files can change a bit because of changes in DNA.
- Also get rid of "Bundles" in the operator which creates a vertex cloud
  from the 3D positions of tracks.
- Rename "Principal Point" to "Optical Center" in the interface.
- Add support for refining the camera's intrinsic parameters
during a solve. Currently, refining supports only the following
combinations of intrinsic parameters:
f
f, cx, cy
f, cx, cy, k1, k2
f, k1
f, k1, k2
This is not the same as autocalibration, since the user must
still make a reasonable initial guess about the focal length and
other parameters, whereas true autocalibration would eliminate
the need for the user to specify intrinsic parameters at all.
However, the solver works well with only rough guesses for the
focal length, so perhaps full autocalibration is not that
important. (A sketch of the intrinsic model these parameters
belong to follows after this list.)
Adding support for the last two combinations, (f, k1) and (f,
k1, k2), required changes to the library libmv depends on for
bundle adjustment, SSBA. These changes should get ported
upstream, not just to libmv but to SSBA as well.
- Improved the region of convergence for bundle adjustment by
increasing the number of Levenberg-Marquardt iterations from 50
to 500. This way, the solver is able to crawl out of the bad
local minima it gets stuck in when changing from, for example,
bundling k1 and k2 to just k1 and resetting k2 to 0.
- Add several new region tracker implementations. A region tracker
is a libmv concept, which refers to tracking a template image
pattern through frames. The impact to end users is that tracking
should "just work better". I am reserving a more detailed
writeup, and maybe a paper, for later.
- Other libmv tweaks, such as detecting that a tracker is headed
outside of the image bounds.
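For reference, a minimal sketch of the kind of pinhole-plus-polynomial-radial-distortion
model the refined parameters (f, cx, cy, k1, k2) belong to; the conventions here are a
generic illustration, not libmv's exact code:

    def project(point_cam, f, cx, cy, k1=0.0, k2=0.0):
        """Project a 3D point in camera space to distorted pixel coordinates."""
        X, Y, Z = point_cam
        # Normalized (undistorted) image coordinates.
        xn, yn = X / Z, Y / Z
        # Polynomial radial distortion.
        r2 = xn * xn + yn * yn
        d = 1.0 + k1 * r2 + k2 * r2 * r2
        xd, yd = xn * d, yn * d
        # Back to pixels via focal length and principal point ("optical center").
        return f * xd + cx, f * yd + cy

    # Refinement chooses which of f, cx, cy, k1, k2 stay free during bundle
    # adjustment; e.g. the "f, k1" mode keeps cx, cy and k2 at their initial guesses.
    u, v = project((0.1, -0.05, 2.0), f=1000.0, cx=960.0, cy=540.0, k1=-0.1)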
This includes several changes made directly to the libmv extern
code rather than expecting to get those changes through normal libmv
channels, because I, the libmv BDFL, decided it was faster to work
on libmv directly in Blender, then later reverse-port the libmv
changes from Blender back into libmv trunk. The interesting part
is that I added a full Levenberg-Marquardt loop to the region
tracking code, which should lead to more stable solutions. I
also added a hacky implementation of "Efficient Second-Order
Minimization" for tracking, which works nicely. A more detailed
quantitative evaluation will follow.
Original patch by Keir, cleaned a bit by myself.
- Fixed an inconsistent data type used for the pts number in ffmpeg_fetchibuf
  and stored in the timecode structure. Not really an issue for "correct" movie files,
  but it can probably help with "broken" ones
- Lock to Selection and Center to Selection now work fine with undistorted rendering
- Do not display the pyramid for disabled tracks
- Corrected the fix for the wrong correlation_min property name
===========================
Committing the camera tracking integration GSoC project into trunk.
This commit includes:
- Bundled version of the libmv library (with some changes against the official repo;
  a re-sync with the libmv repo will happen a bit later)
- New ID datatype called MovieClip which is optimized for working with movie
  clips (both movie files and image sequences) and for doing camera/motion
  tracking operations.
- New editor called Clip Editor which is currently used for motion/tracking
  stuff only, but which can be easily extended to work with masks too.
  This editor supports:
  * Loading movie files/image sequences
  * Building proxies of different sizes for the loaded movie clip; it also supports
    building undistorted proxies to increase the speed of playback in
    undistorted mode.
  * Manual lens distortion calibration using the grid and grease pencil
  * Supervised 2D tracking using two different algorithms, KLT and SAD
  * A basic algorithm for feature detection
  * Camera motion solving and scene orientation
- New constraints to "link" scene objects with motion solved from the clip:
  * Follow Track (make an object follow the 2D motion of the track with the given
    name, or parent the object to the reconstructed 3D position of that track)
  * Camera Solver (make the camera move in the same way as the reconstructed camera)
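A minimal sketch of setting these constraints up from Python; the constraint type
identifiers and the clip/track property names are assumptions, only the constraint
names and their purpose come from the list above:

    import bpy

    clip = bpy.data.movieclips[0]

    # Camera Solver: make the scene camera move like the reconstructed camera.
    camera = bpy.data.objects["Camera"]
    solver = camera.constraints.new(type='CAMERA_SOLVER')
    solver.clip = clip               # assumed property name

    # Follow Track: drive an existing object by a named 2D track from the clip.
    target = bpy.data.objects["Empty"]
    follow = target.constraints.new(type='FOLLOW_TRACK')
    follow.clip = clip               # assumed property name
    follow.track = "Track"           # assumed: name of the track to follow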
This commit does NOT include these changes from the tomato branch:
- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
  (need to do more tests and gather more feedback)
- Code cleanup in libmv-capi. It's not a critical cleanup, just increasing
  readability and understandability of the code. Better to make this change when
  Keir has finished his current patch.
More details about this project can be found on this page:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011
Further development of small features will be done in trunk; bigger/experimental
features will first be implemented in the tomato branch.