Compare commits

...

99 Commits

Author SHA1 Message Date
1ff25a1494 Merging r60962 through r60963 from trunk into soc-2011-tomato 2013-10-28 10:42:08 +00:00
15d139291f Merging r59178 through r60961 from trunk into soc-2011-tomato 2013-10-28 10:39:53 +00:00
9cc8a3049a Made branch closer to trunk 2013-08-16 09:58:00 +00:00
be4f2188dd Merging r59173 through r59177 from trunk into soc-2011-tomato 2013-08-16 09:56:30 +00:00
a5d7507ee5 Merging r59088 through r59172 from trunk into soc-2011-tomato 2013-08-16 08:21:05 +00:00
3f2f647004 Merging r59029 through r59087 from trunk into soc-2011-tomato 2013-08-12 14:42:49 +00:00
f568e779f1 Replace our own box AA filter with BLI_jitter-based
Using box filtering for AA makes the image too blurry,
so here's an attempt to use the generic BLI_jitter
routines to do the sampling.

It's expected to match exactly what the OpenGL renderer
does, but something could have gone wrong.

Either way, the results seem a little less fuzzy.
2013-08-09 01:10:35 +00:00
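A minimal sketch of the idea in Python (hypothetical code, not the actual BLI_jitter implementation, which uses a relaxation scheme rather than plain random offsets): sample the pixel at a set of jittered sub-pixel positions and average the results.

```python
import random

def make_jitter_table(n, seed=0):
    """Precompute n jittered sub-pixel offsets in [0, 1) x [0, 1).
    (Plain random offsets stand in for BLI_jitter's relaxed table.)"""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def sample_pixel_aa(sample_fn, px, py, jitter):
    """Average sample_fn over the jittered positions inside pixel (px, py)."""
    total = 0.0
    for ox, oy in jitter:
        total += sample_fn(px + ox, py + oy)
    return total / len(jitter)
```

A hard edge crossing a pixel then yields a fractional coverage value instead of a plain 0 or 1, which is what softens the edges.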
07b615bf9e Merging r59000 through r59028 from trunk into soc-2011-tomato 2013-08-09 01:09:55 +00:00
1ca3b73801 Fix crash when adding plane track in some cases 2013-08-08 17:02:00 +00:00
1b0d6274f3 Need to re-evaluate the plane after using generic transform tools. 2013-08-08 07:39:18 +00:00
3541929eb7 Delete Marker operator will now also take plane markers into account
Smaller fixes:

- Point track deletion missed checking whether the track actually
  belongs to the plane or not.

- Adding a point track now deselects all planes.

- Fixed a bug with restoring the marker's flag after transformation.
2013-08-08 07:39:11 +00:00
07593df57b Prevent plane tracks from being concave 2013-08-08 07:39:02 +00:00
4fc86075e9 Re-arrange AA sampling a bit
Apparently, when calculating UV from the upsampled
corners, the warped result looked rather dodgy.

Now the result is exactly the same as it
was before the AA changes (apart from the anti-aliased edges :)
2013-08-08 07:38:56 +00:00
86f11e35eb Anti-aliasing for plane track image warping
Uses the exact same 4x4 kernel size as for the plane
track mask output.

This seems a reasonable way to go: the difference in speed
on the test file was only about 30%, with a much better
output result.
2013-08-08 07:38:49 +00:00
e633d7beff Made plane track transformable by generic transform operators
A pretty much straightforward change, apart from one trick needed
to keep point track scaling behaving nicely:

- If the TransData belongs to a point track, use its individual center
  when scaling (ignoring the pivot point center).

  This works much better for adjusting the sizes of point tracks.

- If the TransData belongs to a plane track, use the pivot point settings.

This is done via the TransData flag TD_INDIVIDUAL_SCALE, which is
currently only used by the point track conversion code but might be
used by other transform elements as well.
2013-08-08 07:38:42 +00:00
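As a rough illustration of the trick (hypothetical Python, not the actual transform code): elements flagged for individual scaling scale about their own center, everything else about the shared pivot.

```python
def scale_corner(corner, element_center, pivot, factor, individual_scale):
    """Scale one corner of a track's pattern area.
    With individual_scale (TD_INDIVIDUAL_SCALE-like), scale about the
    element's own center so the track stays put while its pattern grows;
    otherwise scale about the shared pivot point."""
    cx, cy = element_center if individual_scale else pivot
    x, y = corner
    return (cx + (x - cx) * factor, cy + (y - cy) * factor)
```

With the individual center, scaling changes only the corner's offset from its own track; with the shared pivot, the whole position moves away from the pivot.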
3f73ccb7a3 Plane mask anti-aliasing experiment
Currently uses a hardcoded kernel size of 4, which makes
the node 16 times slower, but gives nicely
anti-aliased results.
2013-08-08 07:38:35 +00:00
da1d34859e Fix incorrect plane tracks array initialization
It led to crashes when trying to create a plane track
from a subset of existing point tracks.
2013-08-08 07:38:27 +00:00
79c332d427 Merging r58956 through r58999 from trunk into soc-2011-tomato 2013-08-07 19:11:37 +00:00
2c89508cc2 Make byte-float conversion threaded in compositor
In fact, there's no need to get a float buffer at all;
the conversion can be done at the pixel processor level
after interpolation.

It might give slightly worse interpolation results
(which I'm not sure are visible to the eye), but
it gives more than a 2x speedup on my laptop on the node
setups used for image warping.
2013-08-07 07:45:26 +00:00
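The gist, as a hedged Python sketch (not the actual compositor code): rather than converting the whole byte buffer to float up front, interpolate in byte space and convert only the single sampled value afterwards.

```python
def bilinear_byte(img, w, h, u, v):
    """Bilinearly interpolate a flat byte (0..255) image at float coords,
    then convert only the resulting sample to float 0..1 -- avoiding a
    full byte-to-float buffer conversion pass."""
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    p = lambda x, y: img[y * w + x]
    top = p(x0, y0) * (1 - fx) + p(x1, y0) * fx
    bot = p(x0, y1) * (1 - fx) + p(x1, y1) * fx
    return (top * (1 - fy) + bot * fy) / 255.0
```

Interpolating in byte space before the divide is what may cost a little precision, as the commit message notes, but the per-sample conversion is trivially parallel.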
4b3e6244f8 Mask points can now be parented to a plane track
The MaskParent structure was extended with:
- A type, which indicates whether the parent is a point
  or a plane track. The types might be extended further.

- The original corners, which are used to obtain the homography
  used to deform the point's coordinates.

The homography is computed between the original corner positions
(the corners at parenting time) and the current plane corners,
and is then applied to the mask point coordinates.

Some tricks with switching between mask and clip coordinates
are needed, but it's all pretty much straightforward in
the code.

Parenting happens at the spline point level, not the spline level.
This fits the existing design, and it doesn't make a big
difference for artists.

The parenting operator (Ctrl-P) can be used to
parent points to the active plane track.

Additional change: added visualization of the plane corner
orientation. The corner which corresponds to the bottom-left
image corner now has two axes: red for the
X axis and green for the Y axis. This makes it
easier to orient the corners on the footage.
2013-08-07 07:45:17 +00:00
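Applying the resulting homography to a mask point is just a projective multiply-and-divide; a hedged Python sketch (the real code first computes H between the original and current corners, which is omitted here):

```python
def apply_homography(H, x, y):
    """Apply a 3x3 homography (row-major nested lists) to a 2D point:
    multiply the homogeneous vector (x, y, 1) and divide by w."""
    vx = H[0][0] * x + H[0][1] * y + H[0][2]
    vy = H[1][0] * x + H[1][1] * y + H[1][2]
    vw = H[2][0] * x + H[2][1] * y + H[2][2]
    return (vx / vw, vy / vw)
```

For a pure translation homography the parented point simply moves rigidly with the plane; a general H also shears and foreshortens it.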
c7411ea86c Merging r58947 through r58955 from trunk into soc-2011-tomato 2013-08-06 04:16:09 +00:00
643489e1c5 Merging r58870 through r58946 from trunk into soc-2011-tomato 2013-08-06 00:39:21 +00:00
b1d48823ec Code cleanup: made function name consistent 2013-08-03 20:19:05 +00:00
1eda5c4f87 Remove downsample operation
It didn't work very well, and finishing it
is not a high priority for now.

The files will remain in SVN history anyway, so
anyone who would like to look into the code is
welcome to do so.
2013-08-03 19:54:00 +00:00
e899f4125f Remove plane track mask warping operation
After some thought, it's actually not needed.
Once a mask is parented to a plane, the Mask
input node already gives you the warped mask.
2013-08-03 19:53:53 +00:00
972a71e62f Made corner sliding more convenient
Now it's possible to slide a corner roughly, then
press Shift to switch to accurate positioning.

Before this change the corner used to jump when
Shift was pressed, which wasn't very convenient.
2013-08-03 19:53:44 +00:00
faf7d689e7 Fix wrong usage of mul_v3_m3v3.
You couldn't use the same vector as input and output.
2013-08-03 19:53:38 +00:00
51cca133fa Use a proper dependent area of interest for image warping
Instead of requesting the whole frame to be evaluated
before plane warping can be invoked, warp the tile into
the original image space and use its bounding box as the area
of interest.

There are still some tricks with margins going on, but
without them some tiles go missing.

Quick tests seem to work fine; we can solve
any remaining issues later.
2013-08-03 19:53:30 +00:00
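Schematically (hypothetical Python, not the compositor code): warp the four tile corners back into input-image space and take their bounding box, padded by a small margin, as the area of interest.

```python
def area_of_interest(tile_rect, warp_corner, margin=2):
    """tile_rect = (xmin, ymin, xmax, ymax) in output space;
    warp_corner maps an output-space corner into input-image space.
    Returns the margin-padded bounding box of the four warped corners."""
    xmin, ymin, xmax, ymax = tile_rect
    corners = [warp_corner(x, y)
               for x in (xmin, xmax) for y in (ymin, ymax)]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

The margin is the "trick" the commit mentions: without some padding, filtering near tile borders reads pixels just outside the exact warped bbox and tiles go missing.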
1ae255602f Replace crappy custom sampler with EWA one
Our own sampler could be useful for more general usage,
like simpler downsampling in the Scale node, but it needs
to be improved before it's really useful.

It was giving lots of jittering artifacts, which isn't
good for motion tracking. So the image warp operation
now uses EWA filtering.

For now, it uses a dx/dy calculation copy-pasted
from the MapUV operation. Ideally we need to get
rid of the duplicated code, but that's not trivial at the
moment because the UV coordinates are calculated in
different ways. It's not a big deal to have duplicated
code for a while.
2013-08-03 19:53:24 +00:00
1cbb2725c5 Initial code layout for plane track deform node
The idea of this is:
- The user selects which plane track to use (for this they
  need to select the movie clip datablock and the object and
  track names).
- The node gets image and mask inputs (both are optional).
- The node outputs:
  * The input image warped into the plane.
  * The input mask warped by the plane.
  * The plane, rasterized to a mask.

Warping the image is done by computing reverse bilinear
coordinates and getting the pixel from the corresponding
position.

This requires some tricks with downsampling to make the warped
image look smooth.

The compositor currently doesn't support downsampling, so we
needed to implement our own operation for this.

The current idea is dead simple: the value of an output pixel
equals the average of the neighborhood of the corresponding pixel
in the input image. The size of the neighborhood is defined by the
ratio between the input and output resolutions.

This operation doesn't give perfect results and works only
for downsampling. But it's a purely internal operation not
exposed in the interface, so it's easy to replace it with a
smarter bi-directional sampler with nicer filtering.

Limitations:
- The node currently only warps the image and outputs the mask
  created from the plane; warping the input mask is not
  implemented yet.
- Image warping doesn't report a proper dependent area of
  interest yet, meaning interactivity might not be
  great.
- There's no anti-aliasing applied to the edges of the warped
  image and plane mask, so they look really sharp at the
  moment.
2013-08-03 19:53:16 +00:00
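The internal downsample described above, as a hedged Python sketch (the real operation works on float buffers inside the compositor): each output pixel is the plain average of the corresponding ratio x ratio block of input pixels.

```python
def downsample_average(img, w, h, ratio):
    """Naive box downsample of a flat row-major image: one output pixel
    equals the average of a ratio x ratio block of input pixels.
    Only handles integer downscale ratios, as in the commit."""
    ow, oh = w // ratio, h // ratio
    out = []
    for oy in range(oh):
        for ox in range(ow):
            block = [img[(oy * ratio + dy) * w + (ox * ratio + dx)]
                     for dy in range(ratio) for dx in range(ratio)]
            out.append(sum(block) / len(block))
    return out, ow, oh
```

This matches the stated limitation: it only works for downsampling, since an upscale ratio would collapse to an empty block.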
8f572599fd Merging r58795 through r58869 from trunk into soc-2011-tomato 2013-08-03 19:46:21 +00:00
7ce5c672c1 Compute homographies when creating new plane tracker
When a new plane track is created, Blender will go
over the tracked sequence of point tracks, compute the homography
between neighboring frames and deform the plane rectangle using
this homography.

This makes the rectangular plane follow the tracked plane quite
nicely. Additionally, the homographies will be
updated after the corners are slid with the mouse.

Additional changes:

- Display keyframed/tracked information in the cache line
  for plane tracks (the same way as it's done for point tracks).
- Fixed a couple of bugs in the homography C-API.
2013-08-02 08:50:30 +00:00
e021860d13 Add Procrustes PNP ("PPnP") resection algorithm to libmv
This adds a new Euclidean resection method, used to create the
initial reconstruction in the motion tracker, to libmv. The method
is based on the Procrustes PNP algorithm (aka "PPnP"). Currently
the algorithm is not connected with the motion tracker, but it
will be eventually since it supports initialization.

Having an initial guess when doing resection is important for
ambiguous cases where potentially the user could offer extra
guidance to the solver, in the form of "this point is in front of
that point".
2013-08-02 07:59:26 +00:00
4fe091f0dc Initial code layout for real plane tracker
This commit includes:

- DNA structures layout. It's not completely finished
  yet, and at some point files could become broken,
  but we'll try to avoid this as much as we can.

- Basic RNA code layout for the new data structures.
  Not completely finished, in the sense that it's not possible
  to define plane tracks from Python just yet.

- Operator to define the plane.

- Deletion and selection operators are aware of planes.

- Plane marker corners can be slid with the mouse.

Still lots of things to be done, but we need to commit
at some point so we can share the code.
2013-08-02 05:45:58 +00:00
0395b8ae8f Fix compilation error after recent merge. 2013-08-01 00:34:03 +00:00
32f944867f Merging r58363 through r58794 from trunk into soc-2011-tomato 2013-07-31 22:34:58 +00:00
5342d13e55 Remove files which were actually removed from trunk already. 2013-06-10 15:11:13 +00:00
e4672c3c5e Remove workarounds for the mask depsgraph issue.
They're not needed after the recent trunk merge.
2013-06-10 15:11:06 +00:00
8a9b54117a Merging r57346 through r57354 from trunk into soc-2011-tomato 2013-06-10 14:28:34 +00:00
b52fc8e41b Merging r57134 through r57345 from trunk into soc-2011-tomato 2013-06-10 12:33:07 +00:00
54e3dbbccf Experimental operator to cleanup mask shapekeys
For every shapekey the operator checks whether the mask shape
stays within a given tolerance, and if so, the shapekey is
removed.

TODO:
- Radius and weight are not handled completely correctly;
  we need some smarter way to handle them.
- There are big issues with the depsgraph tag update for
  masks and the actual mask update -- worked around by manually
  calling BKE_mask_evaluate from the places where shapekeys
  are removed.

  Would need to look further into this in trunk.
2013-06-09 10:04:25 +00:00
b252f7287f Merging r57122 through r57133 from trunk into soc-2011-tomato 2013-05-30 09:39:23 +00:00
6ced1b1bfd Merging r56947 through r57121 from trunk into soc-2011-tomato 2013-05-29 17:53:54 +00:00
059faac87f Fixes for wrong memory usage in mask tracking operator 2013-05-21 17:25:35 +00:00
2ead9266c2 Merging r56933 through r56946 from trunk into soc-2011-tomato 2013-05-21 16:27:26 +00:00
a08a43dc04 Mask tracking operators
Added operators which track whole masks
in the clip editor. Namely, these operators track the
selected splines of the active spline layer.

This uses the exact same region tracker the motion
tracker has used for ages; the marker is just created from
the mask spline. The pattern area will match the size of
the spline's bounding box (feather not taken into
account!). The search area is currently hardcoded to
1.5x the marker's pattern area.

Settings for the tracker are taken from the default
tracker settings (which are in the T-panel), so a
special panel containing the default tracker settings
was added to mask mode.

The matching setting is ignored and Previous Frame
matching is used.

Speed is also currently hardcoded to realtime.

These new tracking operators use the same
shortcuts as the regular tracker.

TODO:
  - Feather support.
  - Better handling of the search area size.
  - Think about tracking speed control.

This is an experimental feature; it might crash or
not behave as you expect.
2013-05-20 21:55:01 +00:00
8e9da56ff8 Merging r56770 through r56932 from trunk into soc-2011-tomato 2013-05-20 21:39:07 +00:00
26c4a837a5 Merging r56752 through r56769 from trunk into soc-2011-tomato 2013-05-13 14:53:37 +00:00
a1fc8f175f Merging r56732 through r56751 from trunk into soc-2011-tomato 2013-05-13 09:21:11 +00:00
89d6bc4942 Merging r56730 through r56731 from trunk into soc-2011-tomato 2013-05-12 22:35:07 +00:00
083850f08c Merging r56725 through r56729 from trunk into soc-2011-tomato 2013-05-12 22:27:39 +00:00
2205be9461 Merging r56720 through r56724 from trunk into soc-2011-tomato 2013-05-12 19:05:45 +00:00
84942342b3 Merging r56717 through r56719 from trunk into soc-2011-tomato 2013-05-12 17:17:05 +00:00
b5c7769366 Merging r56657 through r56716 from trunk into soc-2011-tomato 2013-05-12 16:16:48 +00:00
43cc69921b Merging r56645 through r56656 from trunk into soc-2011-tomato 2013-05-10 12:38:19 +00:00
eae5399e2b Update libmv to version from own branch
Brings in the changes:

- Optimized keyframe selection, achieved by
  switching pseudoInverse from SVD to an eigensolver.
- Cleaned up comments and function names.

Also cleaned up the code in libmv-capi a bit.
2013-05-10 12:35:31 +00:00
cc5793a47a Merging r56628 through r56644 from trunk into soc-2011-tomato 2013-05-10 07:57:35 +00:00
b2730db084 Merging r56536 through r56627 from trunk into soc-2011-tomato 2013-05-09 15:26:44 +00:00
f823dadc52 Merging r56518 through r56535 from trunk into soc-2011-tomato 2013-05-08 07:15:06 +00:00
fe72467bc5 Merging r56470 through r56517 from trunk into soc-2011-tomato 2013-05-06 18:44:46 +00:00
f54145b3c1 Merging r56414 through r56469 from trunk into soc-2011-tomato 2013-05-02 13:40:00 +00:00
50db58f9dc Add check for points behind camera in euclidean BA cost functor
In cases where the keyframes are not so good, the algebraic two-frame
reconstruction could produce a result for which the more aggressive
Ceres-based BA code falls into a solution where points go behind
the camera, which is not so nice.

It seems that in newer Ceres, returning false from the cost functor
doesn't abort the solve, but restricts the solver from moving points
behind the camera.

Works fine in my own tests, but requires more testing.
2013-04-30 13:10:14 +00:00
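The shape of the check, sketched in Python (the real code is a C++ Ceres cost functor, where returning false from operator() marks the evaluation as invalid):

```python
def reprojection_residual(R, t, X, observed):
    """Rotate/translate point X into camera space; if it lands behind
    the camera (z <= 0), report failure instead of a residual --
    mimicking a Ceres cost functor returning false."""
    cam = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    if cam[2] <= 0.0:
        return None  # invalid evaluation: point behind camera
    u, v = cam[0] / cam[2], cam[1] / cam[2]
    return (u - observed[0], v - observed[1])
```

The solver then treats steps that would push a point behind the camera as infeasible rather than minimizing through them.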
6d9cca4e03 Merging r56410 through r56413 from trunk into soc-2011-tomato 2013-04-30 12:20:31 +00:00
b8c924ca6b Merging r56389 through r56408 from trunk into soc-2011-tomato 2013-04-30 07:58:42 +00:00
0e504aee97 Merging r56328 through r56388 from trunk into soc-2011-tomato 2013-04-29 17:16:17 +00:00
97a18f4fa6 Merging r56320 through r56327 from trunk into soc-2011-tomato 2013-04-26 19:06:25 +00:00
88d2681327 Merging r56307 through r56319 from trunk into soc-2011-tomato 2013-04-26 15:54:15 +00:00
83d9159929 Keyframe selection improvements
Added an additional criterion, which ignores a
keyframe pair if its successful intersection factor
is lower than the current candidate pair's factor.

This solves an issue where a keyframe pair at which
most of the tracks intersect behind the
camera was accepted (because the variance in this
case is really small).

Also tweaked the generalized inverse function,
which now doesn't scale epsilon by the maximal
matrix element. That gave issues with really bad
candidates with unstable covariance: in that
case almost all eigenvalues got zeroed
on inversion, leading to a small expected error
estimate.
2013-04-26 12:57:27 +00:00
78382d7030 Added a button to apply scale on scene solution
This is an alternative to using the camera to scale the
scene, and it's expected to be a better solution because
scaling the camera leads to z-buffer issues.

I found the whole scaling thing a bit confusing,
especially for object tracking, but cleaning that up
is a different topic.
2013-04-26 11:43:28 +00:00
148bc2d140 Setting tracking object scale shall not depend on active object 2013-04-26 11:43:23 +00:00
750407da75 Made bundles in 3D viewport have constant size
This means the bundles' size is not affected by the camera scale.
This makes them more useful to work with -- bundles never
become too small or too large (depending on the reconstructed
scene scale).
2013-04-26 11:43:19 +00:00
cdc8c0a8b7 Merging r56261 through r56306 from trunk into soc-2011-tomato 2013-04-26 09:49:56 +00:00
46be7e067b Initial commit of the reconstruction variance criterion,
which is an addition to GRIC-based keyframe selection.

Uses the paper Keyframe Selection for Camera Motion and Structure
Estimation from Multiple Views,
ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
as a basis.

Currently implements camera position reconstruction,
bundle position estimation and a bundle adjustment step.
Covariance estimation is implemented in a very basic way
and needs to be cleaned up, sped up and probably fixed.

The covariance matrix is estimated using a generalized inverse
with 7 zeroed eigenvalues of J^T * J (7 being the number of
gauge freedoms: 3 translations, 3 rotations and 1 scale).

Additional changes:
- Added a utility function FundamentalToEssential to extract
  E from the F matrix, used by both the final reconstruction pipeline
  and the reconstruction variance code.

- Refactored the bundler a bit, so now it's possible to return
  different evaluation data, such as the number of cameras and
  points being minimized, and also the Jacobian.

  The Jacobian currently contains only camera and point columns,
  no intrinsics yet. It is also currently converted to
  an Eigen dense matrix. A bit weak, but the speed is nice for
  tests.

  Columns in the Jacobian are ordered in the following way:
  the first columns are cameras (3 cols for rotation and 3 cols
  for translation), then come the 3D point columns.

- Switched F and H refinement to normalized space. Apparently,
  refining F in pixel space squeezes it a lot, making it wrong.

- EuclideanIntersect will not add a point to the reconstruction if
  it happens to be behind the camera.

- Cleaned up style a bit.
2013-04-24 14:30:42 +00:00
48f3a305ca Reconstructed scene scale ambiguity improvement
Made it so the reconstructed scene is always scaled in a way
that the variance of the camera centers is unity.

This solves "issues" where different keyframes would
give the same reprojection error but produce scenes
with different scale, which could easily have been
considered a bad keyframe combination.

This change is essential for the automatic keyframe
selection algorithm to work reliably for the user.
2013-04-24 13:52:17 +00:00
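A sketch of the normalization (hypothetical Python, not the libmv code): compute the variance of the camera centers about their mean, then divide all positions by its square root so the rescaled centers have unit variance.

```python
import math

def normalize_scene_scale(camera_centers):
    """Rescale camera centers so their variance (about the mean, summed
    over x/y/z) is 1, removing the reconstruction's scale ambiguity."""
    n = len(camera_centers)
    mean = [sum(c[i] for c in camera_centers) / n for i in range(3)]
    var = sum((c[i] - mean[i]) ** 2
              for c in camera_centers for i in range(3)) / n
    s = 1.0 / math.sqrt(var)
    return [tuple(ci * s for ci in c) for c in camera_centers]
```

In the real pipeline the same scale factor would also be applied to the 3D points and camera translations, so the whole reconstruction stays consistent.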
0adc57ab37 Merging r56252 through r56260 from trunk into soc-2011-tomato 2013-04-24 13:51:19 +00:00
c8dd77aed4 Merging r56248 through r56251 from trunk into soc-2011-tomato 2013-04-23 20:28:52 +00:00
1e4dfb7514 Merging r55842 through r56247 from trunk into soc-2011-tomato 2013-04-23 18:59:33 +00:00
d32422f45b Merging r55495 through r55841 from trunk into soc-2011-tomato 2013-04-06 14:51:32 +00:00
ac6a32a70e Merging r55272 through r55494 from trunk into soc-2011-tomato 2013-03-22 08:45:45 +00:00
1c2258d12d Merging r55080 through r55271 from trunk into soc-2011-tomato 2013-03-14 09:55:05 +00:00
009921a173 Merging r55060 through r55079 from trunk into soc-2011-tomato 2013-03-06 18:17:01 +00:00
3d63371f13 Merging r54848 through r55059 from trunk into soc-2011-tomato 2013-03-05 18:05:18 +00:00
dc529e09c7 Oh bummer, it's too early for this patch to be in! 2013-02-25 10:19:02 +00:00
20ec34eabf Merging r54846 through r54847 from trunk into soc-2011-tomato 2013-02-25 10:14:40 +00:00
8e9ebf1653 Merging r54751 through r54845 from trunk into soc-2011-tomato 2013-02-25 10:11:22 +00:00
37605008d6 Merging r54570 through r54750 from trunk into soc-2011-tomato 2013-02-22 10:43:49 +00:00
1d7d0824a0 Motion tracking dopesheet
Highlight the background depending on the number of tracks existing
on a frame.

This is not a mathematically accurate display of where things
should be improved, but it's nice feedback about which frames had
better be reviewed.

Bad frames have tracks < 8, highlighted in red.
OK-ish frames have 8 <= tracks < 16, highlighted in yellow.

There could be some artifacts at color region starts/ends; it's a bit
unclear what exactly is expected to be highlighted -- frames are
displayed as dots, but in fact they're quite noticeable segments.
2013-02-15 09:02:17 +00:00
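The thresholds above, as a tiny sketch (hypothetical Python; the actual drawing code works on dopesheet channel segments):

```python
def frame_highlight(num_tracks):
    """Classify a frame by its track count, per the thresholds above."""
    if num_tracks < 8:
        return 'red'      # bad frame
    if num_tracks < 16:
        return 'yellow'   # OK-ish frame
    return None           # enough tracks, no highlight
```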
f6b797191d Merging r53628 through r54569 from trunk into soc-2011-tomato 2013-02-15 06:33:27 +00:00
8ed686d1fe Fixes for keyframe selection
- Calculate residuals for GRIC in pixel space rather than
  in normalized space.

  This seems to be how it's intended to be used.

  Algebraic H and F will still use normalized coordinates, which
  are more stable; after that the matrices are converted to pixel
  space and Ceres refinement happens in pixel space.

- The standard deviation calculation was wrong in GRIC. It shouldn't
  be the deviation of the residuals; as per Torr it should be the
  deviation of the measurement error, which is constant (in our
  case 0.1).

  Not sure if using a squared cost function is correct for GRIC,
  but the cost function is indeed squared, and in most papers the
  cost function is what's used for GRIC. After some further tests we
  could switch the GRIC residuals to non-squared distance.

- Brought back the rho part of GRIC; in unit tests it makes no
  difference whether it's enabled or not, let's see how it behaves
  on real-life footage.

- Reduced the minimal correspondence count to match real-world
  manually tracked footage.

- Returned the squares to SymmetricEpipolarDistance and
  SymmetricGeometricDistance -- these are actually cost
  functions, not distances, and they should be squared.
2013-01-07 12:20:51 +00:00
66424993b0 Merging r53625 through r53627 from trunk into soc-2011-tomato 2013-01-07 11:52:09 +00:00
59f723358f Merging r53212 through r53624 from trunk into soc-2011-tomato 2013-01-07 11:01:03 +00:00
838a1f8198 Tomato: synchronize with changes in libmv branch, with assorted fixes
- The biggest error was in the cost functors used for F and H
  refinement; they were just wrong.

- Use natural logarithms, since that actually makes sense from
  the math papers' point of view, and the error was somewhere else.

- Disabled rho for GRIC; for now use the non-clamped error for tests.

- Made SymmetricEpipolarDistance return a non-squared distance.
  Keyframe selection is currently the only user of this function,
  and using a non-squared distance seems to make much more sense.

  Also considering appending the suffix "Squared" to functions which
  return squared distances.

- Removed the templated version of SymmetricEpipolarDistance, since
  it's not actually needed.

- Changed the main keyframe selection cycle, so that when no
  further keyframe can be found for the current keyframe in the
  image sequence, the current keyframe is moved forward and the
  search continues.

  This helps in cases where there's poor motion at the beginning
  of the sequence and the markers then become occluded. There could
  still be good keyframes in the middle of the shot.

- Moved the correspondence constraint to the top, so no H/F
  estimation happens if the correspondence is bad. Makes the
  algorithm a bit faster.

Selection still behaves rather erratically; still working on this.
2012-12-20 19:40:30 +00:00
0402204026 Tomato: added missing files to template generator 2012-12-20 16:50:34 +00:00
3a6f138852 Merging r53203 through r53211 from trunk into soc-2011-tomato 2012-12-20 16:46:50 +00:00
54166177b9 Merging r52895 through r53202 from trunk into soc-2011-tomato 2012-12-20 11:46:58 +00:00
9f20813452 Camera tracking: fixes for automatic keyframe selection
Previously GRIC wasn't working because of a typo in getting the vector
dimension: they are column vectors, not row vectors.

This implied some additional changes to make the selector work nicer:

- The lambda coefficients in GRIC were changed from natural to base-10
  logarithms; otherwise the sums of the rho functions are much smaller
  than the constant part of GRIC.
  This also seems to correlate with papers, where log() usually means
  the base-10 logarithm.

- Limited the correspondence ratio from above, which gives better
  results because it usually means a longer baseline.

- Made logging more useful.
2012-12-11 17:25:59 +00:00
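Torr-style GRIC has roughly the shape GRIC = sum rho(e^2/sigma^2) + lambda1*d*n + lambda2*k, with rho clamped so outliers pay a bounded cost. A hedged Python sketch using the base-10 logs mentioned above (parameter choices are illustrative, not libmv's exact code):

```python
import math

def gric(residuals, sigma, d, k, r, n):
    """GRIC-style score (sketch, after Torr): clamped normalized squared
    residuals plus model-complexity penalties.
    d: model dimension, k: number of model parameters,
    r: data dimension, n: number of correspondences."""
    lambda1 = math.log10(r)
    lambda2 = math.log10(r * n)
    lambda3 = 2.0
    clamp = lambda3 * (r - d)  # rho's clamp bounds each residual's cost
    total = sum(min(e * e / (sigma * sigma), clamp) for e in residuals)
    return total + lambda1 * d * n + lambda2 * k
```

With natural logs the lambda penalties shrink relative to the residual sum, which is the imbalance this commit describes.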
ca127ed218 Merging r52852 through r52894 from trunk into soc-2011-tomato 2012-12-11 16:49:19 +00:00
b98042bca6 Camera tracking: fix for an incorrect two-keyframe selection criterion
Reprojection error of a solution should be calculated for tracks in
image space, not in normalized space.
2012-12-10 18:04:29 +00:00
5d1218c5eb Motion tracking: automatic keyframe selection
This commit contains an implementation of an automatic keyframe selection
algorithm based on Pollefeys's criteria (F-GRIC is smaller than H-GRIC and
the correspondence ratio is more than 90%). It is implemented as a part of
the simple pipeline and returns a vector of keyframe images for a given
Tracks structure.

For simple pipeline reconstruction, the two best keyframes are selected
from the list of all possible keyframes. The criterion for this selection
is the reprojection error of the solution from the two candidate keyframes.

On the Blender side, an option was added to the Solve panel to enable
Keyframe Selection. If this option is enabled, libmv will detect the two
best keyframes and use them for the solution. These keyframes are written
back to the interface when the solve is done.
2012-12-10 16:38:58 +00:00
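A hedged sketch of the candidate test (hypothetical Python; the real GRIC scores come from the F and H model fits inside libmv):

```python
def is_keyframe_candidate(gric_f, gric_h, num_correspondences, num_tracks,
                          ratio_threshold=0.9):
    """Pollefeys-style test: the fundamental matrix must explain the
    motion better than a homography (F-GRIC < H-GRIC, i.e. there is
    real parallax) and enough tracks must be shared between the two
    candidate frames (correspondence ratio above the threshold)."""
    ratio = num_correspondences / num_tracks
    return gric_f < gric_h and ratio > ratio_threshold
```

Pairs passing this test are then ranked by the reprojection error of the two-frame solution, per the commit message.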
16 changed files with 1229 additions and 52 deletions

View File

@@ -1094,4 +1094,15 @@ void libmv_homography2DFromCorrespondencesEuc(double (*x1)[2], double (*x2)[2],
memcpy(H, H_mat.data(), 9 * sizeof(double));
}
void libmv_applyInverseCanonicalHomography(double x, double y,
const double *xs, const double *ys,
int num_samples_x, int num_samples_y,
double *warped_position_x,
double *warped_position_y)
{
libmv::ApplyInverseCanonicalHomography(x, y, xs, ys,
num_samples_x, num_samples_y,
warped_position_x, warped_position_y);
}
#endif

View File

@@ -158,6 +158,12 @@ void libmv_cameraIntrinsicsInvert(const libmv_CameraIntrinsicsOptions *libmv_cam
void libmv_homography2DFromCorrespondencesEuc(double (*x1)[2], double (*x2)[2], int num_points, double H[3][3]);
void libmv_applyInverseCanonicalHomography(double x, double y,
const double *xs, const double *ys,
int num_samples_x, int num_samples_y,
double *warped_position_x,
double *warped_position_y);
#ifdef __cplusplus
}
#endif

View File

@@ -286,4 +286,14 @@ void libmv_homography2DFromCorrespondencesEuc(double (* /* x1 */)[2], double (*
H[2][2] = 1.0f;
}
void libmv_applyInverseCanonicalHomography(double x, double y,
const double *xs, const double *ys,
int num_samples_x, int num_samples_y,
double *warped_position_x,
double *warped_position_y)
{
*warped_position_x = (double) num_samples_x * 0.5;
*warped_position_y = (double) num_samples_y * 0.5;
}
#endif // ifndef WITH_LIBMV

View File

@@ -1560,4 +1560,22 @@ bool SamplePlanarPatch(const FloatImage &image,
return true;
}
void ApplyInverseCanonicalHomography(const double x, const double y,
const double *xs, const double *ys,
int num_samples_x, int num_samples_y,
double *warped_position_x,
double *warped_position_y) {
//int num_samples_x, num_samples_y;
//PickSampling(xs, ys, xs, ys, &num_samples_x, &num_samples_y);
// Compute the warp from rectangular coordinates.
Mat3 canonical_homography = ComputeCanonicalHomography(xs, ys,
num_samples_x,
num_samples_y);
Vec3 warped_position = canonical_homography.inverse() * Vec3(x, y, 1);
*warped_position_x = warped_position(0) / warped_position(2);
*warped_position_y = warped_position(1) / warped_position(2);
}
} // namespace libmv

View File

@@ -149,6 +149,12 @@ bool SamplePlanarPatch(const FloatImage &image,
FloatImage *mask, FloatImage *patch,
double *warped_position_x, double *warped_position_y);
void ApplyInverseCanonicalHomography(const double x, const double y,
const double *xs, const double *ys,
int num_samples_x, int num_samples_y,
double *warped_position_x,
double *warped_position_y);
} // namespace libmv
#endif // LIBMV_TRACKING_TRACK_REGION_H_

View File

@@ -267,6 +267,7 @@ class MASK_PT_tools():
col.operator("mask.shape_key_insert")
col.operator("mask.shape_key_feather_reset")
col.operator("mask.shape_key_rekey")
col.operator("mask.shape_key_cleanup")
class MASK_MT_mask(Menu):
@@ -326,6 +327,7 @@ class MASK_MT_animation(Menu):
layout.operator("mask.shape_key_insert")
layout.operator("mask.shape_key_feather_reset")
layout.operator("mask.shape_key_rekey")
layout.operator("mask.shape_key_cleanup")
class MASK_MT_select(Menu):

View File

@@ -198,6 +198,54 @@ class CLIP_PT_reconstruction_panel:
return clip and sc.mode == 'RECONSTRUCTION' and sc.view == 'CLIP'
def _draw_default_tracker_settings(layout, settings):
col = layout.column()
row = col.row(align=True)
label = CLIP_MT_tracking_settings_presets.bl_label
row.menu('CLIP_MT_tracking_settings_presets', text=label)
row.operator("clip.tracking_settings_preset_add",
text="", icon='ZOOMIN')
props = row.operator("clip.tracking_settings_preset_add",
text="", icon='ZOOMOUT')
props.remove_active = True
col.separator()
row = col.row(align=True)
row.prop(settings, "use_default_red_channel",
text="R", toggle=True)
row.prop(settings, "use_default_green_channel",
text="G", toggle=True)
row.prop(settings, "use_default_blue_channel",
text="B", toggle=True)
col.separator()
sub = col.column(align=True)
sub.prop(settings, "default_pattern_size")
sub.prop(settings, "default_search_size")
col.label(text="Tracker:")
col.prop(settings, "default_motion_model")
col.prop(settings, "use_default_brute")
col.prop(settings, "use_default_normalization")
col.prop(settings, "use_default_mask")
col.prop(settings, "default_correlation_min")
col.separator()
sub = col.column(align=True)
sub.prop(settings, "default_frames_limit")
sub.prop(settings, "default_margin")
col.label(text="Match:")
col.prop(settings, "default_pattern_match", text="")
col.separator()
col.operator("clip.track_settings_as_default",
text="Copy From Active Track")
class CLIP_PT_tools_marker(CLIP_PT_tracking_panel, Panel):
bl_space_type = 'CLIP_EDITOR'
bl_region_type = 'TOOLS'
@@ -221,50 +269,27 @@ class CLIP_PT_tools_marker(CLIP_PT_tracking_panel, Panel):
row.label(text="Tracking Settings")
if settings.show_default_expanded:
col = box.column()
row = col.row(align=True)
label = CLIP_MT_tracking_settings_presets.bl_label
row.menu('CLIP_MT_tracking_settings_presets', text=label)
row.operator("clip.tracking_settings_preset_add",
text="", icon='ZOOMIN')
row.operator("clip.tracking_settings_preset_add",
text="", icon='ZOOMOUT').remove_active = True
_draw_default_tracker_settings(box, settings)

class CLIP_PT_tools_default_tracking_settings(Panel):
    bl_space_type = 'CLIP_EDITOR'
    bl_region_type = 'TOOLS'
    bl_label = "Default Settings"

    @classmethod
    def poll(cls, context):
        sc = context.space_data
        clip = sc.clip

        return clip and sc.mode == 'MASK'

    def draw(self, context):
        layout = self.layout
        sc = context.space_data
        clip = sc.clip
        settings = clip.tracking.settings

        _draw_default_tracker_settings(layout, settings)

class CLIP_PT_tools_tracking(CLIP_PT_tracking_panel, Panel):

View File

@@ -52,6 +52,11 @@ struct rcti;
/* **** Common functions **** */
void BKE_frame_unified_to_search_pixel(struct MovieTrackingMarker *marker,
int frame_width, int frame_height,
float unified_x, float unified_y,
float *search_x, float *search_y);
void BKE_tracking_free(struct MovieTracking *tracking);
void BKE_tracking_settings_init(struct MovieTracking *tracking);
@@ -213,6 +218,22 @@ void BKE_tracking_context_sync_user(const struct MovieTrackingContext *context,
bool BKE_tracking_context_step(struct MovieTrackingContext *context);
void BKE_tracking_context_finish(struct MovieTrackingContext *context);
bool BKE_tracking_track_region(const struct MovieTrackingTrack *track,
const struct MovieTrackingMarker *old_marker,
const struct ImBuf *old_search_ibuf,
const struct ImBuf *new_search_ibuf,
const int frame_width,
const int frame_height,
float *mask,
struct MovieTrackingMarker *new_marker);
void BKE_tracking_apply_inverse_homography(const struct MovieTrackingMarker *marker,
int frame_width, int frame_height,
float x, float y,
int num_samples_x, int num_samples_y,
float *warped_position_x,
float *warped_position_y);
void BKE_tracking_refine_marker(struct MovieClip *clip, struct MovieTrackingTrack *track, struct MovieTrackingMarker *marker, bool backwards);
/* **** Plane tracking **** */

View File

@@ -579,8 +579,10 @@ void BKE_bpath_traverse_id(Main *bmain, ID *id, BPathVisitor visit_cb, const int
{
if (SEQ_HAS_PATH(seq)) {
if (ELEM(seq->type, SEQ_TYPE_MOVIE, SEQ_TYPE_SOUND_RAM)) {
if (seq->strip->stripdata) {
rewrite_path_fixed_dirfile(seq->strip->dir, seq->strip->stripdata->name,
visit_cb, absbase, bpath_user_data);
}
}
else if (seq->type == SEQ_TYPE_IMAGE) {
/* might want an option not to loop over all strips */

View File

@@ -463,6 +463,24 @@ static void set_marker_coords_from_tracking(int frame_width, int frame_height, M
marker->pos[1] += marker_unified[1];
}
void BKE_frame_unified_to_search_pixel(MovieTrackingMarker *marker,
int frame_width, int frame_height,
float unified_x, float unified_y,
float *search_x, float *search_y)
{
float unified_coords[2];
float pixel_coords[2];
unified_coords[0] = unified_x;
unified_coords[1] = unified_y;
marker_unified_to_search_pixel(frame_width, frame_height, marker,
unified_coords, pixel_coords);
*search_x = pixel_coords[0];
*search_y = pixel_coords[1];
}
/*********************** clipboard *************************/
/* Free clipboard by freeing memory used by all tracks in it. */
@@ -2674,6 +2692,24 @@ static void uint8_rgba_to_float_gray(const unsigned char *rgba, float *gray, int
}
}
static float *imbuf_to_grayscale_pixels(const ImBuf *ibuf)
{
float *gray_pixels;
gray_pixels = MEM_mallocN(ibuf->x * ibuf->y * sizeof(float), "tracking floatBuf");
if (ibuf->rect_float) {
float_rgba_to_gray(ibuf->rect_float, gray_pixels, ibuf->x * ibuf->y,
0.2126f, 0.7152f, 0.0722f);
}
else {
uint8_rgba_to_float_gray((unsigned char *)ibuf->rect, gray_pixels, ibuf->x * ibuf->y,
0.2126f, 0.7152f, 0.0722f);
}
return gray_pixels;
}
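
The conversion above weights the R, G and B channels with the Rec. 709 luma coefficients (0.2126, 0.7152, 0.0722). A minimal standalone sketch of the same weighting; the function name is illustrative, not the Blender API:

```c
#include <assert.h>

/* Rec. 709 luma weighting as used by imbuf_to_grayscale_pixels():
 * four floats (RGBA) in per pixel, one gray float out.
 * Illustrative helper, not part of the patch. */
static void rgba_to_gray(const float *rgba, float *gray, int num_pixels,
                         float weight_red, float weight_green, float weight_blue)
{
	int i;
	for (i = 0; i < num_pixels; i++) {
		const float *pixel = rgba + 4 * i;
		gray[i] = weight_red   * pixel[0] +
		          weight_green * pixel[1] +
		          weight_blue  * pixel[2];
	}
}
```

Since the three weights sum to one, a pure white pixel maps to 1.0 and a pure green pixel to 0.7152.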
/* Get grayscale float search buffer for given marker and frame. */
static float *track_get_search_floatbuf(ImBuf *ibuf, MovieTrackingTrack *track, MovieTrackingMarker *marker,
int *width_r, int *height_r)
@@ -2693,17 +2729,7 @@ static float *track_get_search_floatbuf(ImBuf *ibuf, MovieTrackingTrack *track,
width = searchibuf->x;
height = searchibuf->y;
gray_pixels = imbuf_to_grayscale_pixels(searchibuf);
IMB_freeImBuf(searchibuf);
*width_r = width;
@@ -3233,6 +3259,86 @@ void BKE_tracking_refine_marker(MovieClip *clip, MovieTrackingTrack *track, Movi
IMB_freeImBuf(destination_ibuf);
}
bool BKE_tracking_track_region(const MovieTrackingTrack *track,
const MovieTrackingMarker *old_marker,
const ImBuf *old_search_ibuf,
const ImBuf *new_search_ibuf,
const int frame_width,
const int frame_height,
float *mask,
MovieTrackingMarker *new_marker)
{
/* To convert to the x/y split array format for libmv. */
double src_pixel_x[5], src_pixel_y[5];
double dst_pixel_x[5], dst_pixel_y[5];
/* Settings for the tracker */
libmv_TrackRegionOptions options = {0};
libmv_TrackRegionResult result;
float *old_search, *new_search;
bool tracked;
/* configure the tracker */
tracking_configure_tracker(track, mask, &options);
/* Convert the marker corners and center into
* pixel coordinates in the search/destination images.
*/
get_marker_coords_for_tracking(frame_width, frame_height,
old_marker, src_pixel_x, src_pixel_y);
get_marker_coords_for_tracking(frame_width, frame_height,
new_marker, dst_pixel_x, dst_pixel_y);
old_search = imbuf_to_grayscale_pixels(old_search_ibuf);
new_search = imbuf_to_grayscale_pixels(new_search_ibuf);
/* run the tracker! */
tracked = libmv_trackRegion(&options,
old_search,
old_search_ibuf->x,
old_search_ibuf->y,
new_search,
new_search_ibuf->x,
new_search_ibuf->y,
src_pixel_x, src_pixel_y,
&result,
dst_pixel_x, dst_pixel_y);
set_marker_coords_from_tracking(frame_width, frame_height,
new_marker,
dst_pixel_x, dst_pixel_y);
MEM_freeN(old_search);
MEM_freeN(new_search);
return tracked;
}
void BKE_tracking_apply_inverse_homography(const MovieTrackingMarker *marker,
int frame_width, int frame_height,
float x, float y,
int num_samples_x, int num_samples_y,
float *warped_position_x,
float *warped_position_y)
{
double xs[5], ys[5];
double warped_position_x_double,
warped_position_y_double;
get_marker_coords_for_tracking(frame_width, frame_height, marker, xs, ys);
libmv_applyInverseCanonicalHomography(x - 0.5f, y - 0.5f,
xs, ys,
num_samples_x,
num_samples_y,
&warped_position_x_double,
&warped_position_y_double);
*warped_position_x = warped_position_x_double;
*warped_position_y = warped_position_y_double;
}
/*********************** Plane tracking *************************/
typedef double Vec2[2];

View File

@@ -450,6 +450,7 @@ void ED_operatortypes_mask(void)
WM_operatortype_append(MASK_OT_shape_key_clear);
WM_operatortype_append(MASK_OT_shape_key_feather_reset);
WM_operatortype_append(MASK_OT_shape_key_rekey);
WM_operatortype_append(MASK_OT_shape_key_cleanup);
/* layers */
WM_operatortype_append(MASK_OT_layer_move);

View File

@@ -114,5 +114,6 @@ void MASK_OT_shape_key_insert(struct wmOperatorType *ot);
void MASK_OT_shape_key_clear(struct wmOperatorType *ot);
void MASK_OT_shape_key_feather_reset(struct wmOperatorType *ot);
void MASK_OT_shape_key_rekey(struct wmOperatorType *ot);
void MASK_OT_shape_key_cleanup(struct wmOperatorType *ot);
#endif /* __MASK_INTERN_H__ */

View File

@@ -451,3 +451,111 @@ int ED_mask_layer_shape_auto_key_select(Mask *mask, const int frame)
return change;
}
/******************** shape key cleanup operator *********************/
static int mask_shape_key_cleanup_exec(bContext *C, wmOperator *op)
{
Mask *mask = CTX_data_edit_mask(C);
MaskLayer *mask_layer = BKE_mask_layer_active(mask);
MaskLayerShape *current_shape, *next_shape;
int removed_count = 0;
const float tolerance = RNA_float_get(op->ptr, "tolerance");
const float tolerance_squared = tolerance * tolerance;
int width, height;
float side;
if (mask_layer == NULL) {
return OPERATOR_CANCELLED;
}
ED_mask_get_size(CTX_wm_area(C), &width, &height);
side = max_ff(width, height);
for (current_shape = mask_layer->splines_shapes.first;
current_shape;
current_shape = next_shape)
{
MaskLayerShape *previous_shape = current_shape->prev;
MaskLayerShapeElem *previous_elements, *current_elements, *next_elements;
int i;
float interpolation_factor, inv_interpolation_factor;
float average_error;
next_shape = current_shape->next;
if (previous_shape == NULL || next_shape == NULL) {
continue;
}
previous_elements = (MaskLayerShapeElem *) previous_shape->data;
current_elements = (MaskLayerShapeElem *) current_shape->data;
next_elements = (MaskLayerShapeElem *) next_shape->data;
if (previous_shape->tot_vert != current_shape->tot_vert ||
previous_shape->tot_vert != next_shape->tot_vert ||
current_shape->tot_vert != next_shape->tot_vert)
{
printf("%s: unexpected mismatch between shape vertex counts.\n", __func__);
continue;
}
interpolation_factor = (float)(current_shape->frame - previous_shape->frame) /
(float)(next_shape->frame - previous_shape->frame);
inv_interpolation_factor = 1.0f - interpolation_factor;
average_error = 0.0f;
for (i = 0; i < current_shape->tot_vert; i++) {
int j;
for (j = 0; j < MASK_OBJECT_SHAPE_ELEM_SIZE; j++) {
float interpolated_value;
float current_error;
interpolated_value = inv_interpolation_factor * previous_elements[i].value[j] +
interpolation_factor * next_elements[i].value[j];
current_error = (current_elements[i].value[j] - interpolated_value) * side;
average_error += current_error * current_error;
}
}
average_error /= (float) current_shape->tot_vert;
if (average_error < tolerance_squared) {
BLI_remlink(&mask_layer->splines_shapes, current_shape);
BKE_mask_layer_shape_free(current_shape);
removed_count++;
}
}
if (removed_count > 0) {
WM_event_add_notifier(C, NC_MASK | ND_DATA, mask);
DAG_id_tag_update(&mask->id, OB_RECALC_DATA);
}
BKE_reportf(op->reports, RPT_INFO, "Removed %d keys", removed_count);
return OPERATOR_FINISHED;
}
void MASK_OT_shape_key_cleanup(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Cleanup Shape Keys";
ot->description = "Remove keyframes which don't have a significant effect on the mask shape";
ot->idname = "MASK_OT_shape_key_cleanup";
/* api callbacks */
ot->exec = mask_shape_key_cleanup_exec;
ot->poll = ED_maskedit_mask_poll;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
/* properties */
RNA_def_float(ot->srna, "tolerance", 0.8f, -FLT_MAX, FLT_MAX,
"Tolerance", "Average distance in pixels which the mask is allowed to "
"slide off after removing a shape key",
-100.0f, 100.0f);
}
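
The removal criterion used by the operator above boils down to a per-key test: interpolate each value linearly between the neighbouring keys and measure the squared deviation, scaled to pixels by the larger frame dimension. A standalone sketch of that test; the helper name is illustrative, not part of the patch:

```c
#include <assert.h>

/* Redundancy test from mask_shape_key_cleanup_exec(): a key at
 * current_frame is a candidate for removal when its value barely deviates
 * from the linear interpolation of its neighbours. `side` is the larger
 * frame dimension, converting normalized error to pixels. */
static float interpolation_error_squared(float previous_value, float current_value,
                                         float next_value,
                                         int previous_frame, int current_frame,
                                         int next_frame, float side)
{
	float factor = (float)(current_frame - previous_frame) /
	               (float)(next_frame - previous_frame);
	float interpolated_value = (1.0f - factor) * previous_value +
	                           factor * next_value;
	float error = (current_value - interpolated_value) * side;
	return error * error;
}
```

A key lying exactly on the line between its neighbours yields zero error; a 0.01 normalized deviation on a 100-pixel frame yields an error of one pixel squared.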

View File

@@ -195,6 +195,8 @@ void CLIP_OT_tracking_object_remove(struct wmOperatorType *ot);
void CLIP_OT_copy_tracks(struct wmOperatorType *ot);
void CLIP_OT_paste_tracks(struct wmOperatorType *ot);
void CLIP_OT_track_mask(struct wmOperatorType *ot);
void CLIP_OT_create_plane_track(struct wmOperatorType *ot);
void CLIP_OT_slide_plane_marker(struct wmOperatorType *ot);

View File

@@ -519,6 +519,9 @@ static void clip_operatortypes(void)
WM_operatortype_append(CLIP_OT_copy_tracks);
WM_operatortype_append(CLIP_OT_paste_tracks);
/* test */
WM_operatortype_append(CLIP_OT_track_mask);
/* Plane tracker */
WM_operatortype_append(CLIP_OT_create_plane_track);
WM_operatortype_append(CLIP_OT_slide_plane_marker);
@@ -576,6 +579,20 @@ static void clip_keymap(struct wmKeyConfig *keyconf)
RNA_boolean_set(kmi->ptr, "backwards", TRUE);
RNA_boolean_set(kmi->ptr, "sequence", TRUE);
/* Mask tracking */
kmi = WM_keymap_add_item(keymap, "CLIP_OT_track_mask", LEFTARROWKEY, KM_PRESS, KM_ALT, 0);
RNA_boolean_set(kmi->ptr, "backwards", TRUE);
RNA_boolean_set(kmi->ptr, "sequence", FALSE);
kmi = WM_keymap_add_item(keymap, "CLIP_OT_track_mask", RIGHTARROWKEY, KM_PRESS, KM_ALT, 0);
RNA_boolean_set(kmi->ptr, "backwards", FALSE);
RNA_boolean_set(kmi->ptr, "sequence", FALSE);
kmi = WM_keymap_add_item(keymap, "CLIP_OT_track_mask", TKEY, KM_PRESS, KM_CTRL, 0);
RNA_boolean_set(kmi->ptr, "backwards", FALSE);
RNA_boolean_set(kmi->ptr, "sequence", TRUE);
kmi = WM_keymap_add_item(keymap, "CLIP_OT_track_mask", TKEY, KM_PRESS, KM_SHIFT | KM_CTRL, 0);
RNA_boolean_set(kmi->ptr, "backwards", TRUE);
RNA_boolean_set(kmi->ptr, "sequence", TRUE);
/* mode */
WM_keymap_add_menu(keymap, "CLIP_MT_select_mode", TABKEY, KM_PRESS, 0, 0);

View File

@@ -3756,6 +3756,847 @@ void CLIP_OT_paste_tracks(wmOperatorType *ot)
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}
/********************** Track mask operator *********************/
typedef struct TrackMaskContext {
Main *bmain;
Scene *scene;
Mask *mask;
/* Layer of the mask whose splines are being tracked.
* This is the actual mask layer for single-frame
* tracking, and a copy of that layer for sequence
* tracking, which happens in a separate thread.
*/
MaskLayer *mask_layer;
/* Clip the mask is tracked against. */
MovieClip *clip;
/* Runtime user used to acquire original frames from a clip. */
MovieClipUser clip_user;
/* Tracking operator settings. */
bool backwards;
bool sequence;
/* For sequence tracking, indicates how many frames
* (or tracking steps) are needed.
*/
int steps;
/* Last frame number for which a keyframe is known to be set. */
int last_keyframe_inserted;
MaskSpline *active_mask_spline;
int active_point_index;
/* Delay in milliseconds between tracking of consecutive frames. */
float delay;
} TrackMaskContext;
static void configure_dummy_track(const MovieTracking *tracking,
MovieTrackingTrack *track)
{
const MovieTrackingSettings *settings = &tracking->settings;
/* Fill track's settings from default tracking settings. */
track->motion_model = settings->default_motion_model;
track->minimum_correlation = settings->default_minimum_correlation;
track->margin = settings->default_margin;
track->pattern_match = settings->default_pattern_match;
track->frames_limit = settings->default_frames_limit;
track->flag = settings->default_flag;
track->algorithm_flag = settings->default_algorithm_flag;
/* We always use a mask for the track. */
track->algorithm_flag |= TRACK_ALGORITHM_FLAG_USE_MASK;
}
static void compute_spline_points_boundbox(MaskSpline *mask_spline,
float (*diff_points)[2],
unsigned int tot_diff_point,
float corner_min[2],
float corner_max[2])
{
MaskSplinePoint *spline_points, *current_spline_point;
int i;
float *current_diff_point;
INIT_MINMAX2(corner_min, corner_max);
spline_points = mask_spline->points;
for (i = 0, current_spline_point = spline_points;
i < mask_spline->tot_point;
i++, current_spline_point++)
{
BezTriple *bezt = &current_spline_point->bezt;
DO_MINMAX2(bezt->vec[0], corner_min, corner_max);
DO_MINMAX2(bezt->vec[1], corner_min, corner_max);
DO_MINMAX2(bezt->vec[2], corner_min, corner_max);
}
for (i = 0, current_diff_point = diff_points[0];
i < tot_diff_point;
i++, current_diff_point += 2)
{
DO_MINMAX2(current_diff_point, corner_min, corner_max);
}
}
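
The INIT_MINMAX2 / DO_MINMAX2 macros above fold each 2D point into a running min/max pair. A plain-C equivalent of that accumulation (illustrative, not the Blender macros):

```c
#include <assert.h>
#include <float.h>

/* Plain-C equivalent of the min/max accumulation performed by
 * compute_spline_points_boundbox(). */
static void minmax_v2_init(float corner_min[2], float corner_max[2])
{
	corner_min[0] = corner_min[1] = FLT_MAX;
	corner_max[0] = corner_max[1] = -FLT_MAX;
}

static void minmax_v2(const float point[2], float corner_min[2], float corner_max[2])
{
	int i;
	for (i = 0; i < 2; i++) {
		if (point[i] < corner_min[i]) corner_min[i] = point[i];
		if (point[i] > corner_max[i]) corner_max[i] = point[i];
	}
}
```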
static bool configure_dummy_marker(MovieClip *clip, MovieClipUser *user,
MaskSpline *mask_spline,
MovieTrackingMarker *marker)
{
float (*diff_points)[2];
unsigned int tot_diff_point;
float spline_corner_min[2], spline_corner_max[2];
diff_points = BKE_mask_spline_differentiate(mask_spline,
&tot_diff_point);
if (diff_points == NULL) {
/* Failed to rasterize spline, or spline is empty. */
return false;
}
/* Compute bounding box of the spline */
/* TODO(sergey): consider taking feather into account */
compute_spline_points_boundbox(mask_spline,
diff_points,
tot_diff_point,
spline_corner_min,
spline_corner_max);
/* Convert corner coordinates from mask's square space
* to movie clip frame space.
*/
BKE_mask_coord_to_movieclip(clip, user,
spline_corner_min,
spline_corner_min);
BKE_mask_coord_to_movieclip(clip, user,
spline_corner_max,
spline_corner_max);
/* Marker position is at the center of spline bounding box. */
interp_v2_v2v2(marker->pos,
spline_corner_min,
spline_corner_max,
0.5f);
/* Marker corners match the bounding box.
*
* Could be useful to store the previous homography
* for more accurate tracking.
*/
sub_v2_v2v2(marker->pattern_corners[0],
spline_corner_min,
marker->pos);
sub_v2_v2v2(marker->pattern_corners[2],
spline_corner_max,
marker->pos);
marker->pattern_corners[1][0] = marker->pattern_corners[2][0];
marker->pattern_corners[1][1] = marker->pattern_corners[0][1];
marker->pattern_corners[3][0] = marker->pattern_corners[0][0];
marker->pattern_corners[3][1] = marker->pattern_corners[2][1];
/* Search is 1.5 times larger than pattern for now. */
mul_v2_v2fl(marker->search_min, marker->pattern_corners[0], 1.5f);
mul_v2_v2fl(marker->search_max, marker->pattern_corners[2], 1.5f);
MEM_freeN(diff_points);
return true;
}
static void convert_delta_to_mask_space(MovieClip *clip, MovieClipUser *user,
const float old_pos[2], const float new_pos[2],
float delta[2])
{
float old_pos_in_mask_space[2], new_pos_in_mask_space[2];
BKE_mask_coord_from_movieclip(clip, user, old_pos_in_mask_space, old_pos);
BKE_mask_coord_from_movieclip(clip, user, new_pos_in_mask_space, new_pos);
sub_v2_v2v2(delta, new_pos_in_mask_space, old_pos_in_mask_space);
}
static void configure_dummy_mask(const Mask *orig_mask,
MaskSpline *mask_spline,
Mask *dummy_mask)
{
MaskLayer *dummy_mask_layer;
MaskSpline *dummy_mask_spline;
/* Use original mask as a reference for all the settings. */
*dummy_mask = *orig_mask;
/* And use own layer with single spline only. */
dummy_mask->masklayers.first = dummy_mask->masklayers.last = NULL;
dummy_mask_layer = BKE_mask_layer_new(dummy_mask, "Dummy Layer");
dummy_mask_spline = BKE_mask_spline_copy(mask_spline);
BLI_addtail(&dummy_mask_layer->splines, dummy_mask_spline);
}
static void free_dummy_mask(Mask *dummy_mask)
{
BKE_mask_layer_free_list(&dummy_mask->masklayers);
}
static ImBuf *get_rasterized_mask_spline_buffer(const Mask *mask,
MaskSpline *mask_spline,
int frame_width,
int frame_height)
{
Mask dummy_mask;
MaskRasterHandle *mask_rasterizer_handle;
ImBuf *rasterized_ibuf;
float *rasterized_buffer;
int i;
configure_dummy_mask(mask, mask_spline, &dummy_mask);
rasterized_buffer = MEM_mallocN(sizeof(float) * frame_width * frame_height,
"rasterized spline buffer");
mask_rasterizer_handle = BKE_maskrasterize_handle_new();
BKE_maskrasterize_handle_init(mask_rasterizer_handle, &dummy_mask,
frame_width, frame_height,
TRUE, TRUE, TRUE);
BKE_maskrasterize_buffer(mask_rasterizer_handle,
frame_width, frame_height,
rasterized_buffer);
BKE_maskrasterize_handle_free(mask_rasterizer_handle);
rasterized_ibuf = IMB_allocImBuf(frame_width, frame_height,
32, IB_rectfloat);
for (i = 0; i < frame_width * frame_height; i++) {
float *pixel = rasterized_ibuf->rect_float + 4 * i;
pixel[0] = pixel[1] = pixel[2] = rasterized_buffer[i];
pixel[3] = 1.0f;
}
MEM_freeN(rasterized_buffer);
free_dummy_mask(&dummy_mask);
return rasterized_ibuf;
}
static float *tracking_mask_from_ibuf(ImBuf *ibuf)
{
float *mask = MEM_mallocN(sizeof(float) * ibuf->x * ibuf->y, "mask tracking mask");
int i;
for (i = 0; i < ibuf->x * ibuf->y; i++) {
float *pixel = ibuf->rect_float + ibuf->channels * i;
mask[i] = pixel[0];
}
return mask;
}
static void deform_spline_points_by_marker(MovieClip *clip, MovieClipUser *user,
MaskSpline *mask_spline,
MovieTrackingMarker *old_marker,
MovieTrackingMarker *new_marker)
{
MaskSplinePoint *spline_points, *current_spline_point;
int i, frame_width, frame_height;
float corners_delta[4][2];
BKE_movieclip_get_size(clip, user, &frame_width, &frame_height);
/* First we compute delta of all pattern corners in mask space. */
for (i = 0; i < 4; i++) {
float old_corner[2], new_corner[2];
add_v2_v2v2(old_corner,
old_marker->pattern_corners[i],
old_marker->pos);
add_v2_v2v2(new_corner,
new_marker->pattern_corners[i],
new_marker->pos);
convert_delta_to_mask_space(clip, user,
old_corner, new_corner,
corners_delta[i]);
}
/* Then we iterate over all mask spline points, moving them
* by an interpolated coordinate delta.
*/
spline_points = mask_spline->points;
for (i = 0, current_spline_point = spline_points;
i < mask_spline->tot_point;
i++, current_spline_point++)
{
BezTriple *bezt = &current_spline_point->bezt;
int j;
for (j = 0; j < 3; j++) {
float delta[2];
float pos_in_marker_space[2];
float pos_in_search_space[2];
float pos_in_homography_space[2];
float left_side_point[2],
right_side_point[2];
/* Compute spline point coordinate relative to marker's center.
* This is still in normalized space.
*/
BKE_mask_coord_to_movieclip(clip, user,
pos_in_marker_space,
bezt->vec[j]);
sub_v2_v2(pos_in_marker_space, old_marker->pos);
/* Then convert coordinate from normalized space to a pixel
* space relative to marker search area origin.
*/
BKE_frame_unified_to_search_pixel(old_marker,
frame_width, frame_height,
pos_in_marker_space[0],
pos_in_marker_space[1],
&pos_in_search_space[0],
&pos_in_search_space[1]);
/* And finally apply an inverse homography to get the spline point
* coordinate in pattern patch coordinates.
* Using a sample count of 1 so these coordinates can be
* used as weights for bilinear interpolation.
*/
BKE_tracking_apply_inverse_homography(old_marker,
frame_width, frame_height,
pos_in_search_space[0],
pos_in_search_space[1],
1, 1,
&pos_in_homography_space[0],
&pos_in_homography_space[1]);
/* Do bilinear interpolation. */
interp_v2_v2v2(left_side_point,
corners_delta[0],
corners_delta[1],
pos_in_homography_space[1]);
interp_v2_v2v2(right_side_point,
corners_delta[1],
corners_delta[2],
pos_in_homography_space[1]);
interp_v2_v2v2(delta,
left_side_point,
right_side_point,
pos_in_homography_space[0]);
/* Apply delta on bezier point/handle. */
add_v2_v2(bezt->vec[j], delta);
}
}
/* Need to update handles after points were deformed. */
for (i = 0; i < mask_spline->tot_point; i++) {
BKE_mask_calc_handle_point(mask_spline, &spline_points[i]);
}
}
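
The bilinear blend above interpolates the corner deltas along v on two edges and then along u between the results. A self-contained sketch of that step, where interp2() stands in for Blender's interp_v2_v2v2() and the corner indices mirror the code above:

```c
#include <assert.h>

/* (1 - t) * a + t * b per component, the same convention
 * as Blender's interp_v2_v2v2(). */
static void interp2(float target[2], const float a[2], const float b[2], float t)
{
	target[0] = (1.0f - t) * a[0] + t * b[0];
	target[1] = (1.0f - t) * a[1] + t * b[1];
}

/* Bilinear blend of per-corner deltas as done in
 * deform_spline_points_by_marker(); (u, v) are the point's
 * pattern-space coordinates. Illustrative helper, not the Blender API. */
static void bilinear_corner_delta(const float corners_delta[4][2],
                                  float u, float v, float delta[2])
{
	float left_side_point[2], right_side_point[2];
	interp2(left_side_point, corners_delta[0], corners_delta[1], v);
	interp2(right_side_point, corners_delta[1], corners_delta[2], v);
	interp2(delta, left_side_point, right_side_point, u);
}
```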
static bool mask_track_context_init(const bContext *C,
const wmOperator *op,
bool duplicate_layer,
TrackMaskContext *track_mask_context)
{
SpaceClip *sc = CTX_wm_space_clip(C);
Mask *mask = CTX_data_edit_mask(C);
MaskLayer *mask_layer = BKE_mask_layer_active(mask);
Scene *scene = CTX_data_scene(C);
if (mask == NULL || mask_layer == NULL) {
return false;
}
BKE_mask_evaluate(mask, sc->user.framenr, TRUE);
if (mask_layer->act_spline && mask_layer->act_point) {
track_mask_context->active_point_index =
mask_layer->act_point - mask_layer->act_spline->points;
}
else {
track_mask_context->active_point_index = -1;
}
if (duplicate_layer) {
int active_spline_index =
BLI_findindex(&mask_layer->splines, mask_layer->act_spline);
mask_layer = BKE_mask_layer_copy(mask_layer);
track_mask_context->active_mask_spline =
BLI_findlink(&mask_layer->splines, active_spline_index);
}
else {
track_mask_context->active_mask_spline = mask_layer->act_spline;
}
/* We don't need the deformed spline hanging around.
*
* This is because differentiation will use deformed
* points if they are present, which is something we
* don't want to happen (since deformed points are
* not updated during tracking).
*/
BKE_mask_layer_free_deform(mask_layer);
track_mask_context->bmain = CTX_data_main(C);
track_mask_context->scene = scene;
track_mask_context->mask = mask;
track_mask_context->mask_layer = mask_layer;
track_mask_context->clip = ED_space_clip_get_clip(sc);
track_mask_context->clip_user = sc->user;
track_mask_context->backwards = RNA_boolean_get(op->ptr, "backwards");
track_mask_context->sequence = RNA_boolean_get(op->ptr, "sequence");
track_mask_context->delay = 1.0f / scene->r.frs_sec * 1000.0f;
if (track_mask_context->sequence) {
if (track_mask_context->backwards) {
track_mask_context->steps = CFRA - SFRA;
}
else {
track_mask_context->steps = EFRA - CFRA;
}
}
else {
track_mask_context->steps = 1;
}
return true;
}
static bool track_mask_step(TrackMaskContext *track_mask_context)
{
Mask *mask = track_mask_context->mask;
MaskLayer *mask_layer = track_mask_context->mask_layer;
MaskSpline *mask_spline;
MovieClip *clip = track_mask_context->clip;
MovieClipUser *user = &track_mask_context->clip_user;
MovieTracking *tracking = &clip->tracking;
ImBuf *old_ibuf, *new_ibuf;
int frame_delta = track_mask_context->backwards ? -1 : 1;
int clip_flag = clip->flag & MCLIP_TIMECODE_FLAGS;
/* Image buffer we're tracking from */
old_ibuf = BKE_movieclip_get_ibuf_flag(clip, user, clip_flag,
MOVIECLIP_CACHE_SKIP);
if (!old_ibuf) {
return false;
}
/* Image buffer we're tracking on */
user->framenr += frame_delta;
new_ibuf = BKE_movieclip_get_ibuf_flag(clip, user, clip_flag,
MOVIECLIP_CACHE_SKIP);
if (!new_ibuf) {
IMB_freeImBuf(old_ibuf);
return false;
}
for (mask_spline = mask_layer->splines.first;
mask_spline;
mask_spline = mask_spline->next)
{
if (mask_spline->flag & SELECT) {
MovieTrackingMarker marker = {{0}};
ImBuf *old_search_ibuf, *new_search_ibuf;
BLI_assert(mask_spline->points_deform == NULL);
/* Try creating dummy marker for tracking from current spline. */
if (configure_dummy_marker(clip, user, mask_spline, &marker)) {
MovieTrackingTrack track = {0};
MovieTrackingMarker new_marker = {{0}};
ImBuf *rasterized_spline_ibuf, *mask_ibuf;
bool tracked;
float *tracking_mask;
/* Configure dummy track. */
configure_dummy_track(tracking, &track);
/* Create image buffer for search area at old and new frames. */
old_search_ibuf =
BKE_tracking_get_search_imbuf(old_ibuf,
&track, &marker,
FALSE, FALSE);
new_search_ibuf =
BKE_tracking_get_search_imbuf(new_ibuf,
&track, &marker,
FALSE, FALSE);
/* Compute mask. */
rasterized_spline_ibuf =
get_rasterized_mask_spline_buffer(mask,
mask_spline,
old_ibuf->x,
old_ibuf->y);
mask_ibuf =
BKE_tracking_get_search_imbuf(rasterized_spline_ibuf,
&track, &marker,
FALSE, FALSE);
tracking_mask = tracking_mask_from_ibuf(mask_ibuf);
/* Use old position as an initial guess. */
new_marker = marker;
/* Run the tracker. */
tracked = BKE_tracking_track_region(&track, &marker,
old_search_ibuf,
new_search_ibuf,
old_ibuf->x, old_ibuf->y,
tracking_mask,
&new_marker);
if (tracked) {
MaskLayerShape *mask_layer_shape;
/* Move the mask to new position. */
deform_spline_points_by_marker(clip, user, mask_spline,
&marker, &new_marker);
/* Create shape key for tracked mask position. */
mask_layer_shape =
BKE_mask_layer_shape_varify_frame(mask_layer,
user->framenr);
BKE_mask_layer_shape_from_mask(mask_layer,
mask_layer_shape);
/* We store the frame number of the latest keyframe inserted to
* deal with threading synchronization between the main thread
* and the tracking thread.
* Basically, the main thread needs to set the current clip editor's
* frame to the latest tracked frame, which is not always equal to
* the context user's frame due to latency between altering
* that frame number and the actual keyframe being created here.
*/
track_mask_context->last_keyframe_inserted = user->framenr;
}
/* Free memory used. */
IMB_freeImBuf(old_search_ibuf);
IMB_freeImBuf(new_search_ibuf);
IMB_freeImBuf(mask_ibuf);
IMB_freeImBuf(rasterized_spline_ibuf);
MEM_freeN(tracking_mask);
}
}
}
IMB_freeImBuf(old_ibuf);
IMB_freeImBuf(new_ibuf);
return true;
}
static bool track_mask_do_locked(bContext *C, wmOperator *op)
{
TrackMaskContext track_mask_context;
Scene *scene;
Mask *mask;
MovieClip *clip;
int i, framenr;
if (!mask_track_context_init(C, op, false, &track_mask_context)) {
return false;
}
scene = track_mask_context.scene;
mask = track_mask_context.mask;
clip = track_mask_context.clip;
for (i = 0; i < track_mask_context.steps; i++) {
if (!track_mask_step(&track_mask_context)) {
break;
}
}
/* Update scene current frame. */
framenr = track_mask_context.clip_user.framenr;
scene->r.cfra =
BKE_movieclip_remap_clip_to_scene_frame(clip, framenr);
WM_event_add_notifier(C, NC_SCENE | ND_FRAME, scene);
WM_event_add_notifier(C, NC_MASK | NA_EDITED, mask);
DAG_id_tag_update(&mask->id, 0);
return true;
}
static int track_mask_testbreak(void)
{
return G.is_break;
}
static void track_mask_startjob(void *custom_data, short *stop,
short *do_update, float *progress)
{
TrackMaskContext *track_mask_context = custom_data;
int i;
for (i = 0; i < track_mask_context->steps; i++) {
double start_time = PIL_check_seconds_timer();
if (*stop || track_mask_testbreak())
break;
if (track_mask_step(track_mask_context)) {
double exec_time = PIL_check_seconds_timer() - start_time;
if (track_mask_context->delay > (float)exec_time)
PIL_sleep_ms(track_mask_context->delay - (float)exec_time);
*progress = (float) i / track_mask_context->steps;
*do_update = 1;
}
else {
break;
}
}
}
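
The sleep in the loop above amounts to the per-frame playback budget (1000 / fps milliseconds) minus the time already spent tracking the step. A small sketch of that budget, assuming both values are expressed in milliseconds; the function is illustrative only:

```c
#include <assert.h>

/* Pacing as in track_mask_startjob(): a step sleeps for whatever is left
 * of the per-frame budget so feedback stays roughly at playback speed. */
static float remaining_sleep_ms(float fps, float exec_time_ms)
{
	float delay_ms = 1.0f / fps * 1000.0f;
	return delay_ms > exec_time_ms ? delay_ms - exec_time_ms : 0.0f;
}
```

At 25 fps the budget is 40 ms per frame, so a step that took 20 ms to track sleeps for the remaining 20 ms, and a step that overran the budget does not sleep at all.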
static void update_space_clip_user(Main *bmain, int framenr)
{
bScreen *scr;
for (scr = bmain->screen.first; scr; scr = scr->id.next) {
ScrArea *area;
for (area = scr->areabase.first; area; area = area->next) {
SpaceLink *sl;
for (sl = area->spacedata.first; sl; sl = sl->next) {
switch (sl->spacetype) {
case SPACE_CLIP:
{
SpaceClip *space_clip = (SpaceClip *)sl;
space_clip->user.framenr = framenr;
break;
}
}
}
}
}
}
static void mask_layer_set_data_from_other(const TrackMaskContext *track_mask_context,
const MaskLayer *mask_layer_from,
MaskLayer *mask_layer_to)
{
MaskSpline *current_spline;
MaskLayerShape *current_shape;
/* Free old data. */
BKE_mask_layer_free_shapes(mask_layer_to);
BKE_mask_spline_free_list(&mask_layer_to->splines);
mask_layer_to->act_spline = NULL;
mask_layer_to->act_point = NULL;
/* Copy splines. */
for (current_spline = mask_layer_from->splines.first;
current_spline;
current_spline = current_spline->next)
{
MaskSpline *new_spline = BKE_mask_spline_copy(current_spline);
BLI_addtail(&mask_layer_to->splines, new_spline);
if (current_spline == track_mask_context->active_mask_spline) {
mask_layer_to->act_spline = new_spline;
}
}
/* Update active spline point pointer. */
if (mask_layer_to->act_spline &&
track_mask_context->active_point_index >= 0)
{
mask_layer_to->act_point =
mask_layer_to->act_spline->points +
track_mask_context->active_point_index;
}
/* Copy shapes. */
for (current_shape = mask_layer_from->splines_shapes.first;
current_shape;
current_shape = current_shape->next)
{
MaskLayerShape *new_shape =
BKE_mask_layer_shape_duplicate(current_shape);
BLI_addtail(&mask_layer_to->splines_shapes, new_shape);
}
}
static void mask_layer_merge_to_origin(const TrackMaskContext *track_mask_context)
{
Mask *mask = track_mask_context->mask;
MaskLayer *mask_layer = track_mask_context->mask_layer;
MaskLayer *current_mask_layer;
for (current_mask_layer = mask->masklayers.first;
current_mask_layer;
current_mask_layer = current_mask_layer->next)
{
if (!strcmp(current_mask_layer->name, mask_layer->name)) {
mask_layer_set_data_from_other(track_mask_context,
mask_layer,
current_mask_layer);
break;
}
}
}
static void track_mask_updatejob(void *custom_data)
{
TrackMaskContext *track_mask_context = custom_data;
Main *bmain = track_mask_context->bmain;
Mask *mask = track_mask_context->mask;
int framenr = track_mask_context->last_keyframe_inserted;
mask_layer_merge_to_origin(track_mask_context);
update_space_clip_user(bmain, framenr);
WM_main_add_notifier(NC_MASK | NA_EDITED, mask);
DAG_id_tag_update(&mask->id, 0);
}
static void track_mask_freejob(void *custom_data)
{
TrackMaskContext *track_mask_context = custom_data;
Scene *scene = track_mask_context->scene;
Mask *mask = track_mask_context->mask;
MovieClip *clip = track_mask_context->clip;
int framenr = track_mask_context->clip_user.framenr;
scene->r.cfra =
BKE_movieclip_remap_clip_to_scene_frame(clip, framenr);
BKE_mask_layer_free(track_mask_context->mask_layer);
MEM_freeN(track_mask_context);
WM_main_add_notifier(NC_SCENE | ND_FRAME, scene);
WM_main_add_notifier(NC_MASK | NA_EDITED, mask);
}
static int track_mask_invoke(bContext *C, wmOperator *op,
const wmEvent *UNUSED(event))
{
wmWindowManager *window_manager = CTX_wm_manager(C);
wmWindow *window = CTX_wm_window(C);
ScrArea *screen_area = CTX_wm_area(C);
TrackMaskContext *track_mask_context;
bool sequence = RNA_boolean_get(op->ptr, "sequence");
wmJob *wm_job;
if (sequence == false) {
if (track_mask_do_locked(C, op)) {
return OPERATOR_FINISHED;
}
else {
return OPERATOR_CANCELLED;
}
}
if (WM_jobs_test(window_manager, screen_area, WM_JOB_TYPE_ANY)) {
/* Only one tracking is allowed at a time. */
return OPERATOR_CANCELLED;
}
track_mask_context = MEM_mallocN(sizeof(TrackMaskContext),
"mask tracking context");
if (!mask_track_context_init(C, op, true, track_mask_context)) {
MEM_freeN(track_mask_context);
return OPERATOR_CANCELLED;
}
/* Setup job. */
wm_job = WM_jobs_get(window_manager, window, screen_area, "Track Mask",
WM_JOB_PROGRESS, WM_JOB_TYPE_CLIP_TRACK_MARKERS);
WM_jobs_customdata_set(wm_job, track_mask_context, track_mask_freejob);
WM_jobs_timer(wm_job, track_mask_context->delay / 1000.0f,
NC_MASK | ND_DATA, 0);
WM_jobs_callbacks(wm_job, track_mask_startjob, NULL,
track_mask_updatejob, NULL);
G.is_break = FALSE;
WM_jobs_start(window_manager, wm_job);
/* add modal handler for ESC */
WM_event_add_modal_handler(C, op);
return OPERATOR_RUNNING_MODAL;
}
static int track_mask_modal(bContext *C, wmOperator *UNUSED(op),
const wmEvent *event)
{
/* no running tracking, remove handler and pass through */
if (0 == WM_jobs_test(CTX_wm_manager(C), CTX_wm_area(C), WM_JOB_TYPE_ANY))
return OPERATOR_FINISHED | OPERATOR_PASS_THROUGH;
/* running tracking */
	switch (event->type) {
		case ESCKEY:
			return OPERATOR_RUNNING_MODAL;
	}
return OPERATOR_PASS_THROUGH;
}
static int track_mask_exec(bContext *C, wmOperator *op)
{
if (track_mask_do_locked(C, op)) {
return OPERATOR_FINISHED;
}
else {
return OPERATOR_CANCELLED;
}
}
void CLIP_OT_track_mask(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Track Mask";
	ot->description = "Track mask splines to follow the motion in the footage";
ot->idname = "CLIP_OT_track_mask";
/* api callbacks */
ot->invoke = track_mask_invoke;
ot->exec = track_mask_exec;
ot->modal = track_mask_modal;
ot->poll = ED_space_clip_maskedit_poll;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
RNA_def_boolean(ot->srna, "backwards", 0, "Backwards", "Do backwards tracking");
	RNA_def_boolean(ot->srna, "sequence", 0, "Track Sequence",
	                "Track the mask during the image sequence rather than a single image");
}
/********************** Create plane track operator *********************/
static int create_plane_track_tracks_exec(bContext *C, wmOperator *op)