
Compare commits


163 Commits

Author SHA1 Message Date
f11ff0e672 Merge branch 'master' into soc-2016-multiview 2017-04-15 21:44:48 +08:00
473653f337 replace Set(Get)ClipNum() with MaxClip()+1 2017-04-15 21:31:28 +08:00
66e01ce37a remove correspondence struct on the blender side accordingly 2017-03-30 14:20:01 +08:00
6a9a861bba the first revision according to D2570 2017-03-30 14:02:17 +08:00
0624a2d0f2 Merge branch 'master' into soc-2016-multiview 2017-03-22 00:05:08 +08:00
bb332043f0 Merge branch 'master' into soc-2016-multiview 2017-03-09 20:27:36 +08:00
459d429fec Merge branch 'master' into soc-2016-multiview 2017-02-14 16:41:09 +08:00
ea7c4f61da add a space before public to follow google style 2017-01-29 22:24:02 +08:00
bb372761f0 Don't mix two operations in a single line. 2017-01-29 22:18:43 +08:00
86d52ade62 * fix code style
- In Blender and C-API we follow Type *foo NOT Type* foo. Might be
different in Libmv itself.
2017-01-29 22:15:38 +08:00
52811b3fb7 Merge branch 'master' into soc-2016-multiview 2017-01-29 21:58:01 +08:00
60a3e223c7 Merge branch 'master' into soc-2016-multiview 2016-12-19 01:00:35 +08:00
a9e071d12d Merge branch 'master' into soc-2016-multiview 2016-10-21 16:39:57 +08:00
487284db94 Merge branch 'master' into soc-2016-multiview 2016-10-05 01:13:12 +08:00
4f69557528 Merge branch 'master' into soc-2016-multiview
- revise code accordingly with the 2.78 release
2016-09-27 00:00:49 +08:00
3bbc836b2f Merge branch 'master' into soc-2016-multiview 2016-09-20 16:15:15 +08:00
49fe9a94d2 Merge branch 'master' into soc-2016-multiview 2016-09-13 14:24:04 +08:00
9f9be6ed84 Merge branch 'master' into soc-2016-multiview 2016-09-11 23:53:20 +08:00
da631b72f9 Merge branch 'master' into soc-2016-multiview
fix versioning_270.c conflict. Move secondary_clip version below 2.78
2016-09-07 11:15:07 +08:00
313cbd8eaf Merge branch 'master' into soc-2016-multiview 2016-09-02 00:46:23 +08:00
c5dc4099e4 Merge branch 'master' into soc-2016-multiview 2016-08-30 00:48:17 +08:00
4648bd8da6 Merge branch 'master' into soc-2016-multiview 2016-08-28 21:33:32 +08:00
4c3ff437cd remove several debug info 2016-08-25 22:18:26 +08:00
bf27315a62 Merge branch 'master' into soc-2016-multiview 2016-08-25 21:55:53 +08:00
f4dee67e8b use name instead of pointer in correspondence
- remove self_track and other_track pointers in MovieTrackCorrespondence,
instead use self_track_name and other_track_name.
- now we can save and load correspondences reliably.
- one issue after this change is that the color of linked tracks is not
  shown correctly.
2016-08-22 00:35:33 +08:00
1d469973d4 Merge branch 'master' into soc-2016-multiview 2016-08-21 21:16:38 +08:00
6f5e3668ae Merge branch 'master' into soc-2016-multiview
It seems that some changes to multiview stabilization
happened in the master branch. Solve the conflicts in versioning.
2016-08-19 00:58:03 +08:00
59b5bdbb7c now multi-view can work with more than 2 cameras
solve the camera number limitation by iterating over main->movieclip.
2016-08-16 17:20:29 +08:00
c493fff3d5 Multiview tracking count clip using main.movieclip
- using open clip given by main->movieclip
- haven't set up clip pointer using main.movieclip
2016-08-16 15:04:37 +08:00
1cb668054e Merge branch 'master' into soc-2016-multiview 2016-08-15 00:42:51 +08:00
Julian Eisel
a20025b631 Fix incorrect version check
Was just always running.
2016-08-13 01:21:28 +02:00
Julian Eisel
a7697ab0ed Correctly convert old clip editor data to new one
Instead of just using defaults.
2016-08-13 00:12:42 +02:00
Julian Eisel
e4053f13f6 Fix compiling with blenderplayer 2016-08-12 23:43:24 +02:00
11f375a62b revise versioning code for RegionSpaceClip
Thank Julian Eisel for pointing out the error. Also remove the version
bump.
2016-08-13 01:24:57 +08:00
41986db9c6 fix last commit, solve "Motion Tracking" layout
should be using area, instead of space_clip for iteration.
2016-08-12 17:22:06 +08:00
56e330a190 do versioning of RegionSpaceClip
This commit may be useless since there is no region data in a space_clip
at the beginning of Blender, so no valid ARegion can be found whose
ar->regiondata needs converting.
2016-08-12 16:55:53 +08:00
5767ef53a7 remove one line of setting zoom
it is not needed since we have RegionSpaceClip now.
2016-08-12 15:52:35 +08:00
f4fd5dc6e1 fix error in ED_space_clip_get_clip_in_region
didn't consider the case that regiondata may be a NULL pointer. Since
dopesheet and graph are also Movie Editors and they don't have
regiondata. Other drawing code also has this problem, so it is not fully
solved. Will need to initialize regiondata for dopesheet and graph.
2016-08-12 01:25:55 +08:00
d2e8a2dec1 Merge branch 'master' into soc-2016-multiview 2016-08-11 19:35:14 +08:00
2a9248aec8 add secondary_clip I/O to writefile and readfile
The ->secondary_clip save/load code basically follows the save pattern
of ->clip. It works but may be incomplete and needs to be checked.
2016-08-11 01:06:38 +08:00
62614850c9 comment out iterating code
we don't need to iterate over all open spaceclip to find witness cameras
since we now know where the witness comes from (secondary_clip pointer).
However, the con is that we don't know how to access more than one
witness camera. Need to fix this later.
2016-08-11 00:15:13 +08:00
1e1371567d Merge branch 'master' into soc-2016-multiview 2016-08-08 11:08:07 +08:00
33a9441946 change iterating-all-windows-to-find-witness-clip
- Previously we needed to iterate over all windows to get witness clips; now
  we don't need to do that because they can be accessed via the
  secondary_clip pointer in SpaceClip. Correspondence links are confined to
  'correspondence' mode.

- Restore 'synchronize two frame bars'
2016-08-07 12:43:20 +08:00
66bc69fb00 change get clip functions
- change all ED_space_clip_get_clip to ED_space_clip_get_clip_in_region
  to enable selection in correspondence mode, since in this mode we need
  to access the witness clip (secondary_clip) pointer.
2016-08-06 23:31:05 +08:00
83f23099c2 remove duplicate code of drawing secondary clip
- main clip and witness clip seem to differ in only one pointer; use a
  utility function to handle the context change, so that other
  duplicate code for the witness clip can be removed.
2016-08-06 19:40:23 +08:00
174cbc6aba remove duplicate code by adding a movieclip select 2016-08-06 19:18:05 +08:00
ab73d3ab27 make the witness clip context shown right
- duplicate code in clip_draw_secondary_clip and clip_draw_main
- selecting in a sub-region is a problem
2016-08-06 19:07:11 +08:00
496830a2a3 add region free function
fix region data free error
2016-08-06 01:09:28 +08:00
d8292baf9c remove debug info of last commit 2016-08-06 00:19:22 +08:00
eaf9ccb22a fix bug in changing modes
- error: when 'tracking' mode is clicked, it also enters 'correspondence' mode.
the if-else logic was wrong
2016-08-06 00:17:47 +08:00
17b207f399 Merge branch 'master' into soc-2016-multiview 2016-08-05 23:42:16 +08:00
3a131f879b move duplicate code to region_splitview_init 2016-08-04 17:19:11 +08:00
cdc29928b2 add a flag field to RegionSpaceClip
- this field is to specify main clip or secondary clip, used in drawing
  code
2016-08-04 16:42:40 +08:00
8ec606af04 Multiview reconstruction: Hack/workaround for the view split
Please follow the comment to see what's going on here.

Ideally i think we need to modify region_rect_recursive() so it does
proper business. Seems that will be safe since i couldn't find any
other usages of VSPLIT and HSPLIT, but want to have interface team
involved here since i might be missing something here.
2016-08-04 10:36:41 +02:00
d865f2dbf6 WIP: drawing secondary clip 2016-08-04 15:36:58 +08:00
a9bdb16905 Merge branch 'master' into soc-2016-multiview 2016-08-03 16:13:34 +08:00
e56c68cdcb separating panning fields to RegionSpaceClip
- each sub-region now has its own zoom and panning
- leave stabilization fields in SpaceClip as-is.
2016-08-03 15:55:18 +08:00
77670703bd recover view when changing from correspondence mode 2016-08-03 11:17:11 +08:00
8e66dbdf97 separate zoom from SpaceClip to RegionSpaceClip
- mark zoom in SpaceClip as deprecated
- known issue: versioning is not done
2016-08-03 00:04:12 +08:00
9527950b37 Merge branch 'master' into soc-2016-multiview 2016-08-02 16:47:38 +08:00
fe312bab5e WIP: correspondence update and drawing functions 2016-07-29 22:56:32 +08:00
9be1b3f82f Merge branch 'master' into soc-2016-multiview 2016-07-28 22:42:02 +08:00
b7d09c42ea WIP: design RegionSpaceClip (split SpaceClip)
in the process of separating view-dependent data fields from SpaceClip
to RegionSpaceClip
2016-07-28 22:40:16 +08:00
ce2f81e435 fix checking SC_VIEW_CLIP error
- should check sc->view rather than sc->mode
- de-duplicate track-counting code by factoring it into get_single_track
2016-07-26 20:31:07 +08:00
283e6ffdc0 fix trivial code format 2016-07-26 17:37:45 +08:00
1b8de1bc1d check witness_clip->mode == SC_VIEW_CLIP
otherwise multiview mode has problems in the 'Motion Tracking' layout,
since dopesheet and graph are also counted when iterating through clips.
2016-07-26 13:55:30 +08:00
7ca98a3bec Merge branch 'master' into soc-2016-multiview 2016-07-25 23:53:51 +08:00
929e9694f4 trying to shift window for secondary clip 2016-07-22 11:43:50 +08:00
d572da5b12 Merge branch 'master' into soc-2016-multiview 2016-07-21 19:43:35 +08:00
9628be3679 fix bug in secondary clip open button
after consulting Sergey, I have exposed secondary_clip via RNA and then
fixed the corresponding Python UI bugs. The operator should now be able to
load a clip into the secondary_clip pointer
2016-07-21 19:15:27 +08:00
e6fe10c044 minor fix for secondary clip open button
known issue: cannot desync primary clip and secondary clip
2016-07-20 19:45:04 +08:00
45628cc8b6 make an open secondary clip button 2016-07-20 19:31:41 +08:00
aaa745d3d5 Merge branch 'master' into soc-2016-multiview 2016-07-20 16:22:51 +08:00
798dcb489d fix track correspondence loading issue 2016-07-19 19:19:49 +08:00
aadaff4a7e fix self correspondence error
a track cannot link to itself
2016-07-19 18:51:25 +08:00
a9758d75f2 move correspondence link code to correspondence mode 2016-07-18 23:58:49 +08:00
a0ed1f29de Merge branch 'master' into soc-2016-multiview 2016-07-18 20:28:47 +08:00
c77b28c158 Merge branch 'master' into soc-2016-multiview 2016-07-15 17:27:48 +08:00
99bef956db Merge branch 'master' into soc-2016-multiview 2016-07-15 16:10:39 +08:00
fc965a85c6 add a new mode in space clip view 2016-07-15 16:07:59 +08:00
9be7037e5a add correspondence mode 2016-07-15 00:08:20 +08:00
94851cf00f Merge branch 'master' into soc-2016-multiview 2016-07-14 09:51:14 +08:00
d302bee5ac complete correspondence deletion operator 2016-07-13 21:40:41 +08:00
f9500de41c fix color scheme version
- move color scheme of linked_marker and sel_linked_marker to 277.2
2016-07-13 15:54:15 +08:00
93ffcdb201 Merge branch 'master' into soc-2016-multiview 2016-07-13 11:40:55 +08:00
3f21869c89 mark linked tracks with another color
* currently this implementation has known bugs:
- colors for linked tracks and selected linked tracks are not correct
- linked tracks in witness tracks are not shown correctly
2016-07-13 11:24:16 +08:00
7910bd844f Merge branch 'master' into soc-2016-multiview 2016-07-12 17:24:02 +08:00
8074d04334 add TH_LINKED_MARKER and TH_SEL_LINKED_MARKER 2016-07-12 17:00:10 +08:00
d2c4db0fd2 Merge branch 'master' into soc-2016-multiview 2016-07-11 10:47:32 +08:00
b8e339470e Add keyframe selection in multi-view solver mode 2016-07-11 10:46:09 +08:00
a4f8f3982c Merge branch 'master' into soc-2016-multiview 2016-07-08 13:57:55 +08:00
d283893044 Merge branch 'master' into soc-2016-multiview 2016-07-07 16:30:24 +08:00
3017d0d68c fix re-projection error bug
- it is caused by using normalized tracks for intrinsics refinement

- remove all std::cout and change them to LG (glog) for logging purpose
2016-07-07 16:27:42 +08:00
52dad8de61 Merge branch 'master' into soc-2016-multiview 2016-07-06 19:56:38 +08:00
0570e5eeab remove check for correspondence num and clip num
- fix bugs in BKE_tracking_correspondence_add:
  also check whether other_clip.other_track has already been linked;
  previously this wasn't done

- remove check for correspondence number and clip number; previously
  8 correspondences and 2 clips were needed to trigger multi-view
  reconstruction

- format code to conform google style and blender style
2016-07-06 19:35:40 +08:00
d23db741af fix correspondence bug and format code
- fix correspondence bug thanks to Sergey. ID pointers are to be
  restored using lib pointer map

- move two function with BKE_ prefix to blender kernel

- format code according to blender coding style
2016-07-06 17:17:51 +08:00
8280dc41b3 confirm bugs in correspondence saving 2016-07-05 17:46:20 +08:00
e341c6a4ed Merge branch 'master' into soc-2016-multiview 2016-07-05 15:25:33 +08:00
eaf30b24fc complete multiview_reconstruction_finish 2016-07-04 17:31:31 +08:00
d19ecbbed8 Merge branch 'master' into soc-2016-multiview 2016-07-04 14:24:08 +08:00
cf1e0a87b1 trivial formatting 2016-07-04 14:22:47 +08:00
ff4487b70c add robustness to correspondence adding:
- when the correspondence has been added, give an error message and
  return
- when a conflict correspondence is added, give an error message
2016-07-02 00:52:41 +08:00
f843bc9f07 Merge branch 'master' into soc-2016-multiview 2016-07-01 00:13:18 +08:00
5ef5fff336 The pipeline can run now, but the reconstruction error is huge 2016-06-29 23:46:23 +08:00
30c4194876 Merge branch 'master' into soc-2016-multiview 2016-06-29 21:32:05 +08:00
62b8a0f75b format the libmv code to google code style 2016-06-29 21:30:27 +08:00
9d9e0f102b remove some std::cout 2016-06-29 17:29:38 +08:00
4c6399fe4a fix bug in EuclideanScaleToUnity 2016-06-29 16:22:10 +08:00
5d05af20b2 set-up intrinsics map, proceed to resection 2016-06-29 16:22:10 +08:00
bc51238354 Multiview reconstruction: Fix issue with reading .blend files
The issue was caused by an attempt to iterate a list whose pointers were not
properly restored yet. Now the flow is as follows:

- List pointers get restored on direct_link_movieclip since those
  addresses are guaranteed to be from the current main.

- Self track and clip pointers are also restored on direct link
  (i assume self_clip is actually clip itself, so do we really need
  this pointer?)

- Other clip and track are restored on lib link since those pointers
  might come from linked movie clip.
2016-06-29 12:28:51 +05:00
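The two-phase restore this commit describes can be sketched generically: pointers internal to the freshly read datablock are resolved immediately from an old-address-to-new-address map (direct link), while pointers that may target another, possibly linked, datablock are merely recorded and fixed up in a later pass (lib link). A minimal sketch under assumed names, not Blender's actual readfile API:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Track { int id; };

// Hypothetical helper mirroring the commit's flow; all names are assumptions.
struct PointerRestorer {
  // old file address -> newly allocated address in the current main
  std::unordered_map<std::uintptr_t, Track *> oldnewmap;
  // cross-datablock pointers, resolved only after everything is loaded
  std::vector<std::uintptr_t> deferred;

  // Phase 1 ("direct link"): a pointer local to the datablock being read is
  // guaranteed to be restorable right now; a miss means it is not local.
  Track *restore_local(std::uintptr_t old_addr) {
    auto it = oldnewmap.find(old_addr);
    return it != oldnewmap.end() ? it->second : nullptr;
  }

  // Phase 2 ("lib link"): pointers that may come from a linked movie clip
  // are only queued here and resolved once all datablocks exist.
  void defer(std::uintptr_t old_addr) { deferred.push_back(old_addr); }
};
```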
7491f13085 Multiview reconstruction: Fix compilation error on Linux with strict compiler flags 2016-06-29 12:21:03 +05:00
b2aec13185 debugging in progress 2016-06-29 00:36:55 +08:00
7d0653a08f fix a bug in CameraPoseForFrame() 2016-06-28 23:47:29 +08:00
4dd582c84a add (wrong) read/write correspondence 2016-06-28 11:33:04 +08:00
1b29de2cbb Merge branch 'master' into soc-2016-multiview 2016-06-26 00:02:38 +08:00
65e6387307 finish the rough pipeline, begin debugging 2016-06-23 21:51:08 +08:00
5332f1b3bb Merge branch 'master' into soc-2016-multiview 2016-06-23 17:16:17 +08:00
b7a42255fb temporary commit 2016-06-23 17:15:49 +08:00
e183da7db9 add intersect and resect 2016-06-23 15:01:41 +08:00
bd3f3d003e finish migrating bundle code 2016-06-22 17:49:32 +08:00
46855fccb4 Merge branch 'master' into soc-2016-multiview 2016-06-22 15:28:22 +08:00
a17ef5f9fe at the point where two-view bundle is returned 2016-06-22 15:27:17 +08:00
dacb77135d Merge branch 'master' into soc-2016-multiview 2016-06-21 10:12:51 +08:00
71bac14ca4 bundle in progress 2016-06-20 23:49:14 +08:00
6040cf17bb convert global track index on blender side 2016-06-20 22:47:33 +08:00
c111857d3b Merge branch 'master' into soc-2016-multiview 2016-06-20 21:42:08 +08:00
8232ce0605 in the middle of nasty bundle migration 2016-06-16 23:41:33 +08:00
8a41a8180b Merge branch 'master' into soc-2016-multiview 2016-06-16 16:09:18 +08:00
683febfbca get correspondences into libmv 2016-06-16 15:56:58 +08:00
6730c6d19a pipeline in progress, so far so good 2016-06-15 00:29:01 +08:00
91bff8cfad Merge branch 'master' into soc-2016-multiview 2016-06-14 20:41:43 +08:00
4a1fdb48eb update autotrack two-view reconstruction 2016-06-14 17:01:38 +08:00
2ce153377a add reconstruction.cc in autotrack 2016-06-09 23:40:59 +08:00
6b096974c5 get re-projection error and make it displayed right in blender 2016-06-09 17:15:41 +08:00
f4048ea8b1 about to begin reconstruction in libmv 2016-06-09 15:29:48 +08:00
ced7dd74a7 Merge branch 'master' into soc-2016-multiview 2016-06-09 09:49:48 +08:00
41ad8cea51 make refine_flags into an array 2016-06-09 00:03:56 +08:00
5977d05b55 intermediate result, ready to move into libmv 2016-06-08 17:31:03 +08:00
0542f0e57c Merge branch 'master' into soc-2016-multiview 2016-06-08 15:24:27 +08:00
c458725976 complete new and free multiview reconstruction context, so far so good 2016-06-08 14:14:13 +08:00
cb8dc230c9 add reconstructN.* files, refine data structures 2016-06-08 00:08:23 +08:00
478c9eedd4 Merge branch 'master' into soc-2016-multiview 2016-06-06 21:54:04 +08:00
8f85a4e989 update correspondence data structure 2016-06-06 21:51:51 +08:00
27172decdd Merge branch 'master' into soc-2016-multiview 2016-06-03 22:39:57 +08:00
a7987b74fe adapt reconstruction to multiview_reconstruction 2016-06-03 22:35:20 +08:00
468d75d0af Merge branch 'master' into soc-2016-multiview 2016-06-02 20:59:21 +08:00
889bbb03c4 make an independent multiview reconstruction pipeline for multiview 2016-06-02 00:05:20 +08:00
a7b96295db Merge branch 'master' into soc-2016-multiview 2016-06-01 00:23:29 +08:00
75b69c72f2 prepare solve_multiview button, copy from solve_camera button and revise 2016-06-01 00:22:17 +08:00
1aa64d2974 Merge branch 'master' into soc-2016-multiview 2016-05-31 13:47:24 +08:00
a9a1aaf7e4 add correspondence data struct 2016-05-31 03:08:03 +08:00
e10ce799c4 Merge branch 'master' into soc-2016-multiview 2016-05-30 20:52:04 +08:00
dfbbd4b2c2 solve conflict 2016-05-30 18:54:42 +08:00
fc38e0a39f daily commit before rebase 2016-05-30 18:49:23 +08:00
01bb9bb806 finish linking two correspondences together 2016-05-30 18:49:23 +08:00
bd3dac4357 add link and unlink button in scripts for python 2016-05-30 18:49:23 +08:00
bfabe7cdf3 add two correspondence operators
users can specify correspondence in tracking mode
placeholder for now, unfinished
2016-05-30 18:49:23 +08:00
12b3733462 solve conflict between local and upstream 2016-05-26 00:24:44 +08:00
35285ea2d4 finish linking two correspondences together 2016-05-26 00:19:32 +08:00
c62020d506 add link and unlink button in scripts for python 2016-05-26 00:19:32 +08:00
a086858672 add two correspondence operators
users can specify correspondence in tracking mode
placeholder for now, unfinished
2016-05-26 00:19:32 +08:00
5d27929c1c add link and unlink button in scripts for python 2016-05-24 00:58:59 +08:00
3652e9ce80 add two correspondence operators
users can specify correspondence in tracking mode
placeholder for now, unfinished
2016-05-24 00:58:59 +08:00
52 changed files with 5380 additions and 190 deletions
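Two conventions recur through the log above: referencing tracks by name rather than by pointer so correspondences survive a save/load round-trip (commit f4dee67e8b), and deriving the clip count as MaxClip() + 1 instead of storing it (commit 473653f337). A minimal C++ sketch of both ideas, with all struct and member names assumed rather than taken from the actual Blender/libmv sources:

```cpp
#include <string>

// Commit f4dee67e8b: track names, unlike raw pointers, remain valid after
// the .blend file is written and read back.
struct MovieTrackCorrespondence {
  std::string self_track_name;   /* track in the primary clip */
  std::string other_track_name;  /* matching track in the witness clip */
};

// Commit 473653f337: rather than keeping a separate clip counter in sync,
// track the highest clip index seen; the clip count is then MaxClip() + 1.
struct Tracks {
  int max_clip = -1;
  void AddMarker(int clip) { if (clip > max_clip) max_clip = clip; }
  int MaxClip() const { return max_clip; }
};
```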


@@ -70,11 +70,18 @@ if(WITH_LIBMV)
intern/image.cc
intern/logging.cc
intern/reconstruction.cc
intern/reconstructionN.cc
intern/track_region.cc
intern/tracks.cc
intern/tracksN.cc
libmv/autotrack/autotrack.cc
libmv/autotrack/bundle.cc
libmv/autotrack/intersect.cc
libmv/autotrack/keyframe_selection.cc
libmv/autotrack/pipeline.cc
libmv/autotrack/predict_tracks.cc
libmv/autotrack/reconstruction.cc
libmv/autotrack/resect.cc
libmv/autotrack/tracks.cc
libmv/base/aligned_malloc.cc
libmv/image/array_nd.cc
@@ -119,18 +126,24 @@ if(WITH_LIBMV)
intern/image.h
intern/logging.h
intern/reconstruction.h
intern/reconstructionN.h
intern/track_region.h
intern/tracks.h
intern/tracksN.h
libmv/autotrack/autotrack.h
libmv/autotrack/bundle.h
libmv/autotrack/callbacks.h
libmv/autotrack/frame_accessor.h
libmv/autotrack/intersect.h
libmv/autotrack/keyframe_selection.h
libmv/autotrack/marker.h
libmv/autotrack/model.h
libmv/autotrack/pipeline.h
libmv/autotrack/predict_tracks.h
libmv/autotrack/quad.h
libmv/autotrack/reconstruction.h
libmv/autotrack/region.h
libmv/autotrack/resect.h
libmv/autotrack/tracks.h
libmv/base/aligned_malloc.h
libmv/base/id_generator.h


@@ -0,0 +1,541 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2011 Blender Foundation.
* All rights reserved.
*
* Contributor(s): Blender Foundation,
* Tianwei Shen
*
* ***** END GPL LICENSE BLOCK *****
*/
#include "intern/reconstructionN.h"
#include "intern/camera_intrinsics.h"
#include "intern/tracksN.h"
#include "intern/utildefines.h"
#include "libmv/logging/logging.h"
#include "libmv/autotrack/autotrack.h"
#include "libmv/autotrack/bundle.h"
#include "libmv/autotrack/frame_accessor.h"
#include "libmv/autotrack/keyframe_selection.h"
#include "libmv/autotrack/marker.h"
#include "libmv/autotrack/model.h"
#include "libmv/autotrack/pipeline.h"
#include "libmv/autotrack/predict_tracks.h"
#include "libmv/autotrack/quad.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/autotrack/region.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/simple_pipeline/callbacks.h"
using mv::Tracks;
using mv::Marker;
using mv::Reconstruction;
using libmv::CameraIntrinsics;
using libmv::ProgressUpdateCallback;
struct libmv_ReconstructionN {
mv::Reconstruction reconstruction;
/* Used for per-track average error calculation after reconstruction */
mv::Tracks tracks;
libmv::CameraIntrinsics *intrinsics;
double error;
bool is_valid;
};
namespace {
class MultiviewReconstructUpdateCallback : public ProgressUpdateCallback {
public:
MultiviewReconstructUpdateCallback(
multiview_reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata) {
progress_update_callback_ = progress_update_callback;
callback_customdata_ = callback_customdata;
}
void invoke(double progress, const char* message) {
if (progress_update_callback_) {
progress_update_callback_(callback_customdata_, progress, message);
}
}
protected:
multiview_reconstruct_progress_update_cb progress_update_callback_;
void* callback_customdata_;
};
void mv_getNormalizedTracks(const Tracks &tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *normalized_tracks)
{
libmv::vector<Marker> markers = tracks.markers();
for (int i = 0; i < markers.size(); ++i) {
Marker &marker = markers[i];
// act in a calibrated fashion
double normalized_x, normalized_y;
camera_intrinsics.InvertIntrinsics(marker.center[0], marker.center[1],
&normalized_x, &normalized_y);
marker.center[0] = normalized_x;
marker.center[1] = normalized_y;
normalized_tracks->AddMarker(marker);
}
}
// Each clip has fixed camera intrinsics; set all frames of a clip to that fixed intrinsics
bool ReconstructionUpdateFixedIntrinsics(libmv_ReconstructionN **all_libmv_reconstruction,
Tracks *tracks,
Reconstruction *reconstruction)
{
int clip_num = tracks->MaxClip() + 1;
for (int i = 0; i < clip_num; i++) {
CameraIntrinsics *camera_intrinsics = all_libmv_reconstruction[i]->intrinsics;
int cam_intrinsic_index = reconstruction->AddCameraIntrinsics(camera_intrinsics);
assert(cam_intrinsic_index == i);
}
reconstruction->InitIntrinsicsMapFixed(*tracks);
return true;
}
void libmv_solveRefineIntrinsics(
const Tracks &tracks,
const int refine_intrinsics,
const int bundle_constraints,
reconstruct_progress_update_cb progress_update_callback,
void* callback_customdata,
Reconstruction* reconstruction,
CameraIntrinsics* intrinsics) {
/* Only a few combinations are supported, but trust the caller. */
int bundle_intrinsics = 0;
if (refine_intrinsics & LIBMV_REFINE_FOCAL_LENGTH) {
bundle_intrinsics |= mv::BUNDLE_FOCAL_LENGTH;
}
if (refine_intrinsics & LIBMV_REFINE_PRINCIPAL_POINT) {
bundle_intrinsics |= mv::BUNDLE_PRINCIPAL_POINT;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K1) {
bundle_intrinsics |= mv::BUNDLE_RADIAL_K1;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K2) {
bundle_intrinsics |= mv::BUNDLE_RADIAL_K2;
}
progress_update_callback(callback_customdata, 1.0, "Refining solution");
mv::EuclideanBundleCommonIntrinsics(tracks,
bundle_intrinsics,
bundle_constraints,
reconstruction,
intrinsics);
}
void finishMultiviewReconstruction(
const Tracks &tracks,
const CameraIntrinsics &camera_intrinsics,
libmv_ReconstructionN *libmv_reconstruction,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata) {
Reconstruction &reconstruction = libmv_reconstruction->reconstruction;
/* Reprojection error calculation. */
progress_update_callback(callback_customdata, 1.0, "Finishing solution");
libmv_reconstruction->tracks = tracks;
libmv_reconstruction->error = mv::EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
}
bool selectTwoClipKeyframesBasedOnGRICAndVariance(
const int clip_index,
Tracks& tracks,
Tracks& normalized_tracks,
CameraIntrinsics& camera_intrinsics,
int& keyframe1,
int& keyframe2) {
libmv::vector<int> keyframes;
/* Get list of all keyframe candidates first. */
mv::SelectClipKeyframesBasedOnGRICAndVariance(clip_index,
normalized_tracks,
camera_intrinsics,
keyframes);
if (keyframes.size() < 2) {
LG << "Not enough keyframes detected by GRIC";
return false;
} else if (keyframes.size() == 2) {
keyframe1 = keyframes[0];
keyframe2 = keyframes[1];
return true;
}
/* Now choose the two keyframes with the least reprojection error after
* solving an initial reconstruction from the candidate keyframe pairs.
*
* In fact, currently libmv returns single pair only, so this code will
* not actually run. But in the future this could change, so let's stay
* prepared.
*/
int previous_keyframe = keyframes[0];
double best_error = std::numeric_limits<double>::max();
for (int i = 1; i < keyframes.size(); i++) {
Reconstruction reconstruction;
int current_keyframe = keyframes[i];
libmv::vector<mv::Marker> keyframe_markers;
normalized_tracks.GetMarkersForTracksInBothFrames(clip_index, previous_keyframe,
clip_index, current_keyframe,
&keyframe_markers);
Tracks keyframe_tracks(keyframe_markers);
/* get a solution from two keyframes only */
mv::ReconstructTwoFrames(keyframe_markers, 0, &reconstruction);
mv::EuclideanBundleAll(keyframe_tracks, &reconstruction);
mv::EuclideanCompleteMultiviewReconstruction(keyframe_tracks, &reconstruction, NULL);
double current_error = mv::EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
LG << "Error between " << previous_keyframe
<< " and " << current_keyframe
<< ": " << current_error;
if (current_error < best_error) {
best_error = current_error;
keyframe1 = previous_keyframe;
keyframe2 = current_keyframe;
}
previous_keyframe = current_keyframe;
}
return true;
}
// re-apply camera intrinsics on the normalized 2d points
Marker libmv_projectMarker(const mv::Point& point,
const mv::CameraPose& camera,
const CameraIntrinsics& intrinsics) {
libmv::Vec3 projected = camera.R * point.X + camera.t;
projected /= projected(2);
mv::Marker reprojected_marker;
double origin_x, origin_y;
intrinsics.ApplyIntrinsics(projected(0), projected(1),
&origin_x, &origin_y);
reprojected_marker.center[0] = origin_x;
reprojected_marker.center[1] = origin_y;
reprojected_marker.clip = camera.clip;
reprojected_marker.frame = camera.frame;
reprojected_marker.track = point.track;
return reprojected_marker;
}
} // namespace
void libmv_reconstructionNDestroy(libmv_ReconstructionN* libmv_reconstructionN)
{
LIBMV_OBJECT_DELETE(libmv_reconstructionN->intrinsics, CameraIntrinsics);
LIBMV_OBJECT_DELETE(libmv_reconstructionN, libmv_ReconstructionN);
}
libmv_ReconstructionN** libmv_solveMultiviewReconstruction(
const int clip_num,
const libmv_TracksN **all_libmv_tracks,
const libmv_CameraIntrinsicsOptions *all_libmv_camera_intrinsics_options,
libmv_MultiviewReconstructionOptions *libmv_reconstruction_options,
multiview_reconstruct_progress_update_cb progress_update_callback,
void* callback_customdata)
{
libmv_ReconstructionN **all_libmv_reconstruction = LIBMV_STRUCT_NEW(libmv_ReconstructionN*, clip_num);
libmv::vector<Marker> keyframe_markers;
int keyframe1, keyframe2;
Tracks all_tracks, all_normalized_tracks; // tracks and normalized tracks of all clips
for (int i = 0; i < clip_num; i++) {
all_libmv_reconstruction[i] = LIBMV_OBJECT_NEW(libmv_ReconstructionN);
Tracks &tracks = *((Tracks *) all_libmv_tracks[i]); // Tracks are just a bunch of markers
all_tracks.AddTracks(tracks);
/* Convert reconstruction options from the C-API to the libmv API. */
CameraIntrinsics *camera_intrinsics;
camera_intrinsics = all_libmv_reconstruction[i]->intrinsics =
libmv_cameraIntrinsicsCreateFromOptions(&all_libmv_camera_intrinsics_options[i]);
/* Invert the camera intrinsics. */
Tracks normalized_tracks;
mv_getNormalizedTracks(tracks, *camera_intrinsics, &normalized_tracks);
all_normalized_tracks.AddTracks(normalized_tracks);
if (i == 0) { // key frame from primary camera
/* Keyframe selection. */
keyframe1 = libmv_reconstruction_options->keyframe1;
keyframe2 = libmv_reconstruction_options->keyframe2;
normalized_tracks.GetMarkersForTracksInBothFrames(i, keyframe1, i, keyframe2, &keyframe_markers);
}
}
// run reconstruction on the primary clip
Reconstruction &reconstruction = all_libmv_reconstruction[0]->reconstruction;
LG << "number of clips: " << clip_num << "\n";
LG << "frames to init from: " << keyframe1 << " " << keyframe2 << "\n";
LG << "number of markers for init: " << keyframe_markers.size() << "\n";
if (keyframe_markers.size() < 8) {
LG << "Not enough markers to initialize from";
for (int i = 0; i < clip_num; i++)
all_libmv_reconstruction[i]->is_valid = false;
return all_libmv_reconstruction;
}
// create multiview reconstruct update callback
MultiviewReconstructUpdateCallback update_callback =
MultiviewReconstructUpdateCallback(progress_update_callback,
callback_customdata);
if (libmv_reconstruction_options->select_keyframes) {
LG << "Using automatic keyframe selection";
update_callback.invoke(0, "Selecting keyframes");
// select two keyframes from the primary camera (camera_index == 0)
selectTwoClipKeyframesBasedOnGRICAndVariance(0,
all_tracks,
all_normalized_tracks,
*all_libmv_reconstruction[0]->intrinsics,
keyframe1,
keyframe2);
/* so keyframes in the interface would be updated */
libmv_reconstruction_options->keyframe1 = keyframe1;
libmv_reconstruction_options->keyframe2 = keyframe2;
}
/* Actual reconstruction. */
update_callback.invoke(0, "Initial reconstruction");
// update intrinsics mapping from (clip, frame) -> intrinsics
// TODO(tianwei): in the future we may support varying focal length,
// thus each (clip, frame) should have a unique intrinsics index.
// This function has to be called before ReconstructTwoFrames.
ReconstructionUpdateFixedIntrinsics(all_libmv_reconstruction, &all_normalized_tracks, &reconstruction);
// reconstruct two views from the main clip
if (!mv::ReconstructTwoFrames(keyframe_markers, 0, &reconstruction)) {
LG << "mv::ReconstructTwoFrames failed\n";
all_libmv_reconstruction[0]->is_valid = false;
return all_libmv_reconstruction;
}
// bundle the two-view initial reconstruction
// (it is redundant for now since no 3d point is added at this stage)
//if(!mv::EuclideanBundleAll(all_normalized_tracks, &reconstruction)) {
// printf("mv::EuclideanBundleAll failed\n");
// all_libmv_reconstruction[0]->is_valid = false;
// return all_libmv_reconstruction;
//}
if (!mv::EuclideanCompleteMultiviewReconstruction(all_normalized_tracks, &reconstruction, &update_callback)) {
LG << "mv::EuclideanCompleteMultiviewReconstruction failed\n";
all_libmv_reconstruction[0]->is_valid = false;
return all_libmv_reconstruction;
}
LG << "[libmv_solveMultiviewReconstruction] Successfully performed track intersection and camera resection\n";
/* Refinement. */
// TODO(tianwei): the current API only allows refining the intrinsics of a single camera
if (libmv_reconstruction_options->all_refine_intrinsics[0]) {
libmv_solveRefineIntrinsics(
all_tracks,
libmv_reconstruction_options->all_refine_intrinsics[0],
mv::BUNDLE_NO_CONSTRAINTS,
progress_update_callback,
callback_customdata,
&reconstruction,
all_libmv_reconstruction[0]->intrinsics);
}
LG << "[libmv_solveMultiviewReconstruction] Successfully refined camera intrinsics\n";
/* Set reconstruction scale to unity. */
mv::EuclideanScaleToUnity(&reconstruction);
/* Finish reconstruction. */
finishMultiviewReconstruction(all_tracks,
*(all_libmv_reconstruction[0]->intrinsics),
all_libmv_reconstruction[0],
progress_update_callback,
callback_customdata);
// a multi-view reconstruction is successful iff all reconstruction flags are set to true
for (int i = 0; i < clip_num; i++)
all_libmv_reconstruction[i]->is_valid = true;
return all_libmv_reconstruction;
}
bool libmv_multiviewReconstructionIsValid(const int clip_num,
const libmv_ReconstructionN **all_libmv_reconstruction)
{
bool valid_flag = true;
for (int i = 0; i < clip_num; i++)
valid_flag &= all_libmv_reconstruction[i]->is_valid;
return valid_flag;
}
double libmv_multiviewReprojectionError(const int clip_num,
const libmv_ReconstructionN **all_libmv_reconstruction)
{
// the reprojection error is computed as a whole and stored in all_libmv_reconstruction[0]->error
double error = all_libmv_reconstruction[0]->error;
return error;
}
libmv_CameraIntrinsics *libmv_reconstructionNExtractIntrinsics(libmv_ReconstructionN *libmv_reconstruction)
{
return (libmv_CameraIntrinsics *) libmv_reconstruction->intrinsics;
}
int libmv_multiviewPointForTrack(
const libmv_ReconstructionN *libmv_reconstruction,
int global_track,
double pos[3]) {
const Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const mv::Point *point = reconstruction->PointForTrack(global_track);
if (point) {
pos[0] = point->X[0];
pos[1] = point->X[2];
pos[2] = point->X[1];
return 1;
}
return 0;
}
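The axis swap above (Y and Z exchanged when copying `point->X` into `pos`) converts libmv's camera-space convention into Blender's Z-up world convention. A minimal standalone sketch of that swizzle, with a hypothetical helper name not present in the original source:

```cpp
#include <cassert>

// Hypothetical helper mirroring the swizzle used in
// libmv_multiviewPointForTrack: libmv's (X, Y, Z) becomes
// Blender's (X, Z, Y).
inline void libmv_to_blender_point(const double in[3], double out[3]) {
  out[0] = in[0];
  out[1] = in[2];
  out[2] = in[1];
}
```

Applying the swizzle twice returns the original coordinates, which is why the same pattern appears in both directions elsewhere in this file.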
double libmv_multiviewReprojectionErrorForTrack(
const libmv_ReconstructionN *libmv_reconstruction,
int track) {
const Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const CameraIntrinsics *intrinsics = libmv_reconstruction->intrinsics;
mv::vector<Marker> markers;
libmv_reconstruction->tracks.GetMarkersForTrack(track, &markers);
int num_reprojected = 0;
double total_error = 0.0;
for (int i = 0; i < markers.size(); ++i) {
double weight = markers[i].weight;
const mv::CameraPose *camera = reconstruction->CameraPoseForFrame(markers[i].clip, markers[i].frame);
const mv::Point *point = reconstruction->PointForTrack(markers[i].track);
if (!camera || !point || weight == 0.0) {
continue;
}
num_reprojected++;
Marker reprojected_marker = libmv_projectMarker(*point, *camera, *intrinsics);
double ex = (reprojected_marker.center[0] - markers[i].center[0]) * weight;
double ey = (reprojected_marker.center[1] - markers[i].center[1]) * weight;
total_error += sqrt(ex * ex + ey * ey);
}
return total_error / num_reprojected;
}
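The per-track error above averages the weighted Euclidean distance between reprojected and observed marker centers, skipping markers without a camera, without a point, or with zero weight. A simplified self-contained sketch of the same accumulation (the `Observation` struct and function name are hypothetical, standing in for the reconstruction lookups):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified model of one marker observation: a predicted
// (reprojected) position, an observed position, and a weight.
struct Observation {
  double predicted_x, predicted_y;
  double observed_x, observed_y;
  double weight;
};

// Average weighted Euclidean reprojection error, skipping
// zero-weight observations as the C-API function does.
inline double average_reprojection_error(
    const std::vector<Observation> &obs) {
  int num_used = 0;
  double total = 0.0;
  for (size_t i = 0; i < obs.size(); ++i) {
    if (obs[i].weight == 0.0) continue;
    double ex = (obs[i].predicted_x - obs[i].observed_x) * obs[i].weight;
    double ey = (obs[i].predicted_y - obs[i].observed_y) * obs[i].weight;
    total += std::sqrt(ex * ex + ey * ey);
    ++num_used;
  }
  return num_used ? total / num_used : 0.0;
}
```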
int libmv_multiviewCameraForFrame(
const libmv_ReconstructionN *libmv_reconstruction,
int clip,
int frame,
double mat[4][4]) {
const Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const mv::CameraPose *camera = reconstruction->CameraPoseForFrame(clip, frame);
if (camera) {
for (int j = 0; j < 3; ++j) {
for (int k = 0; k < 3; ++k) {
int l = k;
if (k == 1) {
l = 2;
} else if (k == 2) {
l = 1;
}
if (j == 2) {
mat[j][l] = -camera->R(j, k);
} else {
mat[j][l] = camera->R(j, k);
}
}
mat[j][3] = 0.0;
}
libmv::Vec3 optical_center = -camera->R.transpose() * camera->t;
mat[3][0] = optical_center(0);
mat[3][1] = optical_center(2);
mat[3][2] = optical_center(1);
mat[3][3] = 1.0;
return 1;
}
return 0;
}
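The translation row written into `mat[3][*]` above is the camera's optical center C = -Rᵀt, followed by the same Y/Z swizzle. A self-contained sketch of just the optical-center formula for a plain 3×3 rotation (helper name hypothetical):

```cpp
#include <cassert>

// Compute the camera's optical center C = -R^T * t from a
// world-to-camera pose (R, t), as done for mat[3][*] above.
inline void optical_center(const double R[3][3], const double t[3],
                           double C[3]) {
  for (int i = 0; i < 3; ++i) {
    C[i] = 0.0;
    for (int j = 0; j < 3; ++j) {
      C[i] -= R[j][i] * t[j];  // R^T indexing: R[j][i].
    }
  }
}
```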
double libmv_multiviewReprojectionErrorForFrame(
const libmv_ReconstructionN *libmv_reconstruction,
int clip,
int frame) {
const Reconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const CameraIntrinsics *intrinsics = libmv_reconstruction->intrinsics;
mv::vector<Marker> markers;
libmv_reconstruction->tracks.GetMarkersInFrame(clip, frame, &markers);
const mv::CameraPose *camera = reconstruction->CameraPoseForFrame(clip, frame);
int num_reprojected = 0;
double total_error = 0.0;
if (!camera) {
return 0.0;
}
for (int i = 0; i < markers.size(); ++i) {
const mv::Point *point =
reconstruction->PointForTrack(markers[i].track);
if (!point) {
continue;
}
num_reprojected++;
Marker reprojected_marker =
libmv_projectMarker(*point, *camera, *intrinsics);
double ex = (reprojected_marker.center[0] - markers[i].center[0]) * markers[i].weight;
double ey = (reprojected_marker.center[1] - markers[i].center[1]) * markers[i].weight;
total_error += sqrt(ex * ex + ey * ey);
}
return total_error / num_reprojected;
}

View File

@@ -0,0 +1,78 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2017 Blender Foundation.
* All rights reserved.
*
* Contributor(s): Blender Foundation,
* Tianwei Shen
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef LIBMV_C_API_RECONSTRUCTIONN_H_
#define LIBMV_C_API_RECONSTRUCTIONN_H_
#include "intern/reconstruction.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct libmv_ReconstructionN libmv_ReconstructionN;
typedef struct libmv_MultiviewReconstructionOptions {
int select_keyframes;
int keyframe1, keyframe2;
int *all_refine_intrinsics; /* this should be an array since each clip has its own refine_flags */
} libmv_MultiviewReconstructionOptions;
typedef void (*multiview_reconstruct_progress_update_cb) (void *customdata,
double progress,
const char *message);
void libmv_reconstructionNDestroy(libmv_ReconstructionN *libmv_reconstructionN);
libmv_ReconstructionN** libmv_solveMultiviewReconstruction(const int clip_num,
const struct libmv_TracksN **all_libmv_tracks,
const libmv_CameraIntrinsicsOptions *all_libmv_camera_intrinsics_options,
libmv_MultiviewReconstructionOptions *libmv_reconstruction_options,
multiview_reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata);
bool libmv_multiviewReconstructionIsValid(const int clip_num,
const libmv_ReconstructionN **all_libmv_reconstruction);
double libmv_multiviewReprojectionError(const int clip_num,
const libmv_ReconstructionN **all_libmv_reconstruction);
libmv_CameraIntrinsics *libmv_reconstructionNExtractIntrinsics(libmv_ReconstructionN *libmv_reconstruction);
int libmv_multiviewPointForTrack(const libmv_ReconstructionN *libmv_reconstruction, int global_track, double pos[3]);
double libmv_multiviewReprojectionErrorForTrack(const libmv_ReconstructionN *libmv_reconstruction, int track);
int libmv_multiviewCameraForFrame(const libmv_ReconstructionN *libmv_reconstruction,
int clip, int frame, double mat[4][4]);
double libmv_multiviewReprojectionErrorForFrame(const libmv_ReconstructionN *libmv_reconstruction,
int clip, int frame);
#ifdef __cplusplus
}
#endif
#endif // LIBMV_C_API_RECONSTRUCTIONN_H_

View File

@@ -88,35 +88,30 @@ typedef struct libmv_Marker {
namespace mv {
struct Marker;
}
/* -------- libmv_Marker ------------------- */
void libmv_apiMarkerToMarker(const libmv_Marker& libmv_marker,
mv::Marker *marker);
void libmv_markerToApiMarker(const mv::Marker& marker,
libmv_Marker *libmv_marker);
#endif
/* -------- libmv_Tracks ------------------- */
libmv_TracksN* libmv_tracksNewN(void);
void libmv_tracksDestroyN(libmv_TracksN* libmv_tracks);
void libmv_tracksAddMarkerN(libmv_TracksN* libmv_tracks,
const libmv_Marker* libmv_marker);
void libmv_tracksGetMarkerN(libmv_TracksN* libmv_tracks,
int clip,
int frame,
int track,
libmv_Marker* libmv_marker);
void libmv_tracksRemoveMarkerN(libmv_TracksN* libmv_tracks,
int clip,
int frame,
int track);
void libmv_tracksRemoveMarkersForTrack(libmv_TracksN* libmv_tracks,
int track);
int libmv_tracksMaxClipN(libmv_TracksN* libmv_tracks);
int libmv_tracksMaxFrameN(libmv_TracksN* libmv_tracks, int clip);
int libmv_tracksMaxTrackN(libmv_TracksN* libmv_tracks);

View File

@@ -35,6 +35,7 @@
#include "intern/image.h"
#include "intern/logging.h"
#include "intern/reconstruction.h"
#include "intern/reconstructionN.h"
#include "intern/track_region.h"
#include "intern/tracks.h"
#include "intern/tracksN.h"

View File

@@ -0,0 +1,685 @@
// Copyright (c) 2011, 2012, 2013, 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
//
// This is an autotrack equivalent of the simple_pipeline bundle code,
// which replaces libmv types with mv ones, including tracks and markers.
#include "libmv/autotrack/bundle.h"
#include <cstdio>
#include <map>
#include "ceres/ceres.h"
#include "ceres/rotation.h"
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/fundamental.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/numeric.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
#include "libmv/simple_pipeline/distortion_models.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/autotrack/tracks.h"
#ifdef _OPENMP
# include <omp.h>
#endif
using libmv::DistortionModelType;
using libmv::Vec6;
using libmv::PolynomialCameraIntrinsics;
namespace mv {
// The intrinsics need to get combined into a single parameter block; use these
// enums to index instead of numeric constants.
enum {
// Camera calibration values.
OFFSET_FOCAL_LENGTH,
OFFSET_PRINCIPAL_POINT_X,
OFFSET_PRINCIPAL_POINT_Y,
// Distortion model coefficients.
OFFSET_K1,
OFFSET_K2,
OFFSET_K3,
OFFSET_P1,
OFFSET_P2,
// Maximal possible offset.
OFFSET_MAX,
};
#define FIRST_DISTORTION_COEFFICIENT OFFSET_K1
#define LAST_DISTORTION_COEFFICIENT OFFSET_P2
#define NUM_DISTORTION_COEFFICIENTS \
(LAST_DISTORTION_COEFFICIENT - FIRST_DISTORTION_COEFFICIENT + 1)
namespace {
// Cost functor which computes the reprojection error of a 3D point X
// on a camera defined by an angle-axis rotation and its translation
// (which share one parameter block for optimization reasons).
//
// This functor uses a radial distortion model.
struct OpenCVReprojectionError {
OpenCVReprojectionError(const DistortionModelType distortion_model,
const double observed_x,
const double observed_y,
const double weight) :
distortion_model_(distortion_model),
observed_x_(observed_x), observed_y_(observed_y),
weight_(weight) {}
template <typename T>
bool operator()(const T* const intrinsics,
const T* const R_t, // Angle-axis rotation followed by translation.
const T* const X, // Point coordinates 3x1.
T* residuals) const {
// Unpack the intrinsics.
const T& focal_length = intrinsics[OFFSET_FOCAL_LENGTH];
const T& principal_point_x = intrinsics[OFFSET_PRINCIPAL_POINT_X];
const T& principal_point_y = intrinsics[OFFSET_PRINCIPAL_POINT_Y];
// Compute projective coordinates: x = RX + t.
T x[3];
ceres::AngleAxisRotatePoint(R_t, X, x);
x[0] += R_t[3];
x[1] += R_t[4];
x[2] += R_t[5];
// Prevent points from going behind the camera.
if (x[2] < T(0)) {
return false;
}
// Compute normalized coordinates: x /= x[2].
T xn = x[0] / x[2];
T yn = x[1] / x[2];
T predicted_x, predicted_y;
// Apply distortion to the normalized points to get (xd, yd).
// TODO(keir): Do early bailouts for zero distortion; these are expensive
// jet operations.
switch (distortion_model_) {
case libmv::DISTORTION_MODEL_POLYNOMIAL:
{
const T& k1 = intrinsics[OFFSET_K1];
const T& k2 = intrinsics[OFFSET_K2];
const T& k3 = intrinsics[OFFSET_K3];
const T& p1 = intrinsics[OFFSET_P1];
const T& p2 = intrinsics[OFFSET_P2];
libmv::ApplyPolynomialDistortionModel(focal_length,
focal_length,
principal_point_x,
principal_point_y,
k1, k2, k3,
p1, p2,
xn, yn,
&predicted_x,
&predicted_y);
break;
}
case libmv::DISTORTION_MODEL_DIVISION:
{
const T& k1 = intrinsics[OFFSET_K1];
const T& k2 = intrinsics[OFFSET_K2];
libmv::ApplyDivisionDistortionModel(focal_length,
focal_length,
principal_point_x,
principal_point_y,
k1, k2,
xn, yn,
&predicted_x,
&predicted_y);
break;
}
default:
LOG(FATAL) << "Unknown distortion model";
}
// The error is the difference between the predicted and observed position.
residuals[0] = (predicted_x - T(observed_x_)) * weight_;
residuals[1] = (predicted_y - T(observed_y_)) * weight_;
return true;
}
const DistortionModelType distortion_model_;
const double observed_x_;
const double observed_y_;
const double weight_;
};
// Print a message to the log about which camera intrinsics are going to be optimized.
void BundleIntrinsicsLogMessage(const int bundle_intrinsics) {
if (bundle_intrinsics == BUNDLE_NO_INTRINSICS) {
LOG(INFO) << "Bundling only camera positions.";
} else {
std::string bundling_message = "";
#define APPEND_BUNDLING_INTRINSICS(name, flag) \
if (bundle_intrinsics & flag) { \
if (!bundling_message.empty()) { \
bundling_message += ", "; \
} \
bundling_message += name; \
} (void)0
APPEND_BUNDLING_INTRINSICS("f", BUNDLE_FOCAL_LENGTH);
APPEND_BUNDLING_INTRINSICS("px, py", BUNDLE_PRINCIPAL_POINT);
APPEND_BUNDLING_INTRINSICS("k1", BUNDLE_RADIAL_K1);
APPEND_BUNDLING_INTRINSICS("k2", BUNDLE_RADIAL_K2);
APPEND_BUNDLING_INTRINSICS("p1", BUNDLE_TANGENTIAL_P1);
APPEND_BUNDLING_INTRINSICS("p2", BUNDLE_TANGENTIAL_P2);
LOG(INFO) << "Bundling " << bundling_message << ".";
}
}
// Pack intrinsics from object to an array for easier
// and faster minimization.
void PackIntrinisicsIntoArray(const CameraIntrinsics &intrinsics,
double ceres_intrinsics[OFFSET_MAX]) {
ceres_intrinsics[OFFSET_FOCAL_LENGTH] = intrinsics.focal_length();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X] = intrinsics.principal_point_x();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y] = intrinsics.principal_point_y();
int num_distortion_parameters = intrinsics.num_distortion_parameters();
assert(num_distortion_parameters <= NUM_DISTORTION_COEFFICIENTS);
const double *distortion_parameters = intrinsics.distortion_parameters();
for (int i = 0; i < num_distortion_parameters; ++i) {
ceres_intrinsics[FIRST_DISTORTION_COEFFICIENT + i] =
distortion_parameters[i];
}
}
// Unpack intrinsics back from an array to an object.
void UnpackIntrinsicsFromArray(const double ceres_intrinsics[OFFSET_MAX],
CameraIntrinsics *intrinsics) {
intrinsics->SetFocalLength(ceres_intrinsics[OFFSET_FOCAL_LENGTH],
ceres_intrinsics[OFFSET_FOCAL_LENGTH]);
intrinsics->SetPrincipalPoint(ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X],
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y]);
int num_distortion_parameters = intrinsics->num_distortion_parameters();
assert(num_distortion_parameters <= NUM_DISTORTION_COEFFICIENTS);
double *distortion_parameters = intrinsics->distortion_parameters();
for (int i = 0; i < num_distortion_parameters; ++i) {
distortion_parameters[i] =
ceres_intrinsics[FIRST_DISTORTION_COEFFICIENT + i];
}
}
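The pack/unpack pair above keeps the `CameraIntrinsics` object and the flat Ceres parameter block in sync: values are written to fixed enum offsets before the solve and copied back afterwards. A stripped-down, self-contained sketch of the round trip (the `SimpleIntrinsics` struct and the reduced offset enum are hypothetical simplifications of the real `OFFSET_*` layout):

```cpp
#include <cassert>

// Simplified mirror of the offset layout used above.
enum { FOCAL = 0, PP_X, PP_Y, K1, K2, ARRAY_MAX };

struct SimpleIntrinsics {
  double focal, ppx, ppy, k1, k2;
};

// Pack the object into a flat block at fixed offsets...
inline void pack(const SimpleIntrinsics &in, double out[ARRAY_MAX]) {
  out[FOCAL] = in.focal;
  out[PP_X] = in.ppx;
  out[PP_Y] = in.ppy;
  out[K1] = in.k1;
  out[K2] = in.k2;
}

// ...and copy the (possibly optimized) values back out.
inline void unpack(const double in[ARRAY_MAX], SimpleIntrinsics *out) {
  out->focal = in[FOCAL];
  out->ppx = in[PP_X];
  out->ppy = in[PP_Y];
  out->k1 = in[K1];
  out->k2 = in[K2];
}
```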
// Get a vector of the cameras' rotations (denoted by angle axis)
// merged with their translations into single blocks. Since we use
// (clip, frame) to access a camera pose, this function also saves the
// (clip, frame) -> global_index map in camera_pose_map.
vector<Vec6> PackMultiCamerasRotationAndTranslation(
const Tracks &tracks,
const Reconstruction &reconstruction,
vector<vector<int> > &camera_pose_map) {
vector<Vec6> all_cameras_R_t;
int clip_num = tracks.MaxClip() + 1;
camera_pose_map.resize(clip_num);
int total_frame = 0;
for(int i = 0; i < clip_num; i++) {
total_frame += tracks.MaxFrame(i) + 1;
camera_pose_map[i].resize(tracks.MaxFrame(i) + 1);
}
all_cameras_R_t.resize(total_frame); // maximum possible number of camera poses
int frame_count = 0;
for(int i = 0; i < clip_num; i++) {
int max_frame = tracks.MaxFrame(i);
for(int j = 0; j <= max_frame; j++) {
const CameraPose *camera = reconstruction.CameraPoseForFrame(i, j);
if (camera) {
ceres::RotationMatrixToAngleAxis(&camera->R(0, 0),
&all_cameras_R_t[frame_count](0));
all_cameras_R_t[frame_count].tail<3>() = camera->t;
camera_pose_map[i][j] = frame_count; // save the global map
frame_count++;
}
}
}
return all_cameras_R_t;
}
// Convert camera rotations from angle axis back to rotation matrices.
void UnpackMultiCamerasRotationAndTranslation(
const Tracks &tracks,
const vector<Vec6> &all_cameras_R_t,
Reconstruction *reconstruction) {
int clip_num = tracks.MaxClip() + 1;
int frame_count = 0;
for(int i = 0; i < clip_num; i++) {
int max_frame = tracks.MaxFrame(i);
for(int j = 0; j <= max_frame; j++) {
CameraPose *camera = reconstruction->CameraPoseForFrame(i, j);
if(camera) {
ceres::AngleAxisToRotationMatrix(&all_cameras_R_t[frame_count](0), &camera->R(0, 0));
camera->t = all_cameras_R_t[frame_count].tail<3>();
frame_count++;
}
}
}
}
// Converts sparse CRSMatrix to Eigen matrix, so it could be used
// all over in the pipeline.
//
// TODO(sergey): currently uses dense Eigen matrices, best would
// be to use sparse Eigen matrices
void CRSMatrixToEigenMatrix(const ceres::CRSMatrix &crs_matrix,
Mat *eigen_matrix) {
eigen_matrix->resize(crs_matrix.num_rows, crs_matrix.num_cols);
eigen_matrix->setZero();
for (int row = 0; row < crs_matrix.num_rows; ++row) {
int start = crs_matrix.rows[row];
int end = crs_matrix.rows[row + 1] - 1;
for (int i = start; i <= end; i++) {
int col = crs_matrix.cols[i];
double value = crs_matrix.values[i];
(*eigen_matrix)(row, col) = value;
}
}
}
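CRSMatrixToEigenMatrix above walks the compressed-row-storage row pointers and scatters each stored value into a dense matrix. The same traversal over plain vectors, assuming the standard CRS layout where `rows` has `num_rows + 1` entries and `rows[row+1]` bounds row `row` (function name hypothetical):

```cpp
#include <cassert>
#include <vector>

// Expand a CRS (compressed row storage) matrix into a dense
// row-major matrix; the same traversal as CRSMatrixToEigenMatrix.
inline std::vector<std::vector<double> > crs_to_dense(
    int num_rows, int num_cols,
    const std::vector<int> &rows,       // size num_rows + 1
    const std::vector<int> &cols,       // column index per value
    const std::vector<double> &values) {
  std::vector<std::vector<double> > dense(
      num_rows, std::vector<double>(num_cols, 0.0));
  for (int row = 0; row < num_rows; ++row) {
    // rows[row]..rows[row + 1] - 1 are the value indices of this row.
    for (int i = rows[row]; i < rows[row + 1]; ++i) {
      dense[row][cols[i]] = values[i];
    }
  }
  return dense;
}
```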
void MultiviewBundlerPerformEvaluation(const Tracks &tracks,
Reconstruction *reconstruction,
vector<Vec6> *all_cameras_R_t,
ceres::Problem *problem,
BundleEvaluation *evaluation) {
int max_track = tracks.MaxTrack();
// The number of camera rotations equals the number of translations.
int num_cameras = all_cameras_R_t->size();
int num_points = 0;
vector<Point*> minimized_points;
for (int i = 0; i <= max_track; i++) {
Point *point = reconstruction->PointForTrack(i);
if (point) {
// We need to know whether the track is constant zero weight,
// so it wouldn't have parameter block in the problem.
//
// Getting all markers for a track is not so bad currently, since
// this code is only used by keyframe selection, when there are
// not that many tracks and only 2 frames anyway.
vector<Marker> marker_of_track;
tracks.GetMarkersForTrack(i, &marker_of_track);
for (int j = 0; j < marker_of_track.size(); j++) {
if (marker_of_track.at(j).weight != 0.0) {
minimized_points.push_back(point);
num_points++;
break;
}
}
}
}
LG << "Number of cameras " << num_cameras;
LG << "Number of points " << num_points;
evaluation->num_cameras = num_cameras;
evaluation->num_points = num_points;
if (evaluation->evaluate_jacobian) { // Evaluate jacobian matrix.
ceres::CRSMatrix evaluated_jacobian;
ceres::Problem::EvaluateOptions eval_options;
// Cameras go first in the ordering.
int frame_count = 0;
int clip_num = tracks.MaxClip() + 1;
for(int i = 0; i < clip_num; i++) {
int max_frame = tracks.MaxFrame(i);
for (int j = 0; j <= max_frame; j++) {  // include MaxFrame, matching the packing loop
const CameraPose *camera = reconstruction->CameraPoseForFrame(i, j);
if (camera) {
double *current_camera_R_t = &(*all_cameras_R_t)[frame_count](0);
// All cameras are variable now.
problem->SetParameterBlockVariable(current_camera_R_t);
eval_options.parameter_blocks.push_back(current_camera_R_t);
frame_count++;
}
}
}
// Points go at the end of the ordering.
for (int i = 0; i < minimized_points.size(); i++) {
Point *point = minimized_points.at(i);
eval_options.parameter_blocks.push_back(&point->X(0));
}
problem->Evaluate(eval_options,
NULL, NULL, NULL,
&evaluated_jacobian);
CRSMatrixToEigenMatrix(evaluated_jacobian, &evaluation->jacobian);
}
}
// This is a utility function to bundle only the 3D positions of
// the given markers.
//
// Its main purpose is to adjust the positions of tracks which have
// constant zero weight and so far only used algebraic intersection
// to obtain their 3D positions.
//
// At this point we only need to bundle point positions; the cameras
// are kept entirely fixed here.
void EuclideanBundlePointsOnly(const DistortionModelType distortion_model,
const vector<Marker> &markers,
vector<Vec6> &all_cameras_R_t,
double ceres_intrinsics[OFFSET_MAX],
Reconstruction *reconstruction,
vector<vector<int> > &camera_pose_map) {
ceres::Problem::Options problem_options;
ceres::Problem problem(problem_options);
int num_residuals = 0;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
CameraPose *camera = reconstruction->CameraPoseForFrame(marker.clip, marker.frame);
Point *point = reconstruction->PointForTrack(marker.track);
if (camera == NULL || point == NULL) {
continue;
}
// Rotation of the camera denoted as angle axis, followed by
// the camera translation.
double *current_camera_R_t = &all_cameras_R_t[camera_pose_map[camera->clip][camera->frame]](0);
problem.AddResidualBlock(new ceres::AutoDiffCostFunction<
OpenCVReprojectionError, 2, OFFSET_MAX, 6, 3>(
new OpenCVReprojectionError(
distortion_model,
marker.center[0],
marker.center[1],
1.0)),
NULL,
ceres_intrinsics,
current_camera_R_t,
&point->X(0));
problem.SetParameterBlockConstant(current_camera_R_t);
num_residuals++;
}
LG << "Number of residuals: " << num_residuals;
if (!num_residuals) {
LG << "Skipping running minimizer with zero residuals";
return;
}
problem.SetParameterBlockConstant(ceres_intrinsics);
// Configure the solver.
ceres::Solver::Options options;
options.use_nonmonotonic_steps = true;
options.preconditioner_type = ceres::SCHUR_JACOBI;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.use_explicit_schur_complement = true;
options.use_inner_iterations = true;
options.max_num_iterations = 100;
#ifdef _OPENMP
options.num_threads = omp_get_max_threads();
options.num_linear_solver_threads = omp_get_max_threads();
#endif
// Solve!
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
LG << "Final report:\n" << summary.FullReport();
}
} // namespace
void EuclideanBundleCommonIntrinsics(
const Tracks &tracks,
const int bundle_intrinsics,
const int bundle_constraints,
Reconstruction *reconstruction,
CameraIntrinsics *intrinsics,
BundleEvaluation *evaluation) {
LG << "Original intrinsics: " << *intrinsics;
vector<Marker> markers;
tracks.GetAllMarkers(&markers);
// N-th element denotes whether track N is a constant zero-weighted track.
vector<bool> zero_weight_tracks_flags(tracks.MaxTrack() + 1, true);
// Residual blocks with 10 parameters are unwieldy with Ceres, so pack the
// intrinsics into a single block and rely on local parameterizations to
// control which intrinsics are allowed to vary.
double ceres_intrinsics[OFFSET_MAX];
PackIntrinisicsIntoArray(*intrinsics, ceres_intrinsics);
// Convert camera rotations to angle axis and merge them with the
// translations into a single parameter block for maximal minimization speed.
//
// Block for minimization has got the following structure:
// <3 elements for angle-axis> <3 elements for translation>
vector<vector<int> > camera_pose_map;
vector<Vec6> all_cameras_R_t =
PackMultiCamerasRotationAndTranslation(tracks, *reconstruction, camera_pose_map);
// Parameterization used to restrict camera motion for modal solvers.
// TODO(tianwei): haven't thought about modal solvers, leave it for now
ceres::SubsetParameterization *constant_translation_parameterization = NULL;
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
std::vector<int> constant_translation;
// First three elements are rotation, last three are translation.
constant_translation.push_back(3);
constant_translation.push_back(4);
constant_translation.push_back(5);
constant_translation_parameterization =
new ceres::SubsetParameterization(6, constant_translation);
}
// Add residual blocks to the problem.
ceres::Problem::Options problem_options;
ceres::Problem problem(problem_options);
int num_residuals = 0;
bool have_locked_camera = false;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
CameraPose *camera = reconstruction->CameraPoseForFrame(marker.clip, marker.frame);
Point *point = reconstruction->PointForTrack(marker.track);
if (camera == NULL || point == NULL) {
continue;
}
// Rotation of the camera denoted as angle axis, followed by
// the camera translation.
double *current_camera_R_t = &all_cameras_R_t[camera_pose_map[marker.clip][marker.frame]](0);
// Skip the residual block for markers which have absolutely
// no effect on the final solution.
// This way Ceres won't go crazy.
if (marker.weight != 0.0) {
problem.AddResidualBlock(new ceres::AutoDiffCostFunction<
OpenCVReprojectionError, 2, OFFSET_MAX, 6, 3>(
new OpenCVReprojectionError(
intrinsics->GetDistortionModelType(),
marker.center[0],
marker.center[1],
marker.weight)),
NULL,
ceres_intrinsics,
current_camera_R_t,
&point->X(0));
// lock the first camera to deal with scene orientation ambiguity.
if (!have_locked_camera) {
problem.SetParameterBlockConstant(current_camera_R_t);
have_locked_camera = true;
}
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
problem.SetParameterization(current_camera_R_t,
constant_translation_parameterization);
}
zero_weight_tracks_flags[marker.track] = false;
num_residuals++;
}
}
LG << "Number of residuals: " << num_residuals << "\n";
if (!num_residuals) {
LG << "Skipping running minimizer with zero residuals\n";
return;
}
if (intrinsics->GetDistortionModelType() == libmv::DISTORTION_MODEL_DIVISION &&
(bundle_intrinsics & BUNDLE_TANGENTIAL) != 0) {
LOG(FATAL) << "Division model doesn't support bundling "
"of tangential distortion\n";
}
BundleIntrinsicsLogMessage(bundle_intrinsics);
if (bundle_intrinsics == BUNDLE_NO_INTRINSICS) {
// No camera intrinsics are being refined,
// set the whole parameter block as constant for best performance.
problem.SetParameterBlockConstant(ceres_intrinsics);
} else {
// Set the camera intrinsics that are not to be bundled as
// constant using some macro trickery.
std::vector<int> constant_intrinsics;
#define MAYBE_SET_CONSTANT(bundle_enum, offset) \
if (!(bundle_intrinsics & bundle_enum)) { \
constant_intrinsics.push_back(offset); \
}
MAYBE_SET_CONSTANT(BUNDLE_FOCAL_LENGTH, OFFSET_FOCAL_LENGTH);
MAYBE_SET_CONSTANT(BUNDLE_PRINCIPAL_POINT, OFFSET_PRINCIPAL_POINT_X);
MAYBE_SET_CONSTANT(BUNDLE_PRINCIPAL_POINT, OFFSET_PRINCIPAL_POINT_Y);
MAYBE_SET_CONSTANT(BUNDLE_RADIAL_K1, OFFSET_K1);
MAYBE_SET_CONSTANT(BUNDLE_RADIAL_K2, OFFSET_K2);
MAYBE_SET_CONSTANT(BUNDLE_TANGENTIAL_P1, OFFSET_P1);
MAYBE_SET_CONSTANT(BUNDLE_TANGENTIAL_P2, OFFSET_P2);
#undef MAYBE_SET_CONSTANT
// Always set K3 constant, it's not used at the moment.
constant_intrinsics.push_back(OFFSET_K3);
ceres::SubsetParameterization *subset_parameterization =
new ceres::SubsetParameterization(OFFSET_MAX, constant_intrinsics);
problem.SetParameterization(ceres_intrinsics, subset_parameterization);
}
// Configure the solver.
ceres::Solver::Options options;
options.use_nonmonotonic_steps = true;
options.preconditioner_type = ceres::SCHUR_JACOBI;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.use_explicit_schur_complement = true;
options.use_inner_iterations = true;
options.max_num_iterations = 100;
#ifdef _OPENMP
options.num_threads = omp_get_max_threads();
options.num_linear_solver_threads = omp_get_max_threads();
#endif
// Solve!
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
LG << "Final report:\n" << summary.FullReport();
// Copy rotations and translations back.
UnpackMultiCamerasRotationAndTranslation(tracks,
all_cameras_R_t,
reconstruction);
// Copy intrinsics back.
if (bundle_intrinsics != BUNDLE_NO_INTRINSICS)
UnpackIntrinsicsFromArray(ceres_intrinsics, intrinsics);
LG << "Final intrinsics: " << *intrinsics;
if (evaluation) {
MultiviewBundlerPerformEvaluation(tracks, reconstruction, &all_cameras_R_t,
&problem, evaluation);
}
// Separate step to adjust positions of tracks which are
// constant zero-weighted.
vector<Marker> zero_weight_markers;
for (int track = 0; track <= tracks.MaxTrack(); ++track) {
if (zero_weight_tracks_flags[track]) {
vector<Marker> current_markers;
tracks.GetMarkersForTrack(track, &current_markers);
zero_weight_markers.reserve(zero_weight_markers.size() +
current_markers.size());
for (int i = 0; i < current_markers.size(); ++i) {
zero_weight_markers.push_back(current_markers[i]);
}
}
}
// zero-weight markers don't contribute to the bundling of poses
if (zero_weight_markers.size()) {
LG << "Refining position of constant zero-weighted tracks";
EuclideanBundlePointsOnly(intrinsics->GetDistortionModelType(),
zero_weight_markers,
all_cameras_R_t,
ceres_intrinsics,
reconstruction,
camera_pose_map);
}
}
bool EuclideanBundleAll(const Tracks &tracks,
Reconstruction *reconstruction) {
libmv::PolynomialCameraIntrinsics empty_intrinsics;
EuclideanBundleCommonIntrinsics(tracks,
BUNDLE_NO_INTRINSICS,
BUNDLE_NO_CONSTRAINTS,
reconstruction,
&empty_intrinsics,
NULL);
return true;
}
} // namespace mv

View File

@@ -0,0 +1,137 @@
// Copyright (c) 2011, 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
//
// This is an autotrack equivalent of the simple_pipeline bundle code,
// which replaces libmv types with mv ones, including tracks and markers.
#ifndef LIBMV_AUTOTRACK_BUNDLE_H
#define LIBMV_AUTOTRACK_BUNDLE_H
#include "libmv/numeric/numeric.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/simple_pipeline/callbacks.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
using libmv::Mat;
using libmv::CameraIntrinsics;
using mv::Tracks;
using mv::Reconstruction;
namespace mv {
class EuclideanReconstruction;
class ProjectiveReconstruction;
struct BundleEvaluation {
BundleEvaluation() :
num_cameras(0),
num_points(0),
evaluate_jacobian(false) {
}
int num_cameras; /* Number of cameras that appeared in the bundle adjustment problem */
int num_points; /* Number of points that appeared in the bundle adjustment problem */
/* When set to true, the jacobian of the problem after optimization will
 * be evaluated and stored in \parameter jacobian */
bool evaluate_jacobian;
// Contains evaluated jacobian of the problem.
// Parameters are ordered in the following way:
// - Intrinsics block
// - Cameras (for each camera rotation goes first, then translation)
// - Points
Mat jacobian;
};
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
This routine adjusts all camera positions, points, and the camera
intrinsics (assumed common across all images) in \a *reconstruction. This
assumes a full observation for reconstructed tracks; this implies that if
there is a reconstructed 3D point (a bundle) for a track, then all markers
for that track will be included in the minimization. \a tracks should
contain markers used in the initial reconstruction.
The cameras, bundles, and intrinsics are refined in-place.
Constraints denotes which blocks to keep constant during bundling.
For example it is useful to keep camera translations constant
when bundling tripod motions.
If evaluation is not null, various evaluation statistics are filled in
there, and all the requested additional information (such as the jacobian)
is also calculated there. Also see comments for BundleEvaluation.
\note This assumes an outlier-free set of markers.
\sa EuclideanResect, EuclideanIntersect, EuclideanReconstructTwoFrames
*/
enum BundleIntrinsics {
BUNDLE_NO_INTRINSICS = 0,
BUNDLE_FOCAL_LENGTH = 1,
BUNDLE_PRINCIPAL_POINT = 2,
BUNDLE_RADIAL_K1 = 4,
BUNDLE_RADIAL_K2 = 8,
BUNDLE_RADIAL = 12,
BUNDLE_TANGENTIAL_P1 = 16,
BUNDLE_TANGENTIAL_P2 = 32,
BUNDLE_TANGENTIAL = 48,
};
enum BundleConstraints {
BUNDLE_NO_CONSTRAINTS = 0,
BUNDLE_NO_TRANSLATION = 1,
};
void EuclideanBundleCommonIntrinsics(
const Tracks &tracks,
const int bundle_intrinsics,
const int bundle_constraints,
Reconstruction *reconstruction,
CameraIntrinsics *intrinsics,
BundleEvaluation *evaluation = NULL);
/*! Refine all camera poses and 3D coordinates from all clips using bundle adjustment.
This is a renewed version for autotrack, adapted from libmv/simple_pipeline
This routine adjusts all cameras and points in \a *reconstruction. This
assumes a full observation for reconstructed tracks; this implies that if
there is a reconstructed 3D point (a bundle) for a track, then all markers
for that track will be included in the minimization. \a tracks should
contain markers used in the initial reconstruction.
The cameras and bundles (3D points) are refined in-place.
\note This assumes an outlier-free set of markers.
\note This assumes a calibrated reconstruction, e.g. the markers are
already corrected for camera intrinsics and radial distortion.
\sa EuclideanResect, EuclideanIntersect, EuclideanReconstructTwoFrames
*/
bool EuclideanBundleAll(const Tracks &tracks,
Reconstruction *reconstruction);
} // namespace mv
#endif // LIBMV_AUTOTRACK_BUNDLE_H


@@ -0,0 +1,182 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
// adapted from simple_pipeline/intersect.cc
#include "libmv/autotrack/intersect.h"
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/projection.h"
#include "libmv/multiview/triangulation.h"
#include "libmv/multiview/nviewtriangulation.h"
#include "libmv/numeric/numeric.h"
#include "libmv/numeric/levenberg_marquardt.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/autotrack/tracks.h"
#include "ceres/ceres.h"
using libmv::Mat;
using libmv::Mat34;
using libmv::Mat2X;
using libmv::Vec4;
namespace mv {
namespace {
class EuclideanIntersectCostFunctor {
public:
EuclideanIntersectCostFunctor(const Marker &marker,
const CameraPose &camera)
: marker_(marker), camera_(camera) {}
template<typename T>
bool operator()(const T *X, T *residuals) const {
typedef Eigen::Matrix<T, 3, 3> Mat3;
typedef Eigen::Matrix<T, 3, 1> Vec3;
Vec3 x(X);
Mat3 R(camera_.R.cast<T>());
Vec3 t(camera_.t.cast<T>());
Vec3 projected = R * x + t;
projected /= projected(2);
residuals[0] = (T(projected(0)) - T(marker_.center[0])) * T(marker_.weight);
residuals[1] = (T(projected(1)) - T(marker_.center[1])) * T(marker_.weight);
return true;
}
const Marker &marker_;
const CameraPose &camera_;
};
} // namespace
bool EuclideanIntersect(const vector<Marker> &markers,
Reconstruction *reconstruction) {
if (markers.size() < 2) {
return false;
}
// Compute projective camera matrices for the cameras the intersection is
// going to use.
Mat3 K = Mat3::Identity();
vector<Mat34> cameras;
Mat34 P;
for (int i = 0; i < markers.size(); ++i) {
LG << "marker clip and frame: " << markers[i].clip << " " << markers[i].frame << std::endl;
CameraPose *camera = reconstruction->CameraPoseForFrame(markers[i].clip, markers[i].frame);
libmv::P_From_KRt(K, camera->R, camera->t, &P);
cameras.push_back(P);
}
// Stack the 2D coordinates together as required by NViewTriangulate.
Mat2X points(2, markers.size());
for (int i = 0; i < markers.size(); ++i) {
points(0, i) = markers[i].center[0];
points(1, i) = markers[i].center[1];
}
Vec4 Xp;
LG << "Intersecting with " << markers.size() << " markers.\n";
libmv::NViewTriangulateAlgebraic(points, cameras, &Xp);
// Get euclidean version of the homogeneous point.
Xp /= Xp(3);
Vec3 X = Xp.head<3>();
ceres::Problem problem;
// Add residual blocks to the problem.
int num_residuals = 0;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
if (marker.weight != 0.0) {
const CameraPose &camera =
*reconstruction->CameraPoseForFrame(marker.clip, marker.frame);
problem.AddResidualBlock(
new ceres::AutoDiffCostFunction<
EuclideanIntersectCostFunctor,
2, /* num_residuals */
3>(new EuclideanIntersectCostFunctor(marker, camera)),
NULL,
&X(0));
num_residuals++;
}
}
// TODO(sergey): Once we'll update Ceres to the next version
// we wouldn't need this check anymore -- Ceres will deal with
// zero-sized problems nicely.
LG << "Number of residuals: " << num_residuals << "\n";
if (!num_residuals) {
LG << "Skipping running minimizer with zero residuals\n";
// We still add a 3D point for the track regardless of whether it was
// optimized or not. If the track has constant zero weight, the
// algebraic intersection result is used as its 3D coordinate.
Vec3 point = X.head<3>();
Point mv_point(markers[0].track, point);
reconstruction->AddPoint(mv_point);
return true;
}
// Configure the solve.
ceres::Solver::Options solver_options;
solver_options.linear_solver_type = ceres::DENSE_QR;
solver_options.max_num_iterations = 50;
solver_options.update_state_every_iteration = true;
solver_options.parameter_tolerance = 1e-16;
solver_options.function_tolerance = 1e-16;
// Run the solve.
ceres::Solver::Summary summary;
ceres::Solve(solver_options, &problem, &summary);
VLOG(1) << "Summary:\n" << summary.FullReport();
// Try projecting the point; make sure it's in front of everyone.
for (int i = 0; i < cameras.size(); ++i) {
const CameraPose &camera =
*reconstruction->CameraPoseForFrame(markers[i].clip, markers[i].frame);
Vec3 x = camera.R * X + camera.t;
if (x(2) < 0) {
//LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image << ": " << x.transpose();
return false;
}
}
Vec3 point = X.head<3>();
Point mv_point(markers[0].track, point);
reconstruction->AddPoint(mv_point);
// TODO(keir): Add proper error checking.
return true;
}
} // namespace mv


@@ -0,0 +1,79 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
#ifndef LIBMV_AUTOTRACK_INTERSECT_H
#define LIBMV_AUTOTRACK_INTERSECT_H
#include "libmv/base/vector.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/reconstruction.h"
namespace mv {
/*!
Estimate the 3D coordinates of a track by intersecting rays from images.
This takes a set of markers, where each marker is for the same track but
different images, and reconstructs the 3D position of that track. Each of
the frames for which there is a marker for that track must have a
corresponding reconstructed camera in \a *reconstruction.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in all frames.
\a reconstruction should contain the cameras for all frames.
The new \l Point points \endlink will be inserted in \a reconstruction.
\note This assumes a calibrated reconstruction, e.g. the markers are
already corrected for camera intrinsics and radial distortion.
\note This assumes an outlier-free set of markers.
\sa EuclideanResect
*/
bool EuclideanIntersect(const vector<Marker> &markers,
Reconstruction *reconstruction);
/*!
Estimate the homogeneous coordinates of a track by intersecting rays.
This takes a set of markers, where each marker is for the same track but
different images, and reconstructs the homogeneous 3D position of that
track. Each of the frames for which there is a marker for that track must
have a corresponding reconstructed camera in \a *reconstruction.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in all frames.
\a reconstruction should contain the cameras for all frames.
The new \l Point points \endlink will be inserted in \a reconstruction.
\note This assumes that radial distortion is already corrected for, but
does not assume that e.g. focal length and principal point are
accounted for.
\note This assumes an outlier-free set of markers.
\sa Resect
*/
//bool ProjectiveIntersect(const vector<Marker> &markers,
// ProjectiveReconstruction *reconstruction);
} // namespace mv
#endif // LIBMV_AUTOTRACK_INTERSECT_H


@@ -0,0 +1,455 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
// adapted from simple_pipeline/keyframe_selection.cc
#include "libmv/autotrack/keyframe_selection.h"
#include "libmv/numeric/numeric.h"
#include "ceres/ceres.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/homography.h"
#include "libmv/multiview/fundamental.h"
#include "libmv/autotrack/intersect.h"
#include "libmv/autotrack/bundle.h"
#include <Eigen/Eigenvalues>
namespace mv {
using libmv::Vec;
namespace {
Mat3 IntrinsicsNormalizationMatrix(const CameraIntrinsics &intrinsics) {
Mat3 T = Mat3::Identity(), S = Mat3::Identity();
T(0, 2) = -intrinsics.principal_point_x();
T(1, 2) = -intrinsics.principal_point_y();
S(0, 0) /= intrinsics.focal_length_x();
S(1, 1) /= intrinsics.focal_length_y();
return S * T;
}
// P.H.S. Torr
// Geometric Motion Segmentation and Model Selection
//
// http://reference.kfupm.edu.sa/content/g/e/geometric_motion_segmentation_and_model__126445.pdf
//
// d is the number of dimensions modeled
// (d = 3 for a fundamental matrix or 2 for a homography)
// k is the number of degrees of freedom in the model
// (k = 7 for a fundamental matrix or 8 for a homography)
// r is the dimension of the data
// (r = 4 for 2D correspondences between two frames)
double GRIC(const Vec &e, int d, int k, int r) {
int n = e.rows();
double lambda1 = log(static_cast<double>(r));
double lambda2 = log(static_cast<double>(r * n));
// lambda3 limits the residual error, and this paper
// http://elvera.nue.tu-berlin.de/files/0990Knorr2006.pdf
// suggests using a lambda3 of 2.
// The same value is used in Torr's "Problem of degeneracy in structure
// and motion recovery from uncalibrated image sequences"
// http://www.robots.ox.ac.uk/~vgg/publications/papers/torr99.ps.gz
double lambda3 = 2.0;
// Variance of tracker position. Physically, this is typically about 0.1px,
// and when squared becomes 0.01 px^2.
double sigma2 = 0.01;
// Finally, calculate the GRIC score.
double gric = 0.0;
for (int i = 0; i < n; i++) {
gric += std::min(e(i) * e(i) / sigma2, lambda3 * (r - d));
}
gric += lambda1 * d * n;
gric += lambda2 * k;
return gric;
}
// Compute a generalized inverse using eigen value decomposition, clamping the
// smallest eigenvalues if requested. This is needed to compute the variance of
// reconstructed 3D points.
//
// TODO(keir): Consider moving this into the numeric code, since this is not
// related to keyframe selection.
Mat PseudoInverseWithClampedEigenvalues(const Mat &matrix,
int num_eigenvalues_to_clamp) {
Eigen::EigenSolver<Mat> eigen_solver(matrix);
Mat D = eigen_solver.pseudoEigenvalueMatrix();
Mat V = eigen_solver.pseudoEigenvectors();
// Invert the eigenvalues, zeroing out too-small ones to prevent numeric blowup.
double epsilon = std::numeric_limits<double>::epsilon();
for (int i = 0; i < D.cols(); ++i) {
if (D(i, i) > epsilon) {
D(i, i) = 1.0 / D(i, i);
} else {
D(i, i) = 0.0;
}
}
// Apply the clamp.
for (int i = D.cols() - num_eigenvalues_to_clamp; i < D.cols(); ++i) {
D(i, i) = 0.0;
}
return V * D * V.inverse();
}
void FilterZeroWeightMarkersFromTracks(const Tracks &tracks,
Tracks *filtered_tracks) {
vector<Marker> all_markers;
tracks.GetAllMarkers(&all_markers);
for (int i = 0; i < all_markers.size(); ++i) {
Marker &marker = all_markers[i];
if (marker.weight != 0.0) {
filtered_tracks->AddMarker(marker);
}
}
}
} // namespace
void SelectClipKeyframesBasedOnGRICAndVariance(const int clip_index,
const Tracks &_tracks,
const CameraIntrinsics &intrinsics,
vector<int> &keyframes) {
// Mirza Tahir Ahmed, Matthew N. Dailey
// Robust key frame extraction for 3D reconstruction from video streams
//
// http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf
Tracks filtered_tracks;
FilterZeroWeightMarkersFromTracks(_tracks, &filtered_tracks);
int max_frame = filtered_tracks.MaxFrame(clip_index);
int next_keyframe = 1;
int number_keyframes = 0;
// Limit the correspondence ratio from both sides.
// On the one hand, if the number of corresponding features is too low,
// triangulation will suffer.
// On the other hand, a high correspondence ratio likely means a short
// baseline, which will also affect accuracy.
const double Tmin = 0.8;
const double Tmax = 1.0;
Mat3 N = IntrinsicsNormalizationMatrix(intrinsics);
Mat3 N_inverse = N.inverse();
double Sc_best = std::numeric_limits<double>::max();
double success_intersects_factor_best = 0.0f;
while (next_keyframe != -1) {
int current_keyframe = next_keyframe;
double Sc_best_candidate = std::numeric_limits<double>::max();
LG << "Found keyframe " << next_keyframe;
number_keyframes++;
next_keyframe = -1;
for (int candidate_image = current_keyframe + 1;
candidate_image <= max_frame;
candidate_image++) {
// Conjunction of all markers from both keyframes
vector<Marker> all_markers;
filtered_tracks.GetMarkersInBothFrames(clip_index, current_keyframe,
clip_index, candidate_image, &all_markers);
// Match keypoints between frames current_keyframe and candidate_image
vector<Marker> tracked_markers;
filtered_tracks.GetMarkersForTracksInBothFrames(clip_index, current_keyframe,
clip_index, candidate_image, &tracked_markers);
// Correspondences in normalized space
Mat x1, x2;
CoordinatesForMarkersInFrame(tracked_markers, clip_index, current_keyframe, &x1);
CoordinatesForMarkersInFrame(tracked_markers, clip_index, candidate_image, &x2);
LG << "Found " << x1.cols()
<< " correspondences between " << current_keyframe
<< " and " << candidate_image;
// Not enough points to construct fundamental matrix
if (x1.cols() < 8 || x2.cols() < 8)
continue;
// STEP 1: Correspondence ratio constraint
int Tc = tracked_markers.size();
int Tf = all_markers.size();
double Rc = static_cast<double>(Tc) / Tf;
LG << "Correspondence between " << current_keyframe
<< " and " << candidate_image
<< ": " << Rc;
if (Rc < Tmin || Rc > Tmax)
continue;
Mat3 H, F;
// Estimate homography using default options.
libmv::EstimateHomographyOptions estimate_homography_options;
libmv::EstimateHomography2DFromCorrespondences(x1,
x2,
estimate_homography_options,
&H);
// Convert homography to original pixel space.
H = N_inverse * H * N;
libmv::EstimateFundamentalOptions estimate_fundamental_options;
libmv::EstimateFundamentalFromCorrespondences(x1,
x2,
estimate_fundamental_options,
&F);
// Convert fundamental to original pixel space.
F = N_inverse * F * N;
// TODO(sergey): STEP 2: Discard outlier matches
// STEP 3: Geometric Robust Information Criteria
// Compute error values for homography and fundamental matrices
Vec H_e, F_e;
H_e.resize(x1.cols());
F_e.resize(x1.cols());
for (int i = 0; i < x1.cols(); i++) {
libmv::Vec2 current_x1, current_x2;
intrinsics.NormalizedToImageSpace(x1(0, i), x1(1, i),
&current_x1(0), &current_x1(1));
intrinsics.NormalizedToImageSpace(x2(0, i), x2(1, i),
&current_x2(0), &current_x2(1));
H_e(i) = libmv::SymmetricGeometricDistance(H, current_x1, current_x2);
F_e(i) = libmv::SymmetricEpipolarDistance(F, current_x1, current_x2);
}
LG << "H_e: " << H_e.transpose();
LG << "F_e: " << F_e.transpose();
// Degeneracy constraint
double GRIC_H = GRIC(H_e, 2, 8, 4);
double GRIC_F = GRIC(F_e, 3, 7, 4);
LG << "GRIC values for frames " << current_keyframe
<< " and " << candidate_image
<< ", H-GRIC: " << GRIC_H
<< ", F-GRIC: " << GRIC_F;
if (GRIC_H <= GRIC_F) {
LG << "Skip " << candidate_image << " since GRIC_H <= GRIC_F\n";
continue;
}
// TODO(sergey): STEP 4: PELC criterion
// STEP 5: Estimation of reconstruction error
//
// Uses paper Keyframe Selection for Camera Motion and Structure
// Estimation from Multiple Views
// Uses ftp://ftp.tnt.uni-hannover.de/pub/papers/2004/ECCV2004-TTHBAW.pdf
// Basically, equation (15)
//
// TODO(sergey): separate all the constraints into functions,
// this one is getting too cluttered already
// Definitions in equation (15):
// - I is the number of 3D feature points
// - A is the number of essential parameters of one camera
Reconstruction reconstruction;
// The F matrix should be an E matrix, but squash it just to be sure
// Reconstruction should happen using normalized fundamental matrix
Mat3 F_normal = N * F * N_inverse;
Mat3 E;
libmv::FundamentalToEssential(F_normal, &E);
// Recover motion between the two images. Since this function assumes a
// calibrated camera, use the identity for K
Mat3 R;
Vec3 t;
Mat3 K = Mat3::Identity();
if (!libmv::MotionFromEssentialAndCorrespondence(E,
K, x1.col(0),
K, x2.col(0),
&R, &t)) {
LG << "Failed to compute R and t from E and K";
continue;
}
LG << "Camera transform between frames " << current_keyframe
<< " and " << candidate_image
<< ":\nR:\n" << R
<< "\nt:" << t.transpose();
// First camera is identity, second one is relative to it
CameraPose current_keyframe_pose(clip_index, current_keyframe, 0, Mat3::Identity(), Vec3::Zero());
reconstruction.AddCameraPose(current_keyframe_pose);
CameraPose candidate_keyframe_pose(clip_index, candidate_image, 0, R, t);
reconstruction.AddCameraPose(candidate_keyframe_pose);
// Reconstruct 3D points
int intersects_total = 0, intersects_success = 0;
for (int i = 0; i < tracked_markers.size(); i++) {
if (!reconstruction.PointForTrack(tracked_markers[i].track)) {
vector<Marker> reconstructed_markers;
int track = tracked_markers[i].track;
reconstructed_markers.push_back(tracked_markers[i]);
// We know there are always only two markers for a track.
// Also, we're using brute-force search because we don't
// actually know the markers' layout in the list, but at
// this moment this cycle will run just once, which is
// no big deal
for (int j = i + 1; j < tracked_markers.size(); j++) {
if (tracked_markers[j].track == track) {
reconstructed_markers.push_back(tracked_markers[j]);
break;
}
}
intersects_total++;
if (EuclideanIntersect(reconstructed_markers, &reconstruction)) {
LG << "Ran Intersect() for track " << track;
intersects_success++;
} else {
LG << "Failed to intersect track " << track;
}
}
}
double success_intersects_factor =
(double) intersects_success / intersects_total;
if (success_intersects_factor < success_intersects_factor_best) {
LG << "Skip keyframe candidate because of "
"lower successful intersections ratio";
continue;
}
success_intersects_factor_best = success_intersects_factor;
Tracks two_frames_tracks(tracked_markers);
libmv::PolynomialCameraIntrinsics empty_intrinsics;
BundleEvaluation evaluation;
evaluation.evaluate_jacobian = true;
EuclideanBundleCommonIntrinsics(two_frames_tracks,
BUNDLE_NO_INTRINSICS,
BUNDLE_NO_CONSTRAINTS,
&reconstruction,
&empty_intrinsics,
&evaluation);
Mat &jacobian = evaluation.jacobian;
Mat JT_J = jacobian.transpose() * jacobian;
// There are 7 degrees of freedom, so clamp them out.
Mat JT_J_inv = PseudoInverseWithClampedEigenvalues(JT_J, 7);
Mat temp_derived = JT_J * JT_J_inv * JT_J;
bool is_inversed = (temp_derived - JT_J).cwiseAbs2().sum() <
1e-4 * std::min(temp_derived.cwiseAbs2().sum(),
JT_J.cwiseAbs2().sum());
LG << "Check on inversed: " << (is_inversed ? "true" : "false" )
<< ", det(JT_J): " << JT_J.determinant();
if (!is_inversed) {
LG << "Ignoring candidature due to poor jacobian stability";
continue;
}
Mat Sigma_P;
Sigma_P = JT_J_inv.bottomRightCorner(evaluation.num_points * 3,
evaluation.num_points * 3);
int I = evaluation.num_points;
int A = 12;
double Sc = static_cast<double>(I + A) / libmv::Square(3 * I) * Sigma_P.trace();
LG << "Expected estimation error between "
<< current_keyframe << " and "
<< candidate_image << ": " << Sc;
// Pairing with a lower Sc indicates a better choice
if (Sc > Sc_best_candidate)
continue;
Sc_best_candidate = Sc;
next_keyframe = candidate_image;
}
// This is a bit arbitrary, and the main reason for having it is to deal
// better with situations when no keyframes were found for the current
// keyframe. This could happen when there's not much parallax at the
// beginning of the image sequence and most of the features then get
// occluded. In this case there could be a good keyframe pair in the
// middle of the sequence.
//
// However, it's just a quick hack; a smarter way to do this would be nice
if (next_keyframe == -1) {
next_keyframe = current_keyframe + 10;
number_keyframes = 0;
if (next_keyframe >= max_frame)
break;
LG << "Starting searching for keyframes starting from " << next_keyframe;
} else {
// New pair's expected reconstruction error is lower
// than existing pair's one.
//
// For now let's store just one candidate, easy to
// store more candidates but needs some thoughts
// how to choose best one automatically from them
// (or allow user to choose pair manually).
if (Sc_best > Sc_best_candidate) {
keyframes.clear();
keyframes.push_back(current_keyframe);
keyframes.push_back(next_keyframe);
Sc_best = Sc_best_candidate;
}
}
}
}
} // namespace mv


@@ -0,0 +1,58 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: Tianwei Shen <shentianweipku@gmail.com>
// adapted from simple_pipeline/keyframe_selection.h
#ifndef LIBMV_AUTOTRACK_KEYFRAME_SELECTION_H_
#define LIBMV_AUTOTRACK_KEYFRAME_SELECTION_H_
#include "libmv/base/vector.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
namespace mv {
// Get a list of all images in a clip which are good enough to be used as
// keyframes for camera reconstruction. Based on the GRIC criteria, and uses
// Pollefeys' approach for the correspondence ratio constraint.
//
// As an additional check, a criterion based on reconstruction variance is
// used. This means that if the correspondence and GRIC criteria are passed,
// a two-frames reconstruction using the candidate keyframes is performed.
// After reconstruction, the variance of the 3D points is calculated, and if
// the expected error estimation is too large, the keyframe candidate is
// rejected.
//
// \param clip_index is the clip index to extract keyframes from
// \param tracks contains all tracked correspondences between frames
// expected to be undistorted and normalized
// \param intrinsics is camera intrinsics
// \param keyframes will contain the numbers of all images which are
//        considered good to be used for reconstruction
void SelectClipKeyframesBasedOnGRICAndVariance(
const int clip_index,
const Tracks &tracks,
const libmv::CameraIntrinsics &intrinsics,
vector<int> &keyframes);
} // namespace mv
#endif // LIBMV_AUTOTRACK_KEYFRAME_SELECTION_H_


@@ -0,0 +1,406 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
// Author: Tianwei Shen <shentianweipku@gmail.com>
// This file contains the multi-view reconstruction pipeline, such as camera resection.
#include "libmv/autotrack/pipeline.h"
#include <cstdio>
#include "libmv/logging/logging.h"
#include "libmv/autotrack/bundle.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/intersect.h"
#include "libmv/autotrack/resect.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
#ifdef _MSC_VER
# define snprintf _snprintf
#endif
namespace mv {
namespace {
// Use this functor-like struct to allow reuse of the reconstruction
// pipeline code in the future, in case we do projective reconstruction
struct EuclideanPipelineRoutines {
typedef ::mv::Reconstruction Reconstruction;
typedef CameraPose Camera;
typedef ::mv::Point Point;
static void Bundle(const Tracks &tracks,
Reconstruction *reconstruction) {
EuclideanBundleAll(tracks, reconstruction);
}
static bool Resect(const vector<Marker> &markers,
Reconstruction *reconstruction,
bool final_pass,
int intrinsics_index) {
return EuclideanResect(markers, reconstruction, final_pass, intrinsics_index);
}
static bool Intersect(const vector<Marker> &markers,
Reconstruction *reconstruction) {
return EuclideanIntersect(markers, reconstruction);
}
static Marker ProjectMarker(const Point &point,
const CameraPose &camera,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.R * point.X + camera.t;
projected /= projected(2);
Marker reprojected_marker;
double normalized_x, normalized_y;
intrinsics.ApplyIntrinsics(projected(0),
projected(1),
&normalized_x,
&normalized_y);
reprojected_marker.center[0] = normalized_x;
reprojected_marker.center[1] = normalized_y;
reprojected_marker.clip = camera.clip;
reprojected_marker.frame = camera.frame;
reprojected_marker.track = point.track;
return reprojected_marker;
}
};
} // namespace
static void CompleteReconstructionLogProgress(
libmv::ProgressUpdateCallback *update_callback,
double progress,
const char *step = NULL) {
if (update_callback) {
char message[256];
if (step)
snprintf(message, sizeof(message), "Completing solution %d%% | %s",
(int)(progress*100), step);
else
snprintf(message, sizeof(message), "Completing solution %d%%",
(int)(progress*100));
update_callback->invoke(progress, message);
}
}
template<typename PipelineRoutines>
bool InternalCompleteReconstruction(
const Tracks &tracks,
typename PipelineRoutines::Reconstruction *reconstruction,
libmv::ProgressUpdateCallback *update_callback = NULL) {
int clip_num = tracks.MaxClip() + 1;
int num_frames = 0;
for (int i = 0; i < clip_num; i++) {
num_frames += tracks.MaxFrame(i) + 1;
}
int max_track = tracks.MaxTrack();
int num_resects = -1;
int num_intersects = -1;
int total_resects = 0;
LG << "Max track: " << max_track << "\n";
LG << "Number of total frames: " << num_frames << "\n";
LG << "Number of markers: " << tracks.NumMarkers() << "\n";
while (num_resects != 0 || num_intersects != 0) {
// Do all possible intersections.
num_intersects = 0;
for (int track = 0; track <= max_track; ++track) {
if (reconstruction->PointForTrack(track)) { // track has already been added
LG << "Skipping point: " << track << "\n";
continue;
}
vector<Marker> all_markers;
tracks.GetMarkersForTrack(track, &all_markers);
LG << "Got " << all_markers.size() << " markers for track " << track << "\n";
vector<Marker> reconstructed_markers;
for (int i = 0; i < all_markers.size(); ++i) {
if (reconstruction->CameraPoseForFrame(all_markers[i].clip, all_markers[i].frame)) {
reconstructed_markers.push_back(all_markers[i]);
}
}
LG << "Got " << reconstructed_markers.size() << " reconstructed markers for track " << track << "\n";
if (reconstructed_markers.size() >= 2) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames));
if (PipelineRoutines::Intersect(reconstructed_markers,
reconstruction)) {
num_intersects++;
LG << "Ran Intersect() for track " << track << "\n";
} else {
LG << "Failed Intersect() for track " << track << "\n";
}
}
}
// bundle the newly added points
if (num_intersects) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames),
"Bundling...");
PipelineRoutines::Bundle(tracks, reconstruction);
LG << "Ran Bundle() after intersections.";
}
LG << "Did " << num_intersects << " intersects.\n";
// Do all possible resections.
num_resects = 0;
for (int clip = 0; clip < clip_num; clip++) {
const int max_image = tracks.MaxFrame(clip);
for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraPoseForFrame(clip, image)) { // this camera pose has been added
LG << "Skipping frame: " << clip << " " << image << "\n";
continue;
}
vector<Marker> all_markers;
tracks.GetMarkersInFrame(clip, image, &all_markers);
LG << "Got " << all_markers.size() << " markers for frame " << clip << ", " << image << "\n";
vector<Marker> reconstructed_markers;
for (int i = 0; i < all_markers.size(); ++i) {
if (reconstruction->PointForTrack(all_markers[i].track)) { // 3d points have been added
reconstructed_markers.push_back(all_markers[i]);
}
}
LG << "Got " << reconstructed_markers.size() << " reconstructed markers for frame "
<< clip << " " << image << "\n";
if (reconstructed_markers.size() >= 5) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames));
if (PipelineRoutines::Resect(reconstructed_markers,
reconstruction, false,
reconstruction->GetIntrinsicsMap(clip, image))) {
num_resects++;
total_resects++;
LG << "Ran Resect() for frame (" << clip << ", " << image << ")\n";
} else {
LG << "Failed Resect() for frame (" << clip << ", " << image << ")\n";
}
}
}
}
if (num_resects) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames),
"Bundling...");
PipelineRoutines::Bundle(tracks, reconstruction);
}
LG << "Did " << num_resects << " resects.\n";
}
// One last pass...
LG << "[InternalCompleteReconstruction] Ran last pass\n";
num_resects = 0;
for (int clip = 0; clip < clip_num; clip++) {
int max_image = tracks.MaxFrame(clip);
for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraPoseForFrame(clip, image)) {
LG << "Skipping frame: " << clip << " " << image << "\n";
continue;
}
vector<Marker> all_markers;
tracks.GetMarkersInFrame(clip, image, &all_markers);
vector<Marker> reconstructed_markers;
for (int i = 0; i < all_markers.size(); ++i) {
if (reconstruction->PointForTrack(all_markers[i].track)) {
reconstructed_markers.push_back(all_markers[i]);
}
}
if (reconstructed_markers.size() >= 5) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames));
if (PipelineRoutines::Resect(reconstructed_markers,
reconstruction, true,
reconstruction->GetIntrinsicsMap(clip, image))) {
num_resects++;
LG << "Ran final Resect() for image " << image;
} else {
LG << "Failed final Resect() for image " << image;
}
}
}
}
if (num_resects) {
CompleteReconstructionLogProgress(update_callback,
(double)total_resects/(num_frames),
"Bundling...");
PipelineRoutines::Bundle(tracks, reconstruction);
}
return true;
}
template<typename PipelineRoutines>
double InternalReprojectionError(
const Tracks &image_tracks,
const typename PipelineRoutines::Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
int num_skipped = 0;
int num_reprojected = 0;
double total_error = 0.0;
vector<Marker> markers;
image_tracks.GetAllMarkers(&markers);
for (int i = 0; i < markers.size(); ++i) {
double weight = markers[i].weight;
const typename PipelineRoutines::Camera *camera =
reconstruction.CameraPoseForFrame(markers[i].clip, markers[i].frame);
const typename PipelineRoutines::Point *point =
reconstruction.PointForTrack(markers[i].track);
if (!camera || !point || weight == 0.0) {
num_skipped++;
continue;
}
num_reprojected++;
Marker reprojected_marker =
PipelineRoutines::ProjectMarker(*point, *camera, intrinsics);
double ex = (reprojected_marker.center[0] - markers[i].center[0]) * weight;
double ey = (reprojected_marker.center[1] - markers[i].center[1]) * weight;
const int N = 100;
char line[N];
snprintf(line, N,
"frame (%d, %d) track %-3d "
"x %7.1f y %7.1f "
"rx %7.1f ry %7.1f "
"ex %7.1f ey %7.1f"
" e %7.1f",
markers[i].clip,
markers[i].frame,
markers[i].track,
markers[i].center[0],
markers[i].center[1],
reprojected_marker.center[0],
reprojected_marker.center[1],
ex,
ey,
sqrt(ex*ex + ey*ey));
VLOG(1) << line;
total_error += sqrt(ex*ex + ey*ey);
}
LG << "Skipped " << num_skipped << " markers.";
LG << "Reprojected " << num_reprojected << " markers.";
LG << "Total error: " << total_error;
LG << "Average error: " << (total_error / num_reprojected) << " [pixels].";
return total_error / num_reprojected;
}
double EuclideanReprojectionError(const Tracks &tracks,
const Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
return InternalReprojectionError<EuclideanPipelineRoutines>(tracks,
reconstruction,
intrinsics);
}
bool EuclideanCompleteMultiviewReconstruction(const Tracks &tracks,
Reconstruction *reconstruction,
libmv::ProgressUpdateCallback *update_callback) {
return InternalCompleteReconstruction<EuclideanPipelineRoutines>(tracks,
reconstruction,
update_callback);
}
void InvertIntrinsicsForTracks(const Tracks &raw_tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *calibrated_tracks) {
vector<Marker> markers;
raw_tracks.GetAllMarkers(&markers);
for (int i = 0; i < markers.size(); ++i) {
double normalized_x, normalized_y;
camera_intrinsics.InvertIntrinsics(markers[i].center[0], markers[i].center[1],
&normalized_x, &normalized_y);
markers[i].center[0] = normalized_x;
markers[i].center[1] = normalized_y;
}
*calibrated_tracks = Tracks(markers);
}
void EuclideanScaleToUnity(Reconstruction *reconstruction) {
int clip_num = reconstruction->GetClipNum();
const vector<vector<CameraPose> >& all_cameras = reconstruction->camera_poses();
LG << "[EuclideanScaleToUnity] camera clip number: " << clip_num << '\n';
// Calculate center of the mass of all cameras.
int total_valid_cameras = 0;
Vec3 cameras_mass_center = Vec3::Zero();
for (int i = 0; i < clip_num; i++) {
for (int j = 0; j < all_cameras[i].size(); ++j) {
if (all_cameras[i][j].clip >= 0) { // clip is -1 for an uninitialized pose
cameras_mass_center += all_cameras[i][j].t;
total_valid_cameras++;
}
}
}
if (total_valid_cameras == 0) {
LG << "[EuclideanScaleToUnity] no valid camera for rescaling\n";
return;
}
cameras_mass_center /= total_valid_cameras;
LG << "[EuclideanScaleToUnity] valid camera number: " << total_valid_cameras << '\n';
// Find the most distant camera from the mass center.
double max_distance = 0.0;
for (int i = 0; i < clip_num; i++) {
for (int j = 0; j < all_cameras[i].size(); ++j) {
double distance = (all_cameras[i][j].t - cameras_mass_center).squaredNorm();
if (distance > max_distance) {
max_distance = distance;
}
}
}
if (max_distance == 0.0) {
LG << "Camera position variance is too small, cannot rescale\n";
return;
}
double scale_factor = 1.0 / sqrt(max_distance);
LG << "rescale factor: " << scale_factor << "\n";
// Rescale cameras positions.
for (int i = 0; i < clip_num; i++) {
for (int j = 0; j < all_cameras[i].size(); ++j) {
int frame = all_cameras[i][j].frame;
CameraPose *camera = reconstruction->CameraPoseForFrame(i, frame);
if (camera != NULL)
camera->t = camera->t * scale_factor;
else
LG << "[EuclideanScaleToUnity] invalid camera: " << i << " " << frame << "\n";
}
}
// Rescale points positions.
vector<Point> all_points = reconstruction->AllPoints();
for (int i = 0; i < all_points.size(); ++i) {
int track = all_points[i].track;
Point *point = reconstruction->PointForTrack(track);
if (point != NULL)
point->X = point->X * scale_factor;
else
LG << "[EuclideanScaleToUnity] invalid point: " << i << "\n";
}
}
} // namespace mv


@@ -0,0 +1,71 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
// Author: Tianwei Shen <shentianweipku@gmail.com>
// This file contains the multi-view reconstruction pipeline, such as camera resection.
#ifndef LIBMV_AUTOTRACK_PIPELINE_H_
#define LIBMV_AUTOTRACK_PIPELINE_H_
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/simple_pipeline/callbacks.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
using libmv::CameraIntrinsics;
namespace mv {
/*!
Estimate multi-view camera poses and scene 3D coordinates for all frames and tracks.
This method should be used once there is an initial reconstruction in
place, for example by reconstructing from two frames that have a sufficient
baseline and number of tracks in common. This function iteratively
triangulates points that are visible by cameras that have their pose
estimated, then resections (i.e. estimates the pose) of cameras that are
not estimated yet that can see triangulated points. This process is
repeated until all points and cameras are estimated. Periodically, bundle
adjustment is run to ensure a quality reconstruction.
\a tracks should contain markers used in the reconstruction.
\a reconstruction should contain at least some 3D points or some estimated
cameras. The minimum number of cameras is two (with no 3D points) and the
minimum number of 3D points (with no estimated cameras) is 5.
\sa EuclideanResect, EuclideanIntersect, EuclideanBundle
*/
bool EuclideanCompleteMultiviewReconstruction(
const Tracks &tracks,
Reconstruction *reconstruction,
libmv::ProgressUpdateCallback *update_callback = NULL);
double EuclideanReprojectionError(const Tracks &image_tracks,
const Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
void InvertIntrinsicsForTracks(const Tracks &raw_tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *calibrated_tracks);
void EuclideanScaleToUnity(Reconstruction *reconstruction);
} // namespace mv
#endif // LIBMV_AUTOTRACK_PIPELINE_H_


@@ -0,0 +1,247 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//
// Author: shentianweipku@gmail.com (Tianwei Shen)
#include "libmv/autotrack/reconstruction.h"
#include "libmv/multiview/fundamental.h"
#include "libmv/autotrack/marker.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/numeric/numeric.h"
#include "libmv/logging/logging.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
using mv::Marker;
using mv::Tracks;
using libmv::Mat;
using libmv::Mat2;
using libmv::Mat3;
using libmv::Vec;
using libmv::Vec2;
namespace mv {
static void GetFramesInMarkers(const vector<Marker> &markers,
int *image1, int *image2) {
if (markers.size() < 2) {
return;
}
*image1 = markers[0].frame;
for (int i = 1; i < markers.size(); ++i) {
if (markers[i].frame != *image1) {
*image2 = markers[i].frame;
return;
}
}
*image2 = -1;
LOG(FATAL) << "Only one image in the markers.";
}
/* markers come from two views in the same clip,
* reconstruction should be new and empty
*/
bool ReconstructTwoFrames(const vector<Marker> &markers,
const int clip,
Reconstruction *reconstruction)
{
if (markers.size() < 16) {
LG << "Not enough markers to initialize from two frames: " << markers.size();
return false;
}
int frame1, frame2;
GetFramesInMarkers(markers, &frame1, &frame2);
Mat x1, x2;
CoordinatesForMarkersInFrame(markers, clip, frame1, &x1);
CoordinatesForMarkersInFrame(markers, clip, frame2, &x2);
Mat3 F;
libmv::NormalizedEightPointSolver(x1, x2, &F);
// The F matrix should be an E matrix, but squash it just to be sure.
Mat3 E;
libmv::FundamentalToEssential(F, &E);
// Recover motion between the two images. Since this function assumes a
// calibrated camera, use the identity for K.
Mat3 R;
Vec3 t;
Mat3 K = Mat3::Identity();
if (!libmv::MotionFromEssentialAndCorrespondence(E, K, x1.col(0), K, x2.col(0), &R, &t)) {
LG << "Failed to compute R and t from E and K.";
return false;
}
// frame 1 gets the reference frame, frame 2 gets the relative motion.
CameraPose pose1(clip, frame1, reconstruction->GetIntrinsicsMap(clip, frame1), Mat3::Identity(), Vec3::Zero());
CameraPose pose2(clip, frame2, reconstruction->GetIntrinsicsMap(clip, frame2), R, t);
reconstruction->AddCameraPose(pose1);
reconstruction->AddCameraPose(pose2);
LG << "From two frame reconstruction got:\nR:\n" << R << "\nt:" << t.transpose();
return true;
}
// ================== mv::Reconstruction implementation ===================
// push a new CameraIntrinsics and return its index
int Reconstruction::AddCameraIntrinsics(CameraIntrinsics *intrinsics_ptr) {
camera_intrinsics_.push_back(intrinsics_ptr);
return camera_intrinsics_.size()-1;
}
void Reconstruction::AddCameraPose(const CameraPose& pose) {
if (camera_poses_.size() < pose.clip + 1) {
camera_poses_.resize(pose.clip+1);
}
if (camera_poses_[pose.clip].size() < pose.frame + 1) {
camera_poses_[pose.clip].resize(pose.frame+1);
}
// copy from pose to camera_poses_
camera_poses_[pose.clip][pose.frame].clip = pose.clip;
camera_poses_[pose.clip][pose.frame].frame = pose.frame;
camera_poses_[pose.clip][pose.frame].intrinsics = pose.intrinsics;
camera_poses_[pose.clip][pose.frame].R = pose.R;
camera_poses_[pose.clip][pose.frame].t = pose.t;
}
int Reconstruction::GetClipNum() const {
return camera_poses_.size();
}
int Reconstruction::GetAllPoseNum() const {
int all_pose = 0;
for (int i = 0; i < camera_poses_.size(); ++i) {
all_pose += camera_poses_[i].size();
}
return all_pose;
}
CameraPose* Reconstruction::CameraPoseForFrame(int clip, int frame) {
if (camera_poses_.size() <= clip)
return NULL;
if (camera_poses_[clip].size() <= frame)
return NULL;
if (camera_poses_[clip][frame].clip == -1) // this CameraPose is uninitialized
return NULL;
return &(camera_poses_[clip][frame]);
}
const CameraPose* Reconstruction::CameraPoseForFrame(int clip, int frame) const {
if (camera_poses_.size() <= clip)
return NULL;
if (camera_poses_[clip].size() <= frame)
return NULL;
if (camera_poses_[clip][frame].clip == -1) // this CameraPose is uninitialized
return NULL;
return (const CameraPose*) &(camera_poses_[clip][frame]);
}
Point* Reconstruction::PointForTrack(int track) {
if (track < 0 || track >= points_.size()) {
return NULL;
}
Point *point = &points_[track];
if (point->track == -1) {
return NULL;
}
return point;
}
const Point* Reconstruction::PointForTrack(int track) const {
if (track < 0 || track >= points_.size()) {
return NULL;
}
const Point *point = &points_[track];
if (point->track == -1) { // initialized but not set, return NULL
return NULL;
}
return point;
}
int Reconstruction::AddPoint(const Point& point) {
LG << "InsertPoint " << point.track << ":\n" << point.X;
if (point.track >= points_.size()) {
points_.resize(point.track + 1);
}
points_[point.track].track = point.track;
points_[point.track].X = point.X;
return point.track;
}
const vector<vector<CameraPose> >& Reconstruction::camera_poses() const {
return camera_poses_;
}
const vector<Point>& Reconstruction::AllPoints() const {
return points_;
}
int Reconstruction::GetReconstructedCameraNum() const {
int reconstructed_num = 0;
for (int i = 0; i < camera_poses_.size(); i++) {
for (int j = 0; j < camera_poses_[i].size(); j++) {
if (camera_poses_[i][j].clip != -1 && camera_poses_[i][j].frame != -1)
reconstructed_num++;
}
}
return reconstructed_num;
}
void Reconstruction::InitIntrinsicsMap(Tracks &tracks) {
int clip_num = tracks.MaxClip() + 1;
intrinsics_map.resize(clip_num);
for (int i = 0; i < clip_num; i++) {
intrinsics_map[i].resize(tracks.MaxFrame(i)+1);
for (int j = 0; j < intrinsics_map[i].size(); j++) {
intrinsics_map[i][j] = -1;
}
}
}
void Reconstruction::InitIntrinsicsMapFixed(Tracks &tracks) {
int clip_num = tracks.MaxClip() + 1;
intrinsics_map.resize(clip_num);
for (int i = 0; i < clip_num; i++) {
intrinsics_map[i].resize(tracks.MaxFrame(i)+1);
for (int j = 0; j < intrinsics_map[i].size(); j++) {
intrinsics_map[i][j] = i;
}
}
}
bool Reconstruction::SetIntrinsicsMap(int clip, int frame, int intrinsics) {
if (intrinsics_map.size() <= clip)
return false;
if (intrinsics_map[clip].size() <= frame)
return false;
intrinsics_map[clip][frame] = intrinsics;
return true;
}
int Reconstruction::GetIntrinsicsMap(int clip, int frame) const {
if (intrinsics_map.size() <= clip)
return -1;
if (intrinsics_map[clip].size() <= frame)
return -1;
return intrinsics_map[clip][frame];
}
} // namespace mv


@@ -1,4 +1,4 @@
// Copyright (c) 2014, 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
@@ -19,6 +19,7 @@
// IN THE SOFTWARE.
//
// Author: mierle@gmail.com (Keir Mierle)
// shentianweipku@gmail.com (Tianwei Shen)
#ifndef LIBMV_AUTOTRACK_RECONSTRUCTION_H_
#define LIBMV_AUTOTRACK_RECONSTRUCTION_H_
@@ -31,10 +32,19 @@ namespace mv {
using libmv::CameraIntrinsics;
using libmv::vector;
using libmv::Mat3;
using libmv::Vec3;
class Model;
struct Marker;
class Tracks;
class CameraPose {
public:
CameraPose(): clip(-1), frame(-1) {} // uninitialized CameraPose is (-1, -1)
CameraPose(int clip_, int frame_, int intrinsics_, Mat3 R_, Vec3 t_):
clip(clip_), frame(frame_), intrinsics(intrinsics_), R(R_), t(t_) {}
int clip;
int frame;
int intrinsics;
@@ -43,6 +53,10 @@ class CameraPose {
};
class Point {
public:
Point(int track_ = -1, Vec3 X_ = Vec3(0, 0, 0))
: track(track_), X(X_){}
Point(const Point &p) : track(p.track), X(p.X) {}
int track;
// The coordinates of the point. Note that not all coordinates are always
@@ -51,39 +65,54 @@ class Point {
};
// A reconstruction for a set of tracks. The indexing for clip, frame, and
// track should match that of a Tracks object, stored elsewhere.
class Reconstruction {
public:
// All methods copy their input reference or take ownership of the pointer.
void AddCameraPose(const CameraPose& pose);
int AddCameraIntrinsics(CameraIntrinsics* intrinsics_ptr);
int AddPoint(const Point& point);
int AddModel(Model* model);
// Returns the corresponding pose or point or NULL if missing.
CameraPose* CameraPoseForFrame(int clip, int frame);
const CameraPose* CameraPoseForFrame(int clip, int frame) const;
Point* PointForTrack(int track);
const Point* PointForTrack(int track) const;
const vector<vector<CameraPose> >& camera_poses() const;
const vector<Point>& AllPoints() const;
private:
int GetClipNum() const;
int GetAllPoseNum() const;
int GetReconstructedCameraNum() const;
// initialize all intrinsics_map entries to -1
void InitIntrinsicsMap(Tracks &tracks);
// initialize intrinsics of clip i to i (see CameraPose::intrinsics)
void InitIntrinsicsMapFixed(Tracks &tracks);
// set CameraPose::intrinsics for frame (clip, frame)
bool SetIntrinsicsMap(int clip, int frame, int intrinsics);
// return CameraPose::intrinsics if (clip, frame) is in intrinsics_map, otherwise return -1
int GetIntrinsicsMap(int clip, int frame) const;
private:
// Indexed by CameraPose::intrinsics. Owns the intrinsics objects.
vector<CameraIntrinsics*> camera_intrinsics_;
// Indexed by Marker::clip then by Marker::frame.
vector<vector<CameraPose> > camera_poses_;
// Indexed by Marker::track.
vector<Point> points_;
// Indexed by Marker::model_id. Owns model objects.
vector<Model*> models_;
// Indexed by Marker::clip then by Marker::frame.
vector<vector<int> > intrinsics_map;
};
// Reconstruct two frames from the same clip, used as the initial reconstruction
bool ReconstructTwoFrames(const vector<Marker> &markers,
const int clip,
Reconstruction *reconstruction);
} // namespace mv
#endif // LIBMV_AUTOTRACK_RECONSTRUCTION_H_


@@ -0,0 +1,187 @@
// Copyright (c) 2011 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include "libmv/autotrack/resect.h"
#include <cstdio>
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/euclidean_resection.h"
#include "libmv/multiview/resection.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/numeric.h"
#include "libmv/numeric/levenberg_marquardt.h"
#include "libmv/autotrack/reconstruction.h"
#include "libmv/autotrack/tracks.h"
using libmv::Mat2X;
using libmv::Mat3X;
using libmv::Mat4X;
using libmv::Mat34;
using libmv::Vec;
using libmv::Vec6;
namespace mv {
namespace {
Mat2X PointMatrixFromMarkers(const vector<Marker> &markers) {
Mat2X points(2, markers.size());
for (int i = 0; i < markers.size(); ++i) {
points(0, i) = markers[i].center[0];
points(1, i) = markers[i].center[1];
}
return points;
}
// Uses an incremental rotation:
//
// x = R' * R * X + t;
//
// to avoid issues with the rotation representation. R' is derived from an
// Euler vector encoding the rotation in 3 parameters; the direction is the
// axis to rotate around and the magnitude is the rotation angle.
struct EuclideanResectCostFunction {
public:
typedef Vec FMatrixType;
typedef Vec6 XMatrixType;
EuclideanResectCostFunction(const vector<Marker> &markers,
const Reconstruction &reconstruction,
const Mat3 &initial_R)
: markers(markers),
reconstruction(reconstruction),
initial_R(initial_R) {}
// dRt has dR (delta R) encoded as a euler vector in the first 3 parameters,
// followed by t in the next 3 parameters.
Vec operator()(const Vec6 &dRt) const {
// Unpack R, t from dRt.
Mat3 R = libmv::RotationFromEulerVector(dRt.head<3>()) * initial_R;
Vec3 t = dRt.tail<3>();
// Compute the reprojection error for each coordinate.
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const Point &point =
*reconstruction.PointForTrack(markers[i].track);
Vec3 projected = R * point.X + t;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].center[0];
residuals[2*i + 1] = projected(1) - markers[i].center[1];
}
return residuals;
}
const vector<Marker> &markers;
const Reconstruction &reconstruction;
const Mat3 &initial_R;
};
} // namespace
bool EuclideanResect(const vector<Marker> &markers,
Reconstruction *reconstruction,
bool final_pass,
int intrinsics) {
if (markers.size() < 5) { // resection needs at least five 2D-3D correspondences
return false;
}
Mat2X points_2d = PointMatrixFromMarkers(markers);
Mat3X points_3d(3, markers.size());
for (int i = 0; i < markers.size(); i++) {
points_3d.col(i) = reconstruction->PointForTrack(markers[i].track)->X;
}
LG << "Number of points for resect: " << points_2d.cols() << "\n";
Mat3 R;
Vec3 t;
if (!libmv::euclidean_resection::EuclideanResection(
points_2d, points_3d, &R, &t,
libmv::euclidean_resection::RESECTION_EPNP)) {
LG << "[EuclideanResect] Euclidean resection failed\n";
if (!final_pass) return false;
// Euclidean resection failed. Fall back to projective resection, which is
// less reliable but better conditioned when there are many points.
Mat34 P;
Mat4X points_3d_homogeneous(4, markers.size());
for (int i = 0; i < markers.size(); i++) {
points_3d_homogeneous.col(i).head<3>() = points_3d.col(i);
points_3d_homogeneous(3, i) = 1.0;
}
libmv::resection::Resection(points_2d, points_3d_homogeneous, &P);
if ((P * points_3d_homogeneous.col(0))(2) < 0) {
LG << "Point behind camera; switch sign.";
P = -P;
}
Mat3 ignored;
libmv::KRt_From_P(P, &ignored, &R, &t);
// The R matrix should be a rotation, but don't rely on it.
Eigen::JacobiSVD<Mat3> svd(R, Eigen::ComputeFullU | Eigen::ComputeFullV);
LG << "Resection rotation is: " << svd.singularValues().transpose();
LG << "Determinant is: " << R.determinant();
// Correct to make R a rotation.
R = svd.matrixU() * svd.matrixV().transpose();
Vec3 xx = R * points_3d.col(0) + t;
if (xx(2) < 0.0) {
LG << "Final point is still behind camera...";
}
// XXX Need to check if error is horrible and fail here too in that case.
}
// Refine the result.
typedef libmv::LevenbergMarquardt<EuclideanResectCostFunction> Solver;
// Give the cost our initial guess for R.
EuclideanResectCostFunction resect_cost(markers, *reconstruction, R);
// Encode the initial parameters: start with zero delta rotation, and the
// guess for t obtained from resection.
Vec6 dRt = Vec6::Zero();
dRt.tail<3>() = t;
Solver solver(resect_cost);
Solver::SolverParameters params;
/* Solver::Results results = */ solver.minimize(params, &dRt);
VLOG(1) << "LM found incremental rotation: " << dRt.head<3>().transpose();
// TODO(keir): Check results to ensure clean termination.
// Unpack the rotation and translation.
R = libmv::RotationFromEulerVector(dRt.head<3>()) * R;
t = dRt.tail<3>();
VLOG(1) << "Resection for frame " << markers[0].clip << " " << markers[0].frame
<< " got:\n" << "R:\n" << R << "\nt:\n" << t << "\n";
CameraPose pose(markers[0].clip, markers[0].frame, intrinsics, R, t);
reconstruction->AddCameraPose(pose);
return true;
}
} // namespace mv


@@ -0,0 +1,61 @@
// Copyright (c) 2016 libmv authors.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
// Author: Tianwei Shen <shentianweipku@gmail.com>
// This file contains functions to do camera resection.
#ifndef LIBMV_AUTOTRACK_RESECT_H
#define LIBMV_AUTOTRACK_RESECT_H
#include "libmv/base/vector.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/reconstruction.h"
namespace mv {
/*!
Estimate the Euclidean pose of a camera from 2D to 3D correspondences.
This takes a set of markers visible in one frame (which is the one to
resection), such that the markers are also reconstructed in 3D in the
reconstruction object, and solves for the pose and orientation of the
camera for that frame.
\a markers should contain \link Marker markers \endlink belonging to tracks
visible in the one frame to be resectioned. Each of the tracks associated
with the markers must have a corresponding reconstructed 3D position in the
\a *reconstruction object.
\a *reconstruction should contain the 3D points associated with the tracks
for the markers present in \a markers.
\note This assumes a calibrated reconstruction, e.g. the markers are
already corrected for camera intrinsics and radial distortion.
\note This assumes an outlier-free set of markers.
\return True if the resection was successful, false otherwise.
\sa EuclideanIntersect, EuclideanReconstructTwoFrames
*/
bool EuclideanResect(const vector<Marker> &markers,
Reconstruction *reconstruction, bool final_pass, int intrinsics);
} // namespace mv
#endif // LIBMV_AUTOTRACK_RESECT_H
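
The resection described above can be illustrated with a minimal Python sketch (this is NOT libmv's `EuclideanResect`, just the classic DLT approach to the same 2D-to-3D pose problem, using only NumPy): given noiseless 2D projections of known 3D points, recover the 3x4 projection matrix and verify it reprojects the points exactly.

```python
# Minimal DLT resection sketch (illustrative only, not libmv code):
# recover P (3x4) from 2D-3D correspondences, then check reprojection.
import numpy as np

def resect_dlt(points_3d, points_2d):
    """Solve x ~ P X from >= 6 correspondences via SVD nullspace."""
    A = []
    for X, (u, v) in zip(points_3d, points_2d):
        Xh = np.append(X, 1.0)  # homogeneous 3D point
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # nullspace vector = P up to scale

# Synthetic check: project known points with a known camera, then resect.
rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), [[0.1], [0.2], [3.0]]])
X = rng.uniform(-1, 1, (8, 3))
x_h = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
x = x_h[:, :2] / x_h[:, 2:]
P_est = resect_dlt(X, x)
x_est_h = (P_est @ np.hstack([X, np.ones((8, 1))]).T).T
x_est = x_est_h[:, :2] / x_est_h[:, 2:]
```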


@@ -78,7 +78,19 @@ void Tracks::GetMarkersInFrame(int clip,
}
}
void Tracks::GetMarkersForTracksInBothImages(int clip1, int frame1,
void Tracks::GetMarkersInBothFrames(int clip1, int frame1,
int clip2, int frame2,
vector<Marker> *markers) const {
for (int i = 0; i < markers_.size(); ++i) {
int clip = markers_[i].clip;
int frame = markers_[i].frame;
if ((clip == clip1 && frame == frame1) ||
(clip == clip2 && frame == frame2))
markers->push_back(markers_[i]);
}
}
void Tracks::GetMarkersForTracksInBothFrames(int clip1, int frame1,
int clip2, int frame2,
vector<Marker>* markers) const {
std::vector<int> image1_tracks;
@@ -118,6 +130,12 @@ void Tracks::GetMarkersForTracksInBothImages(int clip1, int frame1,
}
}
void Tracks::GetAllMarkers(vector<Marker>* markers) const {
for (int i = 0; i < markers_.size(); ++i) {
markers->push_back(markers_[i]);
}
}
void Tracks::AddMarker(const Marker& marker) {
// TODO(keir): This is quadratic for repeated insertions. Fix this by adding
// a smarter data structure like a set<>.
@@ -132,6 +150,13 @@ void Tracks::AddMarker(const Marker& marker) {
markers_.push_back(marker);
}
void Tracks::AddTracks(const Tracks& other_tracks) {
vector<Marker> markers;
other_tracks.GetAllMarkers(&markers);
for (int i = 0; i < markers.size(); ++i)
this->AddMarker(markers[i]);
}
void Tracks::SetMarkers(vector<Marker>* markers) {
std::swap(markers_, *markers);
}
@@ -190,4 +215,19 @@ int Tracks::NumMarkers() const {
return markers_.size();
}
void CoordinatesForMarkersInFrame(const vector<Marker> &markers,
int clip, int frame,
libmv::Mat *coordinates) {
vector<libmv::Vec2> coords;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
if (markers[i].clip == clip && markers[i].frame == frame) {
coords.push_back(libmv::Vec2(marker.center[0], marker.center[1]));
}
}
coordinates->resize(2, coords.size());
for (int i = 0; i < coords.size(); i++) {
coordinates->col(i) = coords[i];
}
}
} // namespace mv
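
The `CoordinatesForMarkersInFrame` helper above can be sketched in a few lines of Python (an illustrative translation, not the libmv code, with markers modeled as plain dicts): filter the markers belonging to one (clip, frame) pair and stack their centers into a 2 x N column matrix.

```python
# Sketch of CoordinatesForMarkersInFrame's logic: collect the centers of
# markers from one (clip, frame) into a 2 x N matrix, one column per marker.
import numpy as np

def coordinates_for_markers_in_frame(markers, clip, frame):
    coords = [m["center"] for m in markers
              if m["clip"] == clip and m["frame"] == frame]
    return np.array(coords, dtype=float).reshape(-1, 2).T  # shape (2, N)

markers = [
    {"clip": 0, "frame": 1, "center": (10.0, 20.0)},
    {"clip": 0, "frame": 2, "center": (11.0, 21.0)},
    {"clip": 1, "frame": 1, "center": (30.0, 40.0)},
    {"clip": 0, "frame": 1, "center": (12.0, 22.0)},
]
C = coordinates_for_markers_in_frame(markers, clip=0, frame=1)
```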


@@ -19,6 +19,7 @@
// IN THE SOFTWARE.
//
// Author: mierle@gmail.com (Keir Mierle)
// shentianweipku@gmail.com (Tianwei Shen)
#ifndef LIBMV_AUTOTRACK_TRACKS_H_
#define LIBMV_AUTOTRACK_TRACKS_H_
@@ -36,30 +37,38 @@ class Tracks {
Tracks() { }
Tracks(const Tracks &other);
// Create a tracks object with markers already initialized. Copies markers.
explicit Tracks(const vector<Marker>& markers);
/// Create a tracks object with markers already initialized. Copies markers.
explicit Tracks(const vector<Marker> &markers);
// All getters append to the output argument vector.
bool GetMarker(int clip, int frame, int track, Marker* marker) const;
void GetMarkersForTrack(int track, vector<Marker>* markers) const;
/// All getters append to the output argument vector.
bool GetMarker(int clip, int frame, int track, Marker *marker) const;
void GetMarkersForTrack(int track, vector<Marker> *markers) const;
void GetMarkersForTrackInClip(int clip,
int track,
vector<Marker>* markers) const;
void GetMarkersInFrame(int clip, int frame, vector<Marker>* markers) const;
void GetMarkersInFrame(int clip, int frame, vector<Marker> *markers) const;
// Get the markers in frame1 and frame2 which have a common track.
//
/// Returns all the markers visible in \a image1 and \a image2.
void GetMarkersInBothFrames(int clip1, int frame1,
int clip2, int frame2,
vector<Marker> *markers) const;
/// Get the markers in frame1 and frame2 which have a common track.
// This is not the same as the union of the markers in frame1 and
// frame2; each marker is for a track that appears in both images.
void GetMarkersForTracksInBothImages(int clip1, int frame1,
void GetMarkersForTracksInBothFrames(int clip1, int frame1,
int clip2, int frame2,
vector<Marker>* markers) const;
vector<Marker> *markers) const;
void GetAllMarkers(vector<Marker> *markers) const;
/// add a marker
void AddMarker(const Marker& marker);
/// add markers from another Tracks
void AddTracks(const Tracks& other_tracks);
// Moves the contents of *markers over top of the existing markers. This
// destroys *markers in the process (but avoids copies).
void SetMarkers(vector<Marker>* markers);
void SetMarkers(vector<Marker> *markers);
bool RemoveMarker(int clip, int frame, int track);
void RemoveMarkersForTrack(int track);
@@ -77,6 +86,9 @@ class Tracks {
// linear lookup penalties for the accessors.
};
void CoordinatesForMarkersInFrame(const vector<Marker> &markers,
int clip, int frame,
libmv::Mat *coordinates);
} // namespace mv
#endif // LIBMV_AUTOTRACK_TRACKS_H_


@@ -145,6 +145,39 @@ class CLIP_HT_header(Header):
row.prop(toolsettings, "proportional_edit_falloff",
text="", icon_only=True)
def _draw_correspondence(self, context):
layout = self.layout
toolsettings = context.tool_settings
sc = context.space_data
clip = sc.clip
row = layout.row(align=True)
row.template_header()
CLIP_MT_correspondence_editor_menus.draw_collapsible(context, layout)
row = layout.row()
row.template_ID(sc, "clip", open="clip.open")
row = layout.row() # clip open for witness camera
row.template_ID(sc, "secondary_clip", open="clip.open_secondary")
if clip:
tracking = clip.tracking
active_object = tracking.objects.active
layout.prop(sc, "mode", text="")
layout.prop(sc, "view", text="", expand=True)
layout.prop(sc, "pivot_point", text="", icon_only=True)
r = active_object.reconstruction
if r.is_valid and sc.view == 'CLIP':
layout.label(text="Solve error: %.4f" % (r.average_error))
else:
layout.prop(sc, "view", text="", expand=True)
def draw(self, context):
layout = self.layout
@@ -152,8 +185,10 @@ class CLIP_HT_header(Header):
if sc.mode == 'TRACKING':
self._draw_tracking(context)
else:
elif sc.mode == 'MASK':
self._draw_masking(context)
else: # sc.mode == 'CORRESPONDENCE'
self._draw_correspondence(context)
layout.template_running_jobs()
@@ -205,6 +240,30 @@ class CLIP_MT_masking_editor_menus(Menu):
layout.menu("CLIP_MT_clip") # XXX - remove?
class CLIP_MT_correspondence_editor_menus(Menu):
bl_idname = "CLIP_MT_correspondence_editor_menus"
bl_label = ""
def draw(self, context):
self.draw_menus(self.layout, context)
@staticmethod
def draw_menus(layout, context):
sc = context.space_data
clip = sc.clip
layout.menu("CLIP_MT_view")
if clip:
layout.menu("CLIP_MT_select")
layout.menu("CLIP_MT_clip")
layout.menu("CLIP_MT_track")
layout.menu("CLIP_MT_reconstruction")
else:
layout.menu("CLIP_MT_clip")
class CLIP_PT_clip_view_panel:
@classmethod
@@ -225,6 +284,16 @@ class CLIP_PT_tracking_panel:
return clip and sc.mode == 'TRACKING' and sc.view == 'CLIP'
class CLIP_PT_correspondence_panel:
@classmethod
def poll(cls, context):
sc = context.space_data
clip = sc.clip
return clip and sc.mode == 'CORRESPONDENCE' and sc.view == 'CLIP'
class CLIP_PT_reconstruction_panel:
@classmethod
@@ -400,6 +469,26 @@ class CLIP_PT_tools_tracking(CLIP_PT_tracking_panel, Panel):
row.operator("clip.join_tracks", text="Join Tracks")
class CLIP_PT_tools_correspondence(CLIP_PT_correspondence_panel, Panel):
bl_space_type = 'CLIP_EDITOR'
bl_region_type = 'TOOLS'
bl_label = "Correspondence"
bl_category = "Track"
def draw(self, context):
layout = self.layout
col = layout.column(align=True)
row = col.row(align=True)
row.operator("clip.add_correspondence", text="Link")
row.operator("clip.delete_correspondence", text="Unlink")
col = layout.column(align=True)
col.scale_y = 2.0
col.operator("clip.solve_multiview", text="Solve Multiview Camera")
class CLIP_PT_tools_plane_tracking(CLIP_PT_tracking_panel, Panel):
bl_space_type = 'CLIP_EDITOR'
bl_region_type = 'TOOLS'


@@ -50,6 +50,7 @@ struct ScrArea;
struct SpaceLink;
struct View3D;
struct RegionView3D;
struct RegionSpaceClip;
struct StructRNA;
struct ToolSettings;
struct Image;
@@ -150,6 +151,7 @@ struct ReportList *CTX_wm_reports(const bContext *C);
struct View3D *CTX_wm_view3d(const bContext *C);
struct RegionView3D *CTX_wm_region_view3d(const bContext *C);
struct RegionSpaceClip *CTX_wm_region_clip(const bContext *C);
struct SpaceText *CTX_wm_space_text(const bContext *C);
struct SpaceImage *CTX_wm_space_image(const bContext *C);
struct SpaceConsole *CTX_wm_space_console(const bContext *C);


@@ -34,9 +34,11 @@
struct bGPDlayer;
struct ImBuf;
struct ListBase;
struct MovieClip;
struct MovieReconstructContext;
struct MovieMultiviewReconstructContext;
struct MovieTrackingTrack;
struct MovieTrackingCorrespondence;
struct MovieTrackingMarker;
struct MovieTrackingPlaneTrack;
struct MovieTrackingPlaneMarker;
@@ -288,6 +290,26 @@ void BKE_tracking_stabilization_data_to_mat4(int width, int height, float aspect
void BKE_tracking_dopesheet_tag_update(struct MovieTracking *tracking);
void BKE_tracking_dopesheet_update(struct MovieTracking *tracking);
/* Correspondence and multiview */
void BKE_tracking_correspondence_unique_name(struct ListBase *tracksbase, struct MovieTrackingCorrespondence *corr);
struct MovieTrackingCorrespondence *BKE_tracking_correspondence_add(struct ListBase *corr_base,
struct MovieTrackingTrack *self_track,
struct MovieTrackingTrack *other_track,
struct MovieClip *self_clip,
struct MovieClip *other_clip,
char *error_msg, int error_size);
void BKE_tracking_multiview_reconstruction_solve(struct MovieMultiviewReconstructContext *context, short *stop, short *do_update,
float *progress, char *stats_message, int message_size);
struct MovieMultiviewReconstructContext *BKE_tracking_multiview_reconstruction_context_new(struct MovieClip **clips,
int num_clips,
struct MovieTrackingObject *object,
int keyframe1, int keyframe2,
int width, int height);
void BKE_tracking_multiview_reconstruction_context_free(struct MovieMultiviewReconstructContext *context);
bool BKE_tracking_multiview_reconstruction_check(struct MovieClip **clips, struct MovieTrackingObject *object,
char *error_msg, int error_size);
bool BKE_tracking_multiview_reconstruction_finish(struct MovieMultiviewReconstructContext *context, struct MovieClip **clips);
#define TRACK_SELECTED(track) ((track)->flag & SELECT || (track)->pat_flag & SELECT || (track)->search_flag & SELECT)
#define TRACK_AREA_SELECTED(track, area) ((area) == TRACK_AREA_POINT ? (track)->flag & SELECT : \


@@ -177,6 +177,7 @@ set(SRC
intern/texture.c
intern/tracking.c
intern/tracking_auto.c
intern/tracking_correspondence.c
intern/tracking_detect.c
intern/tracking_plane_tracker.c
intern/tracking_region_tracker.c


@@ -686,6 +686,37 @@ RegionView3D *CTX_wm_region_view3d(const bContext *C)
return NULL;
}
RegionSpaceClip *CTX_wm_region_clip(const bContext *C)
{
ScrArea *sa = CTX_wm_area(C);
ARegion *ar_curr = CTX_wm_region(C);
if (sa && sa->spacetype == SPACE_CLIP) {
/* only RGN_TYPE_WINDOW has regiondata */
if (ar_curr && ar_curr->regiontype == RGN_TYPE_WINDOW) {
return ar_curr->regiondata;
}
else if (ar_curr) {
/* search forward and backward to find regiondata */
ARegion *ar = ar_curr->prev;
while (ar) {
if (ar->regiontype == RGN_TYPE_WINDOW) {
return ar->regiondata;
}
ar = ar->prev;
}
ar = ar_curr->next;
while (ar) {
if (ar->regiontype == RGN_TYPE_WINDOW) {
return ar->regiondata;
}
ar = ar->next;
}
}
}
return NULL;
}
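
The lookup in `CTX_wm_region_clip` can be sketched in Python (an illustrative model, not Blender code, with the doubly-linked region list modeled as a plain list): if the current region is not a WINDOW region, scan backward and then forward for the first one that is.

```python
# Sketch of CTX_wm_region_clip's search: prefer the current region if it is
# a WINDOW region, otherwise walk backward, then forward, through the list.
def find_window_region(regions, current, is_window):
    if is_window(regions[current]):
        return regions[current]
    for r in reversed(regions[:current]):  # search backward (ar->prev)
        if is_window(r):
            return r
    for r in regions[current + 1:]:        # search forward (ar->next)
        if is_window(r):
            return r
    return None

regions = ["HEADER", "TOOLS", "WINDOW", "UI"]
found_back = find_window_region(regions, 3, lambda r: r == "WINDOW")
found_fwd = find_window_region(regions, 0, lambda r: r == "WINDOW")
```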
struct SpaceText *CTX_wm_space_text(const bContext *C)
{
ScrArea *sa = CTX_wm_area(C);


@@ -619,6 +619,16 @@ void BKE_tracking_track_unique_name(ListBase *tracksbase, MovieTrackingTrack *tr
offsetof(MovieTrackingTrack, name), sizeof(track->name));
}
/* Ensure the specified correspondence has a unique name;
* if it does not, the name of the specified correspondence is changed,
* keeping the names of all other correspondences unchanged.
*/
void BKE_tracking_correspondence_unique_name(ListBase *tracksbase, MovieTrackingCorrespondence *corr)
{
BLI_uniquename(tracksbase, corr, CTX_DATA_(BLT_I18NCONTEXT_ID_MOVIECLIP, "Correspondence"), '.',
offsetof(MovieTrackingCorrespondence, name), sizeof(corr->name));
}
/* Free specified track, only frees contents of a structure
* (if track is allocated in heap, it shall be handled outside).
*


@@ -0,0 +1,705 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2011 Blender Foundation.
* All rights reserved.
*
* Contributor(s): Blender Foundation,
* Tianwei Shen
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file blender/blenkernel/intern/tracking_correspondence.c
* \ingroup bke
*
* This file contains blender-side correspondence functions for witness camera support
*/
#include <limits.h>
#include "MEM_guardedalloc.h"
#include "DNA_movieclip_types.h"
#include "DNA_object_types.h" /* SELECT */
#include "DNA_anim_types.h"
#include "BLI_utildefines.h"
#include "BLI_listbase.h"
#include "BLI_string.h"
#include "BLI_math.h"
#include "BLI_ghash.h"
#include "BLT_translation.h"
#include "BKE_tracking.h"
#include "BKE_fcurve.h"
#include "BKE_movieclip.h"
#include "RNA_access.h"
#include "IMB_imbuf_types.h"
#include "libmv-capi.h"
#include "tracking_private.h"
struct ReconstructProgressData;
typedef struct MovieMultiviewReconstructContext {
int clip_num; /* number of clips in this reconstruction */
struct libmv_TracksN **all_tracks; /* set of tracks from all clips (API in autotrack) */
// TODO(tianwei): might be proper to make it libmv_multiview_Reconstruction
struct libmv_ReconstructionN **all_reconstruction; /* reconstruction for each clip (API in autotrack) */
libmv_CameraIntrinsicsOptions *all_camera_intrinsics_options; /* camera intrinsic of each camera */
TracksMap **all_tracks_map; /* tracks_map of each clip */
int *all_sfra, *all_efra; /* start and end frame of each clip */
int *all_refine_flags; /* refine flags of each clip */
int **track_global_index; /* track global index */
bool select_keyframes;
int keyframe1, keyframe2; /* the key frames selected from the primary camera */
char object_name[MAX_NAME];
bool is_camera;
short motion_flag;
float reprojection_error; /* average reprojection error for all clips and tracks */
} MovieMultiviewReconstructContext;
typedef struct MultiviewReconstructProgressData {
short *stop;
short *do_update;
float *progress;
char *stats_message;
int message_size;
} MultiviewReconstructProgressData;
/* Add new correspondence to a specified correspondence base.
*/
MovieTrackingCorrespondence *BKE_tracking_correspondence_add(ListBase *corr_base,
MovieTrackingTrack *self_track,
MovieTrackingTrack *other_track,
MovieClip *self_clip,
MovieClip *other_clip,
char *error_msg, int error_size)
{
MovieTrackingCorrespondence *corr = NULL;
/* check self correspondences */
if (self_track == other_track) {
BLI_strncpy(error_msg, N_("Cannot link a track to itself"), error_size);
return NULL;
}
/* check duplicate correspondences or conflict correspondence */
for (corr = corr_base->first; corr != NULL; corr = corr->next) {
if (corr->self_clip == self_clip && strcmp(corr->self_track_name, self_track->name) == 0) {
/* duplicate correspondences */
if (corr->other_clip == other_clip && strcmp(corr->other_track_name, other_track->name) == 0) {
BLI_strncpy(error_msg, N_("This correspondence has been added"), error_size);
return NULL;
}
/* conflict correspondence */
else {
BLI_strncpy(error_msg, N_("Conflict correspondence, consider first deleting the old one"), error_size);
return NULL;
}
}
if (corr->other_clip == other_clip && strcmp(corr->other_track_name, other_track->name) == 0) {
if (corr->self_clip == self_clip && strcmp(corr->self_track_name, self_track->name) == 0) {
BLI_strncpy(error_msg, N_("This correspondence has been added"), error_size);
return NULL;
}
else {
BLI_strncpy(error_msg, N_("Conflict correspondence, consider first deleting the old one"), error_size);
return NULL;
}
}
}
corr = MEM_callocN(sizeof(MovieTrackingCorrespondence), "add correspondence");
strcpy(corr->name, "Correspondence");
strcpy(corr->self_track_name, self_track->name);
strcpy(corr->other_track_name, other_track->name);
corr->self_clip = self_clip;
corr->other_clip = other_clip;
BLI_addtail(corr_base, corr);
BKE_tracking_correspondence_unique_name(corr_base, corr);
return corr;
}
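
The validation in `BKE_tracking_correspondence_add` can be sketched in Python (a slightly simplified model, not the Blender code, with a correspondence endpoint represented as a `(clip, track)` tuple): self-links are rejected, an exact duplicate reports that it already exists, and a pair that reuses an already-linked endpoint is reported as a conflict.

```python
# Simplified sketch of the duplicate/conflict checks when adding a
# correspondence: each endpoint may appear in at most one correspondence.
def try_add_correspondence(corr_list, self_key, other_key):
    """Return None on success, else an error message (mirrors the C strings)."""
    if self_key == other_key:
        return "Cannot link a track to itself"
    for a, b in corr_list:
        if self_key in (a, b) or other_key in (a, b):
            if (a, b) == (self_key, other_key):
                return "This correspondence has been added"
            return "Conflict correspondence, consider first deleting the old one"
    corr_list.append((self_key, other_key))
    return None

corrs = []
first = try_add_correspondence(corrs, ("clipA", "t1"), ("clipB", "t2"))
dup = try_add_correspondence(corrs, ("clipA", "t1"), ("clipB", "t2"))
conflict = try_add_correspondence(corrs, ("clipA", "t1"), ("clipB", "t3"))
```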
/* Convert blender's multiview refinement flags to libmv's.
* These refine flags are the same as in the single-view version.
*/
static int multiview_refine_intrinsics_get_flags(MovieTracking *tracking, MovieTrackingObject *object)
{
int refine = tracking->settings.refine_camera_intrinsics;
int flags = 0;
if ((object->flag & TRACKING_OBJECT_CAMERA) == 0)
return 0;
if (refine & REFINE_FOCAL_LENGTH)
flags |= LIBMV_REFINE_FOCAL_LENGTH;
if (refine & REFINE_PRINCIPAL_POINT)
flags |= LIBMV_REFINE_PRINCIPAL_POINT;
if (refine & REFINE_RADIAL_DISTORTION_K1)
flags |= LIBMV_REFINE_RADIAL_DISTORTION_K1;
if (refine & REFINE_RADIAL_DISTORTION_K2)
flags |= LIBMV_REFINE_RADIAL_DISTORTION_K2;
return flags;
}
/* Create new libmv Tracks structure from blender's tracks list. */
static struct libmv_TracksN *libmv_multiview_tracks_new(MovieClip *clip, int clip_id, ListBase *tracksbase,
int *global_track_index, int width, int height)
{
int tracknr = 0;
MovieTrackingTrack *track;
struct libmv_TracksN *tracks = libmv_tracksNewN();
track = tracksbase->first;
while (track) {
FCurve *weight_fcurve;
int a = 0;
weight_fcurve = id_data_find_fcurve(&clip->id, track, &RNA_MovieTrackingTrack,
"weight", 0, NULL);
for (a = 0; a < track->markersnr; a++) {
MovieTrackingMarker *marker = &track->markers[a];
if ((marker->flag & MARKER_DISABLED) == 0) {
float weight = track->weight;
if (weight_fcurve) {
int scene_framenr =
BKE_movieclip_remap_clip_to_scene_frame(clip, marker->framenr);
weight = evaluate_fcurve(weight_fcurve, scene_framenr);
}
libmv_Marker libmv_marker;
libmv_marker.clip = clip_id;
libmv_marker.frame = marker->framenr;
libmv_marker.track = global_track_index[tracknr];
libmv_marker.center[0] = (marker->pos[0] + track->offset[0]) * width;
libmv_marker.center[1] = (marker->pos[1] + track->offset[1]) * height;
for (int i = 0; i < 4; i++) {
libmv_marker.patch[i][0] = marker->pattern_corners[i][0];
libmv_marker.patch[i][1] = marker->pattern_corners[i][1];
}
for (int i = 0; i < 2; i++) {
libmv_marker.search_region_min[i] = marker->search_min[i];
libmv_marker.search_region_max[i] = marker->search_max[i];
}
libmv_marker.weight = weight;
// the following fields are unused in the current pipeline
// TODO(tianwei): figure out how to fill in reference clip and frame
if (marker->flag & MARKER_TRACKED) {
libmv_marker.source = LIBMV_MARKER_SOURCE_TRACKED;
}
else {
libmv_marker.source = LIBMV_MARKER_SOURCE_MANUAL;
}
libmv_marker.status = LIBMV_MARKER_STATUS_UNKNOWN;
libmv_marker.reference_clip = clip_id;
libmv_marker.reference_frame = -1;
libmv_marker.model_type = LIBMV_MARKER_MODEL_TYPE_POINT;
libmv_marker.model_id = 0;
libmv_marker.disabled_channels =
((track->flag & TRACK_DISABLE_RED) ? LIBMV_MARKER_CHANNEL_R : 0) |
((track->flag & TRACK_DISABLE_GREEN) ? LIBMV_MARKER_CHANNEL_G : 0) |
((track->flag & TRACK_DISABLE_BLUE) ? LIBMV_MARKER_CHANNEL_B : 0);
// TODO(tianwei): why are some framenr values negative?
if (clip_id < 0 || marker->framenr < 0)
continue;
libmv_tracksAddMarkerN(tracks, &libmv_marker);
}
}
track = track->next;
tracknr++;
}
return tracks;
}
/* Convert correspondences from blender tracking to libmv correspondences,
* return the number of correspondences converted.
*/
static int libmv_CorrespondencesFromTracking(ListBase *tracking_correspondences,
MovieClip **clips,
const int clip_num,
int **global_track_index)
{
int num_valid_corrs = 0;
MovieTrackingCorrespondence *corr;
corr = tracking_correspondences->first;
while (corr) {
int clip1 = -1, clip2 = -1, track1 = -1, track2 = -1;
MovieClip *self_clip = corr->self_clip;
MovieClip *other_clip = corr->other_clip;
/* iterate through all the clips to get the local clip id */
for (int i = 0; i < clip_num; i++) {
MovieTracking *tracking = &clips[i]->tracking;
ListBase *tracksbase = &tracking->tracks;
MovieTrackingTrack *track = tracksbase->first;
int tracknr = 0;
/* check primary clip */
if (self_clip == clips[i]) {
clip1 = i;
while (track) {
if (strcmp(corr->self_track_name, track->name) == 0) {
track1 = tracknr;
break;
}
track = track->next;
tracknr++;
}
}
/* check witness clip */
if (other_clip == clips[i]) {
clip2 = i;
while (track) {
if (strcmp(corr->other_track_name, track->name) == 0) {
track2 = tracknr;
break;
}
track = track->next;
tracknr++;
}
}
}
if (clip1 != -1 && clip2 != -1 && track1 != -1 && track2 != -1 && clip1 != clip2) {
num_valid_corrs++;
/* change the global index of clip2-track2 to clip1-track1 */
global_track_index[clip2][track2] = global_track_index[clip1][track1];
}
corr = corr->next;
}
return num_valid_corrs;
}
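
The global-index bookkeeping used by `libmv_CorrespondencesFromTracking` and the context setup below can be sketched in Python (an illustrative model, not the Blender code): each clip's local tracks first receive consecutive global ids `[0..N1-1], [N1..N1+N2-1], ...`, and each correspondence then remaps the witness clip's track id onto the primary clip's id so libmv treats the pair as one track.

```python
# Sketch of global track indexing across clips and the correspondence merge.
def build_global_index(track_counts):
    """Assign consecutive global ids to each clip's local tracks."""
    index, next_id = [], 0
    for n in track_counts:
        index.append(list(range(next_id, next_id + n)))
        next_id += n
    return index

def apply_correspondences(global_index, corrs):
    """Remap clip2-track2's global id to clip1-track1's, unifying the track."""
    for (clip1, track1), (clip2, track2) in corrs:
        global_index[clip2][track2] = global_index[clip1][track1]
    return global_index

gi = build_global_index([3, 2])          # primary clip: 3 tracks, witness: 2
gi = apply_correspondences(gi, [((0, 1), (1, 0))])
```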
/* Create context for camera/object motion reconstruction.
* Copies all data needed for reconstruction from the movie
* clip datablock, so editing the clip is safe while the
* reconstruction job is in progress.
*/
MovieMultiviewReconstructContext *
BKE_tracking_multiview_reconstruction_context_new(MovieClip **clips,
int num_clips,
MovieTrackingObject *object,
int keyframe1, int keyframe2,
int width, int height)
{
MovieMultiviewReconstructContext *context = MEM_callocN(sizeof(MovieMultiviewReconstructContext),
"MRC data");
// alloc memory for the field members
context->all_tracks = MEM_callocN(num_clips * sizeof(libmv_TracksN*), "MRC libmv_Tracks");
context->all_reconstruction = MEM_callocN(num_clips * sizeof(struct libmv_ReconstructionN*), "MRC libmv reconstructions");
context->all_tracks_map = MEM_callocN(num_clips * sizeof(TracksMap*), "MRC TracksMap");
context->all_camera_intrinsics_options = MEM_callocN(num_clips * sizeof(libmv_CameraIntrinsicsOptions), "MRC camera intrinsics");
context->all_sfra = MEM_callocN(num_clips * sizeof(int), "MRC start frames");
context->all_efra = MEM_callocN(num_clips * sizeof(int), "MRC end frames");
context->all_refine_flags = MEM_callocN(num_clips * sizeof(int), "MRC refine flags");
context->keyframe1 = keyframe1;
context->keyframe2 = keyframe2;
context->clip_num = num_clips;
// initialize the global track index to [0, ..., N1-1], [N1, ..., N1+N2-1], and so forth
context->track_global_index = MEM_callocN(num_clips * sizeof(int*), "global track index of each clip");
int global_index = 0;
for (int i = 0; i < num_clips; i++) {
MovieClip *clip = clips[i];
MovieTracking *tracking = &clip->tracking;
ListBase *tracksbase = BKE_tracking_object_get_tracks(tracking, object);
int num_tracks = BLI_listbase_count(tracksbase);
context->track_global_index[i] = MEM_callocN(num_tracks * sizeof(int), "global track index for clip i");
for (int j = 0; j < num_tracks; j++) {
context->track_global_index[i][j] = global_index++;
}
}
for (int i = 0; i < num_clips; i++) {
MovieClip *clip = clips[i];
MovieTracking *tracking = &clip->tracking;
ListBase *tracksbase = BKE_tracking_object_get_tracks(tracking, object);
float aspy = 1.0f / tracking->camera.pixel_aspect;
int num_tracks = BLI_listbase_count(tracksbase);
if (i == 0)
printf("%d tracks in the primary clip 0\n", num_tracks);
else
printf("%d tracks in the witness camera %d\n", num_tracks, i);
int sfra = INT_MAX, efra = INT_MIN;
MovieTrackingTrack *track;
// setting context only from information in the primary clip
if (i == 0) {
// correspondences are recorded in the primary clip (0), convert local track id to global track id
int num_valid_corrs = libmv_CorrespondencesFromTracking(&tracking->correspondences, clips, num_clips,
context->track_global_index);
printf("num valid corrs: %d\n", num_valid_corrs);
BLI_assert(num_valid_corrs == BLI_listbase_count(&tracking->correspondences));
BLI_strncpy(context->object_name, object->name, sizeof(context->object_name));
context->is_camera = object->flag & TRACKING_OBJECT_CAMERA;
context->motion_flag = tracking->settings.motion_flag;
context->select_keyframes =
(tracking->settings.reconstruction_flag & TRACKING_USE_KEYFRAME_SELECTION) != 0;
}
tracking_cameraIntrinscisOptionsFromTracking(tracking,
width, height,
&context->all_camera_intrinsics_options[i]);
context->all_tracks_map[i] = tracks_map_new(context->object_name, context->is_camera, num_tracks, 0);
context->all_refine_flags[i] = multiview_refine_intrinsics_get_flags(tracking, object);
track = tracksbase->first;
while (track) {
int first = 0, last = track->markersnr - 1;
MovieTrackingMarker *first_marker = &track->markers[0];
MovieTrackingMarker *last_marker = &track->markers[track->markersnr - 1];
/* find first not-disabled marker */
while (first <= track->markersnr - 1 && first_marker->flag & MARKER_DISABLED) {
first++;
first_marker++;
}
/* find last not-disabled marker */
while (last >= 0 && last_marker->flag & MARKER_DISABLED) {
last--;
last_marker--;
}
if (first <= track->markersnr - 1)
sfra = min_ii(sfra, first_marker->framenr);
if (last >= 0)
efra = max_ii(efra, last_marker->framenr);
tracks_map_insert(context->all_tracks_map[i], track, NULL);
track = track->next;
}
context->all_sfra[i] = sfra;
context->all_efra[i] = efra;
context->all_tracks[i] = libmv_multiview_tracks_new(clip, i, tracksbase, context->track_global_index[i],
width, height * aspy);
}
return context;
}
/* Free memory used by a reconstruction process. */
void BKE_tracking_multiview_reconstruction_context_free(MovieMultiviewReconstructContext *context)
{
for (int i = 0; i < context->clip_num; i++) {
libmv_tracksDestroyN(context->all_tracks[i]);
if (context->all_reconstruction[i])
libmv_reconstructionNDestroy(context->all_reconstruction[i]);
if (context->track_global_index[i])
MEM_freeN(context->track_global_index[i]);
tracks_map_free(context->all_tracks_map[i], NULL);
}
MEM_freeN(context->all_tracks);
MEM_freeN(context->all_reconstruction);
MEM_freeN(context->all_camera_intrinsics_options);
MEM_freeN(context->all_tracks_map);
MEM_freeN(context->all_sfra);
MEM_freeN(context->all_efra);
MEM_freeN(context->all_refine_flags);
MEM_freeN(context->track_global_index);
MEM_freeN(context);
}
/* Fill in multiview reconstruction options structure from reconstruction context. */
static void multiviewReconstructionOptionsFromContext(libmv_MultiviewReconstructionOptions *reconstruction_options,
MovieMultiviewReconstructContext *context)
{
reconstruction_options->select_keyframes = context->select_keyframes;
reconstruction_options->keyframe1 = context->keyframe1;
reconstruction_options->keyframe2 = context->keyframe2;
reconstruction_options->all_refine_intrinsics = context->all_refine_flags;
}
/* Callback which is called from libmv side to update progress in the interface. */
static void multiview_reconstruct_update_solve_cb(void *customdata, double progress, const char *message)
{
MultiviewReconstructProgressData *progressdata = customdata;
if (progressdata->progress) {
*progressdata->progress = progress;
*progressdata->do_update = true;
}
BLI_snprintf(progressdata->stats_message, progressdata->message_size, "Solving cameras | %s", message);
}
/* stop is not actually used at this moment, so the reconstruction
* job cannot be stopped.
*
* do_update, progress and stats_message are set by the reconstruction
* callback on the libmv side and passed to the interface.
*/
void BKE_tracking_multiview_reconstruction_solve(MovieMultiviewReconstructContext *context, short *stop, short *do_update,
float *progress, char *stats_message, int message_size)
{
float error;
MultiviewReconstructProgressData progressdata;
libmv_MultiviewReconstructionOptions reconstruction_options;
progressdata.stop = stop;
progressdata.do_update = do_update;
progressdata.progress = progress;
progressdata.stats_message = stats_message;
progressdata.message_size = message_size;
multiviewReconstructionOptionsFromContext(&reconstruction_options, context);
if (context->motion_flag & TRACKING_MOTION_MODAL) {
// TODO(tianwei): leave tracking solve object for now
//context->reconstruction = libmv_solveModal(context->tracks,
// &context->camera_intrinsics_options,
// &reconstruction_options,
// multiview_reconstruct_update_solve_cb, &progressdata);
}
else {
context->all_reconstruction = libmv_solveMultiviewReconstruction(
context->clip_num,
(const libmv_TracksN **) context->all_tracks,
(const libmv_CameraIntrinsicsOptions *) context->all_camera_intrinsics_options,
&reconstruction_options,
multiview_reconstruct_update_solve_cb, &progressdata);
if (context->select_keyframes) {
/* store actual keyframes used for reconstruction to update them in the interface later */
context->keyframe1 = reconstruction_options.keyframe1;
context->keyframe2 = reconstruction_options.keyframe2;
}
}
error = libmv_multiviewReprojectionError(context->clip_num,
(const libmv_ReconstructionN**)context->all_reconstruction);
context->reprojection_error = error;
}
/* Retrieve multiview refined camera intrinsics from libmv to blender. */
static void multiview_reconstruct_retrieve_libmv_intrinsics(MovieMultiviewReconstructContext *context,
int clip_id,
MovieTracking *tracking)
{
struct libmv_ReconstructionN *libmv_reconstruction = context->all_reconstruction[clip_id];
struct libmv_CameraIntrinsics *libmv_intrinsics = libmv_reconstructionNExtractIntrinsics(libmv_reconstruction);
libmv_CameraIntrinsicsOptions camera_intrinsics_options;
libmv_cameraIntrinsicsExtractOptions(libmv_intrinsics, &camera_intrinsics_options);
tracking_trackingCameraFromIntrinscisOptions(tracking,
&camera_intrinsics_options);
}
/* Retrieve multiview reconstructed tracks from libmv to blender.
* and also copies reconstructed cameras from libmv to movie clip datablock.
*/
static bool multiview_reconstruct_retrieve_libmv_info(MovieMultiviewReconstructContext *context,
int clip_id,
MovieTracking *tracking)
{
// libmv reconstruction results are saved in context->all_reconstruction[0]
struct libmv_ReconstructionN *libmv_reconstruction = context->all_reconstruction[0];
MovieTrackingReconstruction *reconstruction = NULL;
MovieReconstructedCamera *reconstructed;
MovieTrackingTrack *track;
ListBase *tracksbase = NULL;
int tracknr = 0, a;
bool ok = true;
bool origin_set = false;
int sfra = context->all_sfra[clip_id], efra = context->all_efra[clip_id];
float imat[4][4];
if (context->is_camera) {
tracksbase = &tracking->tracks;
reconstruction = &tracking->reconstruction;
}
else {
MovieTrackingObject *object = BKE_tracking_object_get_named(tracking, context->object_name);
tracksbase = &object->tracks;
reconstruction = &object->reconstruction;
}
unit_m4(imat);
int *track_index_map = context->track_global_index[clip_id]; // this saves the global track index mapping
track = tracksbase->first;
while (track) {
double pos[3];
if (libmv_multiviewPointForTrack(libmv_reconstruction, track_index_map[tracknr], pos)) {
track->bundle_pos[0] = pos[0];
track->bundle_pos[1] = pos[1];
track->bundle_pos[2] = pos[2];
track->flag |= TRACK_HAS_BUNDLE;
track->error = libmv_multiviewReprojectionErrorForTrack(libmv_reconstruction, track_index_map[tracknr]);
}
else {
track->flag &= ~TRACK_HAS_BUNDLE;
ok = false;
printf("Unable to reconstruct position for track #%d '%s'\n", tracknr, track->name);
}
track = track->next;
tracknr++;
}
if (reconstruction->cameras)
MEM_freeN(reconstruction->cameras);
reconstruction->camnr = 0;
reconstruction->cameras = NULL;
reconstructed = MEM_callocN((efra - sfra + 1) * sizeof(MovieReconstructedCamera),
"temp reconstructed camera");
for (a = sfra; a <= efra; a++) {
double matd[4][4];
if (libmv_multiviewCameraForFrame(libmv_reconstruction, clip_id, a, matd)) {
int i, j;
float mat[4][4];
float error = libmv_multiviewReprojectionErrorForFrame(libmv_reconstruction, clip_id, a);
for (i = 0; i < 4; i++) {
for (j = 0; j < 4; j++)
mat[i][j] = matd[i][j];
}
/* Ensure the first camera has zero rotation and translation.
 * This is essential for object tracking to work -- this way
 * we'll always know the object and environment are properly
 * oriented.
 *
 * There's one weak part though, which is the requirement that
 * object motion starts at the same frame as camera motion does,
 * otherwise it would be a russian roulette whether the object
 * is aligned correctly or not.
 */
if (!origin_set) {
invert_m4_m4(imat, mat);
unit_m4(mat);
origin_set = true;
}
else {
mul_m4_m4m4(mat, imat, mat);
}
copy_m4_m4(reconstructed[reconstruction->camnr].mat, mat);
reconstructed[reconstruction->camnr].framenr = a;
reconstructed[reconstruction->camnr].error = error;
reconstruction->camnr++;
}
else {
ok = false;
printf("No camera for clip %d at frame %d\n", clip_id, a);
}
}
if (reconstruction->camnr) {
int size = reconstruction->camnr * sizeof(MovieReconstructedCamera);
reconstruction->cameras = MEM_callocN(size, "reconstructed camera");
memcpy(reconstruction->cameras, reconstructed, size);
}
if (origin_set) {
track = tracksbase->first;
while (track) {
if (track->flag & TRACK_HAS_BUNDLE)
mul_v3_m4v3(track->bundle_pos, imat, track->bundle_pos);
track = track->next;
}
}
MEM_freeN(reconstructed);
return ok;
}
/* Retrieve all multiview reconstruction data from the libmv context into Blender-side data blocks. */
static int multiview_reconstruct_retrieve_libmv(MovieMultiviewReconstructContext *context,
int clip_id,
MovieTracking *tracking)
{
/* take the intrinsics back from libmv */
multiview_reconstruct_retrieve_libmv_intrinsics(context, clip_id, tracking);
/* take reconstructed camera and tracks info back from libmv */
return multiview_reconstruct_retrieve_libmv_info(context, clip_id, tracking);
}
/* Finish the multiview reconstruction process by copying reconstructed data
 * to each movie clip datablock.
 */
bool BKE_tracking_multiview_reconstruction_finish(MovieMultiviewReconstructContext *context, MovieClip** clips)
{
if (!libmv_multiviewReconstructionIsValid(context->clip_num,
(const libmv_ReconstructionN**) context->all_reconstruction)) {
printf("Failed to solve the multiview motion: at least one clip failed\n");
return false;
}
for (int i = 0; i < context->clip_num; i++) {
MovieTrackingReconstruction *reconstruction;
MovieTrackingObject *object;
MovieClip *clip = clips[i];
MovieTracking *tracking = &clip->tracking;
tracks_map_merge(context->all_tracks_map[i], tracking);
BKE_tracking_dopesheet_tag_update(tracking);
object = BKE_tracking_object_get_named(tracking, context->object_name);
if (context->is_camera)
reconstruction = &tracking->reconstruction;
else
reconstruction = &object->reconstruction;
/* update keyframe in the interface */
if (context->select_keyframes) {
object->keyframe1 = context->keyframe1;
object->keyframe2 = context->keyframe2;
}
reconstruction->error = context->reprojection_error;
reconstruction->flag |= TRACKING_RECONSTRUCTED;
if (!multiview_reconstruct_retrieve_libmv(context, i, tracking))
return false;
}
return true;
}

View File

@@ -581,3 +581,33 @@ void BKE_tracking_reconstruction_scale(MovieTracking *tracking, float scale[3])
tracking_scale_reconstruction(tracksbase, reconstruction, scale);
}
}
/********************** Multi-view reconstruction functions *********************/
/* Perform early check on whether everything is fine to start reconstruction. */
bool BKE_tracking_multiview_reconstruction_check(MovieClip **clips, MovieTrackingObject *object,
char *error_msg, int error_size)
{
#ifndef WITH_LIBMV
BLI_strncpy(error_msg, N_("Blender is compiled without motion tracking library"), error_size);
return false;
#endif
MovieClip *primary_clip = clips[0];
MovieTracking *tracking = &primary_clip->tracking;
if (tracking->settings.motion_flag & TRACKING_MOTION_MODAL) {
/* TODO: check for number of tracks? */
return true;
}
else if ((tracking->settings.reconstruction_flag & TRACKING_USE_KEYFRAME_SELECTION) == 0) {
/* without automatic keyframe selection, verify the manually set keyframes share enough tracks */
if (reconstruct_count_tracks_on_both_keyframes(tracking, object) < 8) {
BLI_strncpy(error_msg,
N_("At least 8 common tracks on both keyframes are needed for reconstruction"),
error_size);
return false;
}
}
return true;
}

View File

@@ -6560,6 +6560,7 @@ static void lib_link_screen(FileData *fd, Main *main)
SpaceClip *sclip = (SpaceClip *)sl;
sclip->clip = newlibadr_real_us(fd, sc->id.lib, sclip->clip);
sclip->secondary_clip = newlibadr_real_us(fd, sc->id.lib, sclip->secondary_clip);
sclip->mask_info.mask = newlibadr_real_us(fd, sc->id.lib, sclip->mask_info.mask);
break;
}
@@ -6949,6 +6950,7 @@ void blo_lib_link_screen_restore(Main *newmain, bScreen *curscreen, Scene *cursc
SpaceClip *sclip = (SpaceClip *)sl;
sclip->clip = restore_pointer_by_name(id_map, (ID *)sclip->clip, USER_REAL);
sclip->secondary_clip = restore_pointer_by_name(id_map, (ID *)sclip->secondary_clip, USER_REAL);
sclip->mask_info.mask = restore_pointer_by_name(id_map, (ID *)sclip->mask_info.mask, USER_REAL);
sclip->scopes.ok = 0;
@@ -7588,6 +7590,12 @@ static void direct_link_moviePlaneTracks(FileData *fd, ListBase *plane_tracks_ba
}
}
static void direct_link_movieCorrespondences(FileData *fd,
ListBase *correspondences)
{
link_list(fd, correspondences);
}
static void direct_link_movieclip(FileData *fd, MovieClip *clip)
{
MovieTracking *tracking = &clip->tracking;
@@ -7603,6 +7611,7 @@ static void direct_link_movieclip(FileData *fd, MovieClip *clip)
direct_link_movieTracks(fd, &tracking->tracks);
direct_link_moviePlaneTracks(fd, &tracking->plane_tracks);
direct_link_movieCorrespondences(fd, &tracking->correspondences);
direct_link_movieReconstruction(fd, &tracking->reconstruction);
clip->tracking.act_track = newdataadr(fd, clip->tracking.act_track);
@@ -7646,6 +7655,17 @@ static void lib_link_moviePlaneTracks(FileData *fd, MovieClip *clip, ListBase *t
}
}
static void lib_link_movieCorrespondences(FileData *fd,
MovieClip *clip,
ListBase *correspondences)
{
MovieTrackingCorrespondence *corr;
for (corr = correspondences->first; corr != NULL; corr = corr->next) {
corr->other_clip = newlibadr(fd, clip->id.lib, corr->other_clip);
corr->self_clip = newlibadr(fd, clip->id.lib, corr->self_clip);
}
}
static void lib_link_movieclip(FileData *fd, Main *main)
{
for (MovieClip *clip = main->movieclip.first; clip; clip = clip->id.next) {
@@ -7659,6 +7679,7 @@ static void lib_link_movieclip(FileData *fd, Main *main)
lib_link_movieTracks(fd, clip, &tracking->tracks);
lib_link_moviePlaneTracks(fd, clip, &tracking->plane_tracks);
lib_link_movieCorrespondences(fd, clip, &tracking->correspondences);
for (MovieTrackingObject *object = tracking->objects.first; object; object = object->next) {
lib_link_movieTracks(fd, clip, &object->tracks);

View File

@@ -290,7 +290,9 @@ void blo_do_versions_270(FileData *fd, Library *UNUSED(lib), Main *main)
if (space_link->spacetype == SPACE_CLIP) {
SpaceClip *space_clip = (SpaceClip *) space_link;
if (space_clip->mode != SC_MODE_MASKEDIT) {
space_clip->mode = SC_MODE_TRACKING;
if (space_clip->mode != SC_MODE_TRACKING) {
space_clip->mode = SC_MODE_CORRESPONDENCE;
}
}
}
}
@@ -1609,6 +1611,33 @@ void blo_do_versions_270(FileData *fd, Library *UNUSED(lib), Main *main)
}
}
}
/* initialize regiondata for each SpaceClip, for the newly introduced RegionSpaceClip */
if (!DNA_struct_elem_find(fd->filesdna, "SpaceClip", "MovieClip", "*secondary_clip")) {
for (bScreen *screen = main->screen.first; screen != NULL; screen = screen->id.next) {
for (ScrArea *sa = screen->areabase.first; sa != NULL; sa = sa->next) {
for (SpaceLink *sl = sa->spacedata.first; sl != NULL; sl = sl->next) {
if (sl->spacetype == SPACE_CLIP) {
ListBase *regionbase = (sl == sa->spacedata.first) ? &sa->regionbase : &sl->regionbase;
for (ARegion *ar = regionbase->first; ar != NULL; ar = ar->next) {
if (ar->regiontype == RGN_TYPE_WINDOW) {
SpaceClip *sc = (SpaceClip *)sl;
RegionSpaceClip *rsc = MEM_callocN(sizeof(RegionSpaceClip), "region data for clip");
rsc->xof = sc->xof;
rsc->yof = sc->yof;
rsc->xlockof = sc->xlockof;
rsc->ylockof = sc->ylockof;
rsc->zoom = sc->zoom;
rsc->flag = RSC_MAIN_CLIP;
ar->regiondata = rsc;
}
}
}
}
}
}
}
}
}

View File

@@ -2805,10 +2805,16 @@ static void write_region(WriteData *wd, ARegion *ar, int spacetype)
}
else
printf("regiondata write missing!\n");
printf("spaceview3d regiondata write missing!\n");
break;
case SPACE_CLIP:
if (ar->regiontype == RGN_TYPE_WINDOW) {
RegionSpaceClip *rsc = ar->regiondata;
writestruct(wd, DATA, RegionSpaceClip, 1, rsc);
}
break;
default:
printf("regiondata write missing!\n");
printf("default regiondata write missing!\n");
}
}
}
@@ -3286,6 +3292,14 @@ static void write_moviePlaneTracks(WriteData *wd, ListBase *plane_tracks_base)
}
}
static void write_movieCorrespondences(WriteData *wd, ListBase *correspondence_base)
{
MovieTrackingCorrespondence *corr;
for (corr = correspondence_base->first; corr != NULL; corr = corr->next) {
writestruct(wd, DATA, MovieTrackingCorrespondence, 1, corr);
}
}
static void write_movieReconstruction(WriteData *wd, MovieTrackingReconstruction *reconstruction)
{
if (reconstruction->camnr) {
@@ -3309,6 +3323,7 @@ static void write_movieclip(WriteData *wd, MovieClip *clip)
write_movieTracks(wd, &tracking->tracks);
write_moviePlaneTracks(wd, &tracking->plane_tracks);
write_movieReconstruction(wd, &tracking->reconstruction);
write_movieCorrespondences(wd, &tracking->correspondences);
object = tracking->objects.first;
while (object) {

View File

@@ -51,6 +51,7 @@ int ED_space_clip_view_clip_poll(struct bContext *C);
int ED_space_clip_tracking_poll(struct bContext *C);
int ED_space_clip_maskedit_poll(struct bContext *C);
int ED_space_clip_maskedit_mask_poll(struct bContext *C);
int ED_space_clip_correspondence_poll(struct bContext *C);
void ED_space_clip_get_size(struct SpaceClip *sc, int *width, int *height);
void ED_space_clip_get_size_fl(struct SpaceClip *sc, float size[2]);
@@ -60,8 +61,8 @@ void ED_space_clip_get_aspect_dimension_aware(struct SpaceClip *sc, float *aspx,
int ED_space_clip_get_clip_frame_number(struct SpaceClip *sc);
struct ImBuf *ED_space_clip_get_buffer(struct SpaceClip *sc);
struct ImBuf *ED_space_clip_get_stable_buffer(struct SpaceClip *sc, float loc[2], float *scale, float *angle);
struct ImBuf *ED_space_clip_get_buffer(struct SpaceClip *sc, struct ARegion *ar);
struct ImBuf *ED_space_clip_get_stable_buffer(struct SpaceClip *sc, struct ARegion *ar, float loc[2], float *scale, float *angle);
bool ED_space_clip_color_sample(struct Scene *scene, struct SpaceClip *sc, struct ARegion *ar, int mval[2], float r_col[3]);
@@ -75,9 +76,16 @@ void ED_clip_mouse_pos(struct SpaceClip *sc, struct ARegion *ar, const int mval[
bool ED_space_clip_check_show_trackedit(struct SpaceClip *sc);
bool ED_space_clip_check_show_maskedit(struct SpaceClip *sc);
bool ED_space_clip_check_show_correspondence(struct SpaceClip *sc);
struct MovieClip *ED_space_clip_get_clip(struct SpaceClip *sc);
struct MovieClip *ED_space_clip_get_secondary_clip(struct SpaceClip *sc);
struct MovieClip *ED_space_clip_get_clip_in_region(struct SpaceClip *sc, struct ARegion *ar);
void ED_space_clip_set_clip(struct bContext *C, struct bScreen *screen, struct SpaceClip *sc, struct MovieClip *clip);
void ED_space_clip_set_secondary_clip(struct bContext *C, struct bScreen *screen, struct SpaceClip *sc, struct MovieClip *secondary_clip);
void ED_clip_update_correspondence_mode(struct bContext *C, struct SpaceClip *sc);
void ED_space_clip_region_set_lock_zero(struct bContext *C);
struct Mask *ED_space_clip_get_mask(struct SpaceClip *sc);
void ED_space_clip_set_mask(struct bContext *C, struct SpaceClip *sc, struct Mask *mask);

View File

@@ -591,8 +591,8 @@ DEF_ICON(MOD_TRIANGULATE)
DEF_ICON(MOD_WIREFRAME)
DEF_ICON(MOD_DATA_TRANSFER)
DEF_ICON(MOD_NORMALEDIT)
DEF_ICON(MOD_CORRESPONDENCE)
#ifndef DEF_ICON_BLANK_SKIP
DEF_ICON(BLANK169)
DEF_ICON(BLANK170)
DEF_ICON(BLANK171)
DEF_ICON(BLANK172)

View File

@@ -301,7 +301,10 @@ enum {
TH_METADATA_TEXT,
TH_EDGE_BEVEL,
TH_VERTEX_BEVEL
TH_VERTEX_BEVEL,
/* color theme for linked markers */
TH_LINKED_MARKER,
TH_SEL_LINKED_MARKER
};
/* XXX WARNING: previous is saved in file, so do not change order! */

View File

@@ -597,6 +597,10 @@ const unsigned char *UI_ThemeGetColorPtr(bTheme *btheme, int spacetype, int colo
cp = ts->act_marker; break;
case TH_SEL_MARKER:
cp = ts->sel_marker; break;
case TH_LINKED_MARKER:
cp = ts->linked_marker; break;
case TH_SEL_LINKED_MARKER:
cp = ts->sel_linked_marker; break;
case TH_BUNDLE_SOLID:
cp = ts->bundle_solid; break;
case TH_DIS_MARKER:
@@ -1196,6 +1200,8 @@ void ui_theme_init_default(void)
rgba_char_args_set(btheme->tclip.act_marker, 0xff, 0xff, 0xff, 255);
rgba_char_args_set(btheme->tclip.sel_marker, 0xff, 0xff, 0x00, 255);
rgba_char_args_set(btheme->tclip.dis_marker, 0x7f, 0x00, 0x00, 255);
rgba_char_args_set(btheme->tclip.linked_marker, 0x85, 0xc1, 0xe9, 255);
rgba_char_args_set(btheme->tclip.sel_linked_marker, 0xcb, 0x43, 0x35, 255);
rgba_char_args_set(btheme->tclip.lock_marker, 0x7f, 0x7f, 0x7f, 255);
rgba_char_args_set(btheme->tclip.path_before, 0xff, 0x00, 0x00, 255);
rgba_char_args_set(btheme->tclip.path_after, 0x00, 0x00, 0xff, 255);
@@ -2739,6 +2745,9 @@ void init_userdef_do_versions(void)
for (btheme = U.themes.first; btheme; btheme = btheme->next) {
if (btheme->tact.keyframe_scale_fac < 0.1f)
btheme->tact.keyframe_scale_fac = 1.0f;
rgba_char_args_set(btheme->tclip.linked_marker, 0x85, 0xc1, 0xe9, 255);
rgba_char_args_set(btheme->tclip.sel_linked_marker, 0xcb, 0x43, 0x35, 255);
}
}

View File

@@ -53,6 +53,7 @@ set(SRC
clip_utils.c
space_clip.c
tracking_ops.c
tracking_ops_correspondence.c
tracking_ops_detect.c
tracking_ops_orient.c
tracking_ops_plane.c

View File

@@ -105,8 +105,9 @@ void uiTemplateMovieClip(uiLayout *layout, bContext *C, PointerRNA *ptr, const c
uiLayoutSetContextPointer(layout, "edit_movieclip", &clipptr);
if (!compact)
if (!compact) {
uiTemplateID(layout, C, ptr, propname, NULL, "CLIP_OT_open", NULL);
}
if (clip) {
uiLayout *col;

View File

@@ -251,7 +251,7 @@ static void draw_movieclip_cache(SpaceClip *sc, ARegion *ar, MovieClip *clip, Sc
static void draw_movieclip_notes(SpaceClip *sc, ARegion *ar)
{
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
char str[256] = {0};
bool full_redraw = false;
@@ -285,7 +285,7 @@ static void draw_movieclip_muted(ARegion *ar, int width, int height, float zoomx
static void draw_movieclip_buffer(const bContext *C, SpaceClip *sc, ARegion *ar, ImBuf *ibuf,
int width, int height, float zoomx, float zoomy)
{
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
int filter = GL_LINEAR;
int x, y;
@@ -328,7 +328,7 @@ static void draw_movieclip_buffer(const bContext *C, SpaceClip *sc, ARegion *ar,
static void draw_stabilization_border(SpaceClip *sc, ARegion *ar, int width, int height, float zoomx, float zoomy)
{
int x, y;
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
/* find window pixel coordinates of origin */
UI_view2d_view_to_region(&ar->v2d, 0.0f, 0.0f, &x, &y);
@@ -472,7 +472,7 @@ static void draw_track_path(SpaceClip *sc, MovieClip *UNUSED(clip), MovieTrackin
glEnd();
}
static void draw_marker_outline(SpaceClip *sc, MovieTrackingTrack *track, MovieTrackingMarker *marker,
static void draw_marker_outline(SpaceClip *sc, ARegion *ar, MovieTrackingTrack *track, MovieTrackingMarker *marker,
const float marker_pos[2], int width, int height)
{
int tiny = sc->flag & SC_SHOW_TINY_MARKER;
@@ -481,8 +481,9 @@ static void draw_marker_outline(SpaceClip *sc, MovieTrackingTrack *track, MovieT
UI_ThemeColor(TH_MARKER_OUTLINE);
px[0] = 1.0f / width / sc->zoom;
px[1] = 1.0f / height / sc->zoom;
RegionSpaceClip *rsc = (RegionSpaceClip*) ar->regiondata;
px[0] = 1.0f / width / rsc->zoom;
px[1] = 1.0f / height / rsc->zoom;
glLineWidth(tiny ? 1.0f : 3.0f);
@@ -547,7 +548,22 @@ static void draw_marker_outline(SpaceClip *sc, MovieTrackingTrack *track, MovieT
glPopMatrix();
}
static void track_colors(MovieTrackingTrack *track, int act, float col[3], float scol[3])
/* Return whether the track is linked, by iterating the correspondence list in tracking. */
static bool is_track_linked(MovieTracking *tracking, MovieClip *clip, MovieTrackingTrack *track)
{
MovieTrackingCorrespondence *corr = tracking->correspondences.first;
while (corr) {
if ((corr->self_clip == clip && strcmp(corr->self_track_name, track->name) == 0) ||
(corr->other_clip == clip && strcmp(corr->other_track_name, track->name) == 0))
{
return true;
}
corr = corr->next;
}
return false;
}
static void track_colors(MovieTrackingTrack *track, int act, int link, float col[3], float scol[3])
{
if (track->flag & TRACK_CUSTOMCOLOR) {
if (act)
@@ -558,26 +574,36 @@ static void track_colors(MovieTrackingTrack *track, int act, float col[3], float
mul_v3_v3fl(col, track->color, 0.5f);
}
else {
UI_GetThemeColor3fv(TH_MARKER, col);
if (link)
UI_GetThemeColor3fv(TH_LINKED_MARKER, col);
else
UI_GetThemeColor3fv(TH_MARKER, col);
if (act)
UI_GetThemeColor3fv(TH_ACT_MARKER, scol);
else
UI_GetThemeColor3fv(TH_SEL_MARKER, scol);
if (link)
UI_GetThemeColor3fv(TH_SEL_LINKED_MARKER, scol);
else
UI_GetThemeColor3fv(TH_SEL_MARKER, scol);
}
}
static void draw_marker_areas(SpaceClip *sc, MovieTrackingTrack *track, MovieTrackingMarker *marker,
static void draw_marker_areas(SpaceClip *sc, ARegion *ar, MovieTrackingTrack *track, MovieTrackingMarker *marker,
const float marker_pos[2], int width, int height, int act, int sel)
{
int tiny = sc->flag & SC_SHOW_TINY_MARKER;
bool show_search = false;
float col[3], scol[3], px[2];
track_colors(track, act, col, scol);
MovieClip *mc = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &mc->tracking;
bool link = is_track_linked(tracking, mc, track);
track_colors(track, act, link, col, scol);
px[0] = 1.0f / width / sc->zoom;
px[1] = 1.0f / height / sc->zoom;
RegionSpaceClip *rsc = (RegionSpaceClip*) ar->regiondata;
px[0] = 1.0f / width / rsc->zoom;
px[1] = 1.0f / height / rsc->zoom;
glLineWidth(1.0f);
@@ -785,7 +811,7 @@ static void draw_marker_slide_triangle(float x, float y, float dx, float dy, int
glEnd();
}
static void draw_marker_slide_zones(SpaceClip *sc, MovieTrackingTrack *track, MovieTrackingMarker *marker,
static void draw_marker_slide_zones(SpaceClip *sc, ARegion *ar, MovieTrackingTrack *track, MovieTrackingMarker *marker,
const float marker_pos[2], int outline, int sel, int act, int width, int height)
{
float dx, dy, patdx, patdy, searchdx, searchdy;
@@ -798,7 +824,10 @@ static void draw_marker_slide_zones(SpaceClip *sc, MovieTrackingTrack *track, Mo
if (!TRACK_VIEW_SELECTED(sc, track) || track->flag & TRACK_LOCKED)
return;
track_colors(track, act, col, scol);
MovieClip *mc = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &mc->tracking;
bool link = is_track_linked(tracking, mc, track);
track_colors(track, act, link, col, scol);
if (outline) {
UI_ThemeColor(TH_MARKER_OUTLINE);
@@ -807,8 +836,9 @@ static void draw_marker_slide_zones(SpaceClip *sc, MovieTrackingTrack *track, Mo
glPushMatrix();
glTranslate2fv(marker_pos);
dx = 6.0f / width / sc->zoom;
dy = 6.0f / height / sc->zoom;
RegionSpaceClip *rsc = (RegionSpaceClip*) ar->regiondata;
dx = 6.0f / width / rsc->zoom;
dy = 6.0f / height / rsc->zoom;
side = get_shortest_pattern_side(marker);
patdx = min_ff(dx * 2.0f / 3.0f, side / 6.0f) * UI_DPI_FAC;
@@ -817,8 +847,8 @@ static void draw_marker_slide_zones(SpaceClip *sc, MovieTrackingTrack *track, Mo
searchdx = min_ff(dx, (marker->search_max[0] - marker->search_min[0]) / 6.0f) * UI_DPI_FAC;
searchdy = min_ff(dy, (marker->search_max[1] - marker->search_min[1]) / 6.0f) * UI_DPI_FAC;
px[0] = 1.0f / sc->zoom / width / sc->scale;
px[1] = 1.0f / sc->zoom / height / sc->scale;
px[0] = 1.0f / rsc->zoom / width / sc->scale;
px[1] = 1.0f / rsc->zoom / height / sc->scale;
if ((sc->flag & SC_SHOW_MARKER_SEARCH) && ((track->search_flag & SELECT) == sel || outline)) {
if (!outline) {
@@ -1099,7 +1129,7 @@ static void draw_plane_marker_image(Scene *scene,
BKE_image_release_ibuf(image, ibuf, lock);
}
static void draw_plane_marker_ex(SpaceClip *sc, Scene *scene, MovieTrackingPlaneTrack *plane_track,
static void draw_plane_marker_ex(SpaceClip *sc, ARegion *ar, Scene *scene, MovieTrackingPlaneTrack *plane_track,
MovieTrackingPlaneMarker *plane_marker, bool is_active_track,
bool draw_outline, int width, int height)
{
@@ -1124,8 +1154,9 @@ static void draw_plane_marker_ex(SpaceClip *sc, Scene *scene, MovieTrackingPlane
}
}
px[0] = 1.0f / width / sc->zoom;
px[1] = 1.0f / height / sc->zoom;
RegionSpaceClip *rsc = (RegionSpaceClip*) ar->regiondata;
px[0] = 1.0f / width / rsc->zoom;
px[1] = 1.0f / height / rsc->zoom;
/* Draw image */
if (draw_outline == false) {
@@ -1162,12 +1193,12 @@ static void draw_plane_marker_ex(SpaceClip *sc, Scene *scene, MovieTrackingPlane
glBegin(GL_LINES);
getArrowEndPoint(width, height, sc->zoom, plane_marker->corners[0], plane_marker->corners[1], end_point);
getArrowEndPoint(width, height, rsc->zoom, plane_marker->corners[0], plane_marker->corners[1], end_point);
glColor3f(1.0, 0.0, 0.0f);
glVertex2fv(plane_marker->corners[0]);
glVertex2fv(end_point);
getArrowEndPoint(width, height, sc->zoom, plane_marker->corners[0], plane_marker->corners[3], end_point);
getArrowEndPoint(width, height, rsc->zoom, plane_marker->corners[0], plane_marker->corners[3], end_point);
glColor3f(0.0, 1.0, 0.0f);
glVertex2fv(plane_marker->corners[0]);
glVertex2fv(end_point);
@@ -1195,28 +1226,28 @@ static void draw_plane_marker_ex(SpaceClip *sc, Scene *scene, MovieTrackingPlane
}
}
static void draw_plane_marker_outline(SpaceClip *sc, Scene *scene, MovieTrackingPlaneTrack *plane_track,
static void draw_plane_marker_outline(SpaceClip *sc, ARegion *ar, Scene *scene, MovieTrackingPlaneTrack *plane_track,
MovieTrackingPlaneMarker *plane_marker, int width, int height)
{
draw_plane_marker_ex(sc, scene, plane_track, plane_marker, false, true, width, height);
draw_plane_marker_ex(sc, ar, scene, plane_track, plane_marker, false, true, width, height);
}
static void draw_plane_marker(SpaceClip *sc, Scene *scene, MovieTrackingPlaneTrack *plane_track,
static void draw_plane_marker(SpaceClip *sc, ARegion *ar, Scene *scene, MovieTrackingPlaneTrack *plane_track,
MovieTrackingPlaneMarker *plane_marker, bool is_active_track,
int width, int height)
{
draw_plane_marker_ex(sc, scene, plane_track, plane_marker, is_active_track, false, width, height);
draw_plane_marker_ex(sc, ar, scene, plane_track, plane_marker, is_active_track, false, width, height);
}
static void draw_plane_track(SpaceClip *sc, Scene *scene, MovieTrackingPlaneTrack *plane_track,
static void draw_plane_track(SpaceClip *sc, ARegion *ar, Scene *scene, MovieTrackingPlaneTrack *plane_track,
int framenr, bool is_active_track, int width, int height)
{
MovieTrackingPlaneMarker *plane_marker;
plane_marker = BKE_tracking_plane_marker_get(plane_track, framenr);
draw_plane_marker_outline(sc, scene, plane_track, plane_marker, width, height);
draw_plane_marker(sc, scene, plane_track, plane_marker, is_active_track, width, height);
draw_plane_marker_outline(sc, ar, scene, plane_track, plane_marker, width, height);
draw_plane_marker(sc, ar, scene, plane_track, plane_marker, is_active_track, width, height);
}
/* Draw all kind of tracks. */
@@ -1260,7 +1291,7 @@ static void draw_tracking_tracks(SpaceClip *sc, Scene *scene, ARegion *ar, Movie
plane_track = plane_track->next)
{
if ((plane_track->flag & PLANE_TRACK_HIDDEN) == 0) {
draw_plane_track(sc, scene, plane_track, framenr, plane_track == active_plane_track, width, height);
draw_plane_track(sc, ar, scene, plane_track, framenr, plane_track == active_plane_track, width, height);
}
}
@@ -1325,10 +1356,10 @@ static void draw_tracking_tracks(SpaceClip *sc, Scene *scene, ARegion *ar, Movie
if (MARKER_VISIBLE(sc, track, marker)) {
copy_v2_v2(cur_pos, fp ? fp : marker->pos);
draw_marker_outline(sc, track, marker, cur_pos, width, height);
draw_marker_areas(sc, track, marker, cur_pos, width, height, 0, 0);
draw_marker_slide_zones(sc, track, marker, cur_pos, 1, 0, 0, width, height);
draw_marker_slide_zones(sc, track, marker, cur_pos, 0, 0, 0, width, height);
draw_marker_outline(sc, ar, track, marker, cur_pos, width, height);
draw_marker_areas(sc, ar, track, marker, cur_pos, width, height, 0, 0);
draw_marker_slide_zones(sc, ar, track, marker, cur_pos, 1, 0, 0, width, height);
draw_marker_slide_zones(sc, ar, track, marker, cur_pos, 0, 0, 0, width, height);
if (fp)
fp += 2;
@@ -1351,8 +1382,8 @@ static void draw_tracking_tracks(SpaceClip *sc, Scene *scene, ARegion *ar, Movie
if (!act) {
copy_v2_v2(cur_pos, fp ? fp : marker->pos);
draw_marker_areas(sc, track, marker, cur_pos, width, height, 0, 1);
draw_marker_slide_zones(sc, track, marker, cur_pos, 0, 1, 0, width, height);
draw_marker_areas(sc, ar, track, marker, cur_pos, width, height, 0, 1);
draw_marker_slide_zones(sc, ar, track, marker, cur_pos, 0, 1, 0, width, height);
}
if (fp)
@@ -1371,8 +1402,8 @@ static void draw_tracking_tracks(SpaceClip *sc, Scene *scene, ARegion *ar, Movie
if (MARKER_VISIBLE(sc, act_track, marker)) {
copy_v2_v2(cur_pos, active_pos ? active_pos : marker->pos);
draw_marker_areas(sc, act_track, marker, cur_pos, width, height, 1, 1);
draw_marker_slide_zones(sc, act_track, marker, cur_pos, 0, 1, 1, width, height);
draw_marker_areas(sc, ar, act_track, marker, cur_pos, width, height, 1, 1);
draw_marker_slide_zones(sc, ar, act_track, marker, cur_pos, 0, 1, 1, width, height);
}
}
}
@@ -1658,7 +1689,7 @@ static void draw_distortion(SpaceClip *sc, ARegion *ar, MovieClip *clip,
void clip_draw_main(const bContext *C, SpaceClip *sc, ARegion *ar)
{
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
Scene *scene = CTX_data_scene(C);
ImBuf *ibuf = NULL;
int width, height;
@@ -1679,7 +1710,7 @@ void clip_draw_main(const bContext *C, SpaceClip *sc, ARegion *ar)
float smat[4][4], ismat[4][4];
if ((sc->flag & SC_MUTE_FOOTAGE) == 0) {
ibuf = ED_space_clip_get_stable_buffer(sc, sc->loc,
ibuf = ED_space_clip_get_stable_buffer(sc, ar, sc->loc,
&sc->scale, &sc->angle);
}
@@ -1699,7 +1730,7 @@ void clip_draw_main(const bContext *C, SpaceClip *sc, ARegion *ar)
mul_m4_series(sc->unistabmat, smat, sc->stabmat, ismat);
}
else if ((sc->flag & SC_MUTE_FOOTAGE) == 0) {
ibuf = ED_space_clip_get_buffer(sc);
ibuf = ED_space_clip_get_buffer(sc, ar);
zero_v2(sc->loc);
sc->scale = 1.0f;
@@ -1728,7 +1759,7 @@ void clip_draw_main(const bContext *C, SpaceClip *sc, ARegion *ar)
void clip_draw_cache_and_notes(const bContext *C, SpaceClip *sc, ARegion *ar)
{
Scene *scene = CTX_data_scene(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
if (clip) {
draw_movieclip_cache(sc, ar, clip, scene);
draw_movieclip_notes(sc, ar);
@@ -1739,7 +1770,8 @@ void clip_draw_cache_and_notes(const bContext *C, SpaceClip *sc, ARegion *ar)
void clip_draw_grease_pencil(bContext *C, int onlyv2d)
{
SpaceClip *sc = CTX_wm_space_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
ARegion *ar = CTX_wm_region(C);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
if (!clip)
return;

View File

@@ -49,6 +49,7 @@
#include "BLI_math.h"
#include "BLI_rect.h"
#include "BLI_task.h"
#include "BLI_listbase.h"
#include "BKE_global.h"
#include "BKE_main.h"
@@ -57,7 +58,7 @@
#include "BKE_context.h"
#include "BKE_tracking.h"
#include "BKE_library.h"
#include "BKE_screen.h"
#include "IMB_colormanagement.h"
#include "IMB_imbuf_types.h"
@@ -133,6 +134,17 @@ int ED_space_clip_maskedit_mask_poll(bContext *C)
return false;
}
int ED_space_clip_correspondence_poll(bContext *C)
{
SpaceClip *sc = CTX_wm_space_clip(C);
if (sc && sc->clip) {
return ED_space_clip_check_show_correspondence(sc);
}
return false;
}
/* ******** common editing functions ******** */
void ED_space_clip_get_size(SpaceClip *sc, int *width, int *height)
@@ -225,12 +237,13 @@ int ED_space_clip_get_clip_frame_number(SpaceClip *sc)
return BKE_movieclip_remap_scene_to_clip_frame(clip, sc->user.framenr);
}
ImBuf *ED_space_clip_get_buffer(SpaceClip *sc)
ImBuf *ED_space_clip_get_buffer(SpaceClip *sc, ARegion *ar)
{
if (sc->clip) {
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
if (clip) {
ImBuf *ibuf;
ibuf = BKE_movieclip_get_postprocessed_ibuf(sc->clip, &sc->user, sc->postproc_flag);
ibuf = BKE_movieclip_get_postprocessed_ibuf(clip, &sc->user, sc->postproc_flag);
if (ibuf && (ibuf->rect || ibuf->rect_float))
return ibuf;
@@ -242,12 +255,13 @@ ImBuf *ED_space_clip_get_buffer(SpaceClip *sc)
return NULL;
}
ImBuf *ED_space_clip_get_stable_buffer(SpaceClip *sc, float loc[2], float *scale, float *angle)
ImBuf *ED_space_clip_get_stable_buffer(SpaceClip *sc, ARegion *ar, float loc[2], float *scale, float *angle)
{
if (sc->clip) {
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
if (clip) {
ImBuf *ibuf;
ibuf = BKE_movieclip_get_stable_ibuf(sc->clip, &sc->user, loc, scale, angle, sc->postproc_flag);
ibuf = BKE_movieclip_get_stable_ibuf(clip, &sc->user, loc, scale, angle, sc->postproc_flag);
if (ibuf && (ibuf->rect || ibuf->rect_float))
return ibuf;
@@ -268,7 +282,7 @@ bool ED_space_clip_color_sample(Scene *scene, SpaceClip *sc, ARegion *ar, int mv
float fx, fy, co[2];
bool ret = false;
ibuf = ED_space_clip_get_buffer(sc);
ibuf = ED_space_clip_get_buffer(sc, ar);
if (!ibuf) {
return false;
}
@@ -406,6 +420,7 @@ static bool selected_boundbox(const bContext *C, float min[2], float max[2])
bool ED_clip_view_selection(const bContext *C, ARegion *ar, bool fit)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = (RegionSpaceClip*) ar->regiondata;
int w, h, frame_width, frame_height;
float min[2], max[2];
@@ -418,8 +433,8 @@ bool ED_clip_view_selection(const bContext *C, ARegion *ar, bool fit)
return false;
/* center view */
clip_view_center_to_point(sc, (max[0] + min[0]) / (2 * frame_width),
(max[1] + min[1]) / (2 * frame_height));
clip_view_center_to_point(sc, rsc, (max[0] + min[0]) / (2 * frame_width),
(max[1] + min[1]) / (2 * frame_height));
w = max[0] - min[0];
h = max[1] - min[1];
@@ -439,8 +454,8 @@ bool ED_clip_view_selection(const bContext *C, ARegion *ar, bool fit)
newzoom = 1.0f / power_of_2(1.0f / min_ff(zoomx, zoomy));
if (fit || sc->zoom > newzoom)
sc->zoom = newzoom;
if (fit || rsc->zoom > newzoom)
rsc->zoom = newzoom;
}
return true;
@@ -549,6 +564,15 @@ bool ED_space_clip_check_show_maskedit(SpaceClip *sc)
return false;
}
bool ED_space_clip_check_show_correspondence(SpaceClip *sc)
{
if (sc) {
return sc->mode == SC_MODE_CORRESPONDENCE;
}
return false;
}
/* ******** clip editing functions ******** */
MovieClip *ED_space_clip_get_clip(SpaceClip *sc)
@@ -556,6 +580,27 @@ MovieClip *ED_space_clip_get_clip(SpaceClip *sc)
return sc->clip;
}
MovieClip *ED_space_clip_get_secondary_clip(SpaceClip *sc)
{
return sc->secondary_clip;
}
MovieClip *ED_space_clip_get_clip_in_region(SpaceClip *sc, ARegion *ar)
{
RegionSpaceClip *rsc = ar->regiondata;
/* dope sheet and graph views don't have regiondata, so just return the main clip */
if (!rsc) {
return ED_space_clip_get_clip(sc);
}
else if (rsc->flag == RSC_MAIN_CLIP) {
return ED_space_clip_get_clip(sc);
}
else { /* rsc->flag == RSC_SECONDARY_CLIP */
return ED_space_clip_get_secondary_clip(sc);
}
}
void ED_space_clip_set_clip(bContext *C, bScreen *screen, SpaceClip *sc, MovieClip *clip)
{
MovieClip *old_clip;
@@ -603,6 +648,161 @@ void ED_space_clip_set_clip(bContext *C, bScreen *screen, SpaceClip *sc, MovieCl
WM_event_add_notifier(C, NC_MOVIECLIP | NA_SELECTED, sc->clip);
}
void ED_space_clip_set_secondary_clip(bContext *C, bScreen *screen, SpaceClip *sc, MovieClip *secondary_clip)
{
MovieClip *old_clip;
bool old_clip_visible = false;
if (!screen && C)
screen = CTX_wm_screen(C);
old_clip = sc->secondary_clip;
sc->secondary_clip = secondary_clip;
id_us_ensure_real((ID *)sc->secondary_clip);
if (screen && sc->view == SC_VIEW_CLIP) {
ScrArea *area;
SpaceLink *sl;
for (area = screen->areabase.first; area; area = area->next) {
for (sl = area->spacedata.first; sl; sl = sl->next) {
if (sl->spacetype == SPACE_CLIP) {
SpaceClip *cur_sc = (SpaceClip *) sl;
if (cur_sc != sc) {
if (cur_sc->view == SC_VIEW_CLIP) {
if (cur_sc->secondary_clip == old_clip)
old_clip_visible = true;
}
else {
if (cur_sc->secondary_clip == old_clip || cur_sc->secondary_clip == NULL) {
cur_sc->secondary_clip = secondary_clip;
}
}
}
}
}
}
}
/* If clip is no longer visible on screen, free memory used by its cache */
if (old_clip && old_clip != secondary_clip && !old_clip_visible) {
BKE_movieclip_clear_cache(old_clip);
}
if (C)
WM_event_add_notifier(C, NC_MOVIECLIP | NA_SELECTED, sc->secondary_clip);
}
/* ******** split view when changing to correspondence mode ******** */
static void region_splitview_init(ScrArea *sa, ARegion *ar, SpaceClip *sc, eRegionSpaceClip_Flag flag)
{
/* set the region type so that clip_main_region_draw is aware of this */
RegionSpaceClip *rsc = ar->regiondata;
rsc->flag = flag;
/* XXX: Hack to make proper alignment decisions made in
* region_rect_recursive().
*
* This is quite bad and should ideally be addressed by the layout
* management, which currently checks whether there is enough space to fit
* both regions based on their current size.
*
* What we want to happen in the layout engine instead is for the regions
* to be properly scaled so they all fit and use the whole available
* space, similar to what happens with quad split.
*
* However, it is a can of worms on its own, so for now we just scale the
* regions in a way that they pass fitting tests in region_rect_recursive().
*
* TODO(sergey): Need to check this with the interface department!
*/
ar->sizey /= 2;
}
void ED_clip_update_correspondence_mode(bContext *C, SpaceClip *sc)
{
/* Search backward, then forward, to find the drawing region. */
ARegion *ar_origin = CTX_wm_region(C);
bool find_draw_region = false;
ARegion *ar = ar_origin;
while (ar && !find_draw_region) {
if (ar->regiontype == RGN_TYPE_WINDOW) {
find_draw_region = true;
break;
}
ar = ar->prev;
}
/* Reset to the original region and search forward. */
ar = ar_origin;
while (ar && !find_draw_region) {
if (ar->regiontype == RGN_TYPE_WINDOW) {
find_draw_region = true;
break;
}
ar = ar->next;
}
BLI_assert(find_draw_region == true && ar != NULL);
/* Rules for switching between correspondence mode and the other modes. */
if (ar->regiontype != RGN_TYPE_WINDOW) {
return;
}
else if (sc->mode != SC_MODE_CORRESPONDENCE && ar->alignment == RGN_ALIGN_VSPLIT) {
/* Exit split-view */
ScrArea *sa = CTX_wm_area(C);
ARegion *arn;
/* keep current region */
ar->alignment = 0;
for (ar = sa->regionbase.first; ar; ar = arn) {
arn = ar->next;
if (ar->alignment == RGN_ALIGN_VSPLIT) {
ED_region_exit(C, ar);
BKE_area_region_free(sa->type, ar);
BLI_remlink(&sa->regionbase, ar);
MEM_freeN(ar);
}
}
ED_area_tag_redraw(sa);
WM_event_add_notifier(C, NC_SCREEN | NA_EDITED, NULL);
}
else if (ar->next) {
return; /* Only the last region can be split. */
}
else if (sc->mode == SC_MODE_CORRESPONDENCE) {
/* Enter split-view */
ScrArea *sa = CTX_wm_area(C);
ar->alignment = RGN_ALIGN_VSPLIT;
/* copy the current ar */
ARegion *newar = BKE_area_region_copy(sa->type, ar);
BLI_addtail(&sa->regionbase, newar);
/* update split view */
if (sa->spacetype == SPACE_CLIP) {
region_splitview_init(sa, ar, sc, RSC_MAIN_CLIP);
ar = ar->next;
region_splitview_init(sa, ar, sc, RSC_SECONDARY_CLIP);
}
ED_area_tag_redraw(sa);
WM_event_add_notifier(C, NC_SCREEN | NA_EDITED, NULL);
}
}
/* ******** space clip region functions ******** */
void ED_space_clip_region_set_lock_zero(bContext *C)
{
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
rsc->xlockof = 0.f;
rsc->ylockof = 0.f;
}
/* ******** masking editing functions ******** */
Mask *ED_space_clip_get_mask(SpaceClip *sc)


@@ -40,6 +40,7 @@ struct MovieTrackingTrack;
struct Scene;
struct ScrArea;
struct SpaceClip;
struct RegionSpaceClip;
struct wmOperatorType;
/* channel heights */
@@ -94,6 +95,7 @@ void CLIP_OT_graph_disable_markers(struct wmOperatorType *ot);
/* clip_ops.c */
void CLIP_OT_open(struct wmOperatorType *ot);
void CLIP_OT_open_secondary(struct wmOperatorType *ot);
void CLIP_OT_reload(struct wmOperatorType *ot);
void CLIP_OT_view_pan(struct wmOperatorType *ot);
void CLIP_OT_view_zoom(wmOperatorType *ot);
@@ -139,7 +141,7 @@ void clip_graph_tracking_iterate(struct SpaceClip *sc, bool selected_only, bool
void clip_delete_track(struct bContext *C, struct MovieClip *clip, struct MovieTrackingTrack *track);
void clip_delete_marker(struct bContext *C, struct MovieClip *clip, struct MovieTrackingTrack *track, struct MovieTrackingMarker *marker);
void clip_view_center_to_point(SpaceClip *sc, float x, float y);
void clip_view_center_to_point(SpaceClip *sc, RegionSpaceClip *rsc, float x, float y);
void clip_draw_cfra(struct SpaceClip *sc, struct ARegion *ar, struct Scene *scene);
void clip_draw_sfra_efra(struct View2D *v2d, struct Scene *scene);
@@ -206,6 +208,11 @@ void CLIP_OT_slide_plane_marker(struct wmOperatorType *ot);
void CLIP_OT_keyframe_insert(struct wmOperatorType *ot);
void CLIP_OT_keyframe_delete(struct wmOperatorType *ot);
/* tracking_ops_correspondence */
void CLIP_OT_add_correspondence(wmOperatorType *ot);
void CLIP_OT_delete_correspondence(wmOperatorType *ot);
void CLIP_OT_solve_multiview(wmOperatorType *ot);
/* tracking_select.c */
void CLIP_OT_select(struct wmOperatorType *ot);
void CLIP_OT_select_all(struct wmOperatorType *ot);


@@ -91,24 +91,25 @@ static void sclip_zoom_set(const bContext *C, float zoom, float location[2])
SpaceClip *sc = CTX_wm_space_clip(C);
ARegion *ar = CTX_wm_region(C);
float oldzoom = sc->zoom;
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
float oldzoom = rsc->zoom;
int width, height;
sc->zoom = zoom;
rsc->zoom = zoom;
if (sc->zoom < 0.1f || sc->zoom > 4.0f) {
if (rsc->zoom < 0.1f || rsc->zoom > 4.0f) {
/* check zoom limits */
ED_space_clip_get_size(sc, &width, &height);
width *= sc->zoom;
height *= sc->zoom;
width *= rsc->zoom;
height *= rsc->zoom;
if ((width < 4) && (height < 4) && sc->zoom < oldzoom)
sc->zoom = oldzoom;
else if (BLI_rcti_size_x(&ar->winrct) <= sc->zoom)
sc->zoom = oldzoom;
else if (BLI_rcti_size_y(&ar->winrct) <= sc->zoom)
sc->zoom = oldzoom;
if ((width < 4) && (height < 4))
rsc->zoom = oldzoom;
else if (BLI_rcti_size_x(&ar->winrct) <= rsc->zoom)
rsc->zoom = oldzoom;
else if (BLI_rcti_size_y(&ar->winrct) <= rsc->zoom)
rsc->zoom = oldzoom;
}
if ((U.uiflag & USER_ZOOM_TO_MOUSEPOS) && location) {
@@ -116,25 +117,25 @@ static void sclip_zoom_set(const bContext *C, float zoom, float location[2])
ED_space_clip_get_size(sc, &width, &height);
dx = ((location[0] - 0.5f) * width - sc->xof) * (sc->zoom - oldzoom) / sc->zoom;
dy = ((location[1] - 0.5f) * height - sc->yof) * (sc->zoom - oldzoom) / sc->zoom;
dx = ((location[0] - 0.5f) * width - rsc->xof) * (rsc->zoom - oldzoom) / rsc->zoom;
dy = ((location[1] - 0.5f) * height - rsc->yof) * (rsc->zoom - oldzoom) / rsc->zoom;
if (sc->flag & SC_LOCK_SELECTION) {
sc->xlockof += dx;
sc->ylockof += dy;
rsc->xlockof += dx;
rsc->ylockof += dy;
}
else {
sc->xof += dx;
sc->yof += dy;
rsc->xof += dx;
rsc->yof += dy;
}
}
}
static void sclip_zoom_set_factor(const bContext *C, float zoomfac, float location[2])
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
sclip_zoom_set(C, sc->zoom * zoomfac, location);
sclip_zoom_set(C, rsc->zoom * zoomfac, location);
}
static void sclip_zoom_set_factor_exec(bContext *C, const wmEvent *event, float factor)
@@ -305,6 +306,133 @@ void CLIP_OT_open(wmOperatorType *ot)
WM_FILESEL_RELPATH | WM_FILESEL_FILES | WM_FILESEL_DIRECTORY, FILE_DEFAULTDISPLAY, FILE_SORT_ALPHA);
}
/******************** open a secondary clip in Correspondence mode operator ********************/
static int open_secondary_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
bScreen *screen = CTX_wm_screen(C);
Main *bmain = CTX_data_main(C);
PropertyPointerRNA *pprop;
PointerRNA idptr;
MovieClip *clip = NULL;
char str[FILE_MAX];
if (RNA_collection_length(op->ptr, "files")) {
PointerRNA fileptr;
PropertyRNA *prop;
char dir_only[FILE_MAX], file_only[FILE_MAX];
bool relative = RNA_boolean_get(op->ptr, "relative_path");
RNA_string_get(op->ptr, "directory", dir_only);
if (relative)
BLI_path_rel(dir_only, G.main->name);
prop = RNA_struct_find_property(op->ptr, "files");
RNA_property_collection_lookup_int(op->ptr, prop, 0, &fileptr);
RNA_string_get(&fileptr, "name", file_only);
BLI_join_dirfile(str, sizeof(str), dir_only, file_only);
}
else {
BKE_report(op->reports, RPT_ERROR, "No files selected to be opened");
return OPERATOR_CANCELLED;
}
/* default to frame 1 if there's no scene in context */
errno = 0;
clip = BKE_movieclip_file_add_exists(bmain, str);
if (!clip) {
if (op->customdata)
MEM_freeN(op->customdata);
BKE_reportf(op->reports, RPT_ERROR, "Cannot read '%s': %s", str,
errno ? strerror(errno) : TIP_("unsupported movie clip format"));
return OPERATOR_CANCELLED;
}
if (!op->customdata)
open_init(C, op);
/* hook into UI */
pprop = op->customdata;
if (pprop->prop) {
/* When creating new ID blocks, the user count is already 1, but the RNA
 * pointer creation also increases the user count, so this compensates for it. */
id_us_min(&clip->id);
RNA_id_pointer_create(&clip->id, &idptr);
RNA_property_pointer_set(&pprop->ptr, pprop->prop, idptr);
RNA_property_update(C, &pprop->ptr, pprop->prop);
}
else if (sc) {
ED_space_clip_set_secondary_clip(C, screen, sc, clip);
}
WM_event_add_notifier(C, NC_MOVIECLIP | NA_ADDED, clip);
MEM_freeN(op->customdata);
return OPERATOR_FINISHED;
}
static int open_secondary_invoke(bContext *C, wmOperator *op, const wmEvent *UNUSED(event))
{
SpaceClip *sc = CTX_wm_space_clip(C);
char path[FILE_MAX];
MovieClip *secondary_clip = NULL;
if (sc)
secondary_clip = ED_space_clip_get_secondary_clip(sc);
if (secondary_clip) {
BLI_strncpy(path, secondary_clip->name, sizeof(path));
BLI_path_abs(path, G.main->name);
BLI_parent_dir(path);
}
else {
BLI_strncpy(path, U.textudir, sizeof(path));
}
if (RNA_struct_property_is_set(op->ptr, "files"))
return open_secondary_exec(C, op);
if (!RNA_struct_property_is_set(op->ptr, "relative_path"))
RNA_boolean_set(op->ptr, "relative_path", (U.flag & USER_RELPATHS) != 0);
open_init(C, op);
clip_filesel(C, op, path);
return OPERATOR_RUNNING_MODAL;
}
void CLIP_OT_open_secondary(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Open Secondary Clip";
ot->description = "Load a sequence of frames or a movie file in correspondence mode";
ot->idname = "CLIP_OT_open_secondary";
/* api callbacks */
ot->exec = open_secondary_exec;
ot->invoke = open_secondary_invoke;
ot->cancel = open_cancel;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
/* properties */
WM_operator_properties_filesel(
ot, FILE_TYPE_FOLDER | FILE_TYPE_IMAGE | FILE_TYPE_MOVIE, FILE_SPECIAL, FILE_OPENFILE,
WM_FILESEL_RELPATH | WM_FILESEL_FILES | WM_FILESEL_DIRECTORY, FILE_DEFAULTDISPLAY, FILE_SORT_ALPHA);
}
/******************* reload clip operator *********************/
static int reload_exec(bContext *C, wmOperator *UNUSED(op))
@@ -344,6 +472,7 @@ typedef struct ViewPanData {
static void view_pan_init(bContext *C, wmOperator *op, const wmEvent *event)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ViewPanData *vpd;
op->customdata = vpd = MEM_callocN(sizeof(ViewPanData), "ClipViewPanData");
@@ -353,9 +482,9 @@ static void view_pan_init(bContext *C, wmOperator *op, const wmEvent *event)
vpd->y = event->y;
if (sc->flag & SC_LOCK_SELECTION)
vpd->vec = &sc->xlockof;
vpd->vec = &rsc->xlockof;
else
vpd->vec = &sc->xof;
vpd->vec = &rsc->xof;
copy_v2_v2(&vpd->xof, vpd->vec);
copy_v2_v2(&vpd->xorig, &vpd->xof);
@@ -382,17 +511,18 @@ static void view_pan_exit(bContext *C, wmOperator *op, bool cancel)
static int view_pan_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
float offset[2];
RNA_float_get_array(op->ptr, "offset", offset);
if (sc->flag & SC_LOCK_SELECTION) {
sc->xlockof += offset[0];
sc->ylockof += offset[1];
rsc->xlockof += offset[0];
rsc->ylockof += offset[1];
}
else {
sc->xof += offset[0];
sc->yof += offset[1];
rsc->xof += offset[0];
rsc->yof += offset[1];
}
ED_region_tag_redraw(CTX_wm_region(C));
@@ -403,11 +533,11 @@ static int view_pan_exec(bContext *C, wmOperator *op)
static int view_pan_invoke(bContext *C, wmOperator *op, const wmEvent *event)
{
if (event->type == MOUSEPAN) {
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
float offset[2];
offset[0] = (event->prevx - event->x) / sc->zoom;
offset[1] = (event->prevy - event->y) / sc->zoom;
offset[0] = (event->prevx - event->x) / rsc->zoom;
offset[1] = (event->prevy - event->y) / rsc->zoom;
RNA_float_set_array(op->ptr, "offset", offset);
@@ -424,15 +554,15 @@ static int view_pan_invoke(bContext *C, wmOperator *op, const wmEvent *event)
static int view_pan_modal(bContext *C, wmOperator *op, const wmEvent *event)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ViewPanData *vpd = op->customdata;
float offset[2];
switch (event->type) {
case MOUSEMOVE:
copy_v2_v2(vpd->vec, &vpd->xorig);
offset[0] = (vpd->x - event->x) / sc->zoom;
offset[1] = (vpd->y - event->y) / sc->zoom;
offset[0] = (vpd->x - event->x) / rsc->zoom;
offset[1] = (vpd->y - event->y) / rsc->zoom;
RNA_float_set_array(op->ptr, "offset", offset);
view_pan_exec(C, op);
break;
@@ -497,6 +627,7 @@ typedef struct ViewZoomData {
static void view_zoom_init(bContext *C, wmOperator *op, const wmEvent *event)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
ViewZoomData *vpd;
@@ -511,7 +642,7 @@ static void view_zoom_init(bContext *C, wmOperator *op, const wmEvent *event)
vpd->x = event->x;
vpd->y = event->y;
vpd->zoom = sc->zoom;
vpd->zoom = rsc->zoom;
vpd->event_type = event->type;
ED_clip_mouse_pos(sc, ar, event->mval, vpd->location);
@@ -521,11 +652,11 @@ static void view_zoom_init(bContext *C, wmOperator *op, const wmEvent *event)
static void view_zoom_exit(bContext *C, wmOperator *op, bool cancel)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ViewZoomData *vpd = op->customdata;
if (cancel) {
sc->zoom = vpd->zoom;
rsc->zoom = vpd->zoom;
ED_region_tag_redraw(CTX_wm_region(C));
}
@@ -578,7 +709,7 @@ static void view_zoom_apply(bContext *C,
float factor;
if (U.viewzoom == USER_ZOOM_CONT) {
SpaceClip *sclip = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
double time = PIL_check_seconds_timer();
float time_step = (float)(time - vpd->timer_lastdraw);
float fac;
@@ -597,7 +728,7 @@ static void view_zoom_apply(bContext *C,
zfac = 1.0f + ((fac / 20.0f) * time_step);
vpd->timer_lastdraw = time;
factor = (sclip->zoom * zfac) / vpd->zoom;
factor = (rsc->zoom * zfac) / vpd->zoom;
}
else {
float delta = event->x - vpd->x + event->y - vpd->y;
@@ -766,13 +897,13 @@ void CLIP_OT_view_zoom_out(wmOperatorType *ot)
static int view_zoom_ratio_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
sclip_zoom_set(C, RNA_float_get(op->ptr, "ratio"), NULL);
/* ensure pixel exact locations for draw */
sc->xof = (int) sc->xof;
sc->yof = (int) sc->yof;
rsc->xof = (int) rsc->xof;
rsc->yof = (int) rsc->yof;
ED_region_tag_redraw(CTX_wm_region(C));
@@ -808,6 +939,7 @@ static int view_all_exec(bContext *C, wmOperator *op)
/* retrieve state */
sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ar = CTX_wm_region(C);
ED_space_clip_get_size(sc, &w, &h);
@@ -840,7 +972,7 @@ static int view_all_exec(bContext *C, wmOperator *op)
sclip_zoom_set(C, 1.0f, NULL);
}
sc->xof = sc->yof = 0.0f;
rsc->xof = rsc->yof = 0.0f;
ED_region_tag_redraw(ar);
@@ -869,11 +1001,11 @@ void CLIP_OT_view_all(wmOperatorType *ot)
static int view_selected_exec(bContext *C, wmOperator *UNUSED(op))
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
sc->xlockof = 0.0f;
sc->ylockof = 0.0f;
rsc->xlockof = 0.0f;
rsc->ylockof = 0.0f;
ED_clip_view_selection(C, ar, 1);
ED_region_tag_redraw(ar);
@@ -916,6 +1048,10 @@ static void change_frame_apply(bContext *C, wmOperator *op)
/* do updates */
BKE_sound_seek_scene(CTX_data_main(C), scene);
WM_event_add_notifier(C, NC_SCENE | ND_FRAME, scene);
SpaceClip *sc = CTX_wm_space_clip(C);
BKE_movieclip_user_set_frame(&sc->user, RNA_int_get(op->ptr, "frame"));
WM_event_add_notifier(C, NC_MOVIECLIP | ND_DISPLAY, NULL);
}
static int change_frame_exec(bContext *C, wmOperator *op)
@@ -1419,7 +1555,7 @@ static int clip_view_ndof_invoke(bContext *C, wmOperator *UNUSED(op), const wmEv
if (event->type != NDOF_MOTION)
return OPERATOR_CANCELLED;
else {
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
float pan_vec[3];
@@ -1428,12 +1564,12 @@ static int clip_view_ndof_invoke(bContext *C, wmOperator *UNUSED(op), const wmEv
WM_event_ndof_pan_get(ndof, pan_vec, true);
mul_v2_fl(pan_vec, (speed * ndof->dt) / sc->zoom);
mul_v2_fl(pan_vec, (speed * ndof->dt) / rsc->zoom);
pan_vec[2] *= -ndof->dt;
sclip_zoom_set_factor(C, 1.0f + pan_vec[2], NULL);
sc->xof += pan_vec[0];
sc->yof += pan_vec[1];
rsc->xof += pan_vec[0];
rsc->yof += pan_vec[1];
ED_region_tag_redraw(ar);


@@ -224,7 +224,7 @@ void clip_delete_marker(bContext *C, MovieClip *clip, MovieTrackingTrack *track,
}
}
void clip_view_center_to_point(SpaceClip *sc, float x, float y)
void clip_view_center_to_point(SpaceClip *sc, RegionSpaceClip *rsc, float x, float y)
{
int width, height;
float aspx, aspy;
@@ -232,8 +232,8 @@ void clip_view_center_to_point(SpaceClip *sc, float x, float y)
ED_space_clip_get_size(sc, &width, &height);
ED_space_clip_get_aspect(sc, &aspx, &aspy);
sc->xof = (x - 0.5f) * width * aspx;
sc->yof = (y - 0.5f) * height * aspy;
rsc->xof = (x - 0.5f) * width * aspx;
rsc->yof = (y - 0.5f) * height * aspy;
}
void clip_draw_cfra(SpaceClip *sc, ARegion *ar, Scene *scene)


@@ -236,7 +236,6 @@ static SpaceLink *clip_new(const bContext *C)
sc->spacetype = SPACE_CLIP;
sc->flag = SC_SHOW_MARKER_PATTERN | SC_SHOW_TRACK_PATH |
SC_SHOW_GRAPH_TRACKS_MOTION | SC_SHOW_GRAPH_FRAMES | SC_SHOW_GPENCIL;
sc->zoom = 1.0f;
sc->path_length = 20;
sc->scopes.track_preview_height = 120;
sc->around = V3D_AROUND_LOCAL_ORIGINS;
@@ -291,6 +290,12 @@ static SpaceLink *clip_new(const bContext *C)
BLI_addtail(&sc->regionbase, ar);
ar->regiontype = RGN_TYPE_WINDOW;
/* region data for main region */
RegionSpaceClip *rsc = MEM_callocN(sizeof(RegionSpaceClip), "region data for clip");
rsc->zoom = 1.0f;
rsc->flag = RSC_MAIN_CLIP;
ar->regiondata = rsc;
return (SpaceLink *) sc;
}
@@ -418,6 +423,7 @@ static void clip_operatortypes(void)
{
/* ** clip_ops.c ** */
WM_operatortype_append(CLIP_OT_open);
WM_operatortype_append(CLIP_OT_open_secondary);
WM_operatortype_append(CLIP_OT_reload);
WM_operatortype_append(CLIP_OT_view_pan);
WM_operatortype_append(CLIP_OT_view_zoom);
@@ -519,6 +525,11 @@ static void clip_operatortypes(void)
WM_operatortype_append(CLIP_OT_keyframe_insert);
WM_operatortype_append(CLIP_OT_keyframe_delete);
/* Correspondence */
WM_operatortype_append(CLIP_OT_add_correspondence);
WM_operatortype_append(CLIP_OT_delete_correspondence);
WM_operatortype_append(CLIP_OT_solve_multiview);
/* ** clip_graph_ops.c ** */
/* graph editing */
@@ -551,6 +562,8 @@ static void clip_keymap(struct wmKeyConfig *keyconf)
keymap = WM_keymap_find(keyconf, "Clip", SPACE_CLIP, 0);
WM_keymap_add_item(keymap, "CLIP_OT_open", OKEY, KM_PRESS, KM_ALT, 0);
//TODO(tianwei): think about a shortcut for open_secondary clip
//WM_keymap_add_item(keymap, "CLIP_OT_open_secondary", OKEY, KM_PRESS, KM_ALT, 0);
WM_keymap_add_item(keymap, "CLIP_OT_tools", TKEY, KM_PRESS, 0, 0);
WM_keymap_add_item(keymap, "CLIP_OT_properties", NKEY, KM_PRESS, 0, 0);
@@ -876,6 +889,8 @@ static void clip_dropboxes(void)
ListBase *lb = WM_dropboxmap_find("Clip", SPACE_CLIP, 0);
WM_dropbox_add(lb, "CLIP_OT_open", clip_drop_poll, clip_drop_copy);
//TODO(tianwei): think about dropbox for secondary clip
//WM_dropbox_add(lb, "CLIP_OT_open_secondary", clip_drop_poll, clip_drop_copy);
}
static void clip_refresh(const bContext *C, ScrArea *sa)
@@ -1085,6 +1100,7 @@ static void clip_refresh(const bContext *C, ScrArea *sa)
static void movieclip_main_area_set_view2d(const bContext *C, ARegion *ar)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = (RegionSpaceClip *)ar->regiondata;
float x1, y1, w, h, aspx, aspy;
int width, height, winx, winy;
@@ -1107,19 +1123,19 @@ static void movieclip_main_area_set_view2d(const bContext *C, ARegion *ar)
ar->v2d.mask.ymax = winy;
/* which part of the image space do we see? */
x1 = ar->winrct.xmin + (winx - sc->zoom * w) / 2.0f;
y1 = ar->winrct.ymin + (winy - sc->zoom * h) / 2.0f;
x1 = ar->winrct.xmin + (winx - rsc->zoom * w) / 2.0f;
y1 = ar->winrct.ymin + (winy - rsc->zoom * h) / 2.0f;
x1 -= sc->zoom * sc->xof;
y1 -= sc->zoom * sc->yof;
x1 -= rsc->zoom * rsc->xof;
y1 -= rsc->zoom * rsc->yof;
/* relative display right */
ar->v2d.cur.xmin = (ar->winrct.xmin - (float)x1) / sc->zoom;
ar->v2d.cur.xmax = ar->v2d.cur.xmin + ((float)winx / sc->zoom);
ar->v2d.cur.xmin = (ar->winrct.xmin - (float)x1) / rsc->zoom;
ar->v2d.cur.xmax = ar->v2d.cur.xmin + ((float)winx / rsc->zoom);
/* relative display left */
ar->v2d.cur.ymin = (ar->winrct.ymin - (float)y1) / sc->zoom;
ar->v2d.cur.ymax = ar->v2d.cur.ymin + ((float)winy / sc->zoom);
ar->v2d.cur.ymin = (ar->winrct.ymin - (float)y1) / rsc->zoom;
ar->v2d.cur.ymax = ar->v2d.cur.ymin + ((float)winy / rsc->zoom);
/* normalize 0.0..1.0 */
ar->v2d.cur.xmin /= w;
@@ -1151,7 +1167,8 @@ static void clip_main_region_draw(const bContext *C, ARegion *ar)
{
/* draw entirely, view changes should be handled here */
SpaceClip *sc = CTX_wm_space_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
float aspx, aspy, zoomx, zoomy, x, y;
int width, height;
bool show_cursor = false;
@@ -1165,12 +1182,12 @@ static void clip_main_region_draw(const bContext *C, ARegion *ar)
ImBuf *tmpibuf = NULL;
if (clip && clip->tracking.stabilization.flag & TRACKING_2D_STABILIZATION) {
tmpibuf = ED_space_clip_get_stable_buffer(sc, NULL, NULL, NULL);
tmpibuf = ED_space_clip_get_stable_buffer(sc, ar, NULL, NULL, NULL);
}
if (ED_clip_view_selection(C, ar, 0)) {
sc->xof += sc->xlockof;
sc->yof += sc->ylockof;
rsc->xof += rsc->xlockof;
rsc->yof += rsc->ylockof;
}
if (tmpibuf)
@@ -1251,6 +1268,16 @@ static void clip_main_region_listener(bScreen *UNUSED(sc), ScrArea *UNUSED(sa),
}
}
static void clip_main_region_free(ARegion *ar)
{
RegionSpaceClip *rsc = ar->regiondata;
if (rsc) {
MEM_freeN(rsc);
ar->regiondata = NULL;
}
}
/****************** preview region ******************/
static void clip_preview_region_init(wmWindowManager *wm, ARegion *ar)
@@ -1555,6 +1582,7 @@ void ED_spacetype_clip(void)
art->init = clip_main_region_init;
art->draw = clip_main_region_draw;
art->listener = clip_main_region_listener;
art->free = clip_main_region_free;
art->keymapflag = ED_KEYMAP_FRAMES | ED_KEYMAP_UI | ED_KEYMAP_GPENCIL;
BLI_addhead(&st->regiontypes, art);


@@ -93,6 +93,7 @@ static bool add_marker(const bContext *C, float x, float y)
static int add_marker_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
float pos[2];
@@ -105,8 +106,8 @@ static int add_marker_exec(bContext *C, wmOperator *op)
/* Reset offset from locked position, so frame jumping wouldn't be so
* confusing.
*/
sc->xlockof = 0;
sc->ylockof = 0;
rsc->xlockof = 0;
rsc->ylockof = 0;
WM_event_add_notifier(C, NC_MOVIECLIP | NA_EDITED, clip);
@@ -594,6 +595,7 @@ MovieTrackingTrack *tracking_marker_check_slide(bContext *C,
{
const float distance_clip_squared = 12.0f * 12.0f;
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
@@ -724,7 +726,7 @@ MovieTrackingTrack *tracking_marker_check_slide(bContext *C,
track = track->next;
}
if (global_min_distance_squared < distance_clip_squared / sc->zoom) {
if (global_min_distance_squared < distance_clip_squared / rsc->zoom) {
if (area_r) {
*area_r = min_area;
}
@@ -856,6 +858,7 @@ static void free_slide_data(SlideMarkerData *data)
static int slide_marker_modal(bContext *C, wmOperator *op, const wmEvent *event)
{
SpaceClip *sc = CTX_wm_space_clip(C);
RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
SlideMarkerData *data = (SlideMarkerData *)op->customdata;
@@ -881,13 +884,13 @@ static int slide_marker_modal(bContext *C, wmOperator *op, const wmEvent *event)
mdelta[0] = event->mval[0] - data->mval[0];
mdelta[1] = event->mval[1] - data->mval[1];
dx = mdelta[0] / data->width / sc->zoom;
dx = mdelta[0] / data->width / rsc->zoom;
if (data->lock) {
dy = -dx / data->height * data->width;
}
else {
dy = mdelta[1] / data->height / sc->zoom;
dy = mdelta[1] / data->height / rsc->zoom;
}
if (data->accurate) {


@@ -0,0 +1,460 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2016 Blender Foundation.
* All rights reserved.
*
*
* Contributor(s): Blender Foundation,
* Tianwei Shen
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file blender/editors/space_clip/tracking_ops_correspondence.c
* \ingroup spclip
*/
#include "MEM_guardedalloc.h"
#include "DNA_screen_types.h"
#include "DNA_space_types.h"
#include "DNA_camera_types.h"
#include "DNA_object_types.h"
#include "BLI_utildefines.h"
#include "BLI_ghash.h"
#include "BLI_math.h"
#include "BLI_blenlib.h"
#include "BLI_string.h"
#include "BKE_main.h"
#include "BKE_context.h"
#include "BKE_movieclip.h"
#include "BKE_tracking.h"
#include "BKE_depsgraph.h"
#include "BKE_report.h"
#include "BKE_global.h"
#include "BKE_library.h"
#include "WM_api.h"
#include "WM_types.h"
#include "ED_screen.h"
#include "ED_clip.h"
#include "RNA_access.h"
#include "RNA_define.h"
#include "BLT_translation.h"
#include "clip_intern.h"
#include "tracking_ops_intern.h"
/********************** add correspondence operator *********************/
/* Return the single selected track; if zero or more than one track is selected, return NULL. */
static MovieTrackingTrack *get_single_track(SpaceClip *sc, ListBase *tracksbase)
{
int num_selected_tracks = 0;
MovieTrackingTrack *selected_track = NULL;
for (MovieTrackingTrack *track = tracksbase->first; track; track = track->next) {
if (TRACK_VIEW_SELECTED(sc, track)) {
selected_track = track;
num_selected_tracks++;
}
}
if (num_selected_tracks == 1) {
return selected_track;
}
return NULL;
}
static int add_correspondence_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
/* get primary clip */
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieTracking *tracking = &clip->tracking;
ListBase *tracksbase = BKE_tracking_get_active_tracks(tracking);
/* get one track from each clip and link them */
MovieTrackingTrack *primary_track = NULL, *witness_track = NULL;
/* Get the single selected track in the primary camera. */
primary_track = get_single_track(sc, tracksbase);
/* Get the single selected track in the witness camera; only one witness camera is allowed. */
MovieClip *second_clip = ED_space_clip_get_secondary_clip(sc);
MovieTracking *second_tracking = &second_clip->tracking;
ListBase *second_tracksbase = BKE_tracking_get_active_tracks(second_tracking);
witness_track = get_single_track(sc, second_tracksbase);
if (!primary_track || !witness_track) {
BKE_report(op->reports, RPT_ERROR, "Select exactly one track in each clip");
return OPERATOR_CANCELLED;
}
/* Add this correspondence. */
char error_msg[256] = "\0";
if (!BKE_tracking_correspondence_add(&(tracking->correspondences), primary_track, witness_track,
clip, second_clip, error_msg, sizeof(error_msg))) {
if (error_msg[0])
BKE_report(op->reports, RPT_ERROR, error_msg);
return OPERATOR_CANCELLED;
}
return OPERATOR_FINISHED;
}
void CLIP_OT_add_correspondence(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Add Correspondence";
ot->idname = "CLIP_OT_add_correspondence";
ot->description = "Add correspondence between primary camera and witness camera";
/* api callbacks */
ot->exec = add_correspondence_exec;
ot->poll = ED_space_clip_correspondence_poll;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}
/********************** delete correspondence operator *********************/
static int delete_correspondence_exec(bContext *C, wmOperator *UNUSED(op))
{
SpaceClip *sc = CTX_wm_space_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieTracking *tracking = &clip->tracking;
bool changed = false;
/* Remove track correspondences from correspondence base */
MovieTrackingObject *object = BKE_tracking_object_get_active(tracking);
ListBase *correspondence_base = &tracking->correspondences;
for (MovieTrackingCorrespondence *corr = correspondence_base->first, *corr_next;
     corr != NULL;
     corr = corr_next)
{
/* Store the next link first, since corr may be freed below. */
corr_next = corr->next;
MovieTrackingTrack *track;
track = BKE_tracking_track_get_named(tracking, object, corr->self_track_name);
if (track && TRACK_VIEW_SELECTED(sc, track)) {
BLI_freelinkN(correspondence_base, corr);
changed = true;
}
}
/* Nothing selected now, unlock view so it can be scrolled nice again. */
sc->flag &= ~SC_LOCK_SELECTION;
if (changed) {
WM_event_add_notifier(C, NC_MOVIECLIP | NA_EDITED, clip);
}
return OPERATOR_FINISHED;
}
void CLIP_OT_delete_correspondence(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Delete Correspondence";
ot->idname = "CLIP_OT_delete_correspondence";
ot->description = "Delete selected track correspondence between primary and witness camera";
/* api callbacks */
ot->invoke = WM_operator_confirm;
ot->exec = delete_correspondence_exec;
ot->poll = ED_space_clip_correspondence_poll;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}
/********************** solve multiview operator *********************/
typedef struct {
Scene *scene;
int clip_num; /* The number of active clips for multi-view reconstruction. */
MovieClip **clips; /* Array of pointers to all active clips. */
MovieClipUser user;
ReportList *reports;
char stats_message[256];
struct MovieMultiviewReconstructContext *context;
} SolveMultiviewJob;
/* Initialize the multiview reconstruction solve,
 * which is assumed to be triggered only from the primary clip.
 */
static bool solve_multiview_initjob(bContext *C,
SolveMultiviewJob *smj,
wmOperator *op,
char *error_msg,
int max_error)
{
SpaceClip *sc = CTX_wm_space_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
Scene *scene = CTX_data_scene(C);
MovieTracking *tracking = &clip->tracking;
MovieTrackingObject *object = BKE_tracking_object_get_active(tracking);
int width, height;
/* Count all open clips; the primary clip always comes first.
 * Iterate over Main.movieclip to get the open clips.
 */
Main *main = CTX_data_main(C);
ListBase *mc_base = &(main->movieclip);
smj->clip_num = BLI_listbase_count(mc_base);
printf("%d open clips for reconstruction\n", smj->clip_num);
smj->clips = MEM_callocN(smj->clip_num * sizeof(MovieClip*), "multiview clip pointers");
smj->clips[0] = clip;
/* Fill in witness clips from Main.movieclip; the primary clip stays first. */
int mc_counter = 1;
for (Link *link = main->movieclip.first;
link != NULL;
link = link->next)
{
MovieClip *mc_link = (MovieClip *)link;
if (mc_link != smj->clips[0]) {
smj->clips[mc_counter++] = mc_link;
}
}
BLI_assert(mc_counter == smj->clip_num);
if (!BKE_tracking_multiview_reconstruction_check(smj->clips,
object,
error_msg,
max_error))
{
return false;
}
/* Could fail if footage uses images with different sizes. */
BKE_movieclip_get_size(clip, &sc->user, &width, &height);
smj->scene = scene;
smj->reports = op->reports;
smj->user = sc->user;
/* Create the multiview reconstruction context and pass the tracks and markers to libmv. */
smj->context = BKE_tracking_multiview_reconstruction_context_new(smj->clips,
smj->clip_num,
object,
object->keyframe1,
object->keyframe2,
width,
height);
printf("new multiview reconstruction context\n");
tracking->stats = MEM_callocN(sizeof(MovieTrackingStats), "solve multiview stats");
return true;
}
static void solve_multiview_updatejob(void *scv)
{
SolveMultiviewJob *smj = (SolveMultiviewJob *)scv;
MovieClip *primary_clip = smj->clips[0];
MovieTracking *tracking = &primary_clip->tracking;
BLI_strncpy(tracking->stats->message,
smj->stats_message,
sizeof(tracking->stats->message));
}
static void solve_multiview_startjob(void *scv, short *stop, short *do_update, float *progress)
{
SolveMultiviewJob *smj = (SolveMultiviewJob *)scv;
BKE_tracking_multiview_reconstruction_solve(smj->context,
stop,
do_update,
progress,
smj->stats_message,
sizeof(smj->stats_message));
}
/* TODO(tianwei): not sure about the scene for witness cameras, check with Sergey. */
static void solve_multiview_freejob(void *scv)
{
SolveMultiviewJob *smj = (SolveMultiviewJob *)scv;
MovieClip *clip = smj->clips[0]; /* primary camera */
MovieTracking *tracking = &clip->tracking;
Scene *scene = smj->scene;
int solved;
if (!smj->context) {
/* Job wasn't fully initialized due to some error. */
MEM_freeN(smj);
return;
}
solved = BKE_tracking_multiview_reconstruction_finish(smj->context, smj->clips);
if (!solved) {
BKE_report(smj->reports,
RPT_WARNING,
"Some data failed to reconstruct (see console for details)");
}
else {
BKE_reportf(smj->reports,
RPT_INFO,
"Average re-projection error: %.3f",
tracking->reconstruction.error);
}
/* Set the currently solved primary clip as active for scene. */
if (scene->clip != NULL) {
id_us_min(&clip->id);
}
scene->clip = clip;
id_us_plus(&clip->id);
/* Set blender camera focal length so result would look fine there. */
if (scene->camera != NULL &&
scene->camera->data &&
GS(((ID *) scene->camera->data)->name) == ID_CA) {
Camera *camera = (Camera *)scene->camera->data;
int width, height;
BKE_movieclip_get_size(clip, &smj->user, &width, &height);
BKE_tracking_camera_to_blender(tracking, scene, camera, width, height);
WM_main_add_notifier(NC_OBJECT, camera);
}
MEM_freeN(tracking->stats);
tracking->stats = NULL;
DAG_id_tag_update(&clip->id, 0);
WM_main_add_notifier(NC_MOVIECLIP | NA_EVALUATED, clip);
WM_main_add_notifier(NC_OBJECT | ND_TRANSFORM, NULL);
/* Update active clip displayed in scene buttons. */
WM_main_add_notifier(NC_SCENE, scene);
BKE_tracking_multiview_reconstruction_context_free(smj->context);
MEM_freeN(smj);
}
static int solve_multiview_exec(bContext *C, wmOperator *op)
{
SolveMultiviewJob *scj;
char error_msg[256] = "\0";
scj = MEM_callocN(sizeof(SolveMultiviewJob), "SolveMultiviewJob data");
if (!solve_multiview_initjob(C, scj, op, error_msg, sizeof(error_msg))) {
if (error_msg[0]) {
BKE_report(op->reports, RPT_ERROR, error_msg);
}
solve_multiview_freejob(scj);
return OPERATOR_CANCELLED;
}
solve_multiview_startjob(scj, NULL, NULL, NULL);
solve_multiview_freejob(scj);
return OPERATOR_FINISHED;
}
static int solve_multiview_invoke(bContext *C,
wmOperator *op,
const wmEvent *UNUSED(event))
{
SolveMultiviewJob *scj;
ScrArea *sa = CTX_wm_area(C);
SpaceClip *sc = CTX_wm_space_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieTracking *tracking = &clip->tracking;
MovieTrackingReconstruction *reconstruction =
BKE_tracking_get_active_reconstruction(tracking);
wmJob *wm_job;
char error_msg[256] = "\0";
if (WM_jobs_test(CTX_wm_manager(C), sa, WM_JOB_TYPE_ANY)) {
/* only one solve is allowed at a time */
return OPERATOR_CANCELLED;
}
scj = MEM_callocN(sizeof(SolveMultiviewJob), "SolveMultiviewJob data");
if (!solve_multiview_initjob(C, scj, op, error_msg, sizeof(error_msg))) {
if (error_msg[0]) {
BKE_report(op->reports, RPT_ERROR, error_msg);
}
solve_multiview_freejob(scj);
return OPERATOR_CANCELLED;
}
BLI_strncpy(tracking->stats->message,
"Solving multiview | Preparing solve",
sizeof(tracking->stats->message));
/* Hide reconstruction statistics from previous solve. */
reconstruction->flag &= ~TRACKING_RECONSTRUCTED;
WM_event_add_notifier(C, NC_MOVIECLIP | NA_EVALUATED, clip);
/* Setup job. */
wm_job = WM_jobs_get(CTX_wm_manager(C), CTX_wm_window(C), sa, "Solve Multi-view",
WM_JOB_PROGRESS, WM_JOB_TYPE_CLIP_SOLVE_CAMERA);
WM_jobs_customdata_set(wm_job, scj, solve_multiview_freejob);
WM_jobs_timer(wm_job, 0.1, NC_MOVIECLIP | NA_EVALUATED, 0);
WM_jobs_callbacks(wm_job,
solve_multiview_startjob,
NULL,
solve_multiview_updatejob,
NULL);
G.is_break = false;
WM_jobs_start(CTX_wm_manager(C), wm_job);
WM_cursor_wait(0);
/* add modal handler for ESC */
WM_event_add_modal_handler(C, op);
return OPERATOR_RUNNING_MODAL;
}
static int solve_multiview_modal(bContext *C,
wmOperator *UNUSED(op),
const wmEvent *event)
{
/* No running solver, remove handler and pass through. */
if (0 == WM_jobs_test(CTX_wm_manager(C), CTX_wm_area(C), WM_JOB_TYPE_ANY))
return OPERATOR_FINISHED | OPERATOR_PASS_THROUGH;
/* Running solver. */
switch (event->type) {
case ESCKEY:
return OPERATOR_RUNNING_MODAL;
}
return OPERATOR_PASS_THROUGH;
}
void CLIP_OT_solve_multiview(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Solve multi-view reconstruction";
ot->idname = "CLIP_OT_solve_multiview";
ot->description = "Solve multiview reconstruction";
/* api callbacks */
ot->exec = solve_multiview_exec;
ot->invoke = solve_multiview_invoke;
ot->modal = solve_multiview_modal;
ot->poll = ED_space_clip_correspondence_poll;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}

View File

@@ -138,6 +138,7 @@ static MovieTrackingPlaneTrack *tracking_plane_marker_check_slide(
{
const float distance_clip_squared = 12.0f * 12.0f;
SpaceClip *sc = CTX_wm_space_clip(C);
+RegionSpaceClip *rsc = CTX_wm_region_clip(C);
ARegion *ar = CTX_wm_region(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
MovieTracking *tracking = &clip->tracking;
@@ -180,7 +181,7 @@ static MovieTrackingPlaneTrack *tracking_plane_marker_check_slide(
}
}
-if (min_distance_squared < distance_clip_squared / sc->zoom) {
+if (min_distance_squared < distance_clip_squared / rsc->zoom) {
if (corner_r != NULL) {
*corner_r = min_corner;
}
@@ -286,6 +287,7 @@ static int slide_plane_marker_modal(bContext *C,
const wmEvent *event)
{
SpaceClip *sc = CTX_wm_space_clip(C);
+RegionSpaceClip *rsc = CTX_wm_region_clip(C);
MovieClip *clip = ED_space_clip_get_clip(sc);
SlidePlaneMarkerData *data = (SlidePlaneMarkerData *) op->customdata;
float dx, dy, mdelta[2];
@@ -307,8 +309,8 @@ static int slide_plane_marker_modal(bContext *C,
mdelta[0] = event->mval[0] - data->previous_mval[0];
mdelta[1] = event->mval[1] - data->previous_mval[1];
-dx = mdelta[0] / data->width / sc->zoom;
-dy = mdelta[1] / data->height / sc->zoom;
+dx = mdelta[0] / data->width / rsc->zoom;
+dy = mdelta[1] / data->height / rsc->zoom;
if (data->accurate) {
dx /= 5.0f;

View File

@@ -270,7 +270,9 @@ void ed_tracking_delect_all_plane_tracks(ListBase *plane_tracks_base)
static int mouse_select(bContext *C, float co[2], int extend)
{
SpaceClip *sc = CTX_wm_space_clip(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+ARegion *ar = CTX_wm_region(C);
+RegionSpaceClip *rsc = CTX_wm_region_clip(C);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
ListBase *tracksbase = BKE_tracking_get_active_tracks(tracking);
ListBase *plane_tracks_base = BKE_tracking_get_active_plane_tracks(tracking);
@@ -339,8 +341,8 @@ static int mouse_select(bContext *C, float co[2], int extend)
}
if (!extend) {
-sc->xlockof = 0.0f;
-sc->ylockof = 0.0f;
+rsc->xlockof = 0.0f;
+rsc->ylockof = 0.0f;
}
BKE_tracking_dopesheet_tag_update(tracking);
@@ -384,7 +386,7 @@ static int select_invoke(bContext *C, wmOperator *op, const wmEvent *event)
MovieTrackingTrack *track = tracking_marker_check_slide(C, event, NULL, NULL, NULL);
if (track) {
-MovieClip *clip = ED_space_clip_get_clip(sc);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
clip->tracking.act_track = track;
@@ -429,7 +431,7 @@ static int border_select_exec(bContext *C, wmOperator *op)
SpaceClip *sc = CTX_wm_space_clip(C);
ARegion *ar = CTX_wm_region(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
MovieTrackingTrack *track;
MovieTrackingPlaneTrack *plane_track;
@@ -539,7 +541,7 @@ static int do_lasso_select_marker(bContext *C, const int mcords[][2], const shor
SpaceClip *sc = CTX_wm_space_clip(C);
ARegion *ar = CTX_wm_region(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
MovieTrackingTrack *track;
MovieTrackingPlaneTrack *plane_track;
@@ -684,7 +686,7 @@ static int circle_select_exec(bContext *C, wmOperator *op)
SpaceClip *sc = CTX_wm_space_clip(C);
ARegion *ar = CTX_wm_region(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
MovieTrackingTrack *track;
MovieTrackingPlaneTrack *plane_track;
@@ -793,7 +795,8 @@ void CLIP_OT_select_circle(wmOperatorType *ot)
static int select_all_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+ARegion *ar = CTX_wm_region(C);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTracking *tracking = &clip->tracking;
MovieTrackingTrack *track = NULL; /* selected track */
MovieTrackingPlaneTrack *plane_track = NULL; /* selected plane track */
@@ -912,7 +915,8 @@ void CLIP_OT_select_all(wmOperatorType *ot)
static int select_groped_exec(bContext *C, wmOperator *op)
{
SpaceClip *sc = CTX_wm_space_clip(C);
-MovieClip *clip = ED_space_clip_get_clip(sc);
+ARegion *ar = CTX_wm_region(C);
+MovieClip *clip = ED_space_clip_get_clip_in_region(sc, ar);
MovieTrackingTrack *track;
MovieTrackingMarker *marker;
MovieTracking *tracking = &clip->tracking;

View File

@@ -1262,12 +1262,13 @@ typedef struct SpaceClip {
ListBase regionbase; /* storage of regions for inactive spaces */
int spacetype;
-float xof, yof; /* user defined offset, image is centered */
-float xlockof, ylockof; /* user defined offset from locked position */
-float zoom; /* user defined zoom level */
+float xof DNA_DEPRECATED, yof DNA_DEPRECATED; /* user defined offset, image is centered */
+float xlockof DNA_DEPRECATED, ylockof DNA_DEPRECATED; /* user defined offset from locked position */
+float zoom DNA_DEPRECATED; /* user defined zoom level */
struct MovieClipUser user; /* user of clip */
struct MovieClip *clip; /* clip data */
+struct MovieClip *secondary_clip; /* secondary clip data */
struct MovieClipScopes scopes; /* different scoped displayed in space panels */
int flag; /* flags */
@@ -1295,6 +1296,19 @@ typedef struct SpaceClip {
MaskSpaceInfo mask_info;
} SpaceClip;
+typedef enum eRegionSpaceClip_Flag {
+RSC_MAIN_CLIP = (1 << 0),
+RSC_SECONDARY_CLIP = (1 << 1),
+} eRegionSpaceClip_Flag;
+/* region-related settings for Clip Editor */
+typedef struct RegionSpaceClip {
+float xof, yof; /* user defined offset, image is centered */
+float xlockof, ylockof; /* user defined offset from locked position */
+float zoom; /* user defined zoom level */
+int flag; /* region type (main clip/secondary_clip), used in correspondence mode */
+} RegionSpaceClip;
/* SpaceClip->flag */
typedef enum eSpaceClip_Flag {
SC_SHOW_MARKER_PATTERN = (1 << 0),
@@ -1328,6 +1342,7 @@ typedef enum eSpaceClip_Mode {
/*SC_MODE_RECONSTRUCTION = 1,*/ /* DEPRECATED */
/*SC_MODE_DISTORTION = 2,*/ /* DEPRECATED */
SC_MODE_MASKEDIT = 3,
+SC_MODE_CORRESPONDENCE = 4,
} eSpaceClip_Mode;
/* SpaceClip->view */

View File

@@ -44,10 +44,12 @@
struct bGPdata;
struct Image;
struct MovieClip;
struct MovieReconstructedCamera;
struct MovieTrackingCamera;
struct MovieTrackingMarker;
struct MovieTrackingTrack;
+struct MovieTrackingCorrespondence;
struct MovieTracking;
typedef struct MovieReconstructedCamera {
@@ -164,6 +166,18 @@ typedef struct MovieTrackingTrack {
float weight_stab;
} MovieTrackingTrack;
+typedef struct MovieTrackingCorrespondence {
+struct MovieTrackingCorrespondence *next, *prev;
+char name[64]; /* MAX_NAME */
+char self_track_name[64];
+char other_track_name[64];
+struct MovieClip *self_clip;
+struct MovieClip *other_clip;
+} MovieTrackingCorrespondence;
typedef struct MovieTrackingPlaneMarker {
/* Corners of the plane in the following order:
*
@@ -348,6 +362,7 @@ typedef struct MovieTracking {
MovieTrackingCamera camera; /* camera intrinsics */
ListBase tracks; /* list of tracks used for camera object */
ListBase plane_tracks; /* list of plane tracks used by camera object */
+ListBase correspondences; /* list of correspondence for multi-view support */
MovieTrackingReconstruction reconstruction; /* reconstruction data for camera object */
MovieTrackingStabilization stabilization; /* stabilization data */
MovieTrackingTrack *act_track; /* active track */

View File

@@ -298,6 +298,7 @@ typedef struct ThemeSpace {
char clipping_border_3d[4];
char marker_outline[4], marker[4], act_marker[4], sel_marker[4], dis_marker[4], lock_marker[4];
+char linked_marker[4], sel_linked_marker[4]; /* cross-clip linked markers and selected linked markers */
char bundle_solid[4];
char path_before[4], path_after[4];
char camera_path[4];

View File

@@ -189,6 +189,7 @@ EnumPropertyItem rna_enum_viewport_shade_items[] = {
EnumPropertyItem rna_enum_clip_editor_mode_items[] = {
{SC_MODE_TRACKING, "TRACKING", ICON_ANIM_DATA, "Tracking", "Show tracking and solving tools"},
{SC_MODE_MASKEDIT, "MASK", ICON_MOD_MASK, "Mask", "Show mask editing tools"},
+{SC_MODE_CORRESPONDENCE, "CORRESPONDENCE", ICON_MOD_CORRESPONDENCE, "Correspondence", "Show correspondence editing tools"},
{0, NULL, 0, NULL, NULL}
};
@@ -1558,6 +1559,14 @@ static void rna_SpaceClipEditor_clip_set(PointerRNA *ptr, PointerRNA value)
ED_space_clip_set_clip(NULL, screen, sc, (MovieClip *)value.data);
}
+static void rna_SpaceClipEditor_secondary_clip_set(PointerRNA *ptr, PointerRNA value)
+{
+SpaceClip *sc = (SpaceClip *)(ptr->data);
+bScreen *screen = (bScreen *)ptr->id.data;
+ED_space_clip_set_secondary_clip(NULL, screen, sc, (MovieClip *)value.data);
+}
static void rna_SpaceClipEditor_mask_set(PointerRNA *ptr, PointerRNA value)
{
SpaceClip *sc = (SpaceClip *)(ptr->data);
@@ -1565,19 +1574,19 @@ static void rna_SpaceClipEditor_mask_set(PointerRNA *ptr, PointerRNA value)
ED_space_clip_set_mask(NULL, sc, (Mask *)value.data);
}
-static void rna_SpaceClipEditor_clip_mode_update(Main *UNUSED(bmain), Scene *UNUSED(scene), PointerRNA *ptr)
+static void rna_SpaceClipEditor_clip_mode_update(bContext *C, PointerRNA *ptr)
{
SpaceClip *sc = (SpaceClip *)(ptr->data);
sc->scopes.ok = 0;
+/* update split view if in correspondence mode */
+ED_clip_update_correspondence_mode(C, sc);
}
-static void rna_SpaceClipEditor_lock_selection_update(Main *UNUSED(bmain), Scene *UNUSED(scene), PointerRNA *ptr)
+static void rna_SpaceClipEditor_lock_selection_update(bContext *C, PointerRNA *ptr)
{
SpaceClip *sc = (SpaceClip *)(ptr->data);
-sc->xlockof = 0.f;
-sc->ylockof = 0.f;
+ED_space_clip_region_set_lock_zero(C);
}
static void rna_SpaceClipEditor_view_type_update(Main *UNUSED(bmain), Scene *UNUSED(scene), PointerRNA *ptr)
@@ -4548,6 +4557,13 @@ static void rna_def_space_clip(BlenderRNA *brna)
RNA_def_property_pointer_funcs(prop, NULL, "rna_SpaceClipEditor_clip_set", NULL, NULL);
RNA_def_property_update(prop, NC_SPACE | ND_SPACE_CLIP, NULL);
+/* secondary movieclip */
+prop = RNA_def_property(srna, "secondary_clip", PROP_POINTER, PROP_NONE);
+RNA_def_property_flag(prop, PROP_EDITABLE);
+RNA_def_property_ui_text(prop, "Secondary Movie Clip", "Secondary Movie clip displayed and edited in this space");
+RNA_def_property_pointer_funcs(prop, NULL, "rna_SpaceClipEditor_secondary_clip_set", NULL, NULL);
+RNA_def_property_update(prop, NC_SPACE | ND_SPACE_CLIP, NULL);
/* clip user */
prop = RNA_def_property(srna, "clip_user", PROP_POINTER, PROP_NONE);
RNA_def_property_flag(prop, PROP_NEVER_NULL);
@@ -4565,6 +4581,7 @@ static void rna_def_space_clip(BlenderRNA *brna)
RNA_def_property_enum_sdna(prop, NULL, "mode");
RNA_def_property_enum_items(prop, rna_enum_clip_editor_mode_items);
RNA_def_property_ui_text(prop, "Mode", "Editing context being displayed");
+RNA_def_property_flag(prop, PROP_CONTEXT_UPDATE);
RNA_def_property_update(prop, NC_SPACE | ND_SPACE_CLIP, "rna_SpaceClipEditor_clip_mode_update");
/* view */
@@ -4591,6 +4608,7 @@ static void rna_def_space_clip(BlenderRNA *brna)
prop = RNA_def_property(srna, "lock_selection", PROP_BOOLEAN, PROP_NONE);
RNA_def_property_ui_text(prop, "Lock to Selection", "Lock viewport to selected markers during playback");
RNA_def_property_boolean_sdna(prop, NULL, "flag", SC_LOCK_SELECTION);
+RNA_def_property_flag(prop, PROP_CONTEXT_UPDATE);
RNA_def_property_update(prop, NC_SPACE | ND_SPACE_CLIP, "rna_SpaceClipEditor_lock_selection_update");
/* lock to time cursor */

View File

@@ -2899,6 +2899,18 @@ static void rna_def_userdef_theme_space_clip(BlenderRNA *brna)
RNA_def_property_ui_text(prop, "Selected Marker", "Color of selected marker");
RNA_def_property_update(prop, 0, "rna_userdef_update");
prop = RNA_def_property(srna, "linked_marker", PROP_FLOAT, PROP_COLOR_GAMMA);
RNA_def_property_float_sdna(prop, NULL, "linked_marker");
RNA_def_property_array(prop, 3);
RNA_def_property_ui_text(prop, "Linked Marker", "Color of linked marker");
RNA_def_property_update(prop, 0, "rna_userdef_update");
prop = RNA_def_property(srna, "selected_linked_marker", PROP_FLOAT, PROP_COLOR_GAMMA);
RNA_def_property_float_sdna(prop, NULL, "sel_linked_marker");
RNA_def_property_array(prop, 3);
RNA_def_property_ui_text(prop, "Selected Linked Marker", "Color of selected linked marker");
RNA_def_property_update(prop, 0, "rna_userdef_update");
prop = RNA_def_property(srna, "disabled_marker", PROP_FLOAT, PROP_COLOR_GAMMA);
RNA_def_property_float_sdna(prop, NULL, "dis_marker");
RNA_def_property_array(prop, 3);

View File

@@ -396,6 +396,9 @@ void ED_screen_set_scene(struct bContext *C, struct bScreen *screen, struct Scen
struct MovieClip *ED_space_clip_get_clip(struct SpaceClip *sc) RET_NULL
void ED_space_clip_set_clip(struct bContext *C, struct bScreen *screen, struct SpaceClip *sc, struct MovieClip *clip) RET_NONE
void ED_space_clip_set_mask(struct bContext *C, struct SpaceClip *sc, struct Mask *mask) RET_NONE
+void ED_space_clip_set_secondary_clip(struct bContext *C, struct bScreen *screen, struct SpaceClip *sc, struct MovieClip *secondary_clip) RET_NONE
+void ED_clip_update_correspondence_mode(struct bContext *C, struct SpaceClip *sc) RET_NONE
+void ED_space_clip_region_set_lock_zero(struct bContext *C) RET_NONE
void ED_space_image_set_mask(struct bContext *C, struct SpaceImage *sima, struct Mask *mask) RET_NONE
void ED_area_tag_redraw_regiontype(struct ScrArea *sa, int regiontype) RET_NONE