
Compare commits


48 Commits

SHA1 Message Date
9e023dc9cc Code cleanup: remove a duplicate define 2013-09-23 14:51:35 +00:00
9c6a60ab60 Remove multicamera solver option
Reverting the previous commit because Sergey threatened puppies with an axe for every time I add an option.
2013-09-18 22:21:32 +00:00
d7e23e1783 Add checkbox for multicamera solving
To enable reconstruction using tracks from multiple cameras, the "Multicamera" checkbox in the Solve panel should be checked. This is currently non-functional.
2013-09-18 13:32:57 +00:00
4bffc83835 Add simple panel for creating multicamera correspondences
The create correspondence operator is currently non-functional, but it is expected that it will create correspondences between tracks in different movie clips.
2013-09-18 12:58:12 +00:00
2a7a733784 Allow empty intrinsics vector for bundle adjustment
If an intrinsics vector is passed to the bundle adjuster with size less than the number of cameras, the vector is padded out to that size with empty intrinsics.
2013-09-18 09:25:07 +00:00
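A minimal sketch of the padding rule described above, using a stand-in type rather than the real libmv::CameraIntrinsics (the helper name is illustrative; in the diff below the check lives inside EuclideanBundleCommonIntrinsics):

#include <vector>

struct CameraIntrinsics {};  // stand-in for libmv::CameraIntrinsics

// If the caller supplies fewer intrinsics than there are cameras,
// pad with default-constructed ("empty") intrinsics so that every
// camera has an entry.
void PadIntrinsics(int num_cameras,
                   std::vector<CameraIntrinsics> *intrinsics) {
  if (static_cast<int>(intrinsics->size()) < num_cameras)
    intrinsics->resize(num_cameras);
}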
9fdd86106c Multicamera-ify bundle adjustment
Bundle adjustment now uses the appropriate camera intrinsics when calculating the reprojection error for each marker. Refinement options are currently only applied to the camera intrinsics for camera 0 (all other camera intrinsics are made constant).
2013-09-18 01:33:40 +00:00
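In caricature, the change wires each residual to the parameter block of the marker's own camera and then freezes every block other than camera 0's. The types below are illustrative stand-ins, not the real libmv/Ceres API:

#include <vector>

struct Marker { int camera; };          // stand-in
struct ParamBlock { double data[8]; };  // packed intrinsics, one per camera

// Each marker's reprojection residual reads the intrinsics of the
// camera that observed it...
double *IntrinsicsBlockFor(const Marker &marker,
                           std::vector<ParamBlock> &blocks) {
  return blocks[marker.camera].data;
}
// ...while refinement options only touch camera 0: every other block
// is marked constant (SetParameterBlockConstant in the diff below).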
2e9c5fd705 Merging from trunk r60137 into soc-2013-motion_track 2013-09-14 11:11:00 +00:00
be42beaacb Fix crash with fetching all reconstructed views 2013-09-14 11:01:05 +00:00
39943f1486 Scale all camera reconstructions to unity 2013-09-14 10:08:13 +00:00
1fb09b9d60 Multicamera-ify reconstruction initialization
Initialization can now reconstruct from two images regardless of the source camera.
2013-09-14 08:46:37 +00:00
e494139b59 Modify keyframe selection algorithm to be camera-aware
The keyframe selection algorithm can now select keyframes from the camera passed to it as an argument. It's worth looking into multicamera algorithms too.
2013-09-12 16:16:08 +00:00
b319768af9 Improve modal solver image iteration
Instead of iterating over all images and filtering out those not from camera 0, iterate over only camera 0's images. With the way markers are currently stored this lookup is still computationally inefficient, but that should be improved soon.
2013-09-12 14:45:50 +00:00
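The shape of the change, with a hypothetical accessor: rather than scanning every marker and filtering at the call site, ask for camera 0's markers directly. With the current flat storage the accessor itself still walks everything, which is the inefficiency the message refers to:

#include <vector>

struct Marker { int camera; };  // illustrative stand-in

// Hypothetical accessor: markers belonging to one camera. With the
// current flat storage it still walks every marker; a per-camera
// index would make this cheap.
std::vector<Marker> MarkersForCamera(const std::vector<Marker> &all,
                                     int camera) {
  std::vector<Marker> result;
  for (size_t i = 0; i < all.size(); ++i)
    if (all[i].camera == camera)
      result.push_back(all[i]);
  return result;
}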
b6a6aaf937 Use multicamera terminology for initial image selection
To initialise reconstruction, two images need to be selected with high variance. The GRIC and variance algorithm can continue to refer to the images as keyframes, as it is intended to work with images from a single camera, but the libmv API should be more general than that, referring to selected *images*. This is a non-functional change.
2013-09-12 14:09:31 +00:00
0a9163c885 Modify modal solver to use only camera 0
Match recent changes to how tracks are stored.
2013-09-12 00:13:51 +00:00
78b484831d Clearer association between markers and cameras
As the camera identifier associated with a marker is simply bookkeeping for multicamera reconstruction, make it an optional attribute when handling tracks and reconstructed views. This means that the libmv API is still nice for users who don't need to associate images with cameras.
2013-09-11 14:48:29 +00:00
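One plausible reading of "optional attribute" is a defaulted parameter, so single-camera callers never mention cameras at all (illustrative signature; compare the libmv_tracksInsert change in the diff below):

class Tracks {
 public:
  // camera defaults to 0, so existing single-camera callers keep
  // compiling and behaving as before.
  void Insert(int image, int track, double x, double y, int camera = 0);
};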
e4b201f7b7 Merging from trunk r60041 into soc-2013-motion_track 2013-09-11 11:18:08 +00:00
d24a4a9b63 Fix: BKE tracking compilation error
Previous commit was incomplete and didn't include blender-side changes to the libmv API.
2013-09-11 10:34:54 +00:00
cb21085634 Modify data structure for storing tracks
The Tracks data structure stores Markers. The current design has it so that a marker's image identifier denotes the frame from the associated camera that the marker belongs to. That is, it treats the image identifier more as a frame identifier. You could have markers in image 0 for camera 0 and image 0 for camera 1 and they wouldn't be considered to be in the same image.

However, this design doesn't make much sense because the reconstruction algorithms don't care about frames or the cameras from which a frame came, only that there is a set of images observing the same scene. A better design is for all frames from all cameras to have unique image identifiers. The camera identifiers can then denote a subset of those images. No image captured by camera 0 will have the same image identifier as images from camera 1. This way, the algorithms can continue to just treat all frames as one big set of images and the camera identifier provides the association of an image to the camera it originated from. This makes it much easier to implement multicamera reconstruction without significant changes to the way libmv behaves.
2013-09-10 21:07:01 +00:00
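A runnable model of the design just described, with illustrative names (not the real libmv::Tracks API): image identifiers are globally unique across cameras, and a side mapping records which camera produced each image.

#include <cassert>
#include <map>

struct ImageRegistry {
  std::map<int, int> camera_of_image;  // unique image id -> camera id
  int next_image_id = 0;

  int AddImage(int camera) {           // register a frame from `camera`
    int image = next_image_id++;
    camera_of_image[image] = camera;
    return image;                      // globally unique image id
  }
};

int main() {
  ImageRegistry registry;
  int a = registry.AddImage(0);  // camera 0, its first frame
  int b = registry.AddImage(1);  // camera 1, its first frame
  assert(a != b);                // same frame number, distinct images
  assert(registry.camera_of_image[b] == 1);
}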
ce17c21303 Update doxygen comments for reconstruction initialization
EuclideanReconstructTwoFrames and ProjectiveReconstructTwoFrames currently assume that the passed markers belong to a single camera. That is, we're not reconstructing frames from different cameras. Ideally, this restriction should be removed.
2013-09-10 10:57:49 +00:00
8896abe995 Code cleanup: Google-style constant names
Google style guide recommends kConstantName for constants.
2013-09-09 14:50:31 +00:00
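For example:

// Google C++ style: constants use a leading 'k' plus CamelCase.
const int kMaxNumIterations = 100;   // rather than MAX_NUM_ITERATIONS
const double kDefaultFocalLength = 24.0;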
14816b3a9c Code cleanup: keyframe selection explicitly single-camera
The current keyframe selection algorithm only looks at frames from camera 0 to find two frames with high variance. It's definitely worth looking at algorithms for keyframe selection from multiple cameras - different cameras are likely to have high variance.
2013-09-09 13:55:24 +00:00
600145d273 Code cleanup: use const references for input parameters 2013-09-09 10:14:05 +00:00
d56589fc61 Code cleanup: modal solver explicitly single-camera
Non-functional change to the modal solver to explicitly document it as being a single-camera algorithm, working with only camera 0.
2013-09-09 09:42:37 +00:00
67313f607a Improved naming of cameras/views
The recent change that distinguished "multicamera reconstruction" from "multiview reconstruction" led to some confusing naming. A EuclideanReconstruction would store EuclideanCameras, which are the position and orientation of a physical camera for a specific image. That is, a EuclideanCamera is a camera at a particular instant, rather than representing the physical camera itself. This meant that EuclideanCameras had associated cameras, which is clearly confusing. This commit attempts to unify the terminology:

- View - a specific instance of a camera at some orientation and point in space (giving an image of the scene)
- Multiview reconstruction - reconstruction of a scene from multiple views
- Camera - a physical video camera
- Multicamera reconstruction - a form of multiview reconstruction, where the views are associated with some number of video cameras

To this end, EuclideanCamera and ProjectiveCamera have been renamed to EuclideanView and ProjectiveView, as they represent a camera at a specific instant.

Some functions continue to have variables named "camera" where "view" would be more appropriate.
2013-09-07 14:47:32 +00:00
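Sketched as types (illustrative field sets, not the exact branch code), the distinction reads:

// A View: one camera pose at one instant, i.e. one image of the scene.
struct EuclideanView {
  int image;       // unique image this view corresponds to
  int camera;      // physical camera that captured the image
  double R[3][3];  // orientation at the moment of capture
  double t[3];     // position at the moment of capture
};
// Multicamera reconstruction is then multiview reconstruction where
// each view carries one of a small number of camera ids.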
ac183d3da0 Modify constraints user interface and usability
I have recently been preparing focal length constraints for committing to trunk but unfortunately missed BCon2, so it'll have to wait until next time. In the meantime, this commit brings the changes that were made according to code reviews and user feedback into my branch. The interface changes include:

- Constraint value inputs are now in a column.
- User interface button names have been simplified.
- The focal length camera property is not forced to fit inside the focal length constraint range. Instead, the focal length is clamped to the range when solving.
- When the constraint is not enabled, the min and max values will adjust to match the focal length camera property. This gives the user an anchor to help understand the meaning of the values.
2013-09-05 22:59:34 +00:00
bc875d1742 Merging from trunk r59849 into soc-2013-motion_track 2013-09-05 17:56:20 +00:00
18020c064c Merging from trunk r59488 into soc-2013-motion_track 2013-08-24 22:58:20 +00:00
d08a4292f1 Fix compilation with WITH_LIBMV=OFF
Once again, I forgot to update libmv-capi_stub.cc to match the header.
2013-08-24 22:06:36 +00:00
c8c9996e54 Fix ignored constraint flag
The wrong flags were being checked to decide whether a focal length constraint should be applied, causing the constraint to always be applied whenever the focal length was chosen for refinement.
2013-08-23 20:38:36 +00:00
6e83ce4b70 Fix invalid returned camera intrinsics
libmv provides a way to get the camera intrinsics from a reconstruction, which may differ from the intrinsics passed to the solver if refinement has taken place. My multicamera changes introduced a pointer cast between incompatible types, causing the returned intrinsics to contain garbage values. This patch fixes that cast.
2013-08-23 20:11:11 +00:00
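The bug in caricature, with a stand-in type: once the stored intrinsics became a vector, casting the address of the vector itself reinterprets the container's bookkeeping as intrinsics data; the fix takes the address of an element instead (see the libmv_reconstructionExtractIntrinsics change in the diff below).

#include <vector>

struct CameraIntrinsics { double focal_length; };  // stand-in

CameraIntrinsics *Broken(std::vector<CameraIntrinsics> *intrinsics) {
  return (CameraIntrinsics *)intrinsics;  // wrong: points at the vector object
}

CameraIntrinsics *Fixed(std::vector<CameraIntrinsics> *intrinsics,
                        int camera) {
  return &(*intrinsics)[camera];  // right: points at the element's data
}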
9dfc38fe46 More framework for multicamera reconstruction
All algorithms have been adjusted to take into account multiple cameras and their intrinsics except for the bundle adjustment. This means it will give an incorrect result for multicamera reconstruction at the moment, but single camera reconstruction still functions normally.
2013-08-05 23:22:56 +00:00
cefc0dd68e Use multicamera intrinsics to calculate reprojection error
The reprojection error calculation would only take a single CameraIntrinsics and would apply it to all markers. Adjust it to take a vector of CameraIntrinsics and apply the intrinsics corresponding to the marker's associated camera.
2013-08-05 01:17:46 +00:00
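The essence of the change, with stand-in types: the intrinsics applied to a marker are looked up by the marker's camera index instead of being a single shared object.

#include <vector>

struct Marker { int camera; double x, y; };        // stand-in
struct CameraIntrinsics { double focal_length; };  // stand-in

// Before: one CameraIntrinsics shared by every marker.
// After: index the vector by the marker's associated camera.
const CameraIntrinsics &IntrinsicsFor(
    const Marker &marker,
    const std::vector<CameraIntrinsics> &intrinsics) {
  return intrinsics[marker.camera];
}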
9ab6dba74b Add correct handling of multicamera intrinsics
One argument to libmv_solve is an array of camera intrinsics options. The code now converts all elements of the array to internal CameraIntrinsics, instead of just the first. The output struct, libmv_Reconstruction, now contains a vector of output camera intrinsics (each element corresponding to a different camera).
2013-08-05 00:37:13 +00:00
1305ae3380 Change from multiview to multicamera terminology
All 3D reconstruction is multiview reconstruction. A better, unambiguous term for reconstructing multiple cameras is multicamera reconstruction, so change all references to "views" to "cameras".

Unfortunately, this introduces some confusion in simple_pipeline/reconstruction.h, but I'm currently discussing this with Keir.
2013-08-04 23:17:27 +00:00
058985fbb9 Rename CameraForViewImage to CameraForImage
It is uncommon to refer to an image without a particular view, so shorten the function name for simplicity. If a function is needed that references all views at once, suffixing with AllViews will do.
2013-08-03 10:30:39 +00:00
46510ad0ea Modify Tracks to multiview only
The Tracks class stores Markers that correspond to tracks across different views and images. There's no need to have an interface for both single-view and multiview reconstruction, so remove the single-view interface. For now, all algorithms assume view 0, but should instead be passed the view that they should work on.
2013-08-02 09:40:42 +00:00
0a087e78e7 Merging from trunk r58769 into soc-2013-motion_track
Mostly to have the cleaned up libmv C API from r58765.
2013-07-31 17:30:10 +00:00
280b13c3a5 Modify libmv output types to store multiview reconstructions
The libmv_solve function returns a libmv_Reconstruction which contains a EuclideanReconstruction. This class and its brother, ProjectiveReconstruction, store the results of a reconstruction. To make way for multiview reconstruction, modify these classes to store reconstructed camera poses associated with particular views.

All libmv algorithms have been adjusted to meet the new API, temporarily assuming view 0 everywhere until they have also been multiviewified.
2013-07-31 01:10:34 +00:00
8d49700939 Fix alignment indentation
Although the file is tab-indented, split lines should be aligned with spaces.
2013-07-28 09:36:21 +00:00
48f811f9d6 Refactor bundle refinement in libmv C API
Instead of having libmv_solveRefineIntrinsics, which creates bundle options from reconstruction options and passes them to the libmv bundle adjuster, have a simpler function that just creates and returns the options. This brings all of the solving logic into libmv_solve.
2013-07-26 23:18:26 +00:00
de6c513163 Fix error when compiling with WITH_LIBMV=OFF
Adjust libmv-capi_stub.cc to match recent changes to the C API header, otherwise it won't compile with WITH_LIBMV=OFF.
2013-07-26 23:13:14 +00:00
8e1cac34cb Unify libmv C API for solving
The current libmv C API exposes the libmv_solve function which forwards to either libmv_solveReconstruction or libmv_solveModal, depending on whether the user selected Tripod Motion in the UI. These functions contain a fair bit of common code. Unify them into the single libmv_solve function, leading the way to a clean multiview reconstruction solution.
2013-07-26 00:37:26 +00:00
6fd3c9e2e1 Modify Tracks and libmv C API for multiview reconstruction
Add a view identifier to the Markers that are contained in a Tracks set and the appropriate interface for inserting and fetching from the set according to view. Also modify the libmv C API to allow multiview insertion into Tracks from outside libmv.
2013-07-19 00:09:03 +00:00
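The extended marker, sketched with illustrative field names (the exact layout lives in libmv's tracks header):

struct Marker {
  int image;    // image (frame) the marker appears in
  int track;    // track the marker belongs to
  int camera;   // new: view/camera the observation came from
  double x, y;  // marker position in the image
};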
0efe5d47e2 Refactor and tidy up bundle options and interfaces
Non-functional changes to a few data types and interfaces involved in passing bundle options from the UI to libmv. A significant change is that the decision to perform a modal reconstruction or not is moved into the libmv C API and the type of camera movement is passed as an option.
2013-07-18 09:21:52 +00:00
732b3720d1 Merging from trunk r58267 into soc-2013-motion_track 2013-07-15 14:19:02 +00:00
0f0be925bb Add constraints for focal length refinement
The user can apply a ranged constraint to the focal length during refinement. When the focal length is selected for refinement in the solve panel, a constraint can be applied by checking "Constrain Focal Length" and entering the appropriate minimum and maximum values. The camera focal length will only be refined within this range. This is useful when an unconstrained refinement gives an incorrect result and a suitable range is known.

This includes the following changes:
- Addition of constraint UI with corresponding RNA properties. The constraint is displayed in the same units chosen for the camera focal length. The values are restricted such that min is no greater than the focal length and max is no less than it.
- Data structure modifications to allow passing of constraint values from the UI to libmv.
- Modifications to the libmv simple pipeline API to incorporate constraints.
- Constraining focal length when interfacing with Ceres (which only does unconstrained solving) through a sinusoidal mapping.

Results of tests are given at: http://wiki.blender.org/index.php/User:Sftrabbit/GSoC_2013/Documentation/Constraint_Tests
2013-07-14 16:21:35 +00:00
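The sinusoidal mapping at the heart of the last point, in runnable form (it mirrors the ConstrainFocalLength/UnconstrainFocalLength templates visible in the bundle.cc diff below): Ceres optimizes an unconstrained parameter p, while f = min + (max - min) * (1 + sin p) / 2 always stays inside [min, max].

#include <cassert>
#include <cmath>

// Focal length -> unconstrained bundle parameter.
double ConstrainFocalLength(double f, double min_f, double max_f) {
  return std::asin(-1.0 + 2.0 * (f - min_f) / (max_f - min_f));
}

// Unconstrained bundle parameter -> focal length in [min_f, max_f].
double UnconstrainFocalLength(double p, double min_f, double max_f) {
  return min_f + (max_f - min_f) * (1.0 + std::sin(p)) / 2.0;
}

int main() {
  const double f = 24.0, lo = 18.0, hi = 35.0;
  const double p = ConstrainFocalLength(f, lo, hi);
  // The mapping round-trips, and any p maps back inside the range.
  assert(std::fabs(UnconstrainFocalLength(p, lo, hi) - f) < 1e-9);
}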
7d53177d57 Merging r57392 through r57521 from trunk into soc-2013-motion_track 2013-06-17 15:32:33 +00:00
132acfbb44 gsoc 2013 branch soc-2013-motion_track 2013-06-12 04:03:56 +00:00
30 changed files with 1172 additions and 631 deletions

View File

@@ -36,6 +36,8 @@
#include "libmv-capi.h"
#include <vector>
#include <cstdlib>
#include <cassert>
@@ -69,7 +71,7 @@ struct libmv_Reconstruction {
/* used for per-track average error calculation after reconstruction */
libmv::Tracks tracks;
libmv::CameraIntrinsics intrinsics;
std::vector<libmv::CameraIntrinsics> intrinsics;
double error;
};
@@ -398,13 +400,24 @@ void libmv_tracksDestroy(struct libmv_Tracks *libmv_tracks)
delete (libmv::Tracks*) libmv_tracks;
}
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y)
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y, int camera)
{
((libmv::Tracks*) libmv_tracks)->Insert(image, track, x, y);
((libmv::Tracks*)libmv_tracks)->Insert(image, track, x, y, camera);
}
/* ************ Reconstruction ************ */
class LoggingCallback : public ceres::IterationCallback {
public:
~LoggingCallback() {}
ceres::CallbackReturnType operator()(const ceres::IterationSummary &summary) {
LG << "Iteration #" << summary.iteration
<< ": " << summary.cost;
return ceres::SOLVER_CONTINUE;
}
};
class ReconstructUpdateCallback : public libmv::ProgressUpdateCallback {
public:
ReconstructUpdateCallback(reconstruct_progress_update_cb progress_update_callback, void *callback_customdata)
@@ -424,39 +437,6 @@ protected:
void *callback_customdata_;
};
static void libmv_solveRefineIntrinsics(const libmv::Tracks &tracks,
const int refine_intrinsics,
const int bundle_constraints,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata,
libmv::EuclideanReconstruction *reconstruction,
libmv::CameraIntrinsics *intrinsics)
{
/* only a few combinations are supported but trust the caller */
int bundle_intrinsics = 0;
if (refine_intrinsics & LIBMV_REFINE_FOCAL_LENGTH) {
bundle_intrinsics |= libmv::BUNDLE_FOCAL_LENGTH;
}
if (refine_intrinsics & LIBMV_REFINE_PRINCIPAL_POINT) {
bundle_intrinsics |= libmv::BUNDLE_PRINCIPAL_POINT;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K1) {
bundle_intrinsics |= libmv::BUNDLE_RADIAL_K1;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K2) {
bundle_intrinsics |= libmv::BUNDLE_RADIAL_K2;
}
progress_update_callback(callback_customdata, 1.0, "Refining solution");
libmv::EuclideanBundleCommonIntrinsics(tracks,
bundle_intrinsics,
bundle_constraints,
reconstruction,
intrinsics);
}
static void cameraIntrinsicsFromOptions(const libmv_CameraIntrinsicsOptions *camera_intrinsics_options,
libmv::CameraIntrinsics *camera_intrinsics)
{
@@ -474,20 +454,140 @@ static void cameraIntrinsicsFromOptions(const libmv_CameraIntrinsicsOptions *cam
camera_intrinsics_options->image_height);
}
static libmv::Tracks getNormalizedTracks(const libmv::Tracks &tracks, const libmv::CameraIntrinsics &camera_intrinsics)
static void cameraIntrinsicsFromOptions(const libmv_CameraIntrinsicsOptions camera_intrinsics_options[],
int num_cameras,
std::vector<libmv::CameraIntrinsics> &camera_intrinsics)
{
for (int i = 0; i < num_cameras; ++i) {
camera_intrinsics.push_back(libmv::CameraIntrinsics());
cameraIntrinsicsFromOptions(&camera_intrinsics_options[i], &camera_intrinsics[i]);
}
}
static libmv::Tracks getNormalizedTracks(const libmv::Tracks &tracks,
const std::vector<libmv::CameraIntrinsics> &camera_intrinsics)
{
libmv::vector<libmv::Marker> markers = tracks.AllMarkers();
for (int i = 0; i < markers.size(); ++i) {
camera_intrinsics.InvertIntrinsics(markers[i].x, markers[i].y,
int camera = markers[i].camera;
camera_intrinsics[camera].InvertIntrinsics(markers[i].x, markers[i].y,
&(markers[i].x), &(markers[i].y));
}
return libmv::Tracks(markers);
}
static void finishReconstruction(const libmv::Tracks &tracks, const libmv::CameraIntrinsics &camera_intrinsics,
struct libmv_Reconstruction *libmv_reconstruction,
static bool selectTwoInitialImages(
libmv::Tracks &tracks,
libmv::Tracks &normalized_tracks,
std::vector<libmv::CameraIntrinsics> &camera_intrinsics,
libmv::ReconstructionOptions &reconstruction_options,
int &image1,
int &image2)
{
libmv::vector<int> images;
/* Get list of all image candidates first. */
SelectKeyframesBasedOnGRICAndVariance(normalized_tracks,
camera_intrinsics,
images);
if (images.size() < 2) {
LG << "Not enough keyframes detected by GRIC";
return false;
}
else if (images.size() == 2) {
image1 = images[0];
image2 = images[1];
return true;
}
/* Now choose the two images with the least reprojection error after
* solving an initial reconstruction from each candidate pair.
*
* In fact, currently libmv returns a single pair only, so this code will
* not actually run. But in the future this could change, so let's stay
* prepared.
*/
int previous_image = images[0];
double best_error = std::numeric_limits<double>::max();
for (int i = 1; i < images.size(); i++) {
libmv::EuclideanReconstruction reconstruction;
int current_image = images[i];
libmv::vector<libmv::Marker> image_markers =
normalized_tracks.MarkersForTracksInBothImages(previous_image,
current_image);
libmv::Tracks image_tracks(image_markers);
/* get a solution from two images only */
libmv::EuclideanReconstructTwoFrames(image_markers, &reconstruction);
libmv::EuclideanBundle(image_tracks, &reconstruction);
libmv::EuclideanCompleteReconstruction(reconstruction_options,
image_tracks,
&reconstruction, NULL);
double current_error =
libmv::EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
LG << "Error between " << previous_image
<< " and " << current_image
<< ": " << current_error;
if (current_error < best_error) {
best_error = current_error;
image1 = previous_image;
image2 = current_image;
}
previous_image = current_image;
}
return true;
}
static libmv::BundleOptions refinementOptionsFromReconstructionOptions(
const libmv_ReconstructionOptions &reconstruction_options)
{
/* only a few combinations are supported but trust the caller */
libmv::BundleOptions bundle_options;
const int motion_flag = reconstruction_options.motion_flag;
const int refine_intrinsics = reconstruction_options.refine_intrinsics;
const int constrain_intrinsics = reconstruction_options.constrain_intrinsics;
if (motion_flag & LIBMV_TRACKING_MOTION_MODAL) {
bundle_options.constraints |= libmv::BUNDLE_NO_TRANSLATION;
}
if (refine_intrinsics & LIBMV_REFINE_FOCAL_LENGTH) {
bundle_options.bundle_intrinsics |= libmv::BUNDLE_FOCAL_LENGTH;
if (constrain_intrinsics & LIBMV_CONSTRAIN_FOCAL_LENGTH) {
bundle_options.constraints |= libmv::BUNDLE_CONSTRAIN_FOCAL_LENGTH;
bundle_options.focal_length_min = reconstruction_options.focal_length_min;
bundle_options.focal_length_max = reconstruction_options.focal_length_max;
}
}
if (refine_intrinsics & LIBMV_REFINE_PRINCIPAL_POINT) {
bundle_options.bundle_intrinsics |= libmv::BUNDLE_PRINCIPAL_POINT;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K1) {
bundle_options.bundle_intrinsics |= libmv::BUNDLE_RADIAL_K1;
}
if (refine_intrinsics & LIBMV_REFINE_RADIAL_DISTORTION_K2) {
bundle_options.bundle_intrinsics |= libmv::BUNDLE_RADIAL_K2;
}
return bundle_options;
}
static void finishReconstruction(const libmv::Tracks &tracks, const std::vector<libmv::CameraIntrinsics> &camera_intrinsics,
libmv_Reconstruction *libmv_reconstruction,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata)
{
@@ -499,96 +599,24 @@ static void finishReconstruction(const libmv::Tracks &tracks, const libmv::Camer
libmv_reconstruction->error = libmv::EuclideanReprojectionError(tracks, reconstruction, camera_intrinsics);
}
static bool selectTwoKeyframesBasedOnGRICAndVariance(
libmv::Tracks &tracks,
libmv::Tracks &normalized_tracks,
libmv::CameraIntrinsics &camera_intrinsics,
libmv::ReconstructionOptions &reconstruction_options,
int &keyframe1,
int &keyframe2)
{
libmv::vector<int> keyframes;
/* Get list of all keyframe candidates first. */
SelectKeyframesBasedOnGRICAndVariance(normalized_tracks,
camera_intrinsics,
keyframes);
if (keyframes.size() < 2) {
LG << "Not enough keyframes detected by GRIC";
return false;
}
else if (keyframes.size() == 2) {
keyframe1 = keyframes[0];
keyframe2 = keyframes[1];
return true;
}
/* Now choose two keyframes with minimal reprojection error after initial
* reconstruction choose keyframes with the least reprojection error after
* solving from two candidate keyframes.
*
* In fact, currently libmv returns single pair only, so this code will
* not actually run. But in the future this could change, so let's stay
* prepared.
*/
int previous_keyframe = keyframes[0];
double best_error = std::numeric_limits<double>::max();
for (int i = 1; i < keyframes.size(); i++) {
libmv::EuclideanReconstruction reconstruction;
int current_keyframe = keyframes[i];
libmv::vector<libmv::Marker> keyframe_markers =
normalized_tracks.MarkersForTracksInBothImages(previous_keyframe,
current_keyframe);
libmv::Tracks keyframe_tracks(keyframe_markers);
/* get a solution from two keyframes only */
libmv::EuclideanReconstructTwoFrames(keyframe_markers, &reconstruction);
libmv::EuclideanBundle(keyframe_tracks, &reconstruction);
libmv::EuclideanCompleteReconstruction(reconstruction_options,
keyframe_tracks,
&reconstruction, NULL);
double current_error =
libmv::EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
LG << "Error between " << previous_keyframe
<< " and " << current_keyframe
<< ": " << current_error;
if (current_error < best_error) {
best_error = current_error;
keyframe1 = previous_keyframe;
keyframe2 = current_keyframe;
}
previous_keyframe = current_keyframe;
}
return true;
}
struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
libmv_Reconstruction *libmv_solve(const libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions libmv_camera_intrinsics_options[],
libmv_ReconstructionOptions *libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata)
{
struct libmv_Reconstruction *libmv_reconstruction = new libmv_Reconstruction();
libmv::EuclideanReconstruction &reconstruction = libmv_reconstruction->reconstruction;
std::vector<libmv::CameraIntrinsics> &camera_intrinsics = libmv_reconstruction->intrinsics;
libmv::Tracks &tracks = *((libmv::Tracks *) libmv_tracks);
libmv::EuclideanReconstruction &reconstruction = libmv_reconstruction->reconstruction;
libmv::CameraIntrinsics &camera_intrinsics = libmv_reconstruction->intrinsics;
int num_cameras = tracks.MaxCamera() + 1;
ReconstructUpdateCallback update_callback =
ReconstructUpdateCallback(progress_update_callback, callback_customdata);
ReconstructUpdateCallback update_callback(progress_update_callback,
callback_customdata);
/* Retrieve reconstruction options from C-API to libmv API */
cameraIntrinsicsFromOptions(libmv_camera_intrinsics_options, &camera_intrinsics);
/* Convert options from C-API to libmv API */
cameraIntrinsicsFromOptions(libmv_camera_intrinsics_options, num_cameras, camera_intrinsics);
libmv::ReconstructionOptions reconstruction_options;
reconstruction_options.success_threshold = libmv_reconstruction_options->success_threshold;
@@ -597,104 +625,70 @@ struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks
/* Invert the camera intrinsics */
libmv::Tracks normalized_tracks = getNormalizedTracks(tracks, camera_intrinsics);
/* keyframe selection */
int keyframe1 = libmv_reconstruction_options->keyframe1,
keyframe2 = libmv_reconstruction_options->keyframe2;
if (libmv_reconstruction_options->motion_flag & LIBMV_TRACKING_MOTION_MODAL &&
num_cameras == 1) {
/* Perform modal solving */
libmv::ModalSolver(normalized_tracks, &reconstruction, &update_callback);
/* Perform bundle adjustment */
libmv::EuclideanBundleModal(tracks,
&reconstruction);
}
else {
/* Image selection */
int &image1 = libmv_reconstruction_options->keyframe1,
&image2 = libmv_reconstruction_options->keyframe2;
if (libmv_reconstruction_options->select_keyframes) {
LG << "Using automatic keyframe selection";
if (libmv_reconstruction_options->select_keyframes) {
LG << "Using automatic image selection";
update_callback.invoke(0, "Selecting keyframes");
update_callback.invoke(0, "Selecting images");
selectTwoKeyframesBasedOnGRICAndVariance(tracks,
normalized_tracks,
camera_intrinsics,
reconstruction_options,
keyframe1,
keyframe2);
selectTwoInitialImages(tracks,
normalized_tracks,
camera_intrinsics,
reconstruction_options,
image1,
image2);
}
/* so keyframes in the interface would be updated */
libmv_reconstruction_options->keyframe1 = keyframe1;
libmv_reconstruction_options->keyframe2 = keyframe2;
LG << "frames to init from: " << image1 << " " << image2;
/* Reconstruct for two images */
libmv::vector<libmv::Marker> image_markers =
normalized_tracks.MarkersForTracksInBothImages(image1, image2);
LG << "number of markers for init: " << image_markers.size();
update_callback.invoke(0, "Initial reconstruction");
libmv::EuclideanReconstructTwoFrames(image_markers, &reconstruction);
/* Perform bundle adjustment */
libmv::EuclideanBundle(normalized_tracks, &reconstruction);
/* Reconstruct for all frames */
libmv::EuclideanCompleteReconstruction(reconstruction_options, normalized_tracks,
&reconstruction, &update_callback);
}
/* actual reconstruction */
LG << "frames to init from: " << keyframe1 << " " << keyframe2;
libmv::vector<libmv::Marker> keyframe_markers =
normalized_tracks.MarkersForTracksInBothImages(keyframe1, keyframe2);
LG << "number of markers for init: " << keyframe_markers.size();
update_callback.invoke(0, "Initial reconstruction");
libmv::EuclideanReconstructTwoFrames(keyframe_markers, &reconstruction);
libmv::EuclideanBundle(normalized_tracks, &reconstruction);
libmv::EuclideanCompleteReconstruction(reconstruction_options, normalized_tracks,
&reconstruction, &update_callback);
/* refinement */
if (libmv_reconstruction_options->refine_intrinsics) {
libmv_solveRefineIntrinsics(tracks,
libmv_reconstruction_options->refine_intrinsics,
libmv::BUNDLE_NO_CONSTRAINTS,
progress_update_callback,
callback_customdata,
&reconstruction,
&camera_intrinsics);
/* Refine bundle parameters */
libmv::BundleOptions refinement_bundle_options =
refinementOptionsFromReconstructionOptions(*libmv_reconstruction_options);
progress_update_callback(callback_customdata, 1.0, "Refining solution");
libmv::EuclideanBundleCommonIntrinsics(tracks,
refinement_bundle_options,
&reconstruction,
camera_intrinsics);
}
/* set reconstruction scale to unity */
/* Set reconstruction scale to unity */
libmv::EuclideanScaleToUnity(&reconstruction);
/* finish reconstruction */
finishReconstruction(tracks, camera_intrinsics, libmv_reconstruction,
progress_update_callback, callback_customdata);
return (struct libmv_Reconstruction *)libmv_reconstruction;
}
struct libmv_Reconstruction *libmv_solveModal(const struct libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
const libmv_ReconstructionOptions *libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata)
{
struct libmv_Reconstruction *libmv_reconstruction = new libmv_Reconstruction();
libmv::Tracks &tracks = *((libmv::Tracks *) libmv_tracks);
libmv::EuclideanReconstruction &reconstruction = libmv_reconstruction->reconstruction;
libmv::CameraIntrinsics &camera_intrinsics = libmv_reconstruction->intrinsics;
ReconstructUpdateCallback update_callback =
ReconstructUpdateCallback(progress_update_callback, callback_customdata);
cameraIntrinsicsFromOptions(libmv_camera_intrinsics_options, &camera_intrinsics);
/* Invert the camera intrinsics. */
libmv::Tracks normalized_tracks = getNormalizedTracks(tracks, camera_intrinsics);
/* Actual reconstruction. */
libmv::ModalSolver(normalized_tracks, &reconstruction, &update_callback);
libmv::CameraIntrinsics empty_intrinsics;
libmv::EuclideanBundleCommonIntrinsics(normalized_tracks,
libmv::BUNDLE_NO_INTRINSICS,
libmv::BUNDLE_NO_TRANSLATION,
&reconstruction,
&empty_intrinsics);
/* Refinement. */
if (libmv_reconstruction_options->refine_intrinsics) {
libmv_solveRefineIntrinsics(tracks,
libmv_reconstruction_options->refine_intrinsics,
libmv::BUNDLE_NO_TRANSLATION,
progress_update_callback, callback_customdata,
&reconstruction,
&camera_intrinsics);
}
/* Finish reconstruction. */
/* Finish reconstruction */
finishReconstruction(tracks, camera_intrinsics, libmv_reconstruction,
progress_update_callback, callback_customdata);
@@ -723,16 +717,16 @@ int libmv_reprojectionPointForTrack(const struct libmv_Reconstruction *libmv_rec
}
static libmv::Marker ProjectMarker(const libmv::EuclideanPoint &point,
const libmv::EuclideanCamera &camera,
const libmv::EuclideanView &view,
const libmv::CameraIntrinsics &intrinsics)
{
libmv::Vec3 projected = camera.R * point.X + camera.t;
libmv::Vec3 projected = view.R * point.X + view.t;
projected /= projected(2);
libmv::Marker reprojected_marker;
intrinsics.ApplyIntrinsics(projected(0), projected(1), &reprojected_marker.x, &reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.image = view.image;
reprojected_marker.track = point.track;
return reprojected_marker;
@@ -741,23 +735,25 @@ static libmv::Marker ProjectMarker(const libmv::EuclideanPoint &point,
double libmv_reprojectionErrorForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track)
{
const libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const libmv::CameraIntrinsics *intrinsics = &libmv_reconstruction->intrinsics;
const std::vector<libmv::CameraIntrinsics> *intrinsics = &libmv_reconstruction->intrinsics;
libmv::vector<libmv::Marker> markers = libmv_reconstruction->tracks.MarkersForTrack(track);
int num_reprojected = 0;
double total_error = 0.0;
for (int i = 0; i < markers.size(); ++i) {
const libmv::EuclideanCamera *camera = reconstruction->CameraForImage(markers[i].image);
int camera = markers[i].camera;
const libmv::EuclideanView *view = reconstruction->ViewForImage(markers[i].image);
const libmv::EuclideanPoint *point = reconstruction->PointForTrack(markers[i].track);
const libmv::CameraIntrinsics camera_intrinsics = (*intrinsics)[camera];
if (!camera || !point) {
if (!view || !point) {
continue;
}
num_reprojected++;
libmv::Marker reprojected_marker = ProjectMarker(*point, *camera, *intrinsics);
libmv::Marker reprojected_marker = ProjectMarker(*point, *view, camera_intrinsics);
double ex = reprojected_marker.x - markers[i].x;
double ey = reprojected_marker.y - markers[i].y;
@@ -767,16 +763,18 @@ double libmv_reprojectionErrorForTrack(const struct libmv_Reconstruction *libmv_
return total_error / num_reprojected;
}
double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_reconstruction, int image)
double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_reconstruction,
int image)
{
const libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const libmv::CameraIntrinsics *intrinsics = &libmv_reconstruction->intrinsics;
int camera = libmv_reconstruction->tracks.CameraFromImage(image);
const libmv::CameraIntrinsics *intrinsics = &libmv_reconstruction->intrinsics[camera];
libmv::vector<libmv::Marker> markers = libmv_reconstruction->tracks.MarkersInImage(image);
const libmv::EuclideanCamera *camera = reconstruction->CameraForImage(image);
const libmv::EuclideanView *view = reconstruction->ViewForImage(image);
int num_reprojected = 0;
double total_error = 0.0;
if (!camera)
if (!view)
return 0;
for (int i = 0; i < markers.size(); ++i) {
@@ -788,7 +786,7 @@ double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_
num_reprojected++;
libmv::Marker reprojected_marker = ProjectMarker(*point, *camera, *intrinsics);
libmv::Marker reprojected_marker = ProjectMarker(*point, *view, *intrinsics);
double ex = reprojected_marker.x - markers[i].x;
double ey = reprojected_marker.y - markers[i].y;
@@ -799,12 +797,12 @@ double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_
}
int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction *libmv_reconstruction,
int image, double mat[4][4])
int image, double mat[4][4])
{
const libmv::EuclideanReconstruction *reconstruction = &libmv_reconstruction->reconstruction;
const libmv::EuclideanCamera *camera = reconstruction->CameraForImage(image);
const libmv::EuclideanView *view = reconstruction->ViewForImage(image);
if (camera) {
if (view) {
for (int j = 0; j < 3; ++j) {
for (int k = 0; k < 3; ++k) {
int l = k;
@@ -812,13 +810,13 @@ int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction *libmv_re
if (k == 1) l = 2;
else if (k == 2) l = 1;
if (j == 2) mat[j][l] = -camera->R(j,k);
else mat[j][l] = camera->R(j,k);
if (j == 2) mat[j][l] = -view->R(j,k);
else mat[j][l] = view->R(j,k);
}
mat[j][3] = 0.0;
}
libmv::Vec3 optical_center = -camera->R.transpose() * camera->t;
libmv::Vec3 optical_center = -view->R.transpose() * view->t;
mat[3][0] = optical_center(0);
mat[3][1] = optical_center(2);
@@ -837,9 +835,10 @@ double libmv_reprojectionError(const struct libmv_Reconstruction *libmv_reconstr
return libmv_reconstruction->error;
}
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_reconstruction)
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_reconstruction,
int camera)
{
return (struct libmv_CameraIntrinsics *)&libmv_reconstruction->intrinsics;
return (struct libmv_CameraIntrinsics *)&libmv_reconstruction->intrinsics[camera];
}
/* ************ Feature detector ************ */

View File

@@ -73,14 +73,18 @@ void libmv_samplePlanarPatch(const float *image, int width, int height,
/* Tracks */
struct libmv_Tracks *libmv_tracksNew(void);
void libmv_tracksDestroy(struct libmv_Tracks *libmv_tracks);
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y);
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y, int camera);
/* Reconstruction */
#define LIBMV_TRACKING_MOTION_MODAL (1 << 0)
#define LIBMV_REFINE_FOCAL_LENGTH (1 << 0)
#define LIBMV_REFINE_PRINCIPAL_POINT (1 << 1)
#define LIBMV_REFINE_RADIAL_DISTORTION_K1 (1 << 2)
#define LIBMV_REFINE_RADIAL_DISTORTION_K2 (1 << 4)
#define LIBMV_CONSTRAIN_FOCAL_LENGTH (1 << 0)
typedef struct libmv_CameraIntrinsicsOptions {
double focal_length;
double principal_point_x, principal_point_y;
@@ -93,7 +97,11 @@ typedef struct libmv_ReconstructionOptions {
int select_keyframes;
int keyframe1, keyframe2;
int refine_intrinsics;
short motion_flag;
short refine_intrinsics;
short constrain_intrinsics;
double focal_length_min, focal_length_max;
double success_threshold;
int use_fallback_reconstruction;
@@ -101,16 +109,11 @@ typedef struct libmv_ReconstructionOptions {
typedef void (*reconstruct_progress_update_cb) (void *customdata, double progress, const char *message);
struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
struct libmv_Reconstruction *libmv_solve(const struct libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions libmv_camera_intrinsics_options[],
libmv_ReconstructionOptions *libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata);
struct libmv_Reconstruction *libmv_solveModal(const struct libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
const libmv_ReconstructionOptions *libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata);
void libmv_reconstructionDestroy(struct libmv_Reconstruction *libmv_reconstruction);
int libmv_reprojectionPointForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track, double pos[3]);
double libmv_reprojectionErrorForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track);
@@ -118,7 +121,8 @@ double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_
int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction *libmv_reconstruction,
int image, double mat[4][4]);
double libmv_reprojectionError(const struct libmv_Reconstruction *libmv_reconstruction);
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_Reconstruction);
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_Reconstruction,
int camera);
/* Feature detector */
struct libmv_Features *libmv_detectFeaturesFAST(const unsigned char *data, int width, int height, int stride,

View File

@@ -84,8 +84,10 @@ struct libmv_Tracks *libmv_tracksNew(void)
return NULL;
}
void libmv_tracksInsert(struct libmv_Tracks * /*libmv_tracks*/, int /*image*/,
int /*track*/, double /*x*/, double /*y*/)
void libmv_tracksInsert(struct libmv_Tracks * /*libmv_tracks*/,
int /*image*/, int /*track*/,
double /*x*/, double /*y*/,
int /*camera*/)
{
}
@@ -95,7 +97,7 @@ void libmv_tracksDestroy(struct libmv_Tracks * /*libmv_tracks*/)
/* ************ Reconstruction solver ************ */
struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks * /*libmv_tracks*/,
libmv_Reconstruction *libmv_solve(const libmv_Tracks * /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/,
libmv_ReconstructionOptions * /*libmv_reconstruction_options*/,
reconstruct_progress_update_cb /*progress_update_callback*/,
@@ -104,15 +106,6 @@ struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks
return NULL;
}
struct libmv_Reconstruction *libmv_solveModal(const struct libmv_Tracks * /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/,
const libmv_ReconstructionOptions * /*libmv_reconstruction_options*/,
reconstruct_progress_update_cb /*progress_update_callback*/,
void * /*callback_customdata*/)
{
return NULL;
}
int libmv_reprojectionPointForTrack(const struct libmv_Reconstruction * /*libmv_reconstruction*/,
int /*track*/, double /*pos*/[3])
{
@@ -124,13 +117,14 @@ double libmv_reprojectionErrorForTrack(const struct libmv_Reconstruction * /*lib
return 0.0;
}
double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction * /*libmv_reconstruction*/, int /*image*/)
double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction * /*libmv_reconstruction*/,
int /*image*/)
{
return 0.0;
}
int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction * /*libmv_reconstruction*/, int /*image*/,
double /*mat*/[4][4])
int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction * /*libmv_reconstruction*/,
int /*image*/, double /*mat*/[4][4])
{
return 0;
}
@@ -180,7 +174,8 @@ void libmv_featuresDestroy(struct libmv_Features * /*libmv_features*/)
/* ************ camera intrinsics ************ */
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(
struct libmv_Reconstruction * /*libmv_reconstruction*/)
struct libmv_Reconstruction * /*libmv_reconstruction*/,
int /*camera*/)
{
return NULL;
}

View File

@@ -21,8 +21,10 @@
#include "libmv/simple_pipeline/bundle.h"
#include <map>
#include <cmath>
#include "ceres/ceres.h"
#include "ceres/iteration_callback.h"
#include "ceres/rotation.h"
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
@@ -54,14 +56,42 @@ enum {
namespace {
// Convert a constrained focal length into an unconstrained bundle parameter for
// passing to Ceres.
template <typename T>
T ConstrainFocalLength(const T focal_length,
const double min_focal,
const double max_focal) {
using std::asin;
return asin(-T(1) + (((focal_length - T(min_focal))
/ (T(max_focal) - T(min_focal)))
* T(2.0)));
}
// Convert an unconstrained bundle parameter received from Ceres back to the
// corresponding constrained focal length.
template <typename T>
T UnconstrainFocalLength(const T bundled_focal,
const double min_focal,
const double max_focal) {
using std::sin;
return T(min_focal) + (T(max_focal) - T(min_focal))
* (T(1) + sin(bundled_focal))
/ T(2.0);
}
// Cost functor which computes reprojection error of 3D point X
// on a camera defined by angle-axis rotation and its translation
// (which are in the same block due to optimization reasons).
//
// This functor uses a radial distortion model.
struct OpenCVReprojectionError {
OpenCVReprojectionError(const double observed_x, const double observed_y)
: observed_x(observed_x), observed_y(observed_y) {}
OpenCVReprojectionError(const double observed_x, const double observed_y,
const double focal_length_min = 0.0,
const double focal_length_max = 0.0)
: observed_x(observed_x), observed_y(observed_y),
focal_length_min(focal_length_min),
focal_length_max(focal_length_max) {}
template <typename T>
bool operator()(const T* const intrinsics,
@@ -98,11 +128,17 @@ struct OpenCVReprojectionError {
T predicted_x, predicted_y;
T unconstrained_focal = focal_length;
if (focal_length_min != 0.0 || focal_length_max != 0.0) {
unconstrained_focal = UnconstrainFocalLength(
focal_length, focal_length_min, focal_length_max);
}
// Apply distortion to the normalized points to get (xd, yd).
// TODO(keir): Do early bailouts for zero distortion; these are expensive
// jet operations.
ApplyRadialDistortionCameraIntrinsics(focal_length,
focal_length,
ApplyRadialDistortionCameraIntrinsics(unconstrained_focal,
unconstrained_focal,
principal_point_x,
principal_point_y,
k1, k2, k3,
@@ -119,6 +155,8 @@ struct OpenCVReprojectionError {
const double observed_x;
const double observed_y;
const double focal_length_min;
const double focal_length_max;
};
// Print a message to the log about which camera intrinsics are going to be optimized.
@@ -148,34 +186,66 @@ void BundleIntrinsicsLogMessage(const int bundle_intrinsics) {
}
// Pack intrinsics from object to an array for easier
// and faster minimization.
void PackIntrinisicsIntoArray(const CameraIntrinsics &intrinsics,
double ceres_intrinsics[8]) {
ceres_intrinsics[OFFSET_FOCAL_LENGTH] = intrinsics.focal_length();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X] = intrinsics.principal_point_x();
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y] = intrinsics.principal_point_y();
ceres_intrinsics[OFFSET_K1] = intrinsics.k1();
ceres_intrinsics[OFFSET_K2] = intrinsics.k2();
ceres_intrinsics[OFFSET_K3] = intrinsics.k3();
ceres_intrinsics[OFFSET_P1] = intrinsics.p1();
ceres_intrinsics[OFFSET_P2] = intrinsics.p2();
// and faster minimization, constraining the focal length
// according to the given BundleOptions.
vector<Vec8> PackIntrinsics(const std::vector<CameraIntrinsics> &intrinsics,
const BundleOptions &bundle_options) {
int num_intrinsics = intrinsics.size();
vector<Vec8> ceres_intrinsics(num_intrinsics);
for (int i = 0; i < num_intrinsics; ++i) {
if (bundle_options.constraints & BUNDLE_CONSTRAIN_FOCAL_LENGTH) {
ceres_intrinsics[i][OFFSET_FOCAL_LENGTH] = ConstrainFocalLength(
intrinsics[i].focal_length(),
bundle_options.focal_length_min,
bundle_options.focal_length_max);
} else {
ceres_intrinsics[i][OFFSET_FOCAL_LENGTH] = intrinsics[i].focal_length();
}
ceres_intrinsics[i][OFFSET_PRINCIPAL_POINT_X] = intrinsics[i].principal_point_x();
ceres_intrinsics[i][OFFSET_PRINCIPAL_POINT_Y] = intrinsics[i].principal_point_y();
ceres_intrinsics[i][OFFSET_K1] = intrinsics[i].k1();
ceres_intrinsics[i][OFFSET_K2] = intrinsics[i].k2();
ceres_intrinsics[i][OFFSET_K3] = intrinsics[i].k3();
ceres_intrinsics[i][OFFSET_P1] = intrinsics[i].p1();
ceres_intrinsics[i][OFFSET_P2] = intrinsics[i].p2();
}
return ceres_intrinsics;
}
// Unpack intrinsics back from an array to an object.
void UnpackIntrinsicsFromArray(const double ceres_intrinsics[8],
CameraIntrinsics *intrinsics) {
intrinsics->SetFocalLength(ceres_intrinsics[OFFSET_FOCAL_LENGTH],
ceres_intrinsics[OFFSET_FOCAL_LENGTH]);
// Unpack intrinsics back from an array to an object, converting
// unconstrained bundle parameter to constrained focal length.
std::vector<CameraIntrinsics> UnpackIntrinsics(
const vector<Vec8> &ceres_intrinsics,
const BundleOptions &bundle_options) {
int num_intrinsics = ceres_intrinsics.size();
std::vector<CameraIntrinsics> intrinsics(num_intrinsics);
intrinsics->SetPrincipalPoint(ceres_intrinsics[OFFSET_PRINCIPAL_POINT_X],
ceres_intrinsics[OFFSET_PRINCIPAL_POINT_Y]);
for (int i = 0; i < num_intrinsics; ++i) {
if (bundle_options.constraints & BUNDLE_CONSTRAIN_FOCAL_LENGTH) {
double focal_length = UnconstrainFocalLength(
ceres_intrinsics[i][OFFSET_FOCAL_LENGTH],
bundle_options.focal_length_min,
bundle_options.focal_length_max);
intrinsics[i].SetFocalLength(focal_length, focal_length);
} else {
intrinsics[i].SetFocalLength(ceres_intrinsics[i][OFFSET_FOCAL_LENGTH],
ceres_intrinsics[i][OFFSET_FOCAL_LENGTH]);
}
intrinsics->SetRadialDistortion(ceres_intrinsics[OFFSET_K1],
ceres_intrinsics[OFFSET_K2],
ceres_intrinsics[OFFSET_K3]);
intrinsics[i].SetPrincipalPoint(ceres_intrinsics[i][OFFSET_PRINCIPAL_POINT_X],
ceres_intrinsics[i][OFFSET_PRINCIPAL_POINT_Y]);
intrinsics->SetTangentialDistortion(ceres_intrinsics[OFFSET_P1],
ceres_intrinsics[OFFSET_P2]);
intrinsics[i].SetRadialDistortion(ceres_intrinsics[i][OFFSET_K1],
ceres_intrinsics[i][OFFSET_K2],
ceres_intrinsics[i][OFFSET_K3]);
intrinsics[i].SetTangentialDistortion(ceres_intrinsics[i][OFFSET_P1],
ceres_intrinsics[i][OFFSET_P2]);
}
return intrinsics;
}
// Get a vector of camera's rotations denoted by angle axis
@@ -183,45 +253,45 @@ void UnpackIntrinsicsFromArray(const double ceres_intrinsics[8],
//
// Element with index i matches to a rotation+translation for
// camera at image i.
vector<Vec6> PackCamerasRotationAndTranslation(
vector<Vec6> PackViewsRotationAndTranslation(
const Tracks &tracks,
const EuclideanReconstruction &reconstruction) {
vector<Vec6> all_cameras_R_t;
vector<Vec6> all_views_R_t;
int max_image = tracks.MaxImage();
all_cameras_R_t.resize(max_image + 1);
all_views_R_t.resize(max_image + 1);
for (int i = 0; i <= max_image; i++) {
const EuclideanCamera *camera = reconstruction.CameraForImage(i);
const EuclideanView *view = reconstruction.ViewForImage(i);
if (!camera) {
if (!view) {
continue;
}
ceres::RotationMatrixToAngleAxis(&camera->R(0, 0),
&all_cameras_R_t[i](0));
all_cameras_R_t[i].tail<3>() = camera->t;
ceres::RotationMatrixToAngleAxis(&view->R(0, 0),
&all_views_R_t[i](0));
all_views_R_t[i].tail<3>() = view->t;
}
return all_cameras_R_t;
return all_views_R_t;
}
// Convert camera rotations from angle axis back to rotation matrices.
void UnpackCamerasRotationAndTranslation(
void UnpackViewsRotationAndTranslation(
const Tracks &tracks,
const vector<Vec6> &all_cameras_R_t,
const vector<Vec6> &all_views_R_t,
EuclideanReconstruction *reconstruction) {
int max_image = tracks.MaxImage();
for (int i = 0; i <= max_image; i++) {
EuclideanCamera *camera = reconstruction->CameraForImage(i);
EuclideanView *view = reconstruction->ViewForImage(i);
if (!camera) {
if (!view) {
continue;
}
ceres::AngleAxisToRotationMatrix(&all_cameras_R_t[i](0),
&camera->R(0, 0));
camera->t = all_cameras_R_t[i].tail<3>();
ceres::AngleAxisToRotationMatrix(&all_views_R_t[i](0),
&view->R(0, 0));
view->t = all_views_R_t[i].tail<3>();
}
}
@@ -280,8 +350,8 @@ void EuclideanBundlerPerformEvaluation(const Tracks &tracks,
int max_image = tracks.MaxImage();
bool is_first_camera = true;
for (int i = 0; i <= max_image; i++) {
const EuclideanCamera *camera = reconstruction->CameraForImage(i);
if (camera) {
const EuclideanView *view = reconstruction->ViewForImage(i);
if (view) {
double *current_camera_R_t = &(*all_cameras_R_t)[i](0);
// All cameras are variable now.
@@ -312,42 +382,68 @@ void EuclideanBundlerPerformEvaluation(const Tracks &tracks,
} // namespace
void EuclideanBundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
CameraIntrinsics intrinsics;
EuclideanBundleCommonIntrinsics(tracks,
BUNDLE_NO_INTRINSICS,
BUNDLE_NO_CONSTRAINTS,
reconstruction,
&intrinsics,
NULL);
BundleOptions::BundleOptions() :
bundle_intrinsics(BUNDLE_NO_INTRINSICS),
constraints(BUNDLE_NO_CONSTRAINTS),
focal_length_min(0.0),
focal_length_max(0.0),
iteration_callback(NULL) {
}
void EuclideanBundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
BundleOptions bundle_options;
std::vector<CameraIntrinsics> empty_intrinsics;
EuclideanBundleCommonIntrinsics(tracks,
bundle_options,
reconstruction,
empty_intrinsics);
}
void EuclideanBundleModal(const Tracks &tracks,
EuclideanReconstruction *reconstruction) {
BundleOptions bundle_options;
bundle_options.constraints = libmv::BUNDLE_NO_TRANSLATION;
std::vector<CameraIntrinsics> empty_intrinsics;
EuclideanBundleCommonIntrinsics(tracks,
bundle_options,
reconstruction,
empty_intrinsics);
}
void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
const int bundle_intrinsics,
const int bundle_constraints,
const BundleOptions &bundle_options,
EuclideanReconstruction *reconstruction,
CameraIntrinsics *intrinsics,
std::vector<CameraIntrinsics> &intrinsics,
BundleEvaluation *evaluation) {
LG << "Original intrinsics: " << *intrinsics;
const int bundle_intrinsics = bundle_options.bundle_intrinsics;
const int bundle_constraints = bundle_options.constraints;
//LG << "Original intrinsics: " << *intrinsics;
vector<Marker> markers = tracks.AllMarkers();
int num_cameras = tracks.MaxCamera() + 1;
if (intrinsics.size() < num_cameras) {
intrinsics.resize(num_cameras);
}
ceres::Problem::Options problem_options;
ceres::Problem problem(problem_options);
// Residual blocks with 10 parameters are unwieldy with Ceres, so pack the
// intrinsics into a single block and rely on local parameterizations to
// control which intrinsics are allowed to vary.
double ceres_intrinsics[8];
PackIntrinisicsIntoArray(*intrinsics, ceres_intrinsics);
vector<Vec8> ceres_intrinsics = PackIntrinsics(intrinsics, bundle_options);
// Convert cameras rotations to angle axis and merge with translation
// Convert views rotations to angle axis and merge with translation
// into single parameter block for maximal minimization speed.
//
// Block for minimization has got the following structure:
// <3 elements for angle-axis> <3 elements for translation>
vector<Vec6> all_cameras_R_t =
PackCamerasRotationAndTranslation(tracks, *reconstruction);
vector<Vec6> all_views_R_t =
PackViewsRotationAndTranslation(tracks, *reconstruction);
// Parameterization used to restrict camera motion for modal solvers.
ceres::SubsetParameterization *constant_translation_parameterization = NULL;
@@ -365,37 +461,46 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
// Add residual blocks to the problem.
int num_residuals = 0;
bool have_locked_camera = false;
bool have_locked_view = false;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
EuclideanCamera *camera = reconstruction->CameraForImage(marker.image);
EuclideanView *view = reconstruction->ViewForImage(marker.image);
EuclideanPoint *point = reconstruction->PointForTrack(marker.track);
if (camera == NULL || point == NULL) {
if (view == NULL || point == NULL) {
continue;
}
int camera = view->camera;
// Rotation of the camera denoted in angle axis, followed by the
// camera translation.
double *current_camera_R_t = &all_cameras_R_t[camera->image](0);
double *current_view_R_t = &all_views_R_t[view->image](0);
OpenCVReprojectionError *cost_function;
if (bundle_options.constraints & BUNDLE_CONSTRAIN_FOCAL_LENGTH) {
cost_function = new OpenCVReprojectionError(
marker.x, marker.y,
bundle_options.focal_length_min,
bundle_options.focal_length_max);
} else {
cost_function = new OpenCVReprojectionError(marker.x, marker.y);
}
problem.AddResidualBlock(new ceres::AutoDiffCostFunction<
OpenCVReprojectionError, 2, 8, 6, 3>(
new OpenCVReprojectionError(
marker.x,
marker.y)),
OpenCVReprojectionError, 2, 8, 6, 3>(cost_function),
NULL,
ceres_intrinsics,
current_camera_R_t,
&ceres_intrinsics[camera](0),
current_view_R_t,
&point->X(0));
// We lock the first camera to better deal with scene orientation ambiguity.
if (!have_locked_camera) {
problem.SetParameterBlockConstant(current_camera_R_t);
have_locked_camera = true;
if (!have_locked_view) {
problem.SetParameterBlockConstant(current_view_R_t);
have_locked_view = true;
}
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
problem.SetParameterization(current_camera_R_t,
problem.SetParameterization(current_view_R_t,
constant_translation_parameterization);
}
@@ -413,7 +518,7 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
if (bundle_intrinsics == BUNDLE_NO_INTRINSICS) {
// No camera intrinsics are being refined,
// set the whole parameter block as constant for best performance.
problem.SetParameterBlockConstant(ceres_intrinsics);
problem.SetParameterBlockConstant(&ceres_intrinsics[0](0));
} else {
// Set the camera intrinsics that are not to be bundled as
// constant using some macro trickery.
@@ -438,7 +543,11 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
ceres::SubsetParameterization *subset_parameterization =
new ceres::SubsetParameterization(8, constant_intrinsics);
problem.SetParameterization(ceres_intrinsics, subset_parameterization);
problem.SetParameterization(&ceres_intrinsics[0](0), subset_parameterization);
}
for (int i = 1; i < ceres_intrinsics.size(); ++i) {
problem.SetParameterBlockConstant(&ceres_intrinsics[i](0));
}
// Configure the solver.
@@ -448,6 +557,9 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.use_inner_iterations = true;
options.max_num_iterations = 100;
if (bundle_options.iteration_callback != NULL) {
options.callbacks.push_back(bundle_options.iteration_callback);
}
#ifdef _OPENMP
options.num_threads = omp_get_max_threads();
@@ -461,18 +573,20 @@ void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
LG << "Final report:\n" << summary.FullReport();
// Copy rotations and translations back.
UnpackCamerasRotationAndTranslation(tracks,
all_cameras_R_t,
reconstruction);
UnpackViewsRotationAndTranslation(tracks,
all_views_R_t,
reconstruction);
// Copy intrinsics back.
if (bundle_intrinsics != BUNDLE_NO_INTRINSICS)
UnpackIntrinsicsFromArray(ceres_intrinsics, intrinsics);
if (bundle_intrinsics != BUNDLE_NO_INTRINSICS) {
intrinsics = UnpackIntrinsics(ceres_intrinsics,
bundle_options);
}
LG << "Final intrinsics: " << *intrinsics;
//LG << "Final intrinsics: " << *intrinsics;
if (evaluation) {
EuclideanBundlerPerformEvaluation(tracks, reconstruction, &all_cameras_R_t,
EuclideanBundlerPerformEvaluation(tracks, reconstruction, &all_views_R_t,
&problem, evaluation);
}
}

View File

@@ -21,6 +21,10 @@
#ifndef LIBMV_SIMPLE_PIPELINE_BUNDLE_H
#define LIBMV_SIMPLE_PIPELINE_BUNDLE_H
#include <vector>
#include "ceres/types.h"
#include "ceres/iteration_callback.h"
#include "libmv/numeric/numeric.h"
namespace libmv {
@@ -75,6 +79,27 @@ struct BundleEvaluation {
void EuclideanBundle(const Tracks &tracks,
EuclideanReconstruction *reconstruction);
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
This routine adjusts all cameras and points in \a *reconstruction. This
assumes a full observation for reconstructed tracks; this implies that if
there is a reconstructed 3D point (a bundle) for a track, then all markers
for that track will be included in the minimization. \a tracks should
contain markers used in the initial reconstruction.
The cameras and bundles (3D points) are refined in-place, except for the
camera positions, which are held constant.
\note This assumes an outlier-free set of markers.
\note This assumes a calibrated reconstruction, e.g. the markers are
already corrected for camera intrinsics and radial distortion.
\sa EuclideanResect, EuclideanIntersect, EuclideanReconstructTwoFrames
*/
void EuclideanBundleModal(const Tracks &tracks,
EuclideanReconstruction *reconstruction);
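A minimal call sketch, assuming a Tracks object and a reconstruction populated beforehand; identifiers are illustrative:

EuclideanReconstruction reconstruction;
// ... populate views and points, e.g. via ModalSolver ...
EuclideanBundleModal(tracks, &reconstruction);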
/*!
Refine camera poses and 3D coordinates using bundle adjustment.
@@ -91,7 +116,7 @@ void EuclideanBundle(const Tracks &tracks,
For example it is useful to keep camera translations constant
when bundling tripod motions.
If evaluaiton is not null, different evaluation statistics is filled in
If evaluation is not null, the evaluation statistics are filled in
there, and all the requested additional information (like the Jacobian)
is also computed there. Also see the comments for BundleEvaluation.
@@ -110,15 +135,36 @@ enum BundleIntrinsics {
BUNDLE_TANGENTIAL_P2 = 32,
BUNDLE_TANGENTIAL = 48,
};
enum BundleConstraints {
BUNDLE_NO_CONSTRAINTS = 0,
BUNDLE_NO_TRANSLATION = 1,
BUNDLE_CONSTRAIN_FOCAL_LENGTH = 2
};
struct BundleOptions {
BundleOptions();
// Bitfield denoting the camera intrinsics to adjust during
// bundling. Use BundleIntrinsics flags.
short bundle_intrinsics;
// Bitfield denoting the constraints to place on bundle parameters.
// Use BundleConstraints flags.
int constraints;
// Minimum and maximum constraints for the focal length. To take effect,
// the BUNDLE_CONSTRAIN_FOCAL_LENGTH flag must be set in constraints.
double focal_length_min, focal_length_max;
// Callback which is called after each iteration of the solver.
ceres::IterationCallback *iteration_callback;
};
void EuclideanBundleCommonIntrinsics(const Tracks &tracks,
const int bundle_intrinsics,
const int bundle_constraints,
const BundleOptions &bundle_options,
EuclideanReconstruction *reconstruction,
CameraIntrinsics *intrinsics,
std::vector<CameraIntrinsics> &intrinsics,
BundleEvaluation *evaluation = NULL);
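A hedged sketch of invoking the common-intrinsics bundler through the new options struct with a per-camera intrinsics vector; the flag choices and focal range are illustrative, and tracks/reconstruction are assumed to exist:

BundleOptions bundle_options;
bundle_options.bundle_intrinsics = BUNDLE_FOCAL_LENGTH;
bundle_options.constraints = BUNDLE_CONSTRAIN_FOCAL_LENGTH;
bundle_options.focal_length_min = 1000.0;  // illustrative, in pixels
bundle_options.focal_length_max = 1500.0;
std::vector<CameraIntrinsics> intrinsics(tracks.MaxCamera() + 1);
EuclideanBundleCommonIntrinsics(tracks,
                                bundle_options,
                                &reconstruction,
                                intrinsics);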
/*!

View File

@@ -32,15 +32,18 @@
namespace libmv {
namespace {
void GetImagesInMarkers(const vector<Marker> &markers,
int *image1, int *image2) {
void GetImagesAndCamerasInMarkers(const vector<Marker> &markers,
int *image1, int *image2,
int *camera1, int *camera2) {
if (markers.size() < 2) {
return;
}
*image1 = markers[0].image;
*camera1 = markers[0].camera;
for (int i = 1; i < markers.size(); ++i) {
if (markers[i].image != *image1) {
*image2 = markers[i].image;
*camera2 = markers[i].camera;
return;
}
}
@@ -58,7 +61,8 @@ bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
}
int image1, image2;
GetImagesInMarkers(markers, &image1, &image2);
int camera1, camera2;
GetImagesAndCamerasInMarkers(markers, &image1, &image2, &camera1, &camera2);
Mat x1, x2;
CoordinatesForMarkersInImage(markers, image1, &x1);
@@ -85,8 +89,8 @@ bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
}
// Image 1 gets the reference frame, image 2 gets the relative motion.
reconstruction->InsertCamera(image1, Mat3::Identity(), Vec3::Zero());
reconstruction->InsertCamera(image2, R, t);
reconstruction->InsertView(image1, Mat3::Identity(), Vec3::Zero(), camera1);
reconstruction->InsertView(image2, R, t, camera2);
LG << "From two frame reconstruction got:\nR:\n" << R
<< "\nt:" << t.transpose();
@@ -146,7 +150,8 @@ bool ProjectiveReconstructTwoFrames(const vector<Marker> &markers,
}
int image1, image2;
GetImagesInMarkers(markers, &image1, &image2);
int camera1, camera2;
GetImagesAndCamerasInMarkers(markers, &image1, &image2, &camera1, &camera2);
Mat x1, x2;
CoordinatesForMarkersInImage(markers, image1, &x1);
@@ -186,8 +191,8 @@ bool ProjectiveReconstructTwoFrames(const vector<Marker> &markers,
Mat34 P2;
ProjectionsFromFundamental(F, &P1, &P2);
reconstruction->InsertCamera(image1, P1);
reconstruction->InsertCamera(image2, P2);
reconstruction->InsertView(image1, P1, camera1);
reconstruction->InsertView(image2, P2, camera2);
LG << "From two frame reconstruction got P2:\n" << P2;
return true;

View File

@@ -34,8 +34,8 @@ class ProjectiveReconstruction;
two frames.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in both frames. The pose estimation of the camera for
these frames will be inserted into \a *reconstruction.
tracks visible in both frames from the same camera. The pose estimation of
the camera for these frames will be inserted into \a *reconstruction.
\note The two frames need to have both enough parallax and enough common tracks
for accurate reconstruction. At least 8 tracks are suggested.
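A hedged sketch of a two-frame initialization restricted to a single camera, with keyframe1/keyframe2 assumed to come from keyframe selection:

Tracks camera_tracks(tracks.MarkersForCamera(0));
vector<Marker> markers =
    camera_tracks.MarkersForTracksInBothImages(keyframe1, keyframe2);
EuclideanReconstruction reconstruction;
if (!EuclideanReconstructTwoFrames(markers, &reconstruction)) {
  LOG(ERROR) << "Two-frame initialization failed.";
}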
@@ -55,8 +55,8 @@ bool EuclideanReconstructTwoFrames(const vector<Marker> &markers,
two frames.
\a markers should contain all \l Marker markers \endlink belonging to
tracks visible in both frames. An estimate of the projection matrices for
the two frames will get added to the reconstruction.
tracks visible in both frames from the same camera. An estimate of the
projection matrices for the two frames will get added to the reconstruction.
\note The two frames need to have both enough parallax and enough common tracks
for accurate reconstruction. At least 8 tracks are suggested.

View File

@@ -39,8 +39,8 @@ namespace {
class EuclideanIntersectCostFunctor {
public:
EuclideanIntersectCostFunctor(const Marker &marker,
const EuclideanCamera &camera)
: marker_(marker), camera_(camera) {}
const EuclideanView &view)
: marker_(marker), view_(view) {}
template<typename T>
bool operator()(const T *X, T *residuals) const {
@@ -48,8 +48,8 @@ class EuclideanIntersectCostFunctor {
typedef Eigen::Matrix<T, 3, 1> Vec3;
Vec3 x(X);
Mat3 R(camera_.R.cast<T>());
Vec3 t(camera_.t.cast<T>());
Mat3 R(view_.R.cast<T>());
Vec3 t(view_.t.cast<T>());
Vec3 projected = R * x + t;
projected /= projected(2);
@@ -61,7 +61,7 @@ class EuclideanIntersectCostFunctor {
}
const Marker &marker_;
const EuclideanCamera &camera_;
const EuclideanView &view_;
};
} // namespace
@@ -78,8 +78,8 @@ bool EuclideanIntersect(const vector<Marker> &markers,
vector<Mat34> cameras;
Mat34 P;
for (int i = 0; i < markers.size(); ++i) {
EuclideanCamera *camera = reconstruction->CameraForImage(markers[i].image);
P_From_KRt(K, camera->R, camera->t, &P);
EuclideanView *view = reconstruction->ViewForImage(markers[i].image);
P_From_KRt(K, view->R, view->t, &P);
cameras.push_back(P);
}
@@ -102,14 +102,14 @@ bool EuclideanIntersect(const vector<Marker> &markers,
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
const EuclideanCamera &camera =
*reconstruction->CameraForImage(marker.image);
const EuclideanView &view =
*reconstruction->ViewForImage(marker.image);
problem.AddResidualBlock(
new ceres::AutoDiffCostFunction<
EuclideanIntersectCostFunctor,
2, /* num_residuals */
3>(new EuclideanIntersectCostFunctor(marker, camera)),
3>(new EuclideanIntersectCostFunctor(marker, view)),
NULL,
&X(0));
}
@@ -130,9 +130,9 @@ bool EuclideanIntersect(const vector<Marker> &markers,
// Try projecting the point; make sure it's in front of everyone.
for (int i = 0; i < cameras.size(); ++i) {
const EuclideanCamera &camera =
*reconstruction->CameraForImage(markers[i].image);
Vec3 x = camera.R * X + camera.t;
const EuclideanView &view =
*reconstruction->ViewForImage(markers[i].image);
Vec3 x = view.R * X + view.t;
if (x(2) < 0) {
LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image
<< ": " << x.transpose();
@@ -163,9 +163,9 @@ struct ProjectiveIntersectCostFunction {
Vec residuals(2 * markers.size());
residuals.setZero();
for (int i = 0; i < markers.size(); ++i) {
const ProjectiveCamera &camera =
*reconstruction.CameraForImage(markers[i].image);
Vec3 projected = camera.P * X;
const ProjectiveView &view =
*reconstruction.ViewForImage(markers[i].image);
Vec3 projected = view.P * X;
projected /= projected(2);
residuals[2*i + 0] = projected(0) - markers[i].x;
residuals[2*i + 1] = projected(1) - markers[i].y;
@@ -187,8 +187,8 @@ bool ProjectiveIntersect(const vector<Marker> &markers,
// Get the cameras to use for the intersection.
vector<Mat34> cameras;
for (int i = 0; i < markers.size(); ++i) {
ProjectiveCamera *camera = reconstruction->CameraForImage(markers[i].image);
cameras.push_back(camera->P);
ProjectiveView *view = reconstruction->ViewForImage(markers[i].image);
cameras.push_back(view->P);
}
// Stack the 2D coordinates together as required by NViewTriangulate.
@@ -214,9 +214,9 @@ bool ProjectiveIntersect(const vector<Marker> &markers,
// Try projecting the point; make sure it's in front of everyone.
for (int i = 0; i < cameras.size(); ++i) {
const ProjectiveCamera &camera =
*reconstruction->CameraForImage(markers[i].image);
Vec3 x = camera.P * X;
const ProjectiveView &view =
*reconstruction->ViewForImage(markers[i].image);
Vec3 x = view.P * X;
if (x(2) < 0) {
LOG(ERROR) << "POINT BEHIND CAMERA " << markers[i].image
<< ": " << x.transpose();

View File

@@ -20,6 +20,8 @@
#include "libmv/simple_pipeline/keyframe_selection.h"
#include <vector>
#include "libmv/numeric/numeric.h"
#include "ceres/ceres.h"
#include "libmv/logging/logging.h"
@@ -97,7 +99,7 @@ class HomographySymmetricGeometricCostFunctor {
};
void ComputeHomographyFromCorrespondences(const Mat &x1, const Mat &x2,
CameraIntrinsics &intrinsics,
const CameraIntrinsics &intrinsics,
Mat3 *H) {
// Algebraic homography estimation happens with normalized coordinates
Homography2DFromCorrespondencesLinear(x1, x2, H, 1e-12);
@@ -172,7 +174,7 @@ class FundamentalSymmetricEpipolarCostFunctor {
};
void ComputeFundamentalFromCorrespondences(const Mat &x1, const Mat &x2,
CameraIntrinsics &intrinsics,
const CameraIntrinsics &intrinsics,
Mat3 *F) {
// Algebraic fundamental estimation happens with normalized coordinates
NormalizedEightPointSolver(x1, x2, F);
@@ -288,15 +290,20 @@ Mat pseudoInverse(const Mat &matrix) {
}
} // namespace
void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
CameraIntrinsics &intrinsics,
vector<int> &keyframes) {
void SelectKeyframesBasedOnGRICAndVariance(
const Tracks &tracks,
const std::vector<CameraIntrinsics> &intrinsics,
int camera,
vector<int> &keyframes) {
// Mirza Tahir Ahmed, Matthew N. Dailey
// Robust key frame extraction for 3D reconstruction from video streams
//
// http://www.cs.ait.ac.th/~mdailey/papers/Tahir-KeyFrame.pdf
int max_image = tracks.MaxImage();
Tracks camera_tracks(tracks.MarkersForCamera(camera));
const CameraIntrinsics &camera_intrinsics = intrinsics[camera];
int max_image = camera_tracks.MaxImage();
int next_keyframe = 1;
int number_keyframes = 0;
@@ -308,7 +315,7 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
const double Tmin = 0.8;
const double Tmax = 1.0;
Mat3 N = IntrinsicsNormalizationMatrix(intrinsics);
Mat3 N = IntrinsicsNormalizationMatrix(camera_intrinsics);
Mat3 N_inverse = N.inverse();
double Sc_best = std::numeric_limits<double>::max();
@@ -328,11 +335,11 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
candidate_image++) {
// Conjunction of all markers from both keyframes
vector<Marker> all_markers =
tracks.MarkersInBothImages(current_keyframe, candidate_image);
camera_tracks.MarkersInBothImages(current_keyframe, candidate_image);
// Match keypoints between frames current_keyframe and candidate_image
vector<Marker> tracked_markers =
tracks.MarkersForTracksInBothImages(current_keyframe, candidate_image);
camera_tracks.MarkersForTracksInBothImages(current_keyframe, candidate_image);
// Correspondences in normalized space
Mat x1, x2;
@@ -360,8 +367,8 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
continue;
Mat3 H, F;
ComputeHomographyFromCorrespondences(x1, x2, intrinsics, &H);
ComputeFundamentalFromCorrespondences(x1, x2, intrinsics, &F);
ComputeHomographyFromCorrespondences(x1, x2, camera_intrinsics, &H);
ComputeFundamentalFromCorrespondences(x1, x2, camera_intrinsics, &F);
// TODO(sergey): STEP 2: Discard outlier matches
@@ -373,9 +380,9 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
F_e.resize(x1.cols());
for (int i = 0; i < x1.cols(); i++) {
Vec2 current_x1 =
NorrmalizedToPixelSpace(Vec2(x1(0, i), x1(1, i)), intrinsics);
NorrmalizedToPixelSpace(Vec2(x1(0, i), x1(1, i)), camera_intrinsics);
Vec2 current_x2 =
NorrmalizedToPixelSpace(Vec2(x2(0, i), x2(1, i)), intrinsics);
NorrmalizedToPixelSpace(Vec2(x2(0, i), x2(1, i)), camera_intrinsics);
H_e(i) = SymmetricGeometricDistance(H, current_x1, current_x2);
F_e(i) = SymmetricEpipolarDistance(F, current_x1, current_x2);
@@ -442,10 +449,11 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
<< "\nt:" << t.transpose();
// First camera is identity, second one is relative to it
reconstruction.InsertCamera(current_keyframe,
Mat3::Identity(),
Vec3::Zero());
reconstruction.InsertCamera(candidate_image, R, t);
reconstruction.InsertView(current_keyframe,
Mat3::Identity(),
Vec3::Zero(),
camera);
reconstruction.InsertView(candidate_image, R, t, camera);
// Reconstruct 3D points
int intersects_total = 0, intersects_success = 0;
@@ -494,15 +502,15 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
success_intersects_factor_best = success_intersects_factor;
Tracks two_frames_tracks(tracked_markers);
CameraIntrinsics empty_intrinsics;
std::vector<CameraIntrinsics> empty_intrinsics;
BundleEvaluation evaluation;
evaluation.evaluate_jacobian = true;
BundleOptions bundle_options;
EuclideanBundleCommonIntrinsics(two_frames_tracks,
BUNDLE_NO_INTRINSICS,
BUNDLE_NO_CONSTRAINTS,
bundle_options,
&reconstruction,
&empty_intrinsics,
empty_intrinsics,
&evaluation);
Mat &jacobian = evaluation.jacobian;
@@ -579,4 +587,11 @@ void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
}
}
void SelectKeyframesBasedOnGRICAndVariance(
const Tracks &tracks,
const std::vector<CameraIntrinsics> &intrinsics,
vector<int> &keyframes) {
SelectKeyframesBasedOnGRICAndVariance(tracks, intrinsics, 0, keyframes);
}
} // namespace libmv

View File

@@ -21,31 +21,42 @@
#ifndef LIBMV_SIMPLE_PIPELINE_KEYFRAME_SELECTION_H_
#define LIBMV_SIMPLE_PIPELINE_KEYFRAME_SELECTION_H_
#include <vector>
#include "libmv/base/vector.h"
#include "libmv/simple_pipeline/tracks.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
namespace libmv {
// Get list of all images which are good enough to be as keyframes from
// camera reconstruction. Based on GRIC criteria and uses Pollefeys'
// approach for correspondence ratio constraint.
// Get a list of all images from the given camera which are good enough to
// be used as keyframes for camera reconstruction. Based on the GRIC
// criterion; uses Pollefeys' approach for the correspondence ratio
// constraint.
//
// As an additional, additional criteria based on reconstruction
// variance is used. This means if correspondence and GRIC criteria
// are passed, two-frames reconstruction using candidate keyframes
// happens. After reconstruction variance of 3D points is calculating
// and if expected error estimation is too large, keyframe candidate
// is rejecting.
// Additional criteria based on reconstruction variance are used. This means if
// correspondence and GRIC criteria are passed, two-frame reconstruction using
// candidate keyframes happens. After reconstruction, the variance of 3D points
// is calculated and, if the expected error estimation is too large, the keyframe
// candidate is rejected.
//
// \param tracks contains all tracked correspondences between frames,
// expected to be undistorted and normalized
// \param intrinsics is the vector of per-camera intrinsics
// \param keyframes will contain the numbers of all images which are
// considered good to be used for reconstruction
void SelectKeyframesBasedOnGRICAndVariance(const Tracks &tracks,
CameraIntrinsics &intrinsics,
vector<int> &keyframes);
//
// TODO(sftrabbit): Use algorithm that finds images of high variance from
// multiple cameras.
void SelectKeyframesBasedOnGRICAndVariance(
const Tracks &tracks,
const std::vector<CameraIntrinsics> &intrinsics,
int camera,
vector<int> &keyframes);
void SelectKeyframesBasedOnGRICAndVariance(
const Tracks &tracks,
const std::vector<CameraIntrinsics> &intrinsics,
vector<int> &keyframes);
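A hedged per-camera driver loop, assuming tracks and the per-camera intrinsics vector are already populated:

std::vector<CameraIntrinsics> intrinsics(tracks.MaxCamera() + 1);
for (int camera = 0; camera <= tracks.MaxCamera(); ++camera) {
  vector<int> keyframes;
  SelectKeyframesBasedOnGRICAndVariance(tracks, intrinsics, camera, keyframes);
  // ... hand the selected keyframes to reconstruction initialization ...
}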
} // namespace libmv

View File

@@ -99,8 +99,11 @@ struct ModalReprojectionError {
void ModalSolver(const Tracks &tracks,
EuclideanReconstruction *reconstruction,
ProgressUpdateCallback *update_callback) {
int max_image = tracks.MaxImage();
int max_track = tracks.MaxTrack();
static const int kModalCamera = 0;
Tracks camera_tracks(tracks.MarkersForCamera(kModalCamera));
int max_image = camera_tracks.MaxImage();
int max_track = camera_tracks.MaxTrack();
LG << "Max image: " << max_image;
LG << "Max track: " << max_track;
@@ -111,12 +114,12 @@ void ModalSolver(const Tracks &tracks,
ceres::AngleAxisToQuaternion(&zero_rotation(0), &quaternion(0));
for (int image = 0; image <= max_image; ++image) {
vector<Marker> all_markers = tracks.MarkersInImage(image);
vector<Marker> image_markers = camera_tracks.MarkersInImage(image);
ModalSolverLogProress(update_callback, (float) image / max_image);
// Skip empty images without doing anything.
if (all_markers.size() == 0) {
if (image_markers.size() == 0) {
LG << "Skipping image: " << image;
continue;
}
@@ -124,13 +127,12 @@ void ModalSolver(const Tracks &tracks,
// STEP 1: Estimate rotation analytically.
Mat3 current_R;
ceres::QuaternionToRotation(&quaternion(0), &current_R(0, 0));
// Construct point clouds for the current and previous images,
// using the markers that appear in the current image and for
// which the 3D positions are known.
Mat x1, x2;
for (int i = 0; i < all_markers.size(); ++i) {
Marker &marker = all_markers[i];
for (int i = 0; i < image_markers.size(); ++i) {
Marker &marker = image_markers[i];
EuclideanPoint *point = reconstruction->PointForTrack(marker.track);
if (point) {
Vec3 X;
@@ -176,8 +178,8 @@ void ModalSolver(const Tracks &tracks,
new ceres::QuaternionParameterization;
int num_residuals = 0;
for (int i = 0; i < all_markers.size(); ++i) {
Marker &marker = all_markers[i];
for (int i = 0; i < image_markers.size(); ++i) {
Marker &marker = image_markers[i];
EuclideanPoint *point = reconstruction->PointForTrack(marker.track);
if (point) {
@@ -218,7 +220,7 @@ void ModalSolver(const Tracks &tracks,
// Convert quaternion to rotation matrix.
Mat3 R;
ceres::QuaternionToRotation(&quaternion(0), &R(0, 0));
reconstruction->InsertCamera(image, R, Vec3::Zero());
reconstruction->InsertView(image, R, Vec3::Zero(), kModalCamera);
// STEP 3: reproject all new markers that appeared in the image
@@ -226,7 +228,7 @@ void ModalSolver(const Tracks &tracks,
// and reproject them onto the sphere to obtain their 3D positions.
for (int track = 0; track <= max_track; ++track) {
if (!reconstruction->PointForTrack(track)) {
Marker marker = tracks.MarkerInImageForTrack(image, track);
Marker marker = camera_tracks.MarkerInImageForTrack(image, track);
if (marker.image == image) {
// New track appeared on this image,

View File

@@ -29,15 +29,16 @@ namespace libmv {
/*!
This solver solves such camera motion as tripod rotation, reconstructing
only camera motion itself. Bundles are not reconstructing properly, they're
just getting projected onto sphere.
only the camera motion itself. Bundles are not reconstructed properly;
they're just projected onto a sphere.
Markers from tracks object would be used for recosntruction, and algorithm
assumes thir's positions are undistorted already and they're in nnormalized
space.
Markers from tracks object are used for reconstruction. The algorithm
assumes their positions are undistorted already and that they're in
normalized space.
Reconstructed cameras and projected bundles would be added to reconstruction
object.
Reconstructed cameras and projected bundles are added to the given
reconstruction object. Only camera 0 is reconstructed, using only the
markers associated with camera 0.
*/
void ModalSolver(const Tracks &tracks,
EuclideanReconstruction *reconstruction,
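A minimal call sketch; passing NULL for the progress callback is assumed to disable progress reporting:

EuclideanReconstruction reconstruction;
ModalSolver(tracks, &reconstruction, NULL);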

View File

@@ -43,7 +43,7 @@ namespace {
// template bloat and making interface changes much easier.
struct EuclideanPipelineRoutines {
typedef EuclideanReconstruction Reconstruction;
typedef EuclideanCamera Camera;
typedef EuclideanView View;
typedef EuclideanPoint Point;
static void Bundle(const Tracks &tracks,
@@ -63,9 +63,9 @@ struct EuclideanPipelineRoutines {
}
static Marker ProjectMarker(const EuclideanPoint &point,
const EuclideanCamera &camera,
const EuclideanView &view,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.R * point.X + camera.t;
Vec3 projected = view.R * point.X + view.t;
projected /= projected(2);
Marker reprojected_marker;
@@ -74,7 +74,7 @@ struct EuclideanPipelineRoutines {
&reprojected_marker.x,
&reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.image = view.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
@@ -82,7 +82,7 @@ struct EuclideanPipelineRoutines {
struct ProjectivePipelineRoutines {
typedef ProjectiveReconstruction Reconstruction;
typedef ProjectiveCamera Camera;
typedef ProjectiveView View;
typedef ProjectivePoint Point;
static void Bundle(const Tracks &tracks,
@@ -105,9 +105,9 @@ struct ProjectivePipelineRoutines {
}
static Marker ProjectMarker(const ProjectivePoint &point,
const ProjectiveCamera &camera,
const ProjectiveView &view,
const CameraIntrinsics &intrinsics) {
Vec3 projected = camera.P * point.X;
Vec3 projected = view.P * point.X;
projected /= projected(2);
Marker reprojected_marker;
@@ -116,7 +116,7 @@ struct ProjectivePipelineRoutines {
&reprojected_marker.x,
&reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.image = view.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
@@ -169,7 +169,7 @@ void InternalCompleteReconstruction(
vector<Marker> reconstructed_markers;
for (int i = 0; i < all_markers.size(); ++i) {
if (reconstruction->CameraForImage(all_markers[i].image)) {
if (reconstruction->ViewForImage(all_markers[i].image)) {
reconstructed_markers.push_back(all_markers[i]);
}
}
@@ -199,7 +199,7 @@ void InternalCompleteReconstruction(
// Do all possible resections.
num_resects = 0;
for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraForImage(image)) {
if (reconstruction->ViewForImage(image)) {
LG << "Skipping frame: " << image;
continue;
}
@@ -239,7 +239,7 @@ void InternalCompleteReconstruction(
// One last pass...
num_resects = 0;
for (int image = 0; image <= max_image; ++image) {
if (reconstruction->CameraForImage(image)) {
if (reconstruction->ViewForImage(image)) {
LG << "Skipping frame: " << image;
continue;
}
@@ -275,24 +275,24 @@ template<typename PipelineRoutines>
double InternalReprojectionError(
const Tracks &image_tracks,
const typename PipelineRoutines::Reconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
const std::vector<CameraIntrinsics> &intrinsics) {
int num_skipped = 0;
int num_reprojected = 0;
double total_error = 0.0;
vector<Marker> markers = image_tracks.AllMarkers();
for (int i = 0; i < markers.size(); ++i) {
const typename PipelineRoutines::Camera *camera =
reconstruction.CameraForImage(markers[i].image);
const typename PipelineRoutines::View *view =
reconstruction.ViewForImage(markers[i].image);
const typename PipelineRoutines::Point *point =
reconstruction.PointForTrack(markers[i].track);
if (!camera || !point) {
if (!view || !point) {
num_skipped++;
continue;
}
num_reprojected++;
Marker reprojected_marker =
PipelineRoutines::ProjectMarker(*point, *camera, intrinsics);
PipelineRoutines::ProjectMarker(*point, *view, intrinsics[markers[i].camera]);
double ex = reprojected_marker.x - markers[i].x;
double ey = reprojected_marker.y - markers[i].y;
@@ -326,16 +326,15 @@ double InternalReprojectionError(
double EuclideanReprojectionError(const Tracks &image_tracks,
const EuclideanReconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
const std::vector<CameraIntrinsics> &intrinsics) {
return InternalReprojectionError<EuclideanPipelineRoutines>(image_tracks,
reconstruction,
intrinsics);
}
double ProjectiveReprojectionError(
const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const CameraIntrinsics &intrinsics) {
double ProjectiveReprojectionError(const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const std::vector<CameraIntrinsics> &intrinsics) {
return InternalReprojectionError<ProjectivePipelineRoutines>(image_tracks,
reconstruction,
intrinsics);

View File

@@ -21,6 +21,8 @@
#ifndef LIBMV_SIMPLE_PIPELINE_PIPELINE_H_
#define LIBMV_SIMPLE_PIPELINE_PIPELINE_H_
#include <vector>
#include "libmv/simple_pipeline/callbacks.h"
#include "libmv/simple_pipeline/tracks.h"
#include "libmv/simple_pipeline/reconstruction.h"
@@ -90,12 +92,11 @@ class CameraIntrinsics;
double EuclideanReprojectionError(const Tracks &image_tracks,
const EuclideanReconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
const std::vector<CameraIntrinsics> &intrinsics);
double ProjectiveReprojectionError(
const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const CameraIntrinsics &intrinsics);
double ProjectiveReprojectionError(const Tracks &image_tracks,
const ProjectiveReconstruction &reconstruction,
const std::vector<CameraIntrinsics> &intrinsics);
void InvertIntrinsicsForTracks(const Tracks &raw_tracks,
const CameraIntrinsics &camera_intrinsics,

View File

@@ -18,6 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include <vector>
#include "libmv/simple_pipeline/reconstruction.h"
#include "libmv/numeric/numeric.h"
#include "libmv/logging/logging.h"
@@ -27,29 +29,32 @@ namespace libmv {
EuclideanReconstruction::EuclideanReconstruction() {}
EuclideanReconstruction::EuclideanReconstruction(
const EuclideanReconstruction &other) {
cameras_ = other.cameras_;
views_ = other.views_;
points_ = other.points_;
}
EuclideanReconstruction &EuclideanReconstruction::operator=(
const EuclideanReconstruction &other) {
if (&other != this) {
cameras_ = other.cameras_;
views_ = other.views_;
points_ = other.points_;
}
return *this;
}
void EuclideanReconstruction::InsertCamera(int image,
const Mat3 &R,
const Vec3 &t) {
LG << "InsertCamera " << image << ":\nR:\n"<< R << "\nt:\n" << t;
if (image >= cameras_.size()) {
cameras_.resize(image + 1);
void EuclideanReconstruction::InsertView(int image,
const Mat3 &R,
const Vec3 &t,
int camera) {
LG << "InsertView camera " << camera << ", image " << image
<< ":\nR:\n"<< R << "\nt:\n" << t;
if (image >= views_.size()) {
views_.resize(image + 1);
}
cameras_[image].image = image;
cameras_[image].R = R;
cameras_[image].t = t;
views_[image].image = image;
views_[image].R = R;
views_[image].t = t;
views_[image].camera = camera;
}
void EuclideanReconstruction::InsertPoint(int track, const Vec3 &X) {
@@ -61,32 +66,43 @@ void EuclideanReconstruction::InsertPoint(int track, const Vec3 &X) {
points_[track].X = X;
}
EuclideanCamera *EuclideanReconstruction::CameraForImage(int image) {
return const_cast<EuclideanCamera *>(
EuclideanView *EuclideanReconstruction::ViewForImage(int image) {
return const_cast<EuclideanView *>(
static_cast<const EuclideanReconstruction *>(
this)->CameraForImage(image));
this)->ViewForImage(image));
}
const EuclideanCamera *EuclideanReconstruction::CameraForImage(
int image) const {
if (image < 0 || image >= cameras_.size()) {
const EuclideanView *EuclideanReconstruction::ViewForImage(int image) const {
if (image < 0 || image >= views_.size()) {
return NULL;
}
const EuclideanCamera *camera = &cameras_[image];
if (camera->image == -1) {
const EuclideanView *view = &views_[image];
if (view->camera == -1 || view->image == -1) {
return NULL;
}
return camera;
return view;
}
vector<EuclideanCamera> EuclideanReconstruction::AllCameras() const {
vector<EuclideanCamera> cameras;
for (int i = 0; i < cameras_.size(); ++i) {
if (cameras_[i].image != -1) {
cameras.push_back(cameras_[i]);
vector<EuclideanView> EuclideanReconstruction::AllViews() const {
vector<EuclideanView> views;
for (int i = 0; i < views_.size(); ++i) {
if (views_[i].camera != -1 && views_[i].image != -1) {
views.push_back(views_[i]);
}
}
return cameras;
return views;
}
vector<EuclideanView> EuclideanReconstruction::AllViewsForCamera(
int camera) const {
vector<EuclideanView> views;
for (int i = 0; i < views_.size(); ++i) {
if (views_[i].image != -1 && views_[i].camera == camera) {
views.push_back(views_[i]);
}
}
return views;
}
EuclideanPoint *EuclideanReconstruction::PointForTrack(int track) {
@@ -115,14 +131,17 @@ vector<EuclideanPoint> EuclideanReconstruction::AllPoints() const {
return points;
}
void ProjectiveReconstruction::InsertCamera(int image,
const Mat34 &P) {
LG << "InsertCamera " << image << ":\nP:\n"<< P;
if (image >= cameras_.size()) {
cameras_.resize(image + 1);
void ProjectiveReconstruction::InsertView(int image,
const Mat34 &P,
int camera) {
LG << "InsertView camera " << camera << ", image " << image
<< ":\nP:\n"<< P;
if (image >= views_.size()) {
views_.resize(image + 1);
}
cameras_[image].image = image;
cameras_[image].P = P;
views_[image].image = image;
views_[image].P = P;
views_[image].camera = camera;
}
void ProjectiveReconstruction::InsertPoint(int track, const Vec4 &X) {
@@ -134,32 +153,42 @@ void ProjectiveReconstruction::InsertPoint(int track, const Vec4 &X) {
points_[track].X = X;
}
ProjectiveCamera *ProjectiveReconstruction::CameraForImage(int image) {
return const_cast<ProjectiveCamera *>(
ProjectiveView *ProjectiveReconstruction::ViewForImage(int image) {
return const_cast<ProjectiveView *>(
static_cast<const ProjectiveReconstruction *>(
this)->CameraForImage(image));
this)->ViewForImage(image));
}
const ProjectiveCamera *ProjectiveReconstruction::CameraForImage(
int image) const {
if (image < 0 || image >= cameras_.size()) {
const ProjectiveView *ProjectiveReconstruction::ViewForImage(int image) const {
if (image < 0 || image >= views_.size()) {
return NULL;
}
const ProjectiveCamera *camera = &cameras_[image];
if (camera->image == -1) {
const ProjectiveView *view = &views_[image];
if (view->camera == -1 || view->image == -1) {
return NULL;
}
return camera;
return view;
}
vector<ProjectiveCamera> ProjectiveReconstruction::AllCameras() const {
vector<ProjectiveCamera> cameras;
for (int i = 0; i < cameras_.size(); ++i) {
if (cameras_[i].image != -1) {
cameras.push_back(cameras_[i]);
vector<ProjectiveView> ProjectiveReconstruction::AllViews() const {
vector<ProjectiveView> views;
for (int i = 0; i < views_.size(); ++i) {
if (views_[i].camera != -1 && views_[i].image != -1) {
views.push_back(views_[i]);
}
}
return cameras;
return views;
}
vector<ProjectiveView> ProjectiveReconstruction::AllViewsForCamera(
int camera) const {
vector<ProjectiveView> views;
for (int i = 0; i < views_.size(); ++i) {
if (views_[i].image != -1 && views_[i].camera == camera) {
views.push_back(views_[i]);
}
}
return views;
}
ProjectivePoint *ProjectiveReconstruction::PointForTrack(int track) {

View File

@@ -21,6 +21,8 @@
#ifndef LIBMV_SIMPLE_PIPELINE_RECONSTRUCTION_H_
#define LIBMV_SIMPLE_PIPELINE_RECONSTRUCTION_H_
#include <vector>
#include "libmv/base/vector.h"
#include "libmv/numeric/numeric.h"
@@ -38,21 +40,28 @@ struct ReconstructionOptions {
};
/*!
A EuclideanCamera is the location and rotation of the camera viewing \a image.
A EuclideanView is the location and orientation of a camera viewing an
\a image. A EuclideanView may be associated with a particular \a camera for
a multicamera reconstruction. All EuclideanViews for the same \a camera
represent the motion of a particular video camera across all of its
corresponding images.
\a image identify which image from \l Tracks this camera represents.
\a R is a 3x3 matrix representing the rotation of the camera.
\a image identifies which image from \l Tracks this view represents.
\a R is a 3x3 matrix representing the rotation of the view.
\a t is a translation vector representing its position.
\a camera identifies which camera from \l Tracks this view is associated with.
\sa Reconstruction
*/
struct EuclideanCamera {
EuclideanCamera() : image(-1) {}
EuclideanCamera(const EuclideanCamera &c) : image(c.image), R(c.R), t(c.t) {}
struct EuclideanView {
EuclideanView() : image(-1), camera(0) {}
EuclideanView(const EuclideanView &v)
: image(v.image), R(v.R), t(v.t), camera(v.camera) {}
int image;
Mat3 R;
Vec3 t;
int camera;
};
/*!
@@ -71,16 +80,16 @@ struct EuclideanPoint {
};
/*!
The EuclideanReconstruction class stores \link EuclideanCamera cameras
The EuclideanReconstruction class stores \link EuclideanView views
\endlink and \link EuclideanPoint points \endlink.
The EuclideanReconstruction container is intended as the store of 3D
reconstruction data to be used with the MultiView API.
The container has lookups to query a \a EuclideanCamera from an \a image or
a \a EuclideanPoint from a \a track.
The container has lookups to query a \a EuclideanView for an \a image, or a
\a EuclideanPoint from a \a track.
\sa Camera, Point
\sa View, Point
*/
class EuclideanReconstruction {
public:
@@ -93,16 +102,20 @@ class EuclideanReconstruction {
EuclideanReconstruction &operator=(const EuclideanReconstruction &other);
/*!
Insert a camera into the set. If there is already a camera for the given
\a image, the existing camera is replaced. If there is no camera for the
given \a image, a new one is added.
Insert a view into the set. If there is already a view for the given
\a image, the existing view is replaced. If there is no view for the given
\a image, a new one is added. A view may be associated with a \a camera
in a multicamera reconstruction.
\a image is the key used to retrieve the cameras with the other methods
in this class.
\a image and \a camera are the keys used to retrieve the views with the
other methods in this class.
\note You should use the same \a image identifier as in \l Tracks.
\note You should use the same \a camera and \a image identifiers as in
\l Tracks.
\note All markers for a single \a image should have the same \a camera
identifier.
*/
void InsertCamera(int image, const Mat3 &R, const Vec3 &t);
void InsertView(int image, const Mat3 &R, const Vec3 &t, int camera = 0);
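A hedged usage sketch; R and t are assumed to come from pose estimation, and the identifiers are illustrative:

EuclideanReconstruction reconstruction;
// The first view of camera 0 takes the reference frame.
reconstruction.InsertView(0, Mat3::Identity(), Vec3::Zero(), 0);
// A later image observed by camera 1.
reconstruction.InsertView(1, R, t, 1);
// Fetch all views belonging to camera 1.
vector<EuclideanView> camera1_views = reconstruction.AllViewsForCamera(1);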
/*!
Insert a point into the reconstruction. If there is already a point for
@@ -116,12 +129,15 @@ class EuclideanReconstruction {
*/
void InsertPoint(int track, const Vec3 &X);
/// Returns a pointer to the camera corresponding to \a image.
EuclideanCamera *CameraForImage(int image);
const EuclideanCamera *CameraForImage(int image) const;
/// Returns a pointer to the view corresponding to \a image.
EuclideanView *ViewForImage(int image);
const EuclideanView *ViewForImage(int image) const;
/// Returns all cameras.
vector<EuclideanCamera> AllCameras() const;
/// Returns all views for all images.
vector<EuclideanView> AllViews() const;
/// Returns all views for a particular \a camera.
vector<EuclideanView> AllViewsForCamera(int camera) const;
/// Returns a pointer to the point corresponding to \a track.
EuclideanPoint *PointForTrack(int track);
@@ -131,24 +147,31 @@ class EuclideanReconstruction {
vector<EuclideanPoint> AllPoints() const;
private:
vector<EuclideanCamera> cameras_;
vector<EuclideanView> views_;
vector<EuclideanPoint> points_;
};
/*!
A ProjectiveCamera is the projection matrix for the camera of \a image.
A ProjectiveView is the projection matrix for the view of an \a image. A
ProjectiveView may be associated with a particular \a camera for a
multicamera reconstruction. All ProjectiveViews for the same \a camera
represent the motion of a particular video camera across all of its
corresponding images.
\a image identify which image from \l Tracks this camera represents.
\a image identifies which image from \l Tracks this view represents.
\a P is the 3x4 projection matrix.
\a camera identifies which camera from \l Tracks this view is associated with.
\sa ProjectiveReconstruction
*/
struct ProjectiveCamera {
ProjectiveCamera() : image(-1) {}
ProjectiveCamera(const ProjectiveCamera &c) : image(c.image), P(c.P) {}
struct ProjectiveView {
ProjectiveView() : image(-1), camera(0) {}
ProjectiveView(const ProjectiveView &v)
: image(v.image), P(v.P), camera(v.camera) {}
int image;
Mat34 P;
int camera;
};
/*!
@@ -167,30 +190,34 @@ struct ProjectivePoint {
};
/*!
The ProjectiveReconstruction class stores \link ProjectiveCamera cameras
The ProjectiveReconstruction class stores \link ProjectiveView views
\endlink and \link ProjectivePoint points \endlink.
The ProjectiveReconstruction container is intended as the store of 3D
reconstruction data to be used with the MultiView API.
The container has lookups to query a \a ProjectiveCamera from an \a image or
The container has lookups to query a \a ProjectiveView for an \a image, or
a \a ProjectivePoint from a \a track.
\sa Camera, Point
\sa View, Point
*/
class ProjectiveReconstruction {
public:
/*!
Insert a camera into the set. If there is already a camera for the given
\a image, the existing camera is replaced. If there is no camera for the
given \a image, a new one is added.
Insert a view into the set. If there is already a view for the given
\a image, the existing view is replaced. If there is no view for the given
\a image, a new one is added. A view may be associated with a \a camera
in a multicamera reconstruction.
\a image is the key used to retrieve the cameras with the other methods
in this class.
\a image and \a camera are the keys used to retrieve the views with the
other methods in this class.
\note You should use the same \a image identifier as in \l Tracks.
\note You should use the same \a camera and \a image identifiers as in
\l Tracks.
\note All markers for a single \a image should have the same \a camera
identifier.
*/
void InsertCamera(int image, const Mat34 &P);
void InsertView(int image, const Mat34 &P, int camera = 0);
/*!
Insert a point into the reconstruction. If there is already a point for
@@ -204,12 +231,15 @@ class ProjectiveReconstruction {
*/
void InsertPoint(int track, const Vec4 &X);
/// Returns a pointer to the camera corresponding to \a image.
ProjectiveCamera *CameraForImage(int image);
const ProjectiveCamera *CameraForImage(int image) const;
/// Returns a pointer to the view corresponding to \a image.
ProjectiveView *ViewForImage(int image);
const ProjectiveView *ViewForImage(int image) const;
/// Returns all cameras.
vector<ProjectiveCamera> AllCameras() const;
/// Returns all views.
vector<ProjectiveView> AllViews() const;
/// Returns all views for a particular \a camera.
vector<ProjectiveView> AllViewsForCamera(int camera) const;
/// Returns a pointer to the point corresponding to \a track.
ProjectivePoint *PointForTrack(int track);
@@ -219,7 +249,7 @@ class ProjectiveReconstruction {
vector<ProjectivePoint> AllPoints() const;
private:
vector<ProjectiveCamera> cameras_;
vector<ProjectiveView> views_;
vector<ProjectivePoint> points_;
};

View File

@@ -24,37 +24,37 @@
namespace libmv {
void EuclideanScaleToUnity(EuclideanReconstruction *reconstruction) {
vector<EuclideanCamera> all_cameras = reconstruction->AllCameras();
vector<EuclideanView> all_views = reconstruction->AllViews();
vector<EuclideanPoint> all_points = reconstruction->AllPoints();
// Calculate center of the mass of all cameras.
Vec3 cameras_mass_center = Vec3::Zero();
for (int i = 0; i < all_cameras.size(); ++i) {
cameras_mass_center += all_cameras[i].t;
// Calculate the center of mass of all views.
Vec3 views_mass_center = Vec3::Zero();
for (int i = 0; i < all_views.size(); ++i) {
views_mass_center += all_views[i].t;
}
cameras_mass_center /= all_cameras.size();
views_mass_center /= all_views.size();
// Find the most distant camera from the mass center.
// Find the most distant view from the mass center.
double max_distance = 0.0;
for (int i = 0; i < all_cameras.size(); ++i) {
double distance = (all_cameras[i].t - cameras_mass_center).squaredNorm();
for (int i = 0; i < all_views.size(); ++i) {
double distance = (all_views[i].t - views_mass_center).squaredNorm();
if (distance > max_distance) {
max_distance = distance;
}
}
if (max_distance == 0.0) {
LG << "Cameras position variance is too small, can not rescale";
LG << "Views position variance is too small, can not rescale";
return;
}
double scale_factor = 1.0 / sqrt(max_distance);
// Rescale cameras positions.
for (int i = 0; i < all_cameras.size(); ++i) {
int image = all_cameras[i].image;
EuclideanCamera *camera = reconstruction->CameraForImage(image);
camera->t = camera->t * scale_factor;
// Rescale view positions.
for (int i = 0; i < all_views.size(); ++i) {
int image = all_views[i].image;
EuclideanView *view = reconstruction->ViewForImage(image);
view->t = view->t * scale_factor;
}
// Rescale point positions.

View File

@@ -183,7 +183,7 @@ bool EuclideanResect(const ReconstructionOptions &options,
LG << "Resection for image " << markers[0].image << " got:\n"
<< "R:\n" << R << "\nt:\n" << t;
reconstruction->InsertCamera(markers[0].image, R, t);
reconstruction->InsertView(markers[0].image, R, t, markers[0].camera);
return true;
}
@@ -273,7 +273,7 @@ bool ProjectiveResect(const vector<Marker> &markers,
LG << "Resection for image " << markers[0].image << " got:\n"
<< "P:\n" << P;
reconstruction->InsertCamera(markers[0].image, P);
reconstruction->InsertView(markers[0].image, P, markers[0].camera);
return true;
}
} // namespace libmv

View File

@@ -23,6 +23,7 @@
#include <algorithm>
#include <vector>
#include <iterator>
#include <cstdio>
#include "libmv/numeric/numeric.h"
@@ -34,7 +35,7 @@ Tracks::Tracks(const Tracks &other) {
Tracks::Tracks(const vector<Marker> &markers) : markers_(markers) {}
void Tracks::Insert(int image, int track, double x, double y) {
void Tracks::Insert(int image, int track, double x, double y, int camera) {
// TODO(keir): Wow, this is quadratic for repeated insertions. Fix this by
// adding a smarter data structure like a set<>.
for (int i = 0; i < markers_.size(); ++i) {
@@ -45,7 +46,7 @@ void Tracks::Insert(int image, int track, double x, double y) {
return;
}
}
Marker marker = { image, track, x, y };
Marker marker = { image, track, x, y, camera };
markers_.push_back(marker);
}
@@ -73,12 +74,23 @@ vector<Marker> Tracks::MarkersForTrack(int track) const {
return markers;
}
vector<Marker> Tracks::MarkersForCamera(int camera) const {
vector<Marker> markers;
for (int i = 0; i < markers_.size(); ++i) {
if (camera == markers_[i].camera) {
markers.push_back(markers_[i]);
}
}
return markers;
}
vector<Marker> Tracks::MarkersInBothImages(int image1, int image2) const {
vector<Marker> markers;
for (int i = 0; i < markers_.size(); ++i) {
int image = markers_[i].image;
if (image == image1 || image == image2)
if (image == image1 || image == image2) {
markers.push_back(markers_[i]);
}
}
return markers;
}
@@ -118,14 +130,25 @@ vector<Marker> Tracks::MarkersForTracksInBothImages(int image1,
Marker Tracks::MarkerInImageForTrack(int image, int track) const {
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].image == image && markers_[i].track == track) {
if (markers_[i].image == image &&
markers_[i].track == track) {
return markers_[i];
}
}
Marker null = { -1, -1, -1, -1 };
Marker null = { -1, -1, -1, -1, -1 };
return null;
}
void Tracks::RemoveMarkersInImage(int image) {
int size = 0;
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].image != image) {
markers_[size++] = markers_[i];
}
}
markers_.resize(size);
}
void Tracks::RemoveMarkersForTrack(int track) {
int size = 0;
for (int i = 0; i < markers_.size(); ++i) {
@@ -136,16 +159,36 @@ void Tracks::RemoveMarkersForTrack(int track) {
markers_.resize(size);
}
void Tracks::RemoveMarker(int image, int track) {
void Tracks::RemoveMarkersForCamera(int camera) {
int size = 0;
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].image != image || markers_[i].track != track) {
if (markers_[i].camera != camera) {
markers_[size++] = markers_[i];
}
}
markers_.resize(size);
}
void Tracks::RemoveMarker(int image, int track) {
int size = 0;
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].image != image ||
markers_[i].track != track) {
markers_[size++] = markers_[i];
}
}
markers_.resize(size);
}
int Tracks::CameraFromImage(int image) const {
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].image == image) {
return markers_[i].camera;
}
}
return -1;
}
int Tracks::MaxImage() const {
// TODO(MatthiasF): maintain a max_image_ member (updated on Insert)
int max_image = 0;
@@ -164,6 +207,15 @@ int Tracks::MaxTrack() const {
return max_track;
}
int Tracks::MaxCamera() const {
// TODO(MatthiasF): maintain a max_camera_ member (updated on Insert)
int max_camera = 0;
for (int i = 0; i < markers_.size(); ++i) {
max_camera = std::max(markers_[i].camera, max_camera);
}
return max_camera;
}
int Tracks::NumMarkers() const {
return markers_.size();
}
@@ -174,7 +226,7 @@ void CoordinatesForMarkersInImage(const vector<Marker> &markers,
vector<Vec2> coords;
for (int i = 0; i < markers.size(); ++i) {
const Marker &marker = markers[i];
if (markers[i].image == image) {
if (marker.image == image) {
coords.push_back(Vec2(marker.x, marker.y));
}
}

View File

@@ -30,8 +30,11 @@ namespace libmv {
A Marker is the 2D location of a tracked point in an image.
\a x, \a y is the position of the marker in pixels from the top left corner
in the image identified by \a image. All markers for to the same target
form a track identified by a common \a track number.
of the image identified by \a image. All markers for the same target form a
track identified by a common \a track number. All markers for a single
\a image can be associated with a particular camera identified by \a camera.
Markers for a particular track but with different \a camera numbers
correspond to the same target filmed from an alternate viewpoint.
\note Markers are typically aggregated with the help of the \l Tracks class.
@@ -41,6 +44,7 @@ struct Marker {
int image;
int track;
double x, y;
int camera;
};
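For illustration, a marker tagged with a camera can be built with aggregate initialization in field order; the values are hypothetical:

Marker marker = { /* image */ 12, /* track */ 3, /* x */ 640.5, /* y */ 360.25, /* camera */ 1 };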
/*!
@@ -67,29 +71,36 @@ class Tracks {
/*!
Inserts a marker into the set. If there is already a marker for the given
\a image and \a track, the existing marker is replaced. If there is no
marker for the given \a image and \a track, a new one is added.
marker for the given \a image and \a track, a new one is added. The marker
can be associated with a \a camera for use with multicamera reconstruction.
\a image and \a track are the keys used to retrieve the markers with the
other methods in this class.
\a camera, \a image and \a track are the keys used to retrieve the markers
with the other methods in this class.
\note To get an identifier for a new track, use \l MaxTrack() + 1.
\note All markers for a single \a image should belong to the same
\a camera.
*/
void Insert(int image, int track, double x, double y);
void Insert(int image, int track, double x, double y, int camera = 0);
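A short sketch of inserting markers for one track seen by two cameras; the coordinates are illustrative:

Tracks tracks;
// Track 0 as seen by camera 0 in image 5, and by camera 1 in image 6.
tracks.Insert(5, 0, 312.0, 204.5, 0);
tracks.Insert(6, 0, 289.5, 198.0, 1);
// Retrieve only the markers observed by camera 0.
vector<Marker> camera0_markers = tracks.MarkersForCamera(0);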
/// Returns all the markers.
vector<Marker> AllMarkers() const;
/// Returns all the markers visible in an \a image.
vector<Marker> MarkersInImage(int image) const;
/// Returns all the markers belonging to a track.
vector<Marker> MarkersForTrack(int track) const;
/// Returns all the markers visible in \a image.
vector<Marker> MarkersInImage(int image) const;
/// Returns all the markers visible from a \a camera.
vector<Marker> MarkersForCamera(int camera) const;
/// Returns all the markers visible in \a image1 and \a image2.
vector<Marker> MarkersInBothImages(int image1, int image2) const;
/*!
Returns the markers in \a image1 and \a image2 which have a common track.
Returns the markers in \a image1 and \a image2 (both from the same
camera) which have a common track.
This is not the same as the union of the markers in \a image1 and \a
image2; each marker is for a track that appears in both images.
@@ -99,18 +110,30 @@ class Tracks {
/// Returns the marker in \a image belonging to \a track.
Marker MarkerInImageForTrack(int image, int track) const;
/// Removes all the markers belonging to \a image.
void RemoveMarkersInImage(int image);
/// Removes all the markers belonging to \a track.
void RemoveMarkersForTrack(int track);
/// Removes the marker in \a image belonging to \a track.
/// Removes all the markers belonging to \a camera.
void RemoveMarkersForCamera(int camera);
/// Removes the marker in \a image of \a camera belonging to \a track.
void RemoveMarker(int image, int track);
/// Returns the camera that \a image belongs to.
int CameraFromImage(int image) const;
/// Returns the maximum image identifier used.
int MaxImage() const;
/// Returns the maximum track identifier used.
int MaxTrack() const;
/// Returns the maximum camera identifier used.
int MaxCamera() const;
/// Returns the number of markers.
int NumMarkers() const;

Binary file not shown.

View File

@@ -352,6 +352,19 @@ class CLIP_PT_tools_solve(CLIP_PT_tracking_panel, Panel):
col.label(text="Refine:")
col.prop(settings, "refine_intrinsics", text="")
col = layout.column(align=True)
col.active = "FOCAL_LENGTH" in settings.refine_intrinsics
col.prop(settings, "use_focal_length_constraint");
sub = col.column(align=True)
sub.active = settings.use_focal_length_constraint
if tracking.camera.units == 'MILLIMETERS':
sub.prop(settings, "focal_length_min", text="Min")
sub.prop(settings, "focal_length_max", text="Max")
else:
sub.prop(settings, "focal_length_min_pixels", text="Min")
sub.prop(settings, "focal_length_max_pixels", text="Max")
col = layout.column(align=True)
col.active = not settings.use_tripod_solver
col.prop(settings, "use_fallback_reconstruction",
@@ -943,6 +956,17 @@ class CLIP_PT_tools_clip(CLIP_PT_clip_view_panel, Panel):
layout.operator("clip.set_scene_frames")
class CLIP_PT_multicamera_clip(CLIP_PT_clip_view_panel, Panel):
bl_space_type = 'CLIP_EDITOR'
bl_region_type = 'TOOLS'
bl_label = "Multicamera"
def draw(self, context):
layout = self.layout
layout.operator("clip.create_correspondence")
class CLIP_MT_view(Menu):
bl_label = "View"

View File

@@ -160,6 +160,7 @@ struct MovieTrackingReconstruction *BKE_tracking_object_get_reconstruction(struc
struct MovieTrackingObject *object);
/* **** Camera **** */
void BKE_tracking_camera_focal_length_set(struct MovieTracking *tracking, float value);
void BKE_tracking_camera_shift_get(struct MovieTracking *tracking, int winx, int winy, float *shiftx, float *shifty);
void BKE_tracking_camera_to_blender(struct MovieTracking *tracking, struct Scene *scene,
struct Camera *camera, int width, int height);

View File

@@ -57,6 +57,8 @@
#include "DNA_scene_types.h"
#include "DNA_view3d_types.h"
#include "RNA_access.h"
#include "BLI_utildefines.h"
#include "BLI_blenlib.h"
@@ -657,10 +659,12 @@ MovieClip *BKE_movieclip_file_add(Main *bmain, const char *name)
detect_clip_source(clip);
movieclip_load_get_szie(clip);
if (clip->lastsize[0]) {
int width = clip->lastsize[0];
clip->tracking.camera.focal = 24.0f * width / clip->tracking.camera.sensor_width;
float focal = 24.0f * width / clip->tracking.camera.sensor_width;
BKE_tracking_camera_focal_length_set(&clip->tracking, focal);
}
movieclip_calc_length(clip);

View File

@@ -213,6 +213,8 @@ void BKE_tracking_settings_init(MovieTracking *tracking)
tracking->settings.dist = 1;
tracking->settings.object_distance = 1;
tracking->settings.reconstruction_success_threshold = 1e-3f;
tracking->settings.focal_length_min = 0.0f;
tracking->settings.focal_length_max = 5000.0f;
tracking->stabilization.scaleinf = 1.0f;
tracking->stabilization.locinf = 1.0f;
@@ -1798,6 +1800,20 @@ static void reconstructed_camera_scale_set(MovieTrackingObject *object, float ma
}
}
void BKE_tracking_camera_focal_length_set(MovieTracking *tracking, float value)
{
MovieTrackingSettings *settings = &tracking->settings;
MovieTrackingCamera *camera = &tracking->camera;
if (!(settings->refine_intrinsics & REFINE_FOCAL_LENGTH) ||
!(settings->constrain_intrinsics & CONSTRAIN_FOCAL_LENGTH))
{
settings->focal_length_min = value;
settings->focal_length_max = value;
}
camera->focal = value;
}
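Callers are expected to go through this setter so the min/max constraints stay pinned to the focal value whenever constrained refinement is disabled, e.g. (clip pointer assumed valid):

BKE_tracking_camera_focal_length_set(&clip->tracking, focal);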
/* converts principal offset from center to offset of blender's camera */
void BKE_tracking_camera_shift_get(MovieTracking *tracking, int winx, int winy, float *shiftx, float *shifty)
@@ -3415,7 +3431,9 @@ typedef struct MovieReconstructContext {
struct libmv_Tracks *tracks;
bool select_keyframes;
int keyframe1, keyframe2;
int refine_flags;
short refine_intrinsics;
short constrain_intrinsics;
float focal_length_min, focal_length_max;
struct libmv_Reconstruction *reconstruction;
@@ -3464,7 +3482,8 @@ static struct libmv_Tracks *libmv_tracks_new(ListBase *tracksbase, int width, in
if ((marker->flag & MARKER_DISABLED) == 0) {
libmv_tracksInsert(tracks, marker->framenr, tracknr,
(marker->pos[0] + track->offset[0]) * width,
(marker->pos[1] + track->offset[1]) * height);
(marker->pos[1] + track->offset[1]) * height,
0);
}
}
@@ -3479,7 +3498,7 @@ static struct libmv_Tracks *libmv_tracks_new(ListBase *tracksbase, int width, in
static void reconstruct_retrieve_libmv_intrinsics(MovieReconstructContext *context, MovieTracking *tracking)
{
struct libmv_Reconstruction *libmv_reconstruction = context->reconstruction;
struct libmv_CameraIntrinsics *libmv_intrinsics = libmv_reconstructionExtractIntrinsics(libmv_reconstruction);
struct libmv_CameraIntrinsics *libmv_intrinsics = libmv_reconstructionExtractIntrinsics(libmv_reconstruction, 0);
float aspy = 1.0f / tracking->camera.pixel_aspect;
@@ -3634,27 +3653,38 @@ static int reconstruct_retrieve_libmv(MovieReconstructContext *context, MovieTra
}
/* Convert blender's refinement flags to libmv's. */
static int reconstruct_refine_intrinsics_get_flags(MovieTracking *tracking, MovieTrackingObject *object)
static void reconstruct_refine_intrinsics_get_flags(const MovieTracking *tracking,
const MovieTrackingObject *object,
MovieReconstructContext *context)
{
int refine = tracking->settings.refine_camera_intrinsics;
int flags = 0;
short refine = tracking->settings.refine_intrinsics;
short constrain = tracking->settings.constrain_intrinsics;
short libmv_refine = 0;
short libmv_constrain = 0;
if ((object->flag & TRACKING_OBJECT_CAMERA) == 0)
return 0;
if ((object->flag & TRACKING_OBJECT_CAMERA) == 0) {
context->refine_intrinsics = 0;
context->constrain_intrinsics = 0;
return;
}
if (refine & REFINE_FOCAL_LENGTH)
flags |= LIBMV_REFINE_FOCAL_LENGTH;
if (refine & REFINE_FOCAL_LENGTH) {
libmv_refine |= LIBMV_REFINE_FOCAL_LENGTH;
if (constrain & CONSTRAIN_FOCAL_LENGTH)
libmv_constrain |= LIBMV_CONSTRAIN_FOCAL_LENGTH;
}
if (refine & REFINE_PRINCIPAL_POINT)
flags |= LIBMV_REFINE_PRINCIPAL_POINT;
libmv_refine |= LIBMV_REFINE_PRINCIPAL_POINT;
if (refine & REFINE_RADIAL_DISTORTION_K1)
flags |= LIBMV_REFINE_RADIAL_DISTORTION_K1;
libmv_refine |= LIBMV_REFINE_RADIAL_DISTORTION_K1;
if (refine & REFINE_RADIAL_DISTORTION_K2)
flags |= LIBMV_REFINE_RADIAL_DISTORTION_K2;
libmv_refine |= LIBMV_REFINE_RADIAL_DISTORTION_K2;
return flags;
context->refine_intrinsics = libmv_refine;
context->constrain_intrinsics = libmv_constrain;
}
/* Count tracks which have markers at both keyframes. */
@@ -3780,7 +3810,10 @@ MovieReconstructContext *BKE_tracking_reconstruction_context_new(MovieTracking *
context->tracks = libmv_tracks_new(tracksbase, width, height * aspy);
context->keyframe1 = keyframe1;
context->keyframe2 = keyframe2;
context->refine_flags = reconstruct_refine_intrinsics_get_flags(tracking, object);
reconstruct_refine_intrinsics_get_flags(tracking, object, context);
context->focal_length_min = tracking->settings.focal_length_min;
context->focal_length_max = tracking->settings.focal_length_max;
return context;
}
@@ -3811,10 +3844,12 @@ static void reconstruct_update_solve_cb(void *customdata, double progress, const
BLI_snprintf(progressdata->stats_message, progressdata->message_size, "Solving camera | %s", message);
}
/* Fill in camera intrinsics structure from reconstruction context. */
static void camraIntrincicsOptionsFromContext(libmv_CameraIntrinsicsOptions *camera_intrinsics_options,
MovieReconstructContext *context)
static void cameraIntrinsicsOptionsFromContext(const MovieReconstructContext *context,
libmv_CameraIntrinsicsOptions *camera_intrinsics_options)
{
camera_intrinsics_options->focal_length = context->focal_length;
CLAMP(camera_intrinsics_options->focal_length,
context->focal_length_min, context->focal_length_max);
camera_intrinsics_options->principal_point_x = context->principal_point[0];
camera_intrinsics_options->principal_point_y = context->principal_point[1];
@@ -3828,15 +3863,20 @@ static void camraIntrincicsOptionsFromContext(libmv_CameraIntrinsicsOptions *cam
}
/* Fill in reconstruction options structure from reconstruction context. */
static void reconstructionOptionsFromContext(libmv_ReconstructionOptions *reconstruction_options,
MovieReconstructContext *context)
static void reconstructionOptionsFromContext(MovieReconstructContext *context,
libmv_ReconstructionOptions *reconstruction_options)
{
reconstruction_options->select_keyframes = context->select_keyframes;
reconstruction_options->keyframe1 = context->keyframe1;
reconstruction_options->keyframe2 = context->keyframe2;
reconstruction_options->refine_intrinsics = context->refine_flags;
reconstruction_options->motion_flag = context->motion_flag;
reconstruction_options->refine_intrinsics = context->refine_intrinsics;
reconstruction_options->constrain_intrinsics = context->constrain_intrinsics;
reconstruction_options->focal_length_min = context->focal_length_min;
reconstruction_options->focal_length_max = context->focal_length_max;
reconstruction_options->success_threshold = context->success_threshold;
reconstruction_options->use_fallback_reconstruction = context->use_fallback_reconstruction;
@@ -3867,26 +3907,20 @@ void BKE_tracking_reconstruction_solve(MovieReconstructContext *context, short *
progressdata.stats_message = stats_message;
progressdata.message_size = message_size;
camraIntrincicsOptionsFromContext(&camera_intrinsics_options, context);
reconstructionOptionsFromContext(&reconstruction_options, context);
cameraIntrinsicsOptionsFromContext(context, &camera_intrinsics_options);
reconstructionOptionsFromContext(context, &reconstruction_options);
if (context->motion_flag & TRACKING_MOTION_MODAL) {
context->reconstruction = libmv_solveModal(context->tracks,
&camera_intrinsics_options,
&reconstruction_options,
reconstruct_update_solve_cb, &progressdata);
}
else {
context->reconstruction = libmv_solveReconstruction(context->tracks,
&camera_intrinsics_options,
&reconstruction_options,
reconstruct_update_solve_cb, &progressdata);
if (context->select_keyframes) {
/* store actual keyframes used for reconstruction to update them in the interface later */
context->keyframe1 = reconstruction_options.keyframe1;
context->keyframe2 = reconstruction_options.keyframe2;
}
context->reconstruction = libmv_solve(context->tracks,
&camera_intrinsics_options,
&reconstruction_options,
reconstruct_update_solve_cb,
&progressdata);
if (!(context->motion_flag & TRACKING_MOTION_MODAL) &&
    context->select_keyframes)
{
/* store actual keyframes used for reconstruction to update them in the interface later */
context->keyframe1 = reconstruction_options.keyframe1;
context->keyframe2 = reconstruction_options.keyframe2;
}
error = libmv_reprojectionError(context->reconstruction);
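Since motion_flag is now copied into the reconstruction options (see reconstructionOptionsFromContext above), the modal-versus-full decision can live behind the single libmv_solve() entry point. A self-contained sketch of that dispatch pattern; every name below is an illustrative stand-in, not the actual libmv API:

/* Stand-ins for the libmv-side types and flag. */
typedef struct Reconstruction Reconstruction;

typedef struct ReconstructionOptions {
	int motion_flag;          /* copied from the Blender-side context */
	int keyframe1, keyframe2; /* may be rewritten by keyframe selection */
} ReconstructionOptions;

enum { MOTION_MODAL = (1 << 0) }; /* plays the role of TRACKING_MOTION_MODAL */

static Reconstruction *solve_modal(ReconstructionOptions *opts) { (void)opts; return 0; }
static Reconstruction *solve_full(ReconstructionOptions *opts) { (void)opts; return 0; }

/* One entry point: callers such as BKE_tracking_reconstruction_solve()
 * no longer branch on the motion type themselves. */
Reconstruction *solve(ReconstructionOptions *opts)
{
	if (opts->motion_flag & MOTION_MODAL)
		return solve_modal(opts); /* tripod solver path */
	return solve_full(opts);      /* full reconstruction path */
}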

source/blender/editors/space_clip/clip_intern.h

@@ -201,6 +201,8 @@ void CLIP_OT_slide_plane_marker(struct wmOperatorType *ot);
void CLIP_OT_keyframe_insert(struct wmOperatorType *ot);
void CLIP_OT_keyframe_delete(struct wmOperatorType *ot);
void CLIP_OT_create_correspondence(struct wmOperatorType *ot);
/* tracking_select.c */
void CLIP_OT_select(struct wmOperatorType *ot);
void CLIP_OT_select_all(struct wmOperatorType *ot);

source/blender/editors/space_clip/clip_ops.c

@@ -526,6 +526,9 @@ static void clip_operatortypes(void)
WM_operatortype_append(CLIP_OT_keyframe_insert);
WM_operatortype_append(CLIP_OT_keyframe_delete);
/* Multicamera */
WM_operatortype_append(CLIP_OT_create_correspondence);
/* ** clip_graph_ops.c ** */
/* graph editing */

source/blender/editors/space_clip/tracking_ops.c

@@ -4188,3 +4188,20 @@ void CLIP_OT_keyframe_delete(wmOperatorType *ot)
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}
/********************** Multicamera operators ******************************/
void CLIP_OT_create_correspondence(wmOperatorType *ot)
{
/* identifiers */
ot->name = "Create correspondence";
ot->description = "Create a correspondence between selected tracks across multiple clips";
ot->idname = "CLIP_OT_create_correspondence";
/* api callbacks */
//ot->poll = ;
//ot->exec = ;
/* flags */
ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;
}
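Both callbacks are left unset here. Purely as an illustration, a hypothetical wiring could look like the following; ED_space_clip_tracking_poll is the poll callback used by the other tracking operators in this file, and the exec body is a placeholder since correspondence creation is not implemented yet:

static int create_correspondence_exec(bContext *C, wmOperator *op)
{
	/* Placeholder: a real implementation would record a correspondence
	 * between the selected tracks of two different movie clips. */
	(void)C;
	(void)op;
	return OPERATOR_FINISHED;
}

/* ...and in CLIP_OT_create_correspondence(): */
ot->poll = ED_space_clip_tracking_poll;
ot->exec = create_correspondence_exec;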

source/blender/makesdna/DNA_tracking_types.h

@@ -210,8 +210,10 @@ typedef struct MovieTrackingSettings {
float reconstruction_success_threshold;
int reconstruction_flag;
/* which camera intrinsics to refine. uses on the REFINE_* flags */
short refine_camera_intrinsics, pad2;
/* refinement */
short refine_intrinsics; /* camera intrinsics to refine */
short constrain_intrinsics; /* camera intrinsics to constrain */
float focal_length_min, focal_length_max; /* minimum and maximum values for focal length refinement in pixels */
/* ** tool settings ** */
@@ -413,7 +415,7 @@ enum {
TRACKING_USE_KEYFRAME_SELECTION = (1 << 1)
};
/* MovieTrackingSettings->refine_camera_intrinsics */
/* MovieTrackingSettings->refine_intrinsics */
enum {
REFINE_FOCAL_LENGTH = (1 << 0),
REFINE_PRINCIPAL_POINT = (1 << 1),
@@ -421,6 +423,11 @@ enum {
REFINE_RADIAL_DISTORTION_K2 = (1 << 4)
};
/* MovieTrackingSettings->constrain_intrinsics */
enum {
CONSTRAIN_FOCAL_LENGTH = (1 << 0)
};
/* MovieTrackingStabilization->flag */
enum {
TRACKING_2D_STABILIZATION = (1 << 0),

source/blender/makesrna/intern/rna_tracking.c

@@ -55,6 +55,76 @@
#include "WM_api.h"
static void rna_trackingSettings_focal_length_min_pixels_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingSettings *settings = &clip->tracking.settings;
if (value > settings->focal_length_max)
value = settings->focal_length_max;
settings->focal_length_min = value;
}
static void rna_trackingSettings_focal_length_max_pixels_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingSettings *settings = &clip->tracking.settings;
if (value < settings->focal_length_min)
value = settings->focal_length_min;
settings->focal_length_max = value;
}
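The two pixel-space setters clamp against each other, so the range can never invert. For example (sketch; starting from min = 0 and max = 1000 pixels):

rna_trackingSettings_focal_length_min_pixels_set(ptr, 1500.0f);
/* clamped to the current max: settings->focal_length_min == 1000.0f */

rna_trackingSettings_focal_length_max_pixels_set(ptr, 500.0f);
/* clamped to the current min: settings->focal_length_max == 1000.0f */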
static float rna_trackingSettings_focal_length_min_get(PointerRNA *ptr)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingSettings *settings = &clip->tracking.settings;
MovieTrackingCamera *camera = &clip->tracking.camera;
float value = settings->focal_length_min;
if (clip->lastsize[0])
value = value * camera->sensor_width / (float)clip->lastsize[0];
return value;
}
static void rna_trackingSettings_focal_length_min_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingCamera *camera = &clip->tracking.camera;
if (clip->lastsize[0])
value = clip->lastsize[0] * value / camera->sensor_width;
rna_trackingSettings_focal_length_min_pixels_set(ptr, value);
}
static float rna_trackingSettings_focal_length_max_get(PointerRNA *ptr)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingSettings *settings = &clip->tracking.settings;
MovieTrackingCamera *camera = &clip->tracking.camera;
float value = settings->focal_length_max;
if (clip->lastsize[0])
value = value * camera->sensor_width / (float)clip->lastsize[0];
return value;
}
static void rna_trackingSettings_focal_length_max_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingCamera *camera = &clip->tracking.camera;
if (clip->lastsize[0])
value = clip->lastsize[0] * value / camera->sensor_width;
rna_trackingSettings_focal_length_max_pixels_set(ptr, value);
}
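All four millimeter-flavored getters and setters share one conversion: focal length in pixels = image width in pixels * focal length in mm / sensor width in mm, and its inverse. Factored out for illustration (hypothetical helper names; valid only once clip->lastsize[0] is non-zero):

static float tracking_focal_mm_to_pixels(const MovieClip *clip, float mm)
{
	return clip->lastsize[0] * mm / clip->tracking.camera.sensor_width;
}

static float tracking_focal_pixels_to_mm(const MovieClip *clip, float pixels)
{
	return pixels * clip->tracking.camera.sensor_width / (float)clip->lastsize[0];
}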
static char *rna_tracking_path(PointerRNA *UNUSED(ptr))
{
return BLI_sprintfN("tracking");
@@ -305,7 +375,13 @@ static char *rna_trackingCamera_path(PointerRNA *UNUSED(ptr))
return BLI_sprintfN("tracking.camera");
}
static float rna_trackingCamera_focal_mm_get(PointerRNA *ptr)
static void rna_trackingCamera_focal_length_pixels_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
BKE_tracking_camera_focal_length_set(&clip->tracking, value);
}
static float rna_trackingCamera_focal_length_get(PointerRNA *ptr)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingCamera *camera = &clip->tracking.camera;
@@ -317,7 +393,7 @@ static float rna_trackingCamera_focal_mm_get(PointerRNA *ptr)
return val;
}
static void rna_trackingCamera_focal_mm_set(PointerRNA *ptr, float value)
static void rna_trackingCamera_focal_length_set(PointerRNA *ptr, float value)
{
MovieClip *clip = (MovieClip *)ptr->id.data;
MovieTrackingCamera *camera = &clip->tracking.camera;
@@ -326,7 +402,7 @@ static void rna_trackingCamera_focal_mm_set(PointerRNA *ptr, float value)
value = clip->lastsize[0] * value / camera->sensor_width;
if (value >= 0.0001f)
camera->focal = value;
BKE_tracking_camera_focal_length_set(&clip->tracking, value);
}
static char *rna_trackingStabilization_path(PointerRNA *UNUSED(ptr))
@@ -793,11 +869,56 @@ static void rna_def_trackingSettings(BlenderRNA *brna)
/* intrinsics refinement during bundle adjustment */
prop = RNA_def_property(srna, "refine_intrinsics", PROP_ENUM, PROP_NONE);
RNA_def_property_enum_sdna(prop, NULL, "refine_camera_intrinsics");
RNA_def_property_enum_sdna(prop, NULL, "refine_intrinsics");
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_enum_items(prop, refine_items);
RNA_def_property_ui_text(prop, "Refine", "Refine intrinsics during camera solving");
/* constrain focal length for intrinsics refinement */
prop = RNA_def_property(srna, "use_focal_length_constraint", PROP_BOOLEAN, PROP_NONE);
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_boolean_sdna(prop, NULL, "constrain_intrinsics", CONSTRAIN_FOCAL_LENGTH);
RNA_def_property_ui_text(prop, "Constrain Focal Length",
"Constrain the focal length during intrinsics refinement");
/* focal length constraint values in pixels */
prop = RNA_def_property(srna, "focal_length_min_pixels", PROP_FLOAT, PROP_NONE);
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_float_sdna(prop, NULL, "focal_length_min");
RNA_def_property_range(prop, 0.0f, 10000.0f);
RNA_def_property_float_default(prop, 0.0f);
RNA_def_property_float_funcs(prop, NULL, "rna_trackingSettings_focal_length_min_pixels_set", NULL);
RNA_def_property_ui_text(prop, "Min Focal Length",
"Minimum focal length in pixels");
prop = RNA_def_property(srna, "focal_length_max_pixels", PROP_FLOAT, PROP_NONE);
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_float_sdna(prop, NULL, "focal_length_max");
RNA_def_property_range(prop, 0.0f, 10000.0f);
RNA_def_property_float_default(prop, 10000.0f);
RNA_def_property_float_funcs(prop, NULL, "rna_trackingSettings_focal_length_max_pixels_set", NULL);
RNA_def_property_ui_text(prop, "Max Focal Length",
"Maximum focal length in pixels");
/* focal length constraint values in millimeters */
prop = RNA_def_property(srna, "focal_length_min", PROP_FLOAT, PROP_NONE);
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_range(prop, 0.0f, 10000.0f);
RNA_def_property_float_default(prop, 0.0f);
RNA_def_property_float_funcs(prop, "rna_trackingSettings_focal_length_min_get",
"rna_trackingSettings_focal_length_min_set", NULL);
RNA_def_property_ui_text(prop, "Min Focal Length",
"Minimum focal length in millimeters");
prop = RNA_def_property(srna, "focal_length_max", PROP_FLOAT, PROP_NONE);
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_range(prop, 0.0f, 10000.0f);
RNA_def_property_float_default(prop, 10000.0f);
RNA_def_property_float_funcs(prop, "rna_trackingSettings_focal_length_max_get",
"rna_trackingSettings_focal_length_max_set", NULL);
RNA_def_property_ui_text(prop, "Max Focal Length",
"Maximum focal length in millimeters");
/* tool settings */
/* distance */
@@ -977,7 +1098,8 @@ static void rna_def_trackingCamera(BlenderRNA *brna)
RNA_def_property_float_sdna(prop, NULL, "focal");
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_range(prop, 0.0001f, 5000.0f);
RNA_def_property_float_funcs(prop, "rna_trackingCamera_focal_mm_get", "rna_trackingCamera_focal_mm_set", NULL);
RNA_def_property_float_funcs(prop, "rna_trackingCamera_focal_length_get",
"rna_trackingCamera_focal_length_set", NULL);
RNA_def_property_ui_text(prop, "Focal Length", "Camera's focal length");
RNA_def_property_update(prop, NC_MOVIECLIP | NA_EDITED, NULL);
@@ -986,6 +1108,7 @@ static void rna_def_trackingCamera(BlenderRNA *brna)
RNA_def_property_float_sdna(prop, NULL, "focal");
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_range(prop, 0.0f, 5000.0f);
RNA_def_property_float_funcs(prop, NULL, "rna_trackingCamera_focal_length_pixels_set", NULL);
RNA_def_property_ui_text(prop, "Focal Length", "Camera's focal length");
RNA_def_property_update(prop, NC_MOVIECLIP | NA_EDITED, NULL);
@@ -993,9 +1116,9 @@ static void rna_def_trackingCamera(BlenderRNA *brna)
prop = RNA_def_property(srna, "units", PROP_ENUM, PROP_NONE);
RNA_def_property_enum_sdna(prop, NULL, "units");
RNA_def_property_clear_flag(prop, PROP_ANIMATABLE);
RNA_def_property_enum_items(prop, camera_units_items);
RNA_def_property_ui_text(prop, "Units", "Units used for camera focal length");
RNA_def_property_update(prop, NC_MOVIECLIP | NA_EDITED, NULL);
/* Principal Point */
prop = RNA_def_property(srna, "principal", PROP_FLOAT, PROP_NONE);