Compare commits

304 Commits

Author SHA1 Message Date
41238bba45 Add functional line node 2021-03-13 23:45:41 -05:00
0732e3de35 Merge branch 'geometry-nodes-mesh-primitives' into temp-geometry-nodes-mesh-primitive-line 2021-03-13 21:46:58 -05:00
5cd33108fc Cleanup: Remove unused includes, change switch to if 2021-03-13 21:46:12 -05:00
36397a721a Add boilerplate code for a line primitive 2021-03-13 18:04:07 -05:00
1173d1ca9e Cleanup: Spelling 2021-03-13 17:43:33 -05:00
f7a6dd7218 Cleanup: remove static 2021-03-13 17:39:09 -05:00
9934a71172 Cleanup, revert some changes 2021-03-13 17:31:39 -05:00
e46dcc499c Revert changes split to a separate patch
D10712
2021-03-13 17:29:43 -05:00
4be8061baf Rename and reorder variables for consistency 2021-03-13 17:27:57 -05:00
c20d7676fc Move cylinder / cone code to cone file 2021-03-13 17:22:15 -05:00
507fdd0e3e Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-13 17:16:32 -05:00
9f68f5c1e1 Nodes: Add distance float socket type
This is necessary to make float sockets display a value with the unit
system. `PROP_DISTANCE` will be used quite a lot by the mesh primitives
geometry nodes patch.

Differential Revision: https://developer.blender.org/D10711
2021-03-13 17:15:50 -05:00
dcfea4a1e5 Cleanup: EEVEE: Silence warning 2021-03-13 23:13:08 +01:00
e30315ba95 EEVEE: RenderPass: Fix Ambient Occlusion pass
The shader was not using the horizon texture and was trying to
trace the AO again.

Also the depth reconstruction was off because it is now using the maxzBuffer.
2021-03-13 23:11:53 +01:00
75fc6e3b2b Cleanup: EEVEE: Remove the horizon search layered shader
This shader is of no use now that we have the fullres hizbuffer.
2021-03-13 22:51:23 +01:00
00baf875ef EEVEE: Planar reflections: Fix ambient occlusion broken in reflections
Use the maxzbuffer to get the correct depth information.
2021-03-13 22:51:22 +01:00
2c216413d5 Nodes: Move group input and output to a consistent menu location
Currently (in geometry nodes) you can delete the group input or group
output nodes with no way to get them back without copy and paste. This
adds them to the "Group" submenu of the add menu so at least there is
a way to add them back.

Additionally, these nodes are moved to the "Group" submenu for all node
editors. This makes sense since they are not like the other input or
output nodes, they really just relate to how groups are organized.

Differential Revision: https://developer.blender.org/D10241
2021-03-13 16:00:56 -05:00
1e29f64987 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-13 15:17:53 -05:00
03490618a2 EEVEE: ScreenSpaceReflections: Avoid outputting NaNs
This happens when the normal is deformed too much to give a valid
reflection even after ensure_valid_reflection.

Cycles seems to not handle this case either so we just discard the
rays.
2021-03-13 20:59:20 +01:00
09e77d2c89 Fix T86476 EEVEE: SSS material with variable radius can produce NaNs
Simple divide by 0 error. The input radius was assumed to be safe
but is not when the user can scale it arbitrarily.

This also moves the division out of the loop.
2021-03-13 20:59:20 +01:00
165a2da753 EEVEE: Fix wrong sss component being affected by alpha
This fixes NaNs / blown up values when using alpha-hashed transparency
or alpha clip with SSS.
2021-03-13 20:59:20 +01:00
267a9e14f5 EEVEE: ScreenSpaceReflections: Add back multi ray-hitpoint reuse
We now reuse 9 hitpoints from the neighborhood using a blue noise
sample distribution as mentioned in the reference presentation.

Reusing more rays does however make some areas a bit more blurry.

The resulting noise is considerably lower compared to the previous
implementation, which was only reusing 4 hits.
2021-03-13 20:59:20 +01:00
b79f209041 EEVEE: ScreenSpaceReflections: Increase depth threshold
This avoids going through geometry when rays hit at certain angles.
2021-03-13 20:59:20 +01:00
bbc5e26051 EEVEE: ScreenSpaceReflections: Jitter starting texel
This makes sure the rays are generated randomly from a fullres
texel center.

This creates more noise but increases the convergence when doing
half res tracing.
2021-03-13 20:59:20 +01:00
ff07a4afb8 EEVEE: Fix split commit 2021-03-13 20:59:20 +01:00
83b7f7dfb7 Cleanup: EEVEE: Remove SSR shader variations 2021-03-13 20:59:20 +01:00
40d579b69f Cleanup: EEVEE: Split effect_ssr.glsl
This split is to make code easier to manage and rename the files to
`effect_reflection_*` to avoid confusion.

Also this cleans up a bit of the branching mess in the trace shader.
2021-03-13 20:59:20 +01:00
6a7f6f2867 Cleanup: EEVEE: Remove hammersley texture and split hammersley code 2021-03-13 20:59:20 +01:00
5fee9dae5d Cleanup: EEVEE: Make bsdf_sampling_lib.glsl more tidy 2021-03-13 20:59:20 +01:00
ba3a0dc9ba Geometry Nodes: Add "normal" attribute for face normals
This commit adds a `normal` attribute on the polygon domain. Since
normal data is derived data purely based off of the location of each
face's vertices, it is exposed as a read-only attribute. After
rB80f7f1070f17, this attribute can be interpolated to the other domains.

Since this attribute is a special case compared to the others, the
implementation subclasses `BuiltinAttributeProvider`. It's possible
there is a better way to abstract this. Something else might also
become apparent if we add similar read-only attributes.

See rB2966871a7a891bf36 for why this is preferred over the previous
implementation.

Differential Revision: https://developer.blender.org/D10677
2021-03-13 14:13:16 -05:00
2966871a7a Geometry Nodes: Revert current normal attribute implementation
After further thought, the implementation of the "normal" attribute
from D10541 is not the best approach to expose this data, mainly
because it blindly copied existing design rather than using the
best method in the context of the generalized attribute system.

In Blender, vertex normals are simply a cache of the average normals
from the surrounding / connected faces. Because we have automatic
interpolation between domains already, we don't need a special
`vertex_normal` attribute for this case, we can just let the
generalized interpolation do the hard work where necessary,
simplifying the set of built-in attributes to only include the
`normal` attribute from faces.

The fact that vertex normals are just a cache also raised another
issue, because the cache could be dirty, so mutex locks were
necessary to calculate normals. That isn't necessarily a problem,
but it's nice to avoid where possible.

Another downside of the current attribute naming is that after the
point distribute node there would be two normal attributes.

This commit reverts the `vertex_normal` attribute so that
it can be replaced by the implementation in D10677.

Differential Revision: https://developer.blender.org/D10676
2021-03-13 14:05:00 -05:00
88f845c881 GPencil: Remove word "Strokes" in menu
This removes a redundant word.
2021-03-13 19:52:10 +01:00
Charlie Jolly
670453d1ec Geometry Nodes: Add Attribute Convert node
The Attribute Convert node provides functionality to change attributes
between different domains and data types. Before it was impossible to
write to a UV Map attribute with the attribute math nodes since they
did not output a 2D vector type. This makes it possible to
"convert into" a UV map attribute.

The data type conversion uses the implicit conversions provided by
`nodes/intern/node_tree_multi_function.cc`.

The `Auto` domain mode chooses the domain based on the following rules:
1. If the result attribute already exists, use that domain.
2. If the result attribute doesn't exist, use the source attribute domain.
3. Otherwise use the default domain (points).

See {T85700}

Differential Revision: https://developer.blender.org/D10624
2021-03-13 11:49:56 -05:00
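The "Auto" domain rule described in the commit above is simple enough to sketch. A minimal illustration (the enum and the map-based attribute lookup are stand-ins, not Blender's actual attribute API):
```cpp
#include <map>
#include <string>

/* Illustrative sketch of the "Auto" domain choice; names are hypothetical. */
enum class AttributeDomain { Point, Edge, Corner, Polygon };

using AttributeDomains = std::map<std::string, AttributeDomain>; /* name -> domain */

static AttributeDomain auto_domain(const AttributeDomains &attributes,
                                   const std::string &source_name,
                                   const std::string &result_name)
{
  if (auto it = attributes.find(result_name); it != attributes.end()) {
    return it->second; /* 1. The result attribute already exists: keep its domain. */
  }
  if (auto it = attributes.find(source_name); it != attributes.end()) {
    return it->second; /* 2. Otherwise follow the source attribute's domain. */
  }
  return AttributeDomain::Point; /* 3. Fall back to the default domain (points). */
}
```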
8ab6450abb Fix geometry nodes implicit conversion to booleans reversed
The result value should be true if the input values are not zero.
Note that there is ongoing conversation about these conversions
in D10685. This is simply a fix though.
2021-03-13 11:39:48 -05:00
9b2b6674ba Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-13 09:14:22 -05:00
258b15da74 Cleanup: add BKE_pbvh_vertex_iter_begin to clang-format
Reviewed By: JacquesLucke

Differential Revision: https://developer.blender.org/D10707
2021-03-12 22:29:37 +01:00
74052a9f02 Sculpt: Mask Init operator
This operator initializes mask values for the entire mesh. It supports
different modes for initializing those values, and more will be added in
the future.

The initial version supports generating a random mask per vertex, Face
Sets or loose parts. These masks are useful for introducing variations
in the model using the filters (both shapes and colors).

Reviewed By: JacquesLucke

Differential Revision: https://developer.blender.org/D10679
2021-03-12 21:37:12 +01:00
9d08c169d1 GPencil: Interpolate can use all keyframe types except breakdown
Before, it was only possible to interpolate frames of the `Keyframe` type. Now all types except `Breakdown` can be used.

`Breakdown` cannot be used because it would then be impossible to interpolate a second time: the extremes of the interpolation would change and the clean operator would not work.
2021-03-12 19:43:13 +01:00
5788f608d3 GPencil: UI menu cleanup
Remove duplicate words Stroke and Point already in menu header.

Reviewed by: @mendio,  @filedescriptor
2021-03-12 19:12:27 +01:00
476be3746e Fix T86332: setting Cycles dicing camera fails after recent changes
Somehow "from __future__ import annotations" and "lambda" are not working
together well here, work around it by not using a lambda function.
2021-03-12 17:49:56 +01:00
b01e9ad4f0 Fluid: Enable scale options for fluid particles
There is no reason to hide the 'Scale' and 'Scale Randomness' options
for fluid particles that are rendered as 'Object'.

It is possible that hiding these options was just an oversight and
not intentional.
2021-03-12 17:24:53 +01:00
abe1a061f8 Docs: add doc-string for TransDataContainer 2021-03-13 03:14:56 +11:00
651fe243e6 Cleanup: const warning 2021-03-13 03:14:56 +11:00
f707783d5f LibOverride Auto Resync: Add option to disable it in Experimental userpref.
Some older .blend files won't react nicely to auto-resync; they need to
be manually fixed with `resync enforce` first.
2021-03-12 16:45:45 +01:00
Bastien Montagne
ef5782e297 CLOG: add support for substring matching.
So that `--log "*undo*"` matches any log identifier containing `undo`.

Reviewed By: campbellbarton

Differential Revision: https://developer.blender.org/D10647
2021-03-12 16:01:46 +01:00
bcac17196a Fix heap buffer overflow appending/linking from a blend file
Add new function `blo_bhead_is_id_valid_type()` to correctly check the
blend file block type.

File block type codes have four bytes, and two of those are only in use
when these blocks contain ID datablocks (like `"OB\0\0"`). However,
there are other types defined in `BLO_blend_defs.h` that have four
bytes, like `TEST`, `ENDB`, etc.

The function `BKE_idtype_idcode_is_valid(short idcode)` was used to
check for ID datablocks while reading a blend file. This only takes a
2-byte parameter, and thus its result is invalid for the 4-byte codes.
For `TEST` blocks, it would actually consider it a `TE` block, which is
a valid identifier for a Texture. This caused the heap buffer overflow,
as the datablock is not a valid ID, and thus the bytes that were
expected to form an ID name actually encode something completely
different.

Reviewed By: mont29

Differential Revision: https://developer.blender.org/D10703
2021-03-12 15:58:58 +01:00
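To see why a 2-byte check misreads 4-byte block codes, here is a small stand-alone sketch (not the actual `blo_bhead_is_id_valid_type()` or Blender's ID code macros, just the idea):
```cpp
/* Illustration only: a 4-byte block code like "TEST" starts with the bytes 'T','E',
 * which also form the valid 2-byte ID code "TE" (Texture), so a short-based check
 * wrongly accepts the block as an ID datablock. */
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
  const char block_code[4] = {'T', 'E', 'S', 'T'}; /* a non-ID file block code */
  uint16_t idcode;
  std::memcpy(&idcode, block_code, sizeof(idcode)); /* what a 2-byte check sees */

  const uint16_t texture_code = 'T' | ('E' << 8); /* "TE" on a little-endian machine */
  if (idcode == texture_code) {
    std::printf("misidentified as a Texture ID block\n");
  }
  /* A correct check must compare all four bytes against the known non-ID codes
   * (TEST, ENDB, GLOB, ...) before treating the block as an ID. */
  return 0;
}
```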
f0c3ec3dc8 Fix T82532: Sculpt fails to redo the first sculpt session stroke
Sculpt undo relied on having a mode-changing undo step to properly
apply changes.

However this isn't the case with startup files or when mixing
global undo steps with sculpt (see T82851, also fixed).

Undo stepping logic follows image_undosys_step_decode_undo.
2021-03-13 01:36:26 +11:00
c7354cc64b Fix another crash in LibOverride resync code.
Another case where a newly overridden ID (stored in `newid` of its linked
reference) gets immediately deleted in old broken overrides.

Re T86501.
2021-03-12 15:27:01 +01:00
20ee6c0f16 Fix compiler warning when building Cycles without Embree 2021-03-12 15:17:08 +01:00
fd905c1059 Cleanup: fix clang-tidy errors when COM_debug is active. 2021-03-12 14:32:24 +01:00
7388f9df71 Cleanup: Compiler warnings with COM_TM_NOTHREAD active. 2021-03-12 13:36:49 +01:00
583df9a5f8 Cleanup: document FileSelectAssetLibraryUID::type
No functional changes.
2021-03-12 13:24:31 +01:00
74557ca4f7 LibOverride: Add a new operation to Outliner to enforce resync of hierarchies.
This is basically done by ignoring override operations from old override
affecting ID pointer properties, when the new (destination) one is not
NULL.

Fix T86501: New object added to overridden collection doesn't show up in linking file on Resync.

This is more of a work-around actually, since there is no real way to
fix the issue in a fully automated and consistent way; it is caused by
older blender files being saved with 'broken' overrides.

WARNING: This cannot ensure that some purposely edited/overridden ID
pointer properties won't be lost in the process.
2021-03-12 12:31:25 +01:00
fe2ceef729 Fix first part of T86501: Crash during resync process.
Code would end up freeing some of the newly created overrides, which
were assigned to the matching linked ID's `newid` pointer, accessed
again further down the code.

Note that this is not a normal expected situation, and it won't give a
proper resync result anyway, but it might happen in some complicated
corner cases, and also quite often when dealing with older .blend files.
2021-03-12 09:46:11 +01:00
4781ab0969 IDRemap: Add option to also remap internal runtime ID pointers.
In some cases (advanced, low-level), we also want to remap pointers like
`ID.newid` or `ID.orig_id`.

Only known case currently is `id_delete`, to avoid leaving potential access to freed memory. See next commit and T86501.
2021-03-12 09:46:11 +01:00
fd4c01a75c LibQuery: Add an option to process internal runtime ID pointers.
In some cases (advanced, low-level code) we also want to process ID
pointers like `ID.newid` or `ID.orig_id`.
2021-03-12 09:46:11 +01:00
960337f17a Fix T86455: vertex color baking issue with sculpt vertex colors
Baking to Vertex Colors would always bake to sculpt vertex colors (if
such a layer is present) even if those are not enabled in the
experimental preferences. This would bake without an error but leave the
user without a result to look at in the viewport.

Now check if sculpt vertex colors are enabled and only bake to them in
that case.

Maniphest Tasks: T86455

Differential Revision: https://developer.blender.org/D10692
2021-03-12 09:22:47 +01:00
a5c44265a3 Cleanup: remove workaround for MSVC PyTypeObject declarations
This is no longer needed for MSVC-2017.
2021-03-12 16:13:36 +11:00
2a5f22c1af Cleanup: set the window manager to the updated context on load
While this happened to be corrected by code that runs afterwards,
leaving this in an invalid state could cause problems in the future.
2021-03-12 16:13:36 +11:00
2e9fb211c6 Cleanup: make_source_archive.py minor changes & comments
- Add notes on portability.
- Use encoding argument for all file IO.
- Use integer math to calculate the major/minor version; while float
  division should be fine, prefer matching Blender.
2021-03-12 16:13:36 +11:00
7f4530dad2 Cleanup: incorrect doxy section title
Also correct typo.
2021-03-12 16:13:36 +11:00
8125731cae Cleanup: break out of loop early 2021-03-12 16:13:36 +11:00
d3fa576aa7 Cleanup: redundant flag check 2021-03-12 16:13:36 +11:00
406d9749d8 Cleanup: redundant outliner includes 2021-03-12 16:13:36 +11:00
8922d177c1 Alembic procedural: specific result type for cache lookups
This type, CacheLookupResult, holds the data for the current time, or an
explanation as to why no data is available (already loaded, or simply
nothing available). This is useful to document the behavior of the code
but also, in future changes, to respond appropriately for missing data.
2021-03-12 01:57:41 +01:00
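A hedged sketch of what such a lookup-result type can look like (field and enum names here are illustrative, not necessarily the ones used in the Cycles Alembic procedural):
```cpp
#include <optional>
#include <utility>

/* Illustration only: a result that either carries the data for the current time
 * or explains why nothing is returned, as described in the commit above. */
template<typename T> struct CacheLookupResult {
  enum class State { HasData, AlreadyLoaded, NoDataForTime };

  State state = State::NoDataForTime;
  std::optional<T> data;

  static CacheLookupResult from_data(T value)
  {
    return {State::HasData, std::move(value)};
  }
  static CacheLookupResult already_loaded() { return {State::AlreadyLoaded, std::nullopt}; }
  static CacheLookupResult no_data() { return {State::NoDataForTime, std::nullopt}; }

  bool has_data() const { return state == State::HasData; }
};
```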
d72fc36ec5 Alembic procedural: add support for instancing
Inside of the procedural, instances are AlembicObjects which point to
the AlembicObject that they instance.

In Alembic, an instance is an untyped Object pointing to the original
(instanced) one through its source path. During the archive traversal we
detect such instances and, only if the instanced object is asked to be
rendered, set the instance's AlembicObject to point to the original's
AlembicObject.

Cycles Object Nodes are created for each AlembicObject, but only for
non-instances are Geometries created, which are then shared between
Object Nodes. It is supposed, and expected, that all instances share the
same shaders, which will be set to be the ones found on the original
object.

As for caching, the data cache for an AlembicObject is only valid for
non-instances and should not be written to or read from as it is implicitly
shared.
2021-03-12 01:30:12 +01:00
7017844c5e Alembic procedural: move cache building out of object update methods
This will help support instancing, as cache building is now decoupled
from the logic to update the Nodes' sockets: data (and cache) will
need to be shared by different Geometries somehow. It also simplifies
implementing different data caching methods by centralizing this
operation.
2021-03-12 00:15:17 +01:00
7a028d5b99 Alembic procedural: fix missing attribute update
We need to explicitly tag the Attribute and AttributeSet as modified if
we change or add/remove data. This is more of a bandaid until attributes
handling is refactored to be able to reuse routines from the Attribute
API.
2021-03-12 00:15:16 +01:00
62e2fdf40b Cleanup: unused variable 2021-03-12 00:15:16 +01:00
2ebf4fbbfb Alembic procedural: fix potential zero scale matrix generation
This can happen during user edits or with files missing the global scale
property.
2021-03-12 00:15:15 +01:00
3256f0d52c Added missing file to last commit:
Nodes: Add Attribute Remove Node D10697
2021-03-11 23:42:19 +01:00
60f7275f7f Nodes: Add Attribute Remove Node
This patch adds a node that removes an attribute if possible;
otherwise it adds an error message.

Differential Revision: https://developer.blender.org/D10697
2021-03-11 23:06:16 +01:00
cec588d757 Fix issue with normals 2021-03-11 16:57:04 -05:00
3e5869e083 Use full "Fill Type" text 2021-03-11 15:50:32 -05:00
5f9bec93e6 Add working cone node based on cylinder code 2021-03-11 15:47:57 -05:00
afa30f1a9d Nodes: Fix drag link from output to already linked Multi-Input Socket
This patch fixes a visual bug related to connecting an output socket to
a Multi-Input Socket that already has a link to that same output.
In this case, the drag link got a new index and snapped to a new
position. This patch makes the drag link snap to the same position as the
first link between the two sockets.

Differential Revision: https://developer.blender.org/D10689
2021-03-11 18:53:29 +01:00
8c6337e587 Fix missing UI updates, caused by own earlier commit
Caused by 46aa70cb48.

RNA would send property update notifiers with the owner ID as `reference` data.
Since the above commit we'd only send the notifiers to editors if the reference
data address matches the space's address. So editors wouldn't get the notifiers
at all.

The owner ID for space properties is always the screen AFAIK. So allow
notifiers with the screen as reference to be passed to editors as well; I think
this is reasonable to do either way.

For example, steps to reproduce were:
* Open Asset Browser
* Mark some data-blocks of different types as assets (e.g. object & its
  material)
* Switch between the categories in the Asset Browser. The asset list wouldn't
  be updated.
2021-03-11 18:47:14 +01:00
1b1f8da5dd Cleanup: remove unnecessary const from function declaration
No functional changes.
2021-03-11 18:20:34 +01:00
670c1fdf64 GPencil: Remove limitation to use only one Lattice modifier
This limitation was necessary in older versions, but can now be removed.
2021-03-11 18:17:30 +01:00
7092d6a7a3 Fix warning from own previous commit 2021-03-11 17:57:05 +01:00
46aa70cb48 UI: Avoid unnecessary redraws of unrelated editors on space changes
When adding a notifier, `reference` data can be passed. The notifier system
uses this to filter out listeners, for example if data of a scene changes,
windows showing a different scene won't get the notifiers sent to their
listeners.

For the `NC_SPACE` notifiers, a number of places also passed the space as
`reference`, but that wasn't used at all. The notifier would still be sent to
all listeners in all windows (and the listeners didn't use it either), causing
some unnecessary updates (e.g. see ed2c4825d3).
With this commit, passing a space will make sure the notifier is only sent to
that exact space. Some code seems to already have expected that to be the case.

However there were some cases that passed the space as `reference` without
reason, which would break with this commit (meaning they wouldn't redraw or
update correctly).
Corrected these so they don't pass the space anymore.
2021-03-11 17:43:45 +01:00
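The filtering described in this commit and in 8c6337e587 boils down to a pointer comparison when dispatching notifiers. A rough sketch (the types and helper are hypothetical stand-ins, not Blender's actual wm_event_system code):
```cpp
/* Illustration only: deliver a space-reference notifier only to the matching space,
 * with the screen-as-owner exception mentioned in 8c6337e587. */
struct SpaceLink;
struct bScreen;

struct Notifier {
  const void *reference; /* optional data the sender attached, e.g. a space */
};

static bool notifier_is_for_space(const Notifier &note,
                                  const SpaceLink *space,
                                  const bScreen *owning_screen)
{
  if (note.reference == nullptr) {
    return true; /* unfiltered: every listener receives it */
  }
  if (note.reference == space) {
    return true; /* NC_SPACE notifier aimed at exactly this space */
  }
  if (note.reference == owning_screen) {
    return true; /* RNA property updates use the owner ID (the screen) as reference */
  }
  return false; /* different space: skip, avoiding the unnecessary redraw */
}
```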
ed2c4825d3 Fix Asset Browser showing outdated list for changes done while browser is hidden
Steps to reproduce were:
* Open an Asset Browser
* "Mark Asset" on some data-block
* Change the Asset Browser into a different editor (not File Browser!)
* "Clear Asset" on the data-block again, or mark another asset
* Change back to the Asset Browser, it will show an outdated list

Now the file-browser reloads local file data after spaces were changed. Note
that the current notifier code doesn't limit the space-change notifiers to the
affected spaces, so changing any visible space will trigger this. That's an
issue to be fixed separately.
2021-03-11 17:43:33 +01:00
a1b01edf45 GPencil: Fix unreported Fill fails if the stroke was tagged
In some situations the strokes could be tagged before filling, so it's necessary to reset the tags beforehand.
2021-03-11 17:23:04 +01:00
9dfc81ccf1 Fix regression in 2cc5af9c55
The check for undo-depth increment/decrement assumed a newly loaded
window manager would have a different pointer.

This broke bl_animation_fcurves test indirectly,
the change to undo-depth caused the redo panel to attempt to popup
in background mode - which isn't supported.

Now the pointer is unchanged, the undo-depth is assumed to match
the value used when calling the operator.

The undo-depth is now properly maintained between file loads,
which is an improvement on the original behavior which reset it.
2021-03-12 03:21:39 +11:00
Siddhartha Jejurkar
350ad4bcb1 Fix T86199: error when adding custom fluid diffusion preset
Differential Revision: https://developer.blender.org/D10694
2021-03-11 16:51:01 +01:00
d518b0fda5 Fluid: Updated hidden symbol visibility comment
It turns out that on official arm64 devices (not DevKit) the linker
warnings still show up. So just leaving this as is.

Ref D9002
2021-03-11 16:15:45 +01:00
5b91a52944 Cleanup: spelling 2021-03-12 00:51:29 +11:00
2cc5af9c55 Fix T86431: Keep memory location of the window manager on file load
Keep the pointer location from the initial window-manager
between file load operations.

This is needed as the Python API may hold references to keymaps for e.g.
which are transferred to the newly loaded window manager,
without their `PointerRNA.owner_id` fields being updated.

Since there is only ever one window manager, keep the memory at the same location so the Python ID pointers stay valid.

Reviewed By: mont29

Ref D10690
2021-03-12 00:40:52 +11:00
2cdebf293d WM: keep the current state when a blend fails to load
Previously many operations would run on file load, even if the file
did not load. Pre/post load handlers were called, timers canceled,
all undo data freed, editors exited ... etc.

Now keep the blend file in its current state.

This simplifies updating this area of code as there is one less
possible situation to account for.
2021-03-12 00:40:52 +11:00
5812bc7d89 Cleanup: split file read and setup into separate steps
Currently file loading performs almost all reloading logic
even in the case where loading the file fails, causing the file to be in
a state that isn't well defined: undo is cleared, timers are canceled &
scripts are re-registered.
2021-03-12 00:40:52 +11:00
f7616c6eaf Cleanup: file loading/recover checks
- Don't set G.relbase_valid until the file is loaded.
- Remove unnecessary string pointer comparison.
- Remove unused filename being passed to 'setup_app_data'.
2021-03-12 00:40:25 +11:00
167525dc8d Remove duplicate nodes 2021-03-11 08:38:05 -05:00
93f8c9b823 LibOverride: auto-run resync process on file reading.
Part of T83811 & D10649.
2021-03-11 14:26:19 +01:00
a023c1a34c LibOverride: Add second part of auto-resync code.
`BKE_lib_override_library_main_resync` uses
`LIB_TAG_LIB_OVERRIDE_NEED_RESYNC` tags set by RNA override apply code,
and performs detection for the remaining cases (those where new overrides
need to be created for data that was not present before in the library).

It then actually resyncs all needed local overrides.

Part of T83811 & D10649.
2021-03-11 14:26:19 +01:00
534f4e90fd LibOverride: First stage of detection of 'need resync'.
We can fairly easily detect some resync-needed cases when applying the
overrides operations on a Pointer RNA property.

This should cover all cases where an existing override's ID pointer is
changed in its linked data.

We still have to add code to detect when a not-yet-overridden linked ID
needs to become overridden (because its relations to other data-blocks
changed in a way that requires it).

Part of T83811 & D10649.
2021-03-11 14:26:19 +01:00
96064c3bb7 LibOverride: Do not delete no-more-used overrides during resync if they are user-edited.
Ultimately those will be listed with a special icon in the upcoming
Outliner overrides view.

Part of T83811 & D10649.
2021-03-11 14:26:19 +01:00
0a6ed7f035 LibOverride: Add a utility to check if an override has been user-edited.
Part of T83811 & D10649.
2021-03-11 14:26:19 +01:00
f4f8b6dde3 Cycles: Change device-only memory to actually only allocate on the device
This patch changes the `MEM_DEVICE_ONLY` type to only allocate on the device and fail if
that is not possible anymore because of out-of-memory (since OptiX acceleration structures may
not be allocated in host memory). It also fixes high peak memory usage during OptiX
acceleration structure building.

Reviewed By: brecht

Maniphest Tasks: T85985

Differential Revision: https://developer.blender.org/D10535
2021-03-11 14:12:35 +01:00
ba996ddb3a Cleanup: Add comment explaining plan for new Outliner tree-element code design
Explains how we can get rid of implicit assumptions and `void *`
arguments/storage in the future.
2021-03-11 13:49:16 +01:00
0f60dbe4bf Cleanup: Pass anim-data directly to Outliner anim-data tree element constructor
Rather than letting the `TreeElementAnimData` constructor take an ID from which
we get the animation-data based on an assumption on how it's stored, let the
constructor take the animation-data directly. That way we further centralize
the assumptions on the data passed to the element creation to
`tree_element_create()`.
The following commit will add a comment explaining the plan to entirely get rid
of those assumptions in the future.
2021-03-11 13:49:16 +01:00
d4d03f736b Fix (unreported): crash on undo when using pinned id in spreadsheet
Now the behavior is the same as in the properties editor, as far as I can tell.
2021-03-11 13:28:28 +01:00
fade765bf3 Outliner: Add assert to make assumption for new code design explicit
There was an implicit assumption that tree element types using the new code
design set their name on creation. Use an assert to make this explicit. See
f59ff9e03a, which was an error because of this broken assumption.
2021-03-11 13:20:27 +01:00
f59ff9e03a Fix crash when showing NLA actions in the Outliner
Caused by 2e221de4ce in combination with 4292bb060d.
In the former I forgot to set the name for NLA actions in the new code design,
in the latter I made it an assumtion that tree element types using the new
design set the name.
The following commit will make this assumption explicit with an assert.
2021-03-11 13:18:17 +01:00
018fffbe77 Fix failing assert when loading file with untraceable custom asset library
When loading a file with an asset browser open, and it showed a custom asset
library that can't be found currently (e.g. because the file is from somebody
else), the `BLI_assert(0)` in `rna_FileAssetSelectParams_asset_library_get()`
would fail.

There was code to handle this case already, but contrary to what I thought it
didn't run right after file read. Now it does.
2021-03-11 13:06:31 +01:00
42c5303409 Cleanup: Typos in comments (window-manager files)
Typos from a509e79a4c. Looks like issues with an automated cleanup tool.
2021-03-11 13:06:31 +01:00
5f1f233dc9 Spreadsheet: expose more domains and point cloud data
Ref T86135.

Differential Revision: https://developer.blender.org/D10681
2021-03-11 12:23:01 +01:00
Rahul Chaudhary
74f3edc343 Fix T86458: Simple Subdivision node does not preserve vertex groups
Differential Revision: https://developer.blender.org/D10683
2021-03-11 11:56:40 +01:00
28e83bca9d Geometry Nodes: improve handling when the same socket is connected twice
The multi-input-socket cannot be connected to the same socket twice currently.
However, it is still possible to achieve this using an intermediate reroute node.

In this case the origin socket should be listed twice in the `linked_sockets_` list.
Higher level functions can still deduplicate the list if they want.
2021-03-11 11:35:02 +01:00
Aaron Carlisle
85623f6a55 UI: Add Copy Full Data Path to RMB menu
This commit adds an operator to the RMB menu, "Copy Full Data Path",
to copy the full RNA path to the clipboard. It aims to complement
"Copy Data Path", which only copies the part of the path that is needed
for drivers; for writing addons, etc. it is useful to have an option that
gives the full data path.

Similar patches have been submitted before, see D763 and D2746.

This time I did not split the operator (as D2746 did), and this patch does not contain the UI reorganization (as D763 did).

Note, the fixes from D2746 were committed in rB09eac0159db8 so that patch can be closed.

Reviewed By: campbellbarton

Differential Revision: https://developer.blender.org/D10539
2021-03-10 22:10:49 -05:00
9ef24d5aaa Geometry Nodes: fix error adding a value node
Caused by own rBcf2933c38a34 which changed the poll on this node to be
"shading-only", but this one is actually supported.
2021-03-10 22:19:36 +01:00
1d6764dddd Add ico sphere primitive 2021-03-10 16:10:53 -05:00
507b8fa527 Working cylinder node 2021-03-10 15:28:26 -05:00
91e42d81fe Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-10 12:03:23 -05:00
4cd9a1164b EEVEE: ScreenSpaceReflections: Improve minimal hit threshold
This makes the hit delta threshold dependent on the ray angle.
If the ray is more aligned with the view, its intersection
threshold gets bigger to avoid going through geometry.

This improves reflections and fixes the T86448 refraction issue.
2021-03-10 17:57:09 +01:00
352385d109 Cleanup: EEVEE: Remove unused function and fix comment 2021-03-10 17:57:09 +01:00
5ab2252a66 Fix T86429 EEVEE: Ambient occlusion broken on some platform
This was caused by a division by 0 if the ray direction had no
depth difference. Adding a small epsilon fixes the issue.
2021-03-10 17:57:09 +01:00
56bf4f3fb3 EEVEE: ScreenSpaceReflections: Add back support for planar reflections
We now have a new buffer to output reflection depth. This buffer is
only useful for non-planar SSR but we use it to tag the planar rays.

This also touches the raytrace algo for planars to avoid degenerate
lines on very sharp reflections.
2021-03-10 17:57:09 +01:00
793335f3e2 Cleanup: EEVEE: Use correct prefix for view space vectors 2021-03-10 17:57:09 +01:00
d89fb77d89 EEVEE: GGX: Use distribution of visible normal for sampling
This changes the sampling routine to use the method described in
"A Simpler and Exact Sampling Routine for the GGX Distribution of Visible Normals"
by Eric Heitz.
http://jcgt.org/published/0007/04/01/slides.pdf

This avoids generating bad rays and thus improves the noise level in screen-
space reflections / refraction.
2021-03-10 17:57:09 +01:00
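For reference, the visible-normal sampling routine from Heitz's publication is short; a stand-alone C++ transcription of the published algorithm (not EEVEE's actual GLSL, which works in its own shading space) looks roughly like this:
```cpp
/* Sketch of GGX visible-normal sampling after Heitz (JCGT 2018); V is the view
 * direction in local space (Z = normal), u1/u2 are uniform random numbers. */
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
  const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return {v.x / len, v.y / len, v.z / len};
}

static Vec3 sample_ggx_vndf(Vec3 V, float alpha_x, float alpha_y, float u1, float u2)
{
  /* Stretch the view vector so we effectively sample a roughness-1 hemisphere. */
  const Vec3 Vh = normalize({alpha_x * V.x, alpha_y * V.y, V.z});

  /* Orthonormal basis around Vh. */
  const float len_sq = Vh.x * Vh.x + Vh.y * Vh.y;
  const Vec3 T1 = len_sq > 0.0f ?
                      Vec3{-Vh.y / std::sqrt(len_sq), Vh.x / std::sqrt(len_sq), 0.0f} :
                      Vec3{1.0f, 0.0f, 0.0f};
  const Vec3 T2 = {Vh.y * T1.z - Vh.z * T1.y,
                   Vh.z * T1.x - Vh.x * T1.z,
                   Vh.x * T1.y - Vh.y * T1.x};

  /* Sample a disk, warped so only the visible half of the hemisphere is covered. */
  const float r = std::sqrt(u1);
  const float phi = 2.0f * 3.14159265358979f * u2;
  const float t1 = r * std::cos(phi);
  float t2 = r * std::sin(phi);
  const float s = 0.5f * (1.0f + Vh.z);
  t2 = (1.0f - s) * std::sqrt(1.0f - t1 * t1) + s * t2;

  /* Reproject onto the hemisphere, then unstretch back to the original roughness. */
  const float t3 = std::sqrt(std::max(0.0f, 1.0f - t1 * t1 - t2 * t2));
  const Vec3 Nh = {t1 * T1.x + t2 * T2.x + t3 * Vh.x,
                   t1 * T1.y + t2 * T2.y + t3 * Vh.y,
                   t1 * T1.z + t2 * T2.z + t3 * Vh.z};
  return normalize({alpha_x * Nh.x, alpha_y * Nh.y, std::max(0.0f, Nh.z)});
}
```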
9957096f35 EEVEE: ScreenSpaceReflections: Improve hit quality
This changes the hitBuffer to store `ReflectionDir * HitTime, invPdf`
just as in the reference presentation.

This avoids issues when the hit refinement produces a coordinate that
does not land on the correct surface.

We now store the pdf in the same texture, and store it inverted so we can
remove some ALU from the resolve shader.

This also rewrites the resolve shader to not be vectorized, to improve
readability and scalability.
2021-03-10 17:57:09 +01:00
79bc4962d3 Fix T86416: geometry nodes crash choosing a group node link menu
Since rBb279fef85d1a, the node properties for geometry nodes using a
texture are displayed in the Properties Editor.

It was possible to create recursive nodetrees when choosing the 'root'
nodegroup in the node link menu though, leading to a crash.

Now poll if a group node of a particular node could actually be added to
the current tree.

Also check if the tree types actually match.

Maniphest Tasks: T86416

Differential Revision: https://developer.blender.org/D10671
2021-03-10 17:03:38 +01:00
cf2933c38a Add 'foreach_nodeclass' for geometry nodetrees
This way we get a choice when we click on node links in the Properties
Editor.

This also changes some of the more permissive poll functions on some
nodes back to being "shading-only" (these were made permissive in
rBb78f2675d7e5 for simulation nodes, but have not found their way into
geometry nodes yet).

ref b279fef85d / T86416 / D10671

Maniphest Tasks: T86416

Differential Revision: https://developer.blender.org/D10673
2021-03-10 17:03:38 +01:00
eb20250d2a Fix wrong white point of Linear ACES in config reading and the bundled config
The Blender/Cycles XYZ color space has a D65 white point instead of E, and
this was not correctly accounted for both in the OpenColor config reading code
and the bundled config.

This meant that since the OpenColorIO v2 upgrade, the Linear ACES color space
was not working correctly, and other OpenColorIO configs defining
aces_interchange were not interpreted correctly.
2021-03-10 16:56:27 +01:00
1e7b2d0bc6 Geometry Nodes: Add color to boolean implicit conversion
This conversion works the same way as a combination of the existing
color to float3 to boolean conversions, so the boolean result will be
false if the color is black, otherwise true, and the alpha is ignored.
2021-03-10 10:51:44 -05:00
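As a minimal sketch (generic C++, not the actual conversion callbacks in Blender's node code), the rule described above amounts to:
```cpp
/* Illustration only: the color-to-boolean rule described above, ignoring alpha and
 * treating any non-black color as true. ColorRGBA is a stand-in type. */
struct ColorRGBA { float r, g, b, a; };

static bool color_to_bool(const ColorRGBA &color)
{
  /* Alpha is ignored; only pure black RGB converts to false. */
  return color.r != 0.0f || color.g != 0.0f || color.b != 0.0f;
}
```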
7f07eff588 Cryptomatte tests: Fix layer_from_manifest failure.
Error in rBc6a831cfbc9b24fa8b1ed4852178c139e6ed79a6
2021-03-10 20:13:20 +05:30
576c392241 Nodes: Sortable Multi Input Sockets
This patch removes the auto sorting from Multi-Input Sockets and allows
the links to be sorted by drag and drop instead.
As a minor related change, it fixes the drawing of the mute line to
connect to the first input instead of the socket's center.
2021-03-10 14:57:57 +01:00
5991c5c928 Cleanup: unused warning building as a Python module 2021-03-10 23:13:03 +11:00
Pratik Borhade
493628e541 Fix T67190: Edge Loop Select doesn't support non-manifold edges
- New walker added: `bmw_NonManifoldedgeWalker_type`.
- This walks over edges with 3 or more connected faces.

Ref D10497 with edits.
2021-03-10 23:04:37 +11:00
4fece16d45 Geometry Nodes: transfer polygon attributes to points in Point Distribute node 2021-03-10 12:41:50 +01:00
368647bd25 Geometry Nodes: move geometry component type enum to C
This allows us to use it in rna for the spreadsheet editor.
2021-03-10 11:53:31 +01:00
Siddhartha Jejurkar
122fefcc85 DNA: add defaults for UnifiedPaintSettings
Newly created scenes had unified paint settings zeroed. See T80164.

Ref D10658
2021-03-10 21:45:12 +11:00
3dab6f8b7b Spreadsheet: new spreadsheet editor
This implements the MVP for the new spreadsheet editor (T85879). The functionality
is still very limited, but it proved to be useful already. A more complete picture
of where we want to go with the new editor can be found in T86279.

Supported features:
* Show point attributes of evaluated meshes (no original data, no other domains,
  no other geometry types, yet). Since only meshes are supported right now, the
  output of the Point Distribute is not shown, because it is a point cloud.
* Only show data for selected vertices when the mesh is in edit mode.
  Different parts of Blender keep track of selection state and original-indices with
  varying degrees of success. Therefore, when the selected-only filter is used, the
  result might be a bit confusing when using some modifiers or nodes. This will
  be improved in the future.
* All data is readonly. Since only evaluated data is displayed currently, it has to
  be readonly. However, this is not an inherent limitation of the spreadsheet editor.
  In the future editable data will be displayed as well.

Some boilerplate code for the new editor has been committed before in
rB9cb5f0a2282a7a84f7f8636b43a32bdc04b51cd5.

It would be good to let the spreadsheet editor mature for a couple of weeks as part
of the geometry nodes project. Then other modules are invited to show their own data
in the new editor!

Differential Revision: https://developer.blender.org/D10566
2021-03-10 11:35:42 +01:00
f247a14468 Geometry Nodes: handle same socket being linked to input more than once better
Note, this does not allow users to connect the same socket more than once to
a multi-input-socket in the UI. However, the situation could still happen when
using node muting.
2021-03-10 11:12:38 +01:00
70e73974b5 Cleanup: spelling 2021-03-10 15:47:50 +11:00
548e2e2f25 Cleanup: remove annotations from startup scripts
The current convention is not to use annotations for UI/startup scripts.
2021-03-10 15:36:23 +11:00
3e67e2a36e Cleanup: compiler warning (ignored-qualifiers) 2021-03-10 15:35:58 +11:00
53b82efed6 Fix (unreported) geometry node attribute search not working in the
Properties Editor

Since rBb279fef85d1a, the node properties for geometry nodes using a
texture are displayed in the Properties Editor.

rB85421c4fab02 added an attribute search button, but this was still
missing (it gave just the regular text button) when displayed in the
Properties Editor.

ref b279fef85d / T86416 / D10671 / D10673

Maniphest Tasks: T86416

Differential Revision: https://developer.blender.org/D10674
2021-03-10 00:01:28 +01:00
90520026e9 Fix: only follow first input of multi-input-socket when muted
Otherwise muting a Join Geometry node has no effect when there
are multiple Join Geometry nodes in a row.
2021-03-09 21:22:30 +01:00
7d827d0e9e Fix T86422: Expand crashing with EEVEE enabled
Using EEVEE (as well as some other actions like saving the file or
tweaking mesh parameters) can cause a PBVH rebuild. The different sculpt
tools can store PBVH nodes or other related data in their caches, so
this data becomes invalid if the PBVH rebuilds during evaluation. This
ensures that the PBVH does not rebuild while the cache of Expand is
being used, like it already happens for brushes and filters.

Reviewed By: JacquesLucke

Maniphest Tasks: T86422

Differential Revision: https://developer.blender.org/D10675
2021-03-09 21:21:08 +01:00
dcd7dacc4f Graph Editor: FCurve Show Extrapolation Toggle
Adds a toggle to the graph editor (View->Show Extrapolation). When disabled,
fcurves only draw over the keyframe range. For baked fcurves and
ghost fcurves, the range is all sampled points.

It is intended for frequent use, so anybody could assign a hotkey or add
it to quick favorites; that's why GE-View is the best place for it.

Show Extrapolation is the default.

Reviewed By: sybren, Stan1, looch

Differential Revision: http://developer.blender.org/D10442
2021-03-09 15:09:09 -05:00
cfd7b4d1cd BLI: New 'BLI_array_iter_spiral_square'
No functional changes.

This function replaces some of the logic in
`DRW_select_buffer_find_nearest_to_point` that traverses a buffer in a
spiral way to search for a closer pixel (not the closest).

Differential Revision: https://developer.blender.org/D10548
2021-03-09 16:03:56 -03:00
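The traversal idea behind a square-spiral search can be sketched generically like this (illustrative code only; the real `BLI_array_iter_spiral_square` takes a callback and buffer dimensions, and visits cells in expanding square rings around the start point):
```cpp
/* Illustration only: collect cell coordinates in expanding square rings around
 * (cx, cy), clipped to a width x height buffer. */
#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

static std::vector<std::pair<int, int>> spiral_square(int width, int height, int cx, int cy)
{
  std::vector<std::pair<int, int>> order;
  const int max_ring = std::max(std::max(cx, width - 1 - cx), std::max(cy, height - 1 - cy));
  for (int ring = 0; ring <= max_ring; ring++) {
    for (int y = cy - ring; y <= cy + ring; y++) {
      for (int x = cx - ring; x <= cx + ring; x++) {
        /* Only cells on the border of the current ring, and inside the buffer. */
        const bool on_ring = (std::abs(x - cx) == ring || std::abs(y - cy) == ring);
        if (on_ring && x >= 0 && x < width && y >= 0 && y < height) {
          order.emplace_back(x, y); /* visit pixel (x, y) here */
        }
      }
    }
  }
  return order;
}
```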
80f7f1070f Geometry Nodes: Add Attribute interpolation for polygon domains
This commit adds interpolation to and from attributes on the polygon
domain. Interpolation is done automatically when a node uses attributes
on two different domains. The following are the new interpolations and
corresponding simple test cases:
- **Point to Polygon**: Painting the shade smooth attribute in weight
  paint mode
- **Polygon to Point**: Moving points along a normal based on the
  material index
- **Polygon to Corner**: Scaling a UV map with the material index
  before sampling a texture

{F9881516}

This is also necessary for an improved implementation of the `normal`
attribute.

Differential Revision: https://developer.blender.org/D10393
2021-03-09 13:39:05 -05:00
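One plausible way to picture the polygon-to-point direction is simple averaging over the faces touching each vertex; the sketch below is only an illustration of that idea (stand-in mesh layout), not the code added in this commit:
```cpp
/* Illustrative averaging scheme for polygon -> point interpolation: each point gets
 * the mean of the values of the faces it belongs to. */
#include <vector>

struct Face { std::vector<int> vertex_indices; };

static std::vector<float> polygon_to_point(const std::vector<Face> &faces,
                                           const std::vector<float> &face_values,
                                           int verts_num)
{
  std::vector<float> sums(verts_num, 0.0f);
  std::vector<int> counts(verts_num, 0);
  for (size_t i = 0; i < faces.size(); i++) {
    for (const int v : faces[i].vertex_indices) {
      sums[v] += face_values[i];
      counts[v] += 1;
    }
  }
  for (int v = 0; v < verts_num; v++) {
    if (counts[v] > 0) {
      sums[v] /= float(counts[v]);
    }
  }
  return sums;
}
```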
996586860b Cleanup: Do not pass stack allocated string to MEM_callocN 2021-03-09 13:31:51 -05:00
3f7b585a08 Fix crash in boundary brush after refactor
A missing continue in this loop.

Reviewed By: JacquesLucke

Differential Revision: https://developer.blender.org/D10610
2021-03-09 18:56:19 +01:00
e5c1e13ef0 Sculpt: Init Face Sets by Face Sets boundaries
This adds an extra option to the Face Sets Init operator to initialize
individual Face Sets based on the current Face Sets boundaries.
In particular, this is useful for splitting the patterns created by
Expand into individual Face Sets for further editing.

Reviewed By: JacquesLucke

Differential Revision: https://developer.blender.org/D10608
2021-03-09 18:55:20 +01:00
b1ef55abdb Cleanup: Document ensure()-like behaviour of KeyMaps.new()
Document the fact that `bpy.types.KeyMaps.new()` will not create a new
keymap but instead return an existing one, if one with the given
name/space/region already exists.

No functional changes.
2021-03-09 17:42:23 +01:00
8351e2d4b9 Cleanup: add missing full stop to docstring of function
No functional changes.
2021-03-09 17:42:23 +01:00
Corbin Dunn
4069016de8 macOS/Ghost: Simplify pasteboard and screen count code
Reviewed By: #platform_macos, sebbas, ankitm
Differential Revision: https://developer.blender.org/D10616
2021-03-09 21:38:47 +05:30
Corbin Dunn
9d046019e2 macOS/Ghost: Remove unnecessary nil checks.
Reviewed By: #platform_macos, sebbas, ankitm
Differential Revision: https://developer.blender.org/D10616
2021-03-09 21:38:44 +05:30
Corbin Dunn
cd1e6df534 macOS/Ghost: Replace NSAutoreleasePool with @autoreleasepool
- Automatic and guaranteed cleanup.
- Improves readability and reduces chances of errors by removing
`[pool drain]` statements.

Reviewed By: #platform_macos, sebbas, ankitm
Differential Revision: https://developer.blender.org/D10616
2021-03-09 21:38:41 +05:30
04e816bd11 Fix T86432: missing check if attribute is available
This failed when the component did exist, but did not contain any data.
2021-03-09 17:00:17 +01:00
c6a831cfbc Cleanup: use raw strings. 2021-03-09 16:50:47 +01:00
077beb738c Cleanup: use nullptr in cpp. 2021-03-09 16:34:43 +01:00
0700441578 Geometry Nodes: Expose "shade smooth" as an attribute
This patch exposes the "Shade Smooth" value as a boolean attribute.
This setting is exposed as a check-box in the mesh data properties,
but the value is actually stored for every face, allowing some faces
to be shaded smooth with a simple per-face control.

One bonus, this allows at least a workaround to the lack of control
of whether meshes created by nodes are shaded smooth or not: just use
an attribute fill node.

Differential Revision: https://developer.blender.org/D10538
2021-03-09 09:27:44 -05:00
Jeroen Bakker
eaada56591 Cleanup: add resource manager for cryptomatte session.
Auto frees the cryptomatte session when the pointer is collected from the
stack.

Reviewed By: Jacques Lucke

Differential Revision: https://developer.blender.org/D10667
2021-03-09 15:19:12 +01:00
53daf2a0db Use 3x3 matrix for normal transformation 2021-03-09 09:18:35 -05:00
cdb0b3cedc Compositor: Silence -Wself-assign
Use member initializer list for constructor.
Use `this->` for member function.
Introduced in rBef53859d24a9720882e3ca6c5415faefec6fb82c

Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D10653
2021-03-09 19:19:21 +05:30
be6b3923f5 Compositor: silence clang/clang-tidy override warnings
`-Winconsistent-missing-override` and `modernize-use-override`.

Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D10654
2021-03-09 19:19:08 +05:30
6114d909d6 Fix T86417: Crash deleting Shader AOV from an empty list
Missing NULL check in {rB2bae11d5c08a}.

Candidate for corrective release I guess.

Maniphest Tasks: T86417

Differential Revision: https://developer.blender.org/D10666
2021-03-09 14:45:38 +01:00
68ff213cd4 Cleanup: Correct area writing function name
This function writes areas, not regions. The old name read as if it was
regions.
2021-03-09 14:18:09 +01:00
7f9b6b1dc9 Sculpt: 'Unify' Dyntopo Detail Size operators
Prior to rB99a7c917eab7, Shift + D was used to set detail size for both
constant and relative detail (using radial control). The commit added an
improved operator for doing this for constant detail (showing the
triangle grid representation), but left the user without a shortcut to
do this for relative detail.

Interestingly rB99a7c917eab7 only changed this for the Blender keymap;
the Industry Compatible keymap still has the "old" entry.

This patch changes both keymaps to have both entries.

For user experience, the real change here is to have both available on
one 'primary' shortcut (Shift+D): the improved
'dyntopo_detail_size_edit' operator will now act on all possible cases.
If it deals with constant detail, it acts as before; if it deals with
relative detail etc., it will fall back to the "old" way of doing it via
radial control instead. I assume this addresses what was stated in
rB99a7c917eab7: "Deciding if both detail sizes can be unified needs a
separate discussion"

Also, move dyntopo_detail_size_edit to sculpt_detail.c

Fixes T83828

Maniphest Tasks: T83828

Differential Revision: https://developer.blender.org/D9871
2021-03-09 13:55:37 +01:00
2283b6ef69 Fix the session UUID being set for temporary blend file data
Triggered by temporarily appending library linked data from Python.
2021-03-09 23:47:43 +11:00
07a9d39f57 Fix: unconnected multi socket input crashes
The crash would only happen when the output of the Join Geometry node is used.
2021-03-09 09:45:18 +01:00
dc3e9048ae Cleanup: fix warnings 2021-03-09 09:31:14 +01:00
73d53b3bd8 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-08 22:48:47 -05:00
745576b16e UI: Clean up sub-panel for new boolean modifier options
A few changes to make this consistent with other modifier panels:
 - Title case for UI labels
 - Use property split (and therefore decorators)
 - Declare sublayout variables after getting modifier info
2021-03-08 22:22:16 -05:00
55fe91b83b Progress on cylinder node 2021-03-08 22:14:19 -05:00
94d826f6d6 Cleanup, add transformation 2021-03-08 21:36:56 -05:00
51b731d479 Add primitive cube node 2021-03-08 21:36:39 -05:00
5fa962c7f6 Expose transform operation and optimize normal transform 2021-03-08 21:35:58 -05:00
543783fc61 Add float3.is_zero() 2021-03-08 21:34:18 -05:00
a70a715f67 Fix build 2021-03-08 17:04:08 -05:00
1f95b07b32 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-08 16:56:18 -05:00
d25ab68cf4 Cleanup: Complete earlier geometry component refactor
This was meant to be part of rB9ce950daabbf, but the change dropped from
the set at some point in the process of updating and committing.
Sorry for the noise.
2021-03-08 16:31:51 -05:00
8df968d69f "Show Texture in texture tab" button: full support for Sculpt / Paint
In {rBb279fef85d1a} the button that displays a texture in a Properties
Editor texture tab was added for geometry nodes.
The same commit will actually show them for Brush textures as well (but
disabled, because the Texture users don't match).
This task is for finalizing proper support for Brush textures as well.
There was originally a separate patch for this (see {D9813}) but most of
it was already implemented by the above commit.

**what this solves**
from the default startup file:
- go to any sculpt or paint mode and add a texture to your brush
- observe the button to edit this texture in the Properties editor is
greyed out

{F9860470}

There are two possible solutions:
- [1] call the texture template for the brush `texture_slot` texture
(instead of the brush 'texture') from the python UI code, this is then
working in harmony how ButsTextureUser works for brushes
- [2] tweak the way `ButsTextureUser` works (dont rely on
`RNA_BrushTextureSlot` there)

This patch implements the first solution.
Since `brush.texture_slot` is `br->mtex` RNA wrapped and `brush.texture`
is `br->mtex.tex` RNA wrapped, this really comes down to doing the same
thing. I checked that creating a new texture and unlinking/deleting will
have the same results even though they take slightly different code
paths: assignment and NULLing the pointers are working on the same (see
above) and RNA update callbacks also do the same [even though in
different functions]:
- brush.texture will do rna_Brush_main_tex_update
- brush.texture_slot.texture will do rna_TextureSlotTexture_update /
rna_TextureSlot_update
(only difference here is an additional DEG relations update in the case
of texture_slot which should not do harm)

Differential Revision: https://developer.blender.org/D10626
2021-03-08 19:38:35 +01:00
2e19509e60 Geometry Nodes: Rename subdivision nodes
This makes the following changes to the names of the two
geometry nodes subdivision nodes:
 - `Subdivision Surface` -> `Subdivide Smooth`
 - `Subdivision Surface Simple` -> `Subdivide`
Most of the benefit is that the names are shorter, but it also better
mirrors the naming of operations in edit mode, and phrases the names
more like actions. This was discussed with the geometry nodes team.
2021-03-08 13:37:49 -05:00
Mark Stead
a45af290f3 Fix T86357: EEVEE: Shadows: Casters have exponential performance degradation with many objects
When you have many distinct objects, in an Eevee render then the shadow caster gets exponentially slower as the number of (distinct) objects increase.

This is because of the way that frontbuffer->bbox (EEVEE_BoundBox array) and the associated frontbuffer->update bitmap are resized.
Currently the resizing is done by reserving space for SH_CASTER_ALLOC_CHUNK (32) objects at a time.
When the number of objects is large, then the MEM_reallocN() gets progressively slower because it must memcpy the entire bbox/bitmap data to the new memory chunk.
And there will be a lot of *memcpy* operations for a large scene.
(Obviously there are a significant number of memory allocations/deallocations too - though this would be linear performance.)

I've switched to doubling the frontbuffer->alloc_count (buffer capacity) instead of adding SH_CASTER_ALLOC_CHUNK (32).  As I understand it, this is the only way to eliminate the exponential slowdown.  Just increasing the size of SH_CASTER_ALLOC_CHUNK would still result in exponential slowdown eventually.

In other changes, the "+ 1" in this expression is not necessary.
if (id + 1 >= frontbuffer->alloc_count)
The buffer is 0-based.  So when the buffer is initially allocated, id values from bbox[0] to bbox[31] are valid.  Hence the resizing should be triggered when frontbuffer->count == frontbuffer->alloc_count.
As it stands the "+ 1" results in resizing the buffer, when there is still capacity for one more object in the buffer.

I've changed the initial buffer allocation to use MEM_mallocN() instead of MEM_callocN().  The difference is that malloc() doesn't memset the buffer (with zeros) when allocated.  I've checked the code where new bbox records are created, and it does not rely on the buffer being initialised with zeros.
Anyway, isn't calloc() safer than using malloc()?  Well no, it's actually the opposite in this case.  Every time the buffer size is increased, it is done using realloc(), and this does not zero-out the uninitialised portion of the buffer.  So the code would break if it was modified to assume that the buffer contains zeros.  Hence I believe initialising the buffer using calloc() could be misleading to a new developer.

Won't this result in increased memory usage?  Yes, if you have millions of objects in your scene, then you are potentially using up to twice the memory for the shadow caster.  (However if you have millions of objects in your scene you're probably finding the Eevee render times slow.)
Note that once the render gets going the frontbuffer bbox/bitmap will be shrunk to a multiple of SH_CASTER_ALLOC_CHUNK (32), therefore releasing the overallocation of memory.
As observed in Visual Studio - this appears to be prior to peak memory usage anyway.
Note this shrinking is executed in EEVEE_shadows_update() - during the first render sample pass.  If necessary you could consider shrinking the buffer immediately after EEVEE_shadows_caster_register() has done its work.  (Note however it appears you would need to add that function call in multiple places.)
Anyway as per the bug report I raised, I observed a 5% increase in peak-memory.  And I'm unclear whether this difference in memory is due to me running the debug build.  (It could be that there is no difference because of the shrinking.)

I couldn't figure out how the shadow caster backbuffer works.  I see that EEVEE_shadows_init() has an explicit command to swap the front/back buffers.  However this is done only when the buffers are first initialised and there is nothing in there yet.  In my testing, the backbuffer->count was always zero; EEVEE_shadows_update() never did anything with the backbuffer.

Finally this problem is most evident when using Geometry Nodes or a Particle System to instantiate many objects.  Objects created through say the array modifier do not cause any issues because it is considered one object by the shadow caster.

Reviewed By: #eevee_viewport, fclem

Differential Revision: https://developer.blender.org/D10631
2021-03-08 18:51:48 +01:00
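The core of the change above is the growth policy: growing by a fixed chunk gives quadratic total copying as objects are registered, while doubling the capacity keeps it amortized linear. A generic sketch of the two policies (plain C++ with realloc; the struct and function are illustrative, not the actual EEVEE_shadows_caster_register() code, which also resizes an update bitmap):
```cpp
#include <cstdlib>

struct BoundBox { float center[3], halfdim[3]; };

struct CasterBuffer {
  BoundBox *bbox = nullptr;
  int count = 0;
  int alloc_count = 0;
};

static void caster_buffer_append(CasterBuffer *buf, const BoundBox &bb)
{
  if (buf->count == buf->alloc_count) {
    /* Old policy: buf->alloc_count += 32; every large scene pays repeated full copies.
     * New policy: double the capacity, so the total copy cost stays linear overall. */
    buf->alloc_count = buf->alloc_count ? buf->alloc_count * 2 : 32;
    buf->bbox = static_cast<BoundBox *>(
        std::realloc(buf->bbox, sizeof(BoundBox) * buf->alloc_count));
  }
  buf->bbox[buf->count++] = bb;
}
```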
84a4f2ae68 Geometry Nodes: Improve performance of point distribute node
This commit refactors the point distribute node to skip realizing
the instances created by the point instance node or the collection
and object info nodes. Realizing instances is not necessary here
because it copies all the mesh data and interpolates all
attributes from the instances when this operation does not
need to modify the input geometry at all.

In the tree leaves test file this patch improves the performance of
the node by about 14%. That's not very much, the gain is likely larger
for more complicated input instances with more attributes (especially
attributes on different domains, where interpolation would be necessary
to join all of the instances). Another possible performance improvement
would be to parallelize the code in this node where possible.

The point distribution code unfortunately gets quite a bit more
complicated because it has to handle the complexity of having many
inputs instead of just one.

Note that this commit changes the randomness of the distribution
in some cases, as if the seed input had changed.

Differential Revision: https://developer.blender.org/D10596
2021-03-08 12:45:06 -05:00
fc0de69471 Cleanup: Split up new files for Outliner ID tree-elements
Splits up `tree_element_id.cc`/`tree_element_id.hh`.
If we move all ID types into this one file, it will become rather big. Smaller
files are probably easier to work with. We could still keep small classes like
`TreeElementIDLibrary` in the general file, don't mind really, but this creates
separate files for that too.
2021-03-08 18:38:23 +01:00
f0ad78e17c Cleanup: Rename recently added Outliner files to exclude "_base" suffix
These files can contain more than just the "base" tree element types. E.g. the
class for the view-layer base element can also contain the class for the
view-layer elements. Otherwise we'd end up with like >50 files for the
individual types, most of them very small.
So just give the files a general name and put the related classes in there.
2021-03-08 18:38:23 +01:00
4292bb060d Outliner: Port scene elements and some related types to new tree-element code design
Continuation of work in 2e221de4ce, 249e4df110 and 3a907e7425.

Adds new tree-element classes for the scene-ID, scene collections, scene
objects, and the view layers base.
There is some more temporary stuff in here, which can be removed once we're
further along with the porting. Noted that in comments.
2021-03-08 18:38:23 +01:00
8771f015f5 Python version of make_source_archive.sh
This is a Python version of the existing `make_source_archive.sh`
script. IMO it's easier to read, and it'll also be easier to extend with
the necessary functionality for D10598.

The number of lines of code is larger than `make_source_archive.sh`, but
it has considerably fewer invocations of `awk` ;-) Also, the filtering
is integrated, instead of forking out to Python to prevent certain files
from being included in the tarball.

Reviewed By: dfelinto, campbellbarton

Differential Revision: https://developer.blender.org/D10629
2021-03-08 18:10:26 +01:00
9ce950daab Cleanup: Move geometry component implementations to separate files
Currently the implementations specific to each geometry type are in
the same file. This makes it difficult to tell which code is generic
for all component types and which is specific to a certain type.
The two files, `attribute_access.cc` and `geometry_set.cc`, are
also getting quite long.

This commit splits up the implementation for every geometry component,
and adds an internal header file for the common parts of the attribute
access code. This was discussed with Jacques Lucke.
2021-03-08 11:41:23 -05:00
bf799cb12c Fix T81741 EEVEE: Ambient Occlusion does not converge properly
This was due to the AO random sampling using the same "seed" as
the AA jitter. Decorrelating the noise fixes the issue.
2021-03-08 17:25:38 +01:00
30cb4326fe EEVEE: Ambient Occlusion: Add sample parameter support for the AO node
The actual sample count is rounded up to a multiple of 4 because we
sample 4 horizon directions.
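
As a hedged illustration of that round-up (a hypothetical helper, not the actual shader code):

    def rounded_ao_samples(requested):
        # Round the requested AO sample count up to the next multiple of 4,
        # since four horizon directions are sampled at a time.
        return ((requested + 3) // 4) * 4

    rounded_ao_samples(6)  # -> 8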

Changing this setting forces the shader to recompile (because it uses a
GPU_constant).
2021-03-08 17:25:38 +01:00
1540f1df07 EEVEE: RenderPass: Improve AO pass if screen space radius is small
This just bypasses the occlusion computation if there is no occlusion
data. This avoids weird-looking occlusion due to the screen-space
geometric normal reconstruction.
2021-03-08 17:25:38 +01:00
0983e66e03 EEVEE: Occlusion: Use ScreenSpaceRay for iteration
The sampling is now optimal, with every sample being at least one pixel
apart. Also use a squared distribution to improve the sampling near the
center.

This also removes the thickness heuristic since it seems to remove
a lot of detail and biases the AO too much.
2021-03-08 17:25:38 +01:00
5db5966cdb EEVEE: Sampling: Split hemisphere sampling just like GGX
This is useful for debugging raycasting.
2021-03-08 17:25:16 +01:00
6842c549bb EEVEE: SSRayTrace: Cleanup/Refactor
This is a major rewrite that improves the screen space raytracing
a little bit.

This also decouples ray preparation from raytracing so it can be reused
in other parts of the code.

This changes a few things:
- Reflections have lower grazing angle failure
- Reflections have less self intersection issues
- Contact shadows are now fully opaque (faster)

Unrelated, but some self-intersection / incorrect rays are caused by
the ray reconstruction technique used by the SSR. This is not fixed by
this commit, but I added a TODO.
2021-03-08 17:25:16 +01:00
ba75ea8012 EEVEE: Use Fullscreen maxZBuffer instead of halfres
This removes the need for a per-mipmap scaling factor and avoids trilinear
interpolation issues. We pad the texture so that all mipmaps have pixels in the next mip.

This simplifies the downsampling shader too.

This also changes the SSR radiance buffer in the same fashion.
2021-03-08 17:25:16 +01:00
6afe2d373a Fix ID preview not updating in Asset Browser
Fix ID preview not updating in Asset Browser, by actually sending an
explicit `NA_EDITED` along with the `NC_ASSET` notifier.
2021-03-08 17:20:23 +01:00
a6a8ca9212 Fix T86063: support 'Relative Path' setting opening (alembic) caches
This was reported as opening alembic caches ignoring the
'use_relative_paths' preference, but this operator just did not have
this setting. Fortunately, adding this is just a simple switch.

Maniphest Tasks: T86063

Differential Revision: https://developer.blender.org/D10568
2021-03-08 17:01:03 +01:00
1aa59464b6 Cleanup: Move LibOverride debug prints to CLOG. 2021-03-08 16:53:16 +01:00
9cb5f0a228 Spreadsheet: add boilerplate code for new editor type
This adds the initial boilerplate code that is required to introduce
the new spreadsheet editor. The editor is still hidden from the UI.

It can be made visible by undoing the change in `rna_screen.c`.

This patch does not contain any business logic for the spreadsheet editor.

Differential Revision: https://developer.blender.org/D10645

Ref T86279.
2021-03-08 16:25:08 +01:00
d230c9b96c Alembic: avoid red overwrite warning when opening a file
Pass `FILE_OPENFILE` instead of `FILE_SAVE` when selecting a file for
reading.
2021-03-08 15:49:16 +01:00
5a67407d5a File Browser: scroll selected files into view
Add operator `FILE_OT_view_selected` to the file browser (and thus also
to the asset browser) that scrolls selected files into view.

This includes the active file, even though it is not selected. In
certain cases the active file can loose its selected state (clicking
next to it, or refreshing the asset browser), but then it's still shown
in the right-hand sidebar. Because of this, I found it important to take
it into account when scrolling.

This also includes a change to the keymaps:
- Blender default: {key NUMPAD_PERIOD} is removed from the "reload"
  operator, and assigned to the new "view selected files" operator. The
  reload operator was already doubly bound, and now {key R} is the only
  remaining hotkey for it.
- Industry compatible: {key F} is assigned to the new "view selected
  files" operator. This is consistent with the other "view selected"
  operators in other editors.

Reviewed By: Severin

Differential Revision: https://developer.blender.org/D10583
2021-03-08 15:21:34 +01:00
e20b31504a Fix T86347: Add Primitive Tool fails for 1x1x1 scale
The Tool stores desired dimensions as `scale` and thus had to halve the
scale again in `make_prim_init()` because by default all primitives are
created at a size of 2 in Blender.

This worked, but:
- [1] it logged something like size=2, scale=2,2,2 for a 2x2x2 cube
[which does not sound right, it should be size=2 scale=1,1,1]
- [2] it had to make an exception for the case scale is exactly 1x1x1
[this happens when the property is not set specifically, e.g. adding
primitives from the menu]
-- this exception led to double sized primitives being created when the
tool asked for exact dimensions of 1x1x1

Now - instead of compensating in `make_prim_init()` - do this earlier in
the tool itself, see `view3d_interactive_add_modal`. This fixes the bug
and now also correctly logs size=2, scale=0.5,0.5,0.5 for a 1x1x1 cube.
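
A small hedged sketch of the arithmetic (placeholder values, not the actual code in `view3d_interactive_add_modal`):

    # Primitives are created at a default size of 2, so the tool now derives the
    # scale from the requested dimensions up front instead of halving it later.
    default_size = 2.0
    requested_dimensions = (1.0, 1.0, 1.0)
    scale = tuple(d / default_size for d in requested_dimensions)
    # -> (0.5, 0.5, 0.5), matching the "size=2 scale 0.5,0.5,0.5" log for a 1x1x1 cube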

Maniphest Tasks: T86347

Differential Revision: https://developer.blender.org/D10632
2021-03-08 15:18:16 +01:00
91825ebfe2 UI: UVProject modifier: clarify aspect & scale are only for camera
projectors

Make this clear in property UI descriptions and deactivate aspect &
scale fields if no camera projectors are present.

ref T86268

Maniphest Tasks: T86268

Differential Revision: https://developer.blender.org/D10634
2021-03-08 15:09:17 +01:00
9e09214979 PyAPI: add bpy.types.BlendFile.temp_data for temporary library loading
This adds support for creating a `BlendFile` (internally called `Main`),
which is limited to a context.

Temporary data can now be created which can then use
`.libraries.load()` the same as with `bpy.data`.

To prevent errors caused by mixing the temporary IDs with data in
`bpy.data`, they are tagged as temporary so they can't be assigned
to properties; however, they can be passed as arguments to functions.
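
A minimal usage sketch, assuming the context-manager form described above and a placeholder library path:

    import bpy

    # Temporary, context-bound data; IDs created here are tagged as temporary.
    with bpy.data.temp_data() as temp_data:
        with temp_data.libraries.load("/path/to/library.blend") as (data_from, data_to):
            data_to.objects = data_from.objects
        # The loaded IDs live in the temporary data rather than bpy.data; they can
        # be passed to functions, but not assigned to properties.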

Reviewed By: mont29, sybren

Ref D10612
2021-03-09 01:01:31 +11:00
cfd11af981 readfile: add id_tag_extra argument
This allows adding ID tags when linking/loading data.

This is needed to implement loading non 'G.main' ID data-blocks,
see D10612.
2021-03-09 00:55:55 +11:00
09a8f5ebca Fix T86384: Click detection fails in some cases with modifiers
Regression in b5d154f400
2021-03-09 00:44:46 +11:00
Germano Cavalcante
b6c07d69e2 Fix T86106: bpy.types.SpaceView3D.draw_handler_remove(...) causes Blender to crash
The handle of a drawing callback can be removed within the drawing function itself.

This causes `var = (type)(((Link *)(var))->next)` to read an invalid memory value in C.
2021-03-08 10:38:11 -03:00
171ba42439 Fix error in unused argument cleanup
Bad keyword argument rename from
9dc0c44aa1
2021-03-09 00:25:46 +11:00
3669a3e2e9 Fix Cycles CUDA build error with Visual Studio 2019 v16.9
Something in this update broke the floor() function in CUDA; instead, use
floorf() like we do everywhere else in the kernel code. Thanks to Ray
Molenkamp for identifying the solution.
2021-03-08 14:06:16 +01:00
240e721dd3 Fix T86210: No preview icons for non-8bit images
It looks like we never generated correct icon previews for images with
float_rects (non-8-bit images). Images from the report were 16-bit PNGs.

In this case, `icon_preview_startjob` would return early (it only
checked if the ImBuf `rect` was NULL -- which is the case if it has a
`rect_float` instead). This is not necessary since `icon_copy_rect` is
perfectly capable of taking float rects.

Now correct the check and only return early if both `rect` & `rect_float`
are NULL.

note: this will not refresh icon previews from existing files
automatically. For this, use File > Data Previews > Clear Data-Block
Previews.

Maniphest Tasks: T86210

Differential Revision: https://developer.blender.org/D10601
2021-03-08 13:51:56 +01:00
9ba1ff1c63 Fix T86373: crash picking colors in geometry node sockets
Caused by {rB85421c4fab02}.

The above commit treated `SOCK_RGBA` and `SOCK_STRING` the same in
`std_node_socket_draw`.
That would turn an RGBA button into a text button (for attribute search),
leading to trouble.
Note these were just treated the same prior to the above commit because both
were doing the layout split.

Now just give RGBA sockets their own case.

Maniphest Tasks: T86373

Differential Revision: https://developer.blender.org/D10646
2021-03-08 13:47:39 +01:00
1775ea74c1 Cleanup: Change extension .cpp to .cc 2021-03-08 13:41:52 +01:00
b9cd2f4531 Revert "Fix modernize-raw-string-literal complaints from clang-tidy."
This reverts commit 7a34bd7c28.

Broke windows build. Can apparently fix with /Zc:preprocessor flag
for windows but need a Windows dev to make that fix.
2021-03-08 06:45:45 -05:00
e9e53ff3a6 Cleanup: Removed duplicate slash in macOS SDK path
Cleanup although it's harmless.
2021-03-08 11:55:35 +01:00
Leon Leno
e12ad2bce0 Geometry Nodes: support Vector Rotate node
Differential Revision: https://developer.blender.org/D10410
2021-03-08 11:37:37 +01:00
2b9eea17cc Particles: change default name to "ParticleSystem"
Since {rB7a6b46aac56b}, particle systems were named "ParticleSettings"
by default, same as particle settings themselves. These are not the same
thing and their names should reflect that.

Issue came up in T86366.

Now name them "ParticleSystem" by default, name uniqueness is preserved
for both system and settings.

Maniphest Tasks: T86366

Differential Revision: https://developer.blender.org/D10641
2021-03-08 10:47:27 +01:00
74c50d0c77 Cleanup: Remove unused variable. 2021-03-08 08:52:43 +01:00
cbf033c055 Fix T86026: Crash Opening Cryptomatte File.
But this time fixing the root cause. Writing undo files is done in a separate
thread. This patch moves the updating of the matte_id to when the user
actually changes the matte.
2021-03-08 08:47:22 +01:00
89757f918c Revert "Fix T86026: Crash Opening Cryptomatte File."
This reverts commit 7f36498740.
2021-03-08 08:28:31 +01:00
a1bc7729f2 Cleanup: use ofs instead of offs as an abbreviation for offset
Used for local structs/variables,
since `ofs` is by far the most widely used abbreviation.
2021-03-08 14:54:23 +11:00
b4f5128b21 Cleanup: rename offs to offset 2021-03-08 14:44:57 +11:00
242a278b56 Cleanup: rename offs to offscreen 2021-03-08 14:40:57 +11:00
c950e08cbb Cleanup: add use prefix for boolean 2021-03-08 14:39:28 +11:00
1ba15f1f7f Speedup for usual non-manifold exact boolean case.
The commit rB6f63417b500d that made exact boolean work on meshes
with holes (like Suzanne) unfortunately dramatically slowed things
down on other non-manifold meshes that don't have holes and didn't
need the per-triangle insideness test.
This adds a hole_tolerant parameter, false by default, that the user
can enable to get good results on non-manifold meshes with holes.
Using false for this parameter reduces the time from 90 seconds
to 10 seconds on an example with 1.2M triangles.
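
A hedged example of enabling the option from Python (assuming it is exposed on the Boolean modifier as `use_hole_tolerant`; object names are placeholders):

    import bpy

    obj = bpy.data.objects["Fractured"]                     # placeholder target object
    mod = obj.modifiers.new(name="Boolean", type='BOOLEAN')
    mod.solver = 'EXACT'
    mod.object = bpy.data.objects["Cutter"]                 # placeholder cutter object
    mod.use_hole_tolerant = True  # assumed RNA name; off by default to keep the fast path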
2021-03-07 18:13:19 -05:00
7a34bd7c28 Fix modernize-raw-string-literal complaints from clang-tidy. 2021-03-07 17:03:24 -05:00
239a7d93ae Cleanup: fix compiler warning 2021-03-07 18:05:25 +01:00
74979459cb Geometry Nodes: simplify allocating dynamically sized buffer on stack 2021-03-07 17:53:05 +01:00
9c8382e618 Cleanup: do not pass class member to class methods 2021-03-07 17:22:00 +01:00
00f218602d Alembic procedural: fix missing update when only the transforms change
The missing update has two sources:

The TimeSampling used for looking up transformations in the cache was
uninitialized. To fix this, simply use the TimeSampling from the last
transformation in the hierarchy (that is the object's parent), which
should also contain the time information for all of its parents.

The objects are not tagged for update when their transformations change.
2021-03-07 17:16:16 +01:00
ac4d45dbf1 Alembic procedural: fix infinite update loop when modifying Object level properties 2021-03-07 17:16:16 +01:00
111a77e818 Cleanup: remove dead code 2021-03-07 17:03:20 +01:00
a9fc9ce0ae Cleanup: remove dead code 2021-03-07 16:08:59 +01:00
b30f89918e Fix T85632 Improve Exact boolean in cell fracture of Suzanne.
The Exact boolean used in the cell fracture addon incorrectly
kept some outside faces: some raycasts went into the open
eye socket and then out of the head, leading to one ray direction
(out of 8) saying the face was inside the head. The current
code required only 1 of 8 rays to report "inside", to accommodate the
case of a plane used in a boolean to bisect. But this cell fracture
case needs more confidence of being inside. So the intersection
test was changed to require at least 3 of 8 rays to be inside.

Maybe the number of rays to indicate insideness should be exposed
as an option, to allow user tuning according to the degree of
"non-volumeness" of the arguments, but will try at least for now
to magically guess the right value of the rays-inside threshold.

Note: all of this only for the case where the arguments are not
all PWN (approx: manifold). The all-PWN case doesn't use raycast.
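
A minimal Python sketch of the voting heuristic described above (illustration only, not the actual C++ code):

    def face_is_inside(ray_says_inside, threshold=3):
        """ray_says_inside: one boolean per cast ray (8 rays in this case)."""
        return sum(ray_says_inside) >= threshold

    # With the old threshold of 1, a single stray ray through the eye socket was
    # enough to keep a face; requiring 3 of 8 rejects it.
    face_is_inside([True, False, False, False, False, False, False, False])  # -> False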
2021-03-07 08:55:11 -05:00
e72dc1e6c6 Cleanup: compiler warnings 2021-03-07 14:46:48 +01:00
c14770370f Cleanup: fix implicit conversion warning 2021-03-07 14:27:08 +01:00
649916f098 BLI: make it harder to forget to destruct a value
Instead of returning a raw pointer, `LinearAllocator.construct(...)` now returns
a `destruct_ptr`, which is similar to `unique_ptr`, but does not deallocate
the memory and only calls the destructor instead.
2021-03-07 14:24:52 +01:00
456d3cc85e BLI: reduce wasted memory in linear allocator
The main change is that large allocations are done separately now.
Also, buffers that small allocations are packed into, have a maximum
size now. Using larger buffers does not really provider performance
benefits, but increases wasted memory.
2021-03-07 14:15:20 +01:00
84da76a96c Cleanup: use POINTER_OFFSET macro 2021-03-07 19:27:11 +11:00
8618c4159c Cleanup: use class instead of struct in forward declaration 2021-03-06 17:54:28 +01:00
9828882b8e Cleanup: clang tidy 2021-03-06 16:54:09 +01:00
d2869943d2 Nodes: refactor derived node tree
This is a complete rewrite of the derived node tree data structure.
It is a much thinner abstraction over `NodeTreeRef` than before.
This gives the user of the derived node tree more control and allows
for greater introspection capabilities (e.g. before muted nodes were
completely abstracted away; this was convenient, but came with
limitations).

Another nice benefit of the new structure is that it is much cheaper
to build, because it does not inline all nodes and sockets in nested
node groups.

Differential Revision: https://developer.blender.org/D10620
2021-03-06 16:51:06 +01:00
cfd766cebd Fix T86308 Crash in Exact Boolean when have custom normal layer.
Custom Normal layers can't be interpolated, so a check was needed
for non-interpolatable layers before trying to interpolate.
2021-03-06 09:05:55 -05:00
11efc9087b Cleanup: remove workaround for Python 3.7x crashing with libedit
This removes the workaround for T43491. It's no longer needed as
Python 3.9x supports libedit as an alternative to readline
on all platforms.
2021-03-06 19:31:49 +11:00
995bb0860a Cleanup: remove redundant draw callback 2021-03-06 19:26:18 +11:00
37793b90be Cleanup: unused imports 2021-03-06 19:26:18 +11:00
9dc0c44aa1 Cleanup: unused arguments 2021-03-06 19:26:18 +11:00
bd79691599 Cleanup: unused variables 2021-03-06 19:00:18 +11:00
753f1cf0ad Cleanup: redundant pose bone assignment 2021-03-06 18:33:54 +11:00
b22b037229 Cleanup: rename wm_get_cursor_position
Match naming of other wm_cursor_position_* functions.
2021-03-06 18:33:54 +11:00
3bc406274b Cleanup: comments 2021-03-06 18:33:54 +11:00
f117ea2624 Geometry Nodes: Expose vertex normals as an attribute
This attribute exposes mesh vertex normals as a `vertex_normal`
attribute for use with nodes. Since the normal vector stored in
vertices is only a cache of data computable from the surrounding faces,
the attribute is read-only. A proper error message for attempting to
write this attribute is part of T85749. A write-only normal attribute
will likely come later, most likely called `corner_normal`.

The normals are recomputed before reading if they are marked dirty.
This involves const write-access to the mesh, protected by the mutex
stored in `Mesh_Runtime`. This is essential for correct behavior after
nodes like "Edge Split" or nodes that adjust the position attribute.

Ref T84297, T85880, T86206

Differential Revision: https://developer.blender.org/D10541
2021-03-05 15:16:25 -06:00
becc36cce5 Geometry Nodes: Sort attribute search items when menu opens
Because the search didn't run when the menu first opened, the attributes
appeared in a different order than after you typed anything into the
search field. This commit instead runs the search when the menu
is first opened, but it only sorts items without filtering.
2021-03-05 14:45:09 -06:00
4addcf1efc Cleanup: Remove redundant special handling after previous commit
Not needed anymore since 3a907e7425.
2021-03-05 18:14:56 +01:00
3a907e7425 Outliner: Barebones to port IDs to new Outliner tree-element code design
Continuation of work in 2e221de4ce and 249e4df110 .
This prepares things so we can start porting the individual ID types to
the new code design. I already added some code for library IDs, because
they need some special handling during construction, which I didn't want
to break.

The `AbstractTreeElement::isExpandValid()` check can be removed once
all types are ported and can be assumed to have a proper `expand()`
implementation.

Also makes `TreeElementGPencilLayer` `final` which I forgot in
e0442a955b.
2021-03-05 18:07:35 +01:00
ae005393dc Fix incorrect assert in Outliner ID deletion
Mistake in aa3a4973a3. The expanded `ELEM()` check would include
`0 && te->idcode != 0`, which always evaluates to `false`/`0`. That
wouldn't cause the assert to fail, but the `te->idcode` part would never
be checked.

Fixed the error and cleaned up the check against "0" with a check
against `TSE_SOME_ID`, see b9e54566e3.
2021-03-05 17:46:49 +01:00
b9e54566e3 Cleanup: Add & use enum value for ID Outliner element type
Code to check if the Outliner tree-element type was the general ID one
would always check against "0" (explicitly or even implicitly). For
somebody unfamiliar with the code this is very confusing. Instead the
value should be given a name, e.g. through an enum.

Adds `TSE_SOME_ID` as the "default" ID tree-element type. Other types
may still represent IDs, as I explained in a comment at the definition.

There may also still be cases where the type is checked against "0". I
noted in the comment that such cases should be cleaned up if found.
2021-03-05 17:46:33 +01:00
ed84161529 Cleanup: Rename func occurences to _fn
Use _fn as a suffix for callbacks.
2021-03-05 17:35:35 +01:00
f882bee431 Fix crash with pinned geometry node tree with no active object
The check for a null active object was after trying to retrieve the active
modifier from the object.
2021-03-05 10:13:42 -06:00
d5c727c6ea Fix windows compilation. 2021-03-05 16:08:11 +01:00
ffd5b0d91e Cleanup: Use blender::Vector. 2021-03-05 16:56:14 +01:00
0729376a13 Fix: compilation OpenCL kernels Compositor.
introduced during cleanup.
2021-03-05 16:56:14 +01:00
d4c673d4c6 Cleanup: use blender::Vector. 2021-03-05 16:56:14 +01:00
7c8ec99b9a Cleanup: use blender::Vector. 2021-03-05 16:56:14 +01:00
7bccbce512 Cleanup: use snake case. 2021-03-05 16:56:14 +01:00
921138cf5f Cleanup: rename private attribute m_max_read_buffer_offset. 2021-03-05 16:56:14 +01:00
87842d6388 Cleanup: Use blender::Vector in ExecutionGroup. 2021-03-05 16:56:14 +01:00
fde150fee4 Cleanup: Use std::vector for chunk execution status. 2021-03-05 16:56:14 +01:00
e1d9b095e4 Cleanup: Use enum class for eChunkExecutionState. 2021-03-05 16:56:14 +01:00
3d4a844a50 Cleanup: ExecutionSystem::find_output_execution_groups. 2021-03-05 16:56:14 +01:00
8b2fb7aeed Cleanup: Remove unused method. 2021-03-05 16:56:14 +01:00
6ebd34c802 Cleanup: Make WorkPackage a struct 2021-03-05 16:56:14 +01:00
ba5961b4cd Cleanup: use MIN2/MAX2 in compositor. 2021-03-05 16:56:14 +01:00
3d3a5bb892 Cleanup: Remove using statements. 2021-03-05 16:56:14 +01:00
bf030decd4 Animation: add function to apply a pose from an Action
Add `BKE_pose_apply_action(object, action, anim_eval_context)` function
and expose in RNA as `Pose.apply_action(action, evaluation_time)`.

This makes it possible to do the following:

- Have a rig in pose mode.
- Select a subset of the bones.
- Have some Action loaded that contains the pose you want to apply.
- Run `C.object.pose.apply_pose_from_action(D.actions['PoseName'])`
- The selected bones are now posed as determined by the Action.

Just like Blender's current pose library, having no bones selected acts
the same as having all bones selected.
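
A minimal sketch of that console usage (names are placeholders; the method and evaluation-time keyword are assumed from the commit text above):

    import bpy

    obj = bpy.context.object                  # an armature in Pose Mode
    action = bpy.data.actions["PoseName"]     # placeholder Action containing the pose
    obj.pose.apply_pose_from_action(action, evaluation_time=1.0)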

Manifest Task: T86159

Reviewed By: Severin

Differential Revision: https://developer.blender.org/D10578
2021-03-05 16:35:08 +01:00
8d0fbcd6df Nodes: move vector rotate node to C++
This makes it easier to add an implementation that can
be used in Geometry Nodes.

There should be no functional changes.
2021-03-05 16:09:19 +01:00
fe35551df2 Asset Browser Space API: add activate_asset_by_id() function
Add an RNA function `activate_asset_by_id(asset_id: ID, deferred: bool)`
to the File Browser space type, which is intended to be used to activate an
asset's entry as identified by its `ID *`. Calling it changes the active
asset, but only if the given ID can actually be found.

The activation can be deferred (by passing `deferred=True`) until the
next refresh operation has finished. This is necessary when an asset has
just been added, as it will be loaded by the filebrowser in a background
job.
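
A hedged sketch of calling it from an Asset Browser context (the ID is a placeholder; the parameter name follows the description above):

    import bpy

    space = bpy.context.space_data             # assumed: an Asset Browser / File Browser space
    asset_id = bpy.data.materials["MyAsset"]   # placeholder ID that was just marked as an asset
    space.activate_asset_by_id(asset_id, deferred=True)  # defer until the pending refresh finishes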

Reviewed By: Severin

Differential Revision: https://developer.blender.org/D10549
2021-03-05 15:11:40 +01:00
0dd9a4a576 Cleanup: Libmv, clang-format
This is based on the Google style which was used in the Libmv project before,
but is now consistently applied to the sources of the library itself
and to the C-API. In time the C-API will likely be removed, and this
makes it easier to have it follow the Libmv style, hence the divergence
from Blender's style.

There are quite some exceptions (clang-format off) in the code around
Eigen matrix initialization. It is rather annoying, and there could be
some neat way to make initialization readable without such exception.

There could be some places where readability of matrix initialization
got lost, as the change is quite big. If this has happened, it is easier
to address readability once actually working on the code.

This change allowed to spot some missing header guards, so that's nice.

Doing this in the bundled version, as the upstream library still needs some
of the recent development ported over from the bundle to upstream.

There should be no functional changes.
2021-03-05 15:05:08 +01:00
e0442a955b UI Code Quality: Port Outliner Grease Pencil layers to new design
Continuation of work in 2e221de4ce and 249e4df110. Now the tree-element
types have to be ported one by one. This is probably the most straightforward
type to port.
2021-03-05 14:45:46 +01:00
a592f7e6cb Cleanup: COM_convert_data_types parameters. 2021-03-05 13:46:25 +01:00
b12be5a872 Cleanup: Remove static struct without data. 2021-03-05 13:46:25 +01:00
f3fb1df192 Fix T86293: crash undoing after executing the python console in certain
scenarios

In general, I could not find a reason why executing from the Python console
should not do an Undo push. Running a script from the Text Editor does
this as well and this seems generally useful.

Without an Undo push, one can easily run into situations where IDs have
been added or removed and undo would then cause trouble (e.g. first
selection then bpy.ops.object.duplicate() -- this crashed as reported in
T86293 -- duplicate does not get its own undo push because it is not the
last op in the list and wm->op_undo_depth is not zero). This has changed
with the Undo refactor, so in essence the root cause is the same as
T77557; Legacy Undo does not suffer from the crash (but still misses
the generally useful undo push from the console).

Now add Undo to CONSOLE_OT_execute bl_options ('UNDO_GROUPED' seems more
appropriate than plain 'UNDO' since pasting multiple lines of code will
call CONSOLE_OT_execute multiple times in a row).

Maniphest Tasks: T86293

Differential Revision: https://developer.blender.org/D10625
2021-03-05 12:50:55 +01:00
5668901ced Cleanup: remove redundant NULL window checks handling events 2021-03-05 21:30:16 +11:00
6bc01222c6 Cleanup: Sync header+implementation definition.
This gave a warning on the Windows platform. There are more of these cases.
2021-03-05 09:48:49 +01:00
99e1866712 Cleanup: Use boolean in WM_cursor_wait 2021-03-05 10:36:57 +01:00
9b28ab0d0b Merge branch 'master' into geometry-nodes-mesh-primitives 2021-03-01 08:36:08 -06:00
83c87b6564 Start of Cylinder node, add distance float socket 2021-02-26 09:00:24 -06:00
50072596d1 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-25 22:43:12 -06:00
05aea4bb51 Fix issue with normals 2021-02-25 18:39:36 -06:00
f993a47248 remove extra whitespace 2021-02-25 18:03:33 -06:00
73b023bbab Remove asserts 2021-02-25 18:03:13 -06:00
95ecd5804d Fix index calculations 2021-02-25 18:02:41 -06:00
0c42b40aee Fix: Make output mesh valid, add a bunch of asserts 2021-02-25 17:26:50 -06:00
0084954ade Working sphere polygons 2021-02-25 14:56:03 -06:00
8464641e77 Sphere edges working 2021-02-25 13:37:37 -06:00
4f875a31c9 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-25 09:31:02 -06:00
46b8b36eff Broken sphere stuff 2021-02-25 06:59:39 -06:00
475f0f5ece Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-24 21:58:46 -06:00
9a501e1ece Continue working on sphere node 2021-02-24 08:48:14 -06:00
3915392560 Fix sphere node not registered 2021-02-24 08:17:42 -06:00
fdb116a0b6 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-23 23:12:03 -06:00
ca8d1900ff Shell for UV sphere code 2021-02-23 07:59:10 -06:00
edce7ee71d Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-22 23:46:43 -06:00
ffd63bc495 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-20 17:59:26 -06:00
fbf093dee1 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-20 17:22:16 -06:00
414747f40d Working circle primitive, except for rotation 2021-02-19 00:11:48 -06:00
e3995e5050 Merge branch 'master' into geometry-nodes-mesh-primitives 2021-02-18 18:11:55 -06:00
6ac36103ea Working circle fill types (not triangle fan) 2021-02-18 12:23:17 -06:00
fcd7c0cfcc Merge branch 'master' into geometry-nodes-mesh-primitive-cube 2021-02-17 21:10:09 -06:00
93788a9b8d Initial broken implementation 2021-02-15 11:57:18 -06:00
821 changed files with 18624 additions and 13784 deletions

View File

@@ -161,6 +161,7 @@ PenaltyBreakString: 1000000
# "^\s+[A-Z][A-Z0-9_]+\s*\([^\n]*\)\n\s*\{"
ForEachMacros:
- BEGIN_ANIMFILTER_SUBCHANNELS
- BKE_pbvh_vertex_iter_begin
- BLI_FOREACH_SPARSE_RANGE
- BLI_SMALLSTACK_ITER_BEGIN
- BMO_ITER

View File

@@ -506,7 +506,7 @@ check_descriptions: .FORCE
#
source_archive: .FORCE
./build_files/utils/make_source_archive.sh
python3 ./build_files/utils/make_source_archive.py
INKSCAPE_BIN?="inkscape"
icons: .FORCE

View File

@@ -66,7 +66,7 @@ endif()
if(XCODE_VERSION)
# Construct SDKs path ourselves, because xcode-select path could be ambiguous.
# Both /Applications/Xcode.app/Contents/Developer or /Applications/Xcode.app would be allowed.
set(XCODE_SDK_DIR ${XCODE_DEVELOPER_DIR}/Platforms/MacOSX.platform//Developer/SDKs)
set(XCODE_SDK_DIR ${XCODE_DEVELOPER_DIR}/Platforms/MacOSX.platform/Developer/SDKs)
# Detect SDK version to use
if(NOT DEFINED OSX_SYSTEM)

View File

@@ -0,0 +1,198 @@
#!/usr/bin/env python3
import dataclasses
import os
import re
import subprocess
from pathlib import Path
from typing import Iterable, TextIO
# This script can run from any location,
# output is created in the $CWD
#
# NOTE: while the Python part of this script is portable,
# it relies on external commands typically found on GNU/Linux.
# Support for other platforms could be added by moving GNU `tar` & `md5sum` use to Python.
SKIP_NAMES = {
".gitignore",
".gitmodules",
".arcconfig",
}
def main() -> None:
output_dir = Path(".").absolute()
blender_srcdir = Path(__file__).absolute().parent.parent.parent
print(f"Source dir: {blender_srcdir}")
version = parse_blender_version(blender_srcdir)
manifest = output_dir / f"blender-{version}-manifest.txt"
tarball = output_dir / f"blender-{version}.tar.xz"
os.chdir(blender_srcdir)
create_manifest(version, manifest)
create_tarball(version, tarball, manifest)
create_checksum_file(tarball)
cleanup(manifest)
print("Done!")
@dataclasses.dataclass
class BlenderVersion:
version: int # 293 for 2.93.1
patch: int # 1 for 2.93.1
cycle: str # 'alpha', 'beta', 'release', maybe others.
@property
def is_release(self) -> bool:
return self.cycle == "release"
def __str__(self) -> str:
"""Convert to version string.
>>> str(BlenderVersion(293, 1, "alpha"))
'2.93.1-alpha'
>>> str(BlenderVersion(327, 0, "release"))
'3.27.0'
"""
version_major = self.version // 100
version_minor = self.version % 100
as_string = f"{version_major}.{version_minor}.{self.patch}"
if self.is_release:
return as_string
return f"{as_string}-{self.cycle}"
def parse_blender_version(blender_srcdir: Path) -> BlenderVersion:
version_path = blender_srcdir / "source/blender/blenkernel/BKE_blender_version.h"
version_info = {}
line_re = re.compile(r"^#define (BLENDER_VERSION[A-Z_]*)\s+([0-9a-z]+)$")
with version_path.open(encoding="utf-8") as version_file:
for line in version_file:
match = line_re.match(line.strip())
if not match:
continue
version_info[match.group(1)] = match.group(2)
return BlenderVersion(
int(version_info["BLENDER_VERSION"]),
int(version_info["BLENDER_VERSION_PATCH"]),
version_info["BLENDER_VERSION_CYCLE"],
)
### Manifest creation
def create_manifest(version: BlenderVersion, outpath: Path) -> None:
print(f'Building manifest of files: "{outpath}"...', end="", flush=True)
with outpath.open("w", encoding="utf-8") as outfile:
main_files_to_manifest(outfile)
submodules_to_manifest(version, outfile)
print("OK")
def main_files_to_manifest(outfile: TextIO) -> None:
for path in git_ls_files():
print(path, file=outfile)
def submodules_to_manifest(version: BlenderVersion, outfile: TextIO) -> None:
skip_addon_contrib = version.is_release
for line in git_command("submodule"):
submodule = line.split()[1]
# Don't use native slashes as GIT for MS-Windows outputs forward slashes.
if skip_addon_contrib and submodule == "release/scripts/addons_contrib":
continue
for path in git_ls_files(Path(submodule)):
print(path, file=outfile)
def create_tarball(version: BlenderVersion, tarball: Path, manifest: Path) -> None:
print(f'Creating archive: "{tarball}" ...', end="", flush=True)
# Requires GNU `tar`, since `--transform` is used.
command = [
"tar",
"--transform",
f"s,^,blender-{version}/,g",
"--use-compress-program=xz -9",
"--create",
f"--file={tarball}",
f"--files-from={manifest}",
# Without owner/group args, extracting the files as root will
# use ownership from the tar archive:
"--owner=0",
"--group=0",
]
subprocess.run(command, check=True, timeout=300)
print("OK")
def create_checksum_file(tarball: Path) -> None:
md5_path = tarball.with_name(tarball.name + ".md5sum")
print(f'Creating checksum: "{md5_path}" ...', end="", flush=True)
command = [
"md5sum",
# The name is enough, as the tarball resides in the same dir as the MD5
# file, and that's the current working directory.
tarball.name,
]
md5_cmd = subprocess.run(
command, stdout=subprocess.PIPE, check=True, text=True, timeout=300
)
with md5_path.open("w", encoding="utf-8") as outfile:
outfile.write(md5_cmd.stdout)
print("OK")
def cleanup(manifest: Path) -> None:
print("Cleaning up ...", end="", flush=True)
if manifest.exists():
manifest.unlink()
print("OK")
## Low-level commands
def git_ls_files(directory: Path = Path(".")) -> Iterable[Path]:
"""Generator, yields lines of output from 'git ls-files'.
Only lines that are actually files (so no directories, sockets, etc.) are
returned, and never one from SKIP_NAMES.
"""
for line in git_command("-C", str(directory), "ls-files"):
path = directory / line
if not path.is_file() or path.name in SKIP_NAMES:
continue
yield path
def git_command(*cli_args) -> Iterable[str]:
"""Generator, yields lines of output from a Git command."""
command = ("git", *cli_args)
# import shlex
# print(">", " ".join(shlex.quote(arg) for arg in command))
git = subprocess.run(
command, stdout=subprocess.PIPE, check=True, text=True, timeout=30
)
for line in git.stdout.split("\n"):
if line:
yield line
if __name__ == "__main__":
import doctest
if doctest.testmod().failed:
raise SystemExit("ERROR: Self-test failed, refusing to run")
main()

View File

@@ -1,82 +0,0 @@
#!/bin/sh
# This script can run from any location,
# output is created in the $CWD
BASE_DIR="$PWD"
blender_srcdir=$(dirname -- $0)/../..
blender_version=$(grep "BLENDER_VERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_patch=$(grep "BLENDER_VERSION_PATCH\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_cycle=$(grep "BLENDER_VERSION_CYCLE\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
VERSION=$(expr $blender_version / 100).$(expr $blender_version % 100).$blender_version_patch
if [ "$blender_version_cycle" = "release" ] ; then
SUBMODULE_EXCLUDE="^\(release/scripts/addons_contrib\)$"
else
VERSION=$VERSION-$blender_version_cycle
SUBMODULE_EXCLUDE="^$" # dummy regex
fi
MANIFEST="blender-$VERSION-manifest.txt"
TARBALL="blender-$VERSION.tar.xz"
cd "$blender_srcdir"
# not so nice, but works
FILTER_FILES_PY=\
"import os, sys; "\
"[print(l[:-1]) for l in sys.stdin.readlines() "\
"if os.path.isfile(l[:-1]) "\
"if os.path.basename(l[:-1]) not in {"\
"'.gitignore', "\
"'.gitmodules', "\
"'.arcconfig', "\
"}"\
"]"
# Build master list
echo -n "Building manifest of files: \"$BASE_DIR/$MANIFEST\" ..."
git ls-files | python3 -c "$FILTER_FILES_PY" > $BASE_DIR/$MANIFEST
# Enumerate submodules
for lcv in $(git submodule | awk '{print $2}' | grep -v "$SUBMODULE_EXCLUDE"); do
cd "$BASE_DIR"
cd "$blender_srcdir/$lcv"
git ls-files | python3 -c "$FILTER_FILES_PY" | awk '$0="'"$lcv"/'"$0' >> $BASE_DIR/$MANIFEST
cd "$BASE_DIR"
done
echo "OK"
# Create the tarball
#
# Without owner/group args, extracting the files as root will
# use ownership from the tar archive.
cd "$blender_srcdir"
echo -n "Creating archive: \"$BASE_DIR/$TARBALL\" ..."
tar \
--transform "s,^,blender-$VERSION/,g" \
--use-compress-program="xz -9" \
--create \
--file="$BASE_DIR/$TARBALL" \
--files-from="$BASE_DIR/$MANIFEST" \
--owner=0 \
--group=0
echo "OK"
# Create checksum file
cd "$BASE_DIR"
echo -n "Creating checksum: \"$BASE_DIR/$TARBALL.md5sum\" ..."
md5sum "$TARBALL" > "$TARBALL.md5sum"
echo "OK"
# Cleanup
echo -n "Cleaning up ..."
rm "$BASE_DIR/$MANIFEST"
echo "OK"
echo "Done!"

View File

@@ -32,9 +32,7 @@ if(MSVC_CLANG AND WITH_OPENMP AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "9.0.1
endif()
# Exporting functions from the blender binary gives linker warnings on Apple arm64 systems.
# For now and until Apple arm64 is officially supported, these will just be silenced here.
# TODO (sebbas): Check if official arm64 devices give linker warnings without this block.
# Silence them here.
if(APPLE AND ("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "arm64"))
if(CMAKE_COMPILER_IS_GNUCXX OR "${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
string(APPEND CMAKE_C_FLAGS " -fvisibility=hidden")

View File

@@ -303,19 +303,27 @@ static enum eCLogColor clg_severity_to_color(enum CLG_Severity severity)
* - `foo` exact match of `foo`.
* - `foo.bar` exact match for `foo.bar`
* - `foo.*` match for `foo` & `foo.bar` & `foo.bar.baz`
* - `*bar*` match for `foo.bar` & `baz.bar` & `foo.barbaz`
* - `*` matches everything.
*/
static bool clg_ctx_filter_check(CLogContext *ctx, const char *identifier)
{
const int identifier_len = strlen(identifier);
const size_t identifier_len = strlen(identifier);
for (uint i = 0; i < 2; i++) {
const CLG_IDFilter *flt = ctx->filters[i];
while (flt != NULL) {
const int len = strlen(flt->match);
const size_t len = strlen(flt->match);
if (STREQ(flt->match, "*") || ((len == identifier_len) && (STREQ(identifier, flt->match)))) {
return (bool)i;
}
if ((len >= 2) && (STREQLEN(".*", &flt->match[len - 2], 2))) {
if (flt->match[0] == '*' && flt->match[len - 1] == '*') {
char *match = MEM_callocN(sizeof(char) * len - 1, __func__);
memcpy(match, flt->match + 1, len - 2);
if (strstr(identifier, match) != NULL) {
return (bool)i;
}
}
else if ((len >= 2) && (STREQLEN(".*", &flt->match[len - 2], 2))) {
if (((identifier_len == len - 2) && STREQLEN(identifier, flt->match, len - 2)) ||
((identifier_len >= len - 1) && STREQLEN(identifier, flt->match, len - 1))) {
return (bool)i;

View File

@@ -226,6 +226,9 @@ def update_render_passes(self, context):
view_layer = context.view_layer
view_layer.update_render_passes()
def poll_object_is_camera(self, obj):
return obj.type == 'CAMERA'
class CyclesRenderSettings(bpy.types.PropertyGroup):
@@ -538,7 +541,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
description="Camera to use as reference point when subdividing geometry, useful to avoid crawling "
"artifacts in animations when the scene camera is moving",
type=bpy.types.Object,
poll=lambda self, obj: obj.type == 'CAMERA',
poll=poll_object_is_camera,
)
offscreen_dicing_scale: FloatProperty(
name="Offscreen Dicing Scale",

View File

@@ -27,8 +27,8 @@ BVHOptiX::BVHOptiX(const BVHParams &params_,
Device *device)
: BVH(params_, geometry_, objects_),
traversable_handle(0),
as_data(device, params_.top_level ? "optix tlas" : "optix blas"),
motion_transform_data(device, "optix motion transform")
as_data(device, params_.top_level ? "optix tlas" : "optix blas", false),
motion_transform_data(device, "optix motion transform", false)
{
}

View File

@@ -854,7 +854,7 @@ CUDADevice::CUDAMem *CUDADevice::generic_alloc(device_memory &mem, size_t pitch_
void *shared_pointer = 0;
if (mem_alloc_result != CUDA_SUCCESS && can_map_host) {
if (mem_alloc_result != CUDA_SUCCESS && can_map_host && mem.type != MEM_DEVICE_ONLY) {
if (mem.shared_pointer) {
/* Another device already allocated host memory. */
mem_alloc_result = CUDA_SUCCESS;
@@ -877,8 +877,14 @@ CUDADevice::CUDAMem *CUDADevice::generic_alloc(device_memory &mem, size_t pitch_
}
if (mem_alloc_result != CUDA_SUCCESS) {
status = " failed, out of device and host memory";
set_error("System is out of GPU and shared host memory");
if (mem.type == MEM_DEVICE_ONLY) {
status = " failed, out of device memory";
set_error("System is out of GPU memory");
}
else {
status = " failed, out of device and host memory";
set_error("System is out of GPU and shared host memory");
}
}
if (mem.name) {

View File

@@ -396,8 +396,7 @@ class CPUDevice : public Device {
<< string_human_readable_size(mem.memory_size()) << ")";
}
if (mem.type == MEM_DEVICE_ONLY) {
assert(!mem.host_pointer);
if (mem.type == MEM_DEVICE_ONLY || !mem.host_pointer) {
size_t alignment = MIN_ALIGNMENT_CPU_DATA_TYPES;
void *data = util_aligned_malloc(mem.memory_size(), alignment);
mem.device_pointer = (device_ptr)data;
@@ -459,7 +458,7 @@ class CPUDevice : public Device {
tex_free((device_texture &)mem);
}
else if (mem.device_pointer) {
if (mem.type == MEM_DEVICE_ONLY) {
if (mem.type == MEM_DEVICE_ONLY || !mem.host_pointer) {
util_aligned_free((void *)mem.device_pointer);
}
mem.device_pointer = 0;

View File

@@ -171,7 +171,8 @@ class DenoisingTask {
bool gpu_temporary_mem;
DenoiseBuffers(Device *device)
: mem(device, "denoising pixel buffer"), temporary_mem(device, "denoising temporary mem")
: mem(device, "denoising pixel buffer"),
temporary_mem(device, "denoising temporary mem", true)
{
}
} buffer;

View File

@@ -270,8 +270,8 @@ class device_memory {
template<typename T> class device_only_memory : public device_memory {
public:
device_only_memory(Device *device, const char *name)
: device_memory(device, name, MEM_DEVICE_ONLY)
device_only_memory(Device *device, const char *name, bool allow_host_memory_fallback = false)
: device_memory(device, name, allow_host_memory_fallback ? MEM_READ_WRITE : MEM_DEVICE_ONLY)
{
data_type = device_type_traits<T>::data_type;
data_elements = max(device_type_traits<T>::num_elements, 1);

View File

@@ -197,8 +197,8 @@ class OptiXDevice : public CUDADevice {
OptiXDevice(DeviceInfo &info_, Stats &stats_, Profiler &profiler_, bool background_)
: CUDADevice(info_, stats_, profiler_, background_),
sbt_data(this, "__sbt", MEM_READ_ONLY),
launch_params(this, "__params"),
denoiser_state(this, "__denoiser_state")
launch_params(this, "__params", false),
denoiser_state(this, "__denoiser_state", true)
{
// Store number of CUDA streams in device info
info.cpu_threads = DebugFlags().optix.cuda_streams;
@@ -878,8 +878,8 @@ class OptiXDevice : public CUDADevice {
device_ptr input_ptr = rtile.buffer + pixel_offset;
// Copy tile data into a common buffer if necessary
device_only_memory<float> input(this, "denoiser input");
device_vector<TileInfo> tile_info_mem(this, "denoiser tile info", MEM_READ_WRITE);
device_only_memory<float> input(this, "denoiser input", true);
device_vector<TileInfo> tile_info_mem(this, "denoiser tile info", MEM_READ_ONLY);
bool contiguous_memory = true;
for (int i = 0; i < RenderTileNeighbors::SIZE; i++) {
@@ -924,7 +924,7 @@ class OptiXDevice : public CUDADevice {
}
# if OPTIX_DENOISER_NO_PIXEL_STRIDE
device_only_memory<float> input_rgb(this, "denoiser input rgb");
device_only_memory<float> input_rgb(this, "denoiser input rgb", true);
input_rgb.alloc_to_device(rect_size.x * rect_size.y * 3 * task.denoising.input_passes);
void *input_args[] = {&input_rgb.device_pointer,
@@ -1146,6 +1146,13 @@ class OptiXDevice : public CUDADevice {
const OptixBuildInput &build_input,
uint16_t num_motion_steps)
{
/* Allocate and build acceleration structures only one at a time, to prevent parallel builds
* from running out of memory (since both original and compacted acceleration structure memory
* may be allocated at the same time for the duration of this function). The builds would
* otherwise happen on the same CUDA stream anyway. */
static thread_mutex mutex;
thread_scoped_lock lock(mutex);
const CUDAContextScope scope(cuContext);
// Compute memory usage
@@ -1170,11 +1177,12 @@ class OptiXDevice : public CUDADevice {
optixAccelComputeMemoryUsage(context, &options, &build_input, 1, &sizes));
// Allocate required output buffers
device_only_memory<char> temp_mem(this, "optix temp as build mem");
device_only_memory<char> temp_mem(this, "optix temp as build mem", true);
temp_mem.alloc_to_device(align_up(sizes.tempSizeInBytes, 8) + 8);
if (!temp_mem.device_pointer)
return false; // Make sure temporary memory allocation succeeded
// Acceleration structure memory has to be allocated on the device (not allowed to be on host)
device_only_memory<char> &out_data = bvh->as_data;
if (operation == OPTIX_BUILD_OPERATION_BUILD) {
assert(out_data.device == this);
@@ -1222,7 +1230,7 @@ class OptiXDevice : public CUDADevice {
// There is no point compacting if the size does not change
if (compacted_size < sizes.outputSizeInBytes) {
device_only_memory<char> compacted_data(this, "optix compacted as");
device_only_memory<char> compacted_data(this, "optix compacted as", false);
compacted_data.alloc_to_device(compacted_size);
if (!compacted_data.device_pointer)
// Do not compact if memory allocation for compacted acceleration structure fails
@@ -1242,6 +1250,7 @@ class OptiXDevice : public CUDADevice {
std::swap(out_data.device_size, compacted_data.device_size);
std::swap(out_data.device_pointer, compacted_data.device_pointer);
// Original acceleration structure memory is freed when 'compacted_data' goes out of scope
}
}

View File

@@ -68,8 +68,8 @@ ccl_device T kernel_tex_image_interp_bicubic(const TextureInfo &info, float x, f
x = (x * info.width) - 0.5f;
y = (y * info.height) - 0.5f;
float px = floor(x);
float py = floor(y);
float px = floorf(x);
float py = floorf(y);
float fx = x - px;
float fy = y - py;
@@ -95,9 +95,9 @@ ccl_device T kernel_tex_image_interp_tricubic(const TextureInfo &info, float x,
y = (y * info.height) - 0.5f;
z = (z * info.depth) - 0.5f;
float px = floor(x);
float py = floor(y);
float pz = floor(z);
float px = floorf(x);
float py = floorf(y);
float pz = floorf(z);
float fx = x - px;
float fy = y - py;
float fz = z - pz;
@@ -127,9 +127,9 @@ ccl_device T kernel_tex_image_interp_tricubic(const TextureInfo &info, float x,
template<typename T, typename S>
ccl_device T kernel_tex_image_interp_tricubic_nanovdb(S &s, float x, float y, float z)
{
float px = floor(x);
float py = floor(y);
float pz = floor(z);
float px = floorf(x);
float py = floorf(y);
float pz = floorf(z);
float fx = x - px;
float fy = y - py;
float fz = z - pz;

View File

@@ -402,16 +402,16 @@ static void add_uvs(AlembicProcedural *proc,
}
const ISampleSelector iss = ISampleSelector(time);
const IV2fGeomParam::Sample sample = uvs.getExpandedValue(iss);
const IV2fGeomParam::Sample uvsample = uvs.getIndexedValue(iss);
if (!uvsample.valid()) {
continue;
}
const array<int3> *triangles = cached_data.triangles.data_for_time_no_check(time);
const array<int3> *triangles_loops = cached_data.triangles_loops.data_for_time_no_check(time);
const array<int3> *triangles =
cached_data.triangles.data_for_time_no_check(time).get_data_or_null();
const array<int3> *triangles_loops =
cached_data.triangles_loops.data_for_time_no_check(time).get_data_or_null();
if (!triangles || !triangles_loops) {
continue;
@@ -458,7 +458,8 @@ static void add_normals(const Int32ArraySamplePtr face_indices,
*normals.getTimeSampling());
attr.std = ATTR_STD_VERTEX_NORMAL;
const array<float3> *vertices = cached_data.vertices.data_for_time_no_check(time);
const array<float3> *vertices =
cached_data.vertices.data_for_time_no_check(time).get_data_or_null();
if (!vertices) {
return;
@@ -493,7 +494,8 @@ static void add_normals(const Int32ArraySamplePtr face_indices,
*normals.getTimeSampling());
attr.std = ATTR_STD_VERTEX_NORMAL;
const array<float3> *vertices = cached_data.vertices.data_for_time_no_check(time);
const array<float3> *vertices =
cached_data.vertices.data_for_time_no_check(time).get_data_or_null();
if (!vertices) {
return;
@@ -717,11 +719,15 @@ void AlembicObject::read_face_sets(SchemaType &schema,
void AlembicObject::load_all_data(AlembicProcedural *proc,
IPolyMeshSchema &schema,
float scale,
Progress &progress)
{
cached_data.clear();
/* Only load data for the original Geometry. */
if (instance_of) {
return;
}
const TimeSamplingPtr time_sampling = schema.getTimeSampling();
cached_data.set_time_sampling(*time_sampling);
@@ -780,22 +786,18 @@ void AlembicObject::load_all_data(AlembicProcedural *proc,
add_uvs(proc, uvs, cached_data, progress);
}
if (progress.get_cancel()) {
return;
}
setup_transform_cache(scale);
data_loaded = true;
}
void AlembicObject::load_all_data(AlembicProcedural *proc,
ISubDSchema &schema,
float scale,
Progress &progress)
void AlembicObject::load_all_data(AlembicProcedural *proc, ISubDSchema &schema, Progress &progress)
{
cached_data.clear();
/* Only load data for the original Geometry. */
if (instance_of) {
return;
}
AttributeRequestSet requested_attributes = get_requested_attributes();
const TimeSamplingPtr time_sampling = schema.getTimeSampling();
@@ -920,19 +922,21 @@ void AlembicObject::load_all_data(AlembicProcedural *proc,
return;
}
setup_transform_cache(scale);
data_loaded = true;
}
void AlembicObject::load_all_data(AlembicProcedural *proc,
const ICurvesSchema &schema,
float scale,
Progress &progress,
float default_radius)
{
cached_data.clear();
/* Only load data for the original Geometry. */
if (instance_of) {
return;
}
const TimeSamplingPtr time_sampling = schema.getTimeSampling();
cached_data.set_time_sampling(*time_sampling);
@@ -1007,8 +1011,6 @@ void AlembicObject::load_all_data(AlembicProcedural *proc,
// TODO(@kevindietrich): attributes, need example files
setup_transform_cache(scale);
data_loaded = true;
}
@@ -1017,6 +1019,14 @@ void AlembicObject::setup_transform_cache(float scale)
cached_data.transforms.clear();
cached_data.transforms.invalidate_last_loaded_time();
if (scale == 0.0f) {
scale = 1.0f;
}
if (xform_time_sampling) {
cached_data.transforms.set_time_sampling(*xform_time_sampling);
}
if (xform_samples.size() == 0) {
Transform tfm = transform_scale(make_float3(scale));
cached_data.transforms.add_data(tfm, 0.0);
@@ -1103,9 +1113,10 @@ void AlembicObject::read_attribute(const ICompoundProperty &arb_geom_params,
attribute.element = ATTR_ELEMENT_CORNER;
attribute.type_desc = TypeFloat2;
const array<int3> *triangles = cached_data.triangles.data_for_time_no_check(time);
const array<int3> *triangles_loops = cached_data.triangles_loops.data_for_time_no_check(
time);
const array<int3> *triangles =
cached_data.triangles.data_for_time_no_check(time).get_data_or_null();
const array<int3> *triangles_loops =
cached_data.triangles_loops.data_for_time_no_check(time).get_data_or_null();
if (!triangles || !triangles_loops) {
return;
@@ -1158,7 +1169,8 @@ void AlembicObject::read_attribute(const ICompoundProperty &arb_geom_params,
attribute.element = ATTR_ELEMENT_CORNER_BYTE;
attribute.type_desc = TypeRGBA;
const array<int3> *triangles = cached_data.triangles.data_for_time_no_check(time);
const array<int3> *triangles =
cached_data.triangles.data_for_time_no_check(time).get_data_or_null();
if (!triangles) {
return;
@@ -1214,7 +1226,8 @@ void AlembicObject::read_attribute(const ICompoundProperty &arb_geom_params,
attribute.element = ATTR_ELEMENT_CORNER_BYTE;
attribute.type_desc = TypeRGBA;
const array<int3> *triangles = cached_data.triangles.data_for_time_no_check(time);
const array<int3> *triangles =
cached_data.triangles.data_for_time_no_check(time).get_data_or_null();
if (!triangles) {
return;
@@ -1253,7 +1266,7 @@ static void update_attributes(AttributeSet &attributes, CachedData &cached_data,
set<Attribute *> cached_attributes;
for (CachedData::CachedAttribute &attribute : cached_data.attributes) {
const array<char> *attr_data = attribute.data.data_for_time(frame_time);
const array<char> *attr_data = attribute.data.data_for_time(frame_time).get_data_or_null();
Attribute *attr = nullptr;
if (attribute.std != ATTR_STD_NONE) {
@@ -1278,6 +1291,7 @@ static void update_attributes(AttributeSet &attributes, CachedData &cached_data,
}
memcpy(attr->data(), attr_data->data(), attr_data->size());
attr->modified = true;
}
/* remove any attributes not in cached_attributes */
@@ -1285,6 +1299,7 @@ static void update_attributes(AttributeSet &attributes, CachedData &cached_data,
for (it = attributes.attributes.begin(); it != attributes.attributes.end();) {
if (cached_attributes.find(&(*it)) == cached_attributes.end()) {
attributes.attributes.erase(it++);
attributes.modified = true;
continue;
}
@@ -1358,11 +1373,16 @@ void AlembicProcedural::generate(Scene *scene, Progress &progress)
}
bool need_shader_updates = false;
bool need_data_updates = false;
/* Check for changes in shaders (newly requested attributes). */
foreach (Node *object_node, objects) {
AlembicObject *object = static_cast<AlembicObject *>(object_node);
if (object->is_modified()) {
need_data_updates = true;
}
/* Check for changes in shaders (e.g. newly requested attributes). */
foreach (Node *shader_node, object->get_used_shaders()) {
Shader *shader = static_cast<Shader *>(shader_node);
@@ -1373,7 +1393,7 @@ void AlembicProcedural::generate(Scene *scene, Progress &progress)
}
}
if (!is_modified() && !need_shader_updates) {
if (!is_modified() && !need_shader_updates && !need_data_updates) {
return;
}
@@ -1397,6 +1417,8 @@ void AlembicProcedural::generate(Scene *scene, Progress &progress)
const chrono_t frame_time = (chrono_t)((frame - frame_offset) / frame_rate);
build_caches(progress);
foreach (Node *node, objects) {
AlembicObject *object = static_cast<AlembicObject *>(node);
@@ -1405,19 +1427,19 @@ void AlembicProcedural::generate(Scene *scene, Progress &progress)
}
/* skip constant objects */
if (object->has_data_loaded() && object->is_constant() && !object->is_modified() &&
!object->need_shader_update && !scale_is_modified()) {
if (object->is_constant() && !object->is_modified() && !object->need_shader_update &&
!scale_is_modified()) {
continue;
}
if (object->schema_type == AlembicObject::POLY_MESH) {
read_mesh(scene, object, frame_time, progress);
read_mesh(object, frame_time);
}
else if (object->schema_type == AlembicObject::CURVES) {
read_curves(scene, object, frame_time, progress);
read_curves(object, frame_time);
}
else if (object->schema_type == AlembicObject::SUBD) {
read_subd(scene, object, frame_time, progress);
read_subd(object, frame_time);
}
object->clear_modified();
@@ -1471,7 +1493,7 @@ void AlembicProcedural::load_objects(Progress &progress)
IObject root = archive.getTop();
for (size_t i = 0; i < root.getNumChildren(); ++i) {
walk_hierarchy(root, root.getChildHeader(i), nullptr, object_map, progress);
walk_hierarchy(root, root.getChildHeader(i), {}, object_map, progress);
}
/* Create nodes in the scene. */
@@ -1480,22 +1502,24 @@ void AlembicProcedural::load_objects(Progress &progress)
Geometry *geometry = nullptr;
if (abc_object->schema_type == AlembicObject::CURVES) {
geometry = scene_->create_node<Hair>();
}
else if (abc_object->schema_type == AlembicObject::POLY_MESH ||
abc_object->schema_type == AlembicObject::SUBD) {
geometry = scene_->create_node<Mesh>();
}
else {
continue;
}
if (!abc_object->instance_of) {
if (abc_object->schema_type == AlembicObject::CURVES) {
geometry = scene_->create_node<Hair>();
}
else if (abc_object->schema_type == AlembicObject::POLY_MESH ||
abc_object->schema_type == AlembicObject::SUBD) {
geometry = scene_->create_node<Mesh>();
}
else {
continue;
}
geometry->set_owner(this);
geometry->name = abc_object->iobject.getName();
geometry->set_owner(this);
geometry->name = abc_object->iobject.getName();
array<Node *> used_shaders = abc_object->get_used_shaders();
geometry->set_used_shaders(used_shaders);
array<Node *> used_shaders = abc_object->get_used_shaders();
geometry->set_used_shaders(used_shaders);
}
Object *object = scene_->create_node<Object>();
object->set_owner(this);
@@ -1504,43 +1528,44 @@ void AlembicProcedural::load_objects(Progress &progress)
abc_object->set_object(object);
}
/* Share geometries between instances. */
foreach (Node *node, objects) {
AlembicObject *abc_object = static_cast<AlembicObject *>(node);
if (abc_object->instance_of) {
abc_object->get_object()->set_geometry(
abc_object->instance_of->get_object()->get_geometry());
abc_object->schema_type = abc_object->instance_of->schema_type;
}
}
}
void AlembicProcedural::read_mesh(Scene *scene,
AlembicObject *abc_object,
Abc::chrono_t frame_time,
Progress &progress)
void AlembicProcedural::read_mesh(AlembicObject *abc_object, Abc::chrono_t frame_time)
{
IPolyMesh polymesh(abc_object->iobject, Alembic::Abc::kWrapExisting);
Mesh *mesh = static_cast<Mesh *>(abc_object->get_object()->get_geometry());
CachedData &cached_data = abc_object->get_cached_data();
IPolyMeshSchema schema = polymesh.getSchema();
if (!abc_object->has_data_loaded()) {
abc_object->load_all_data(this, schema, scale, progress);
}
else {
if (abc_object->need_shader_update) {
abc_object->update_shader_attributes(schema.getArbGeomParams(), progress);
}
if (scale_is_modified()) {
abc_object->setup_transform_cache(scale);
}
}
/* update sockets */
Object *object = abc_object->get_object();
cached_data.transforms.copy_to_socket(frame_time, object, object->get_tfm_socket());
if (object->is_modified()) {
object->tag_update(scene_);
}
/* Only update sockets for the original Geometry. */
if (abc_object->instance_of) {
return;
}
Mesh *mesh = static_cast<Mesh *>(object->get_geometry());
cached_data.vertices.copy_to_socket(frame_time, mesh, mesh->get_verts_socket());
cached_data.shader.copy_to_socket(frame_time, mesh, mesh->get_shader_socket());
array<int3> *triangle_data = cached_data.triangles.data_for_time(frame_time);
array<int3> *triangle_data = cached_data.triangles.data_for_time(frame_time).get_data_or_null();
if (triangle_data) {
array<int> triangles;
array<bool> smooth;
@@ -1566,7 +1591,7 @@ void AlembicProcedural::read_mesh(Scene *scene,
/* we don't yet support arbitrary attributes, for now add vertex
* coordinates as generated coordinates if requested */
if (mesh->need_attribute(scene, ATTR_STD_GENERATED)) {
if (mesh->need_attribute(scene_, ATTR_STD_GENERATED)) {
Attribute *attr = mesh->attributes.add(ATTR_STD_GENERATED);
memcpy(
attr->data_float3(), mesh->get_verts().data(), sizeof(float3) * mesh->get_verts().size());
@@ -1574,39 +1599,12 @@ void AlembicProcedural::read_mesh(Scene *scene,
if (mesh->is_modified()) {
bool need_rebuild = mesh->triangles_is_modified();
mesh->tag_update(scene, need_rebuild);
mesh->tag_update(scene_, need_rebuild);
}
}
void AlembicProcedural::read_subd(Scene *scene,
AlembicObject *abc_object,
Abc::chrono_t frame_time,
Progress &progress)
void AlembicProcedural::read_subd(AlembicObject *abc_object, Abc::chrono_t frame_time)
{
ISubD subd_mesh(abc_object->iobject, Alembic::Abc::kWrapExisting);
ISubDSchema schema = subd_mesh.getSchema();
Mesh *mesh = static_cast<Mesh *>(abc_object->get_object()->get_geometry());
/* Alembic is OpenSubDiv compliant, there is no option to set another subdivision type. */
mesh->set_subdivision_type(Mesh::SubdivisionType::SUBDIVISION_CATMULL_CLARK);
if (!abc_object->has_data_loaded()) {
abc_object->load_all_data(this, schema, scale, progress);
}
else {
if (abc_object->need_shader_update) {
abc_object->update_shader_attributes(schema.getArbGeomParams(), progress);
}
if (scale_is_modified()) {
abc_object->setup_transform_cache(scale);
}
}
mesh->set_subd_max_level(abc_object->get_subd_max_level());
mesh->set_subd_dicing_rate(abc_object->get_subd_dicing_rate());
CachedData &cached_data = abc_object->get_cached_data();
if (abc_object->subd_max_level_is_modified() || abc_object->subd_dicing_rate_is_modified()) {
@@ -1614,6 +1612,22 @@ void AlembicProcedural::read_subd(Scene *scene,
cached_data.invalidate_last_loaded_time();
}
/* Update sockets. */
Object *object = abc_object->get_object();
cached_data.transforms.copy_to_socket(frame_time, object, object->get_tfm_socket());
if (object->is_modified()) {
object->tag_update(scene_);
}
/* Only update sockets for the original Geometry. */
if (abc_object->instance_of) {
return;
}
Mesh *mesh = static_cast<Mesh *>(object->get_geometry());
/* Cycles overwrites the original triangles when computing displacement, so we always have to
 * pass the data again if something is animated (most likely the vertices) to avoid buffer overflows. */
if (!cached_data.is_constant()) {
@@ -1626,10 +1640,10 @@ void AlembicProcedural::read_subd(Scene *scene,
mesh->clear_non_sockets();
/* Update sockets. */
Object *object = abc_object->get_object();
cached_data.transforms.copy_to_socket(frame_time, object, object->get_tfm_socket());
/* Alembic is OpenSubDiv compliant, there is no option to set another subdivision type. */
mesh->set_subdivision_type(Mesh::SubdivisionType::SUBDIVISION_CATMULL_CLARK);
mesh->set_subd_max_level(abc_object->get_subd_max_level());
mesh->set_subd_dicing_rate(abc_object->get_subd_dicing_rate());
cached_data.vertices.copy_to_socket(frame_time, mesh, mesh->get_verts_socket());
@@ -1666,7 +1680,7 @@ void AlembicProcedural::read_subd(Scene *scene,
/* we don't yet support arbitrary attributes, for now add vertex
* coordinates as generated coordinates if requested */
if (mesh->need_attribute(scene, ATTR_STD_GENERATED)) {
if (mesh->need_attribute(scene_, ATTR_STD_GENERATED)) {
Attribute *attr = mesh->attributes.add(ATTR_STD_GENERATED);
memcpy(
attr->data_float3(), mesh->get_verts().data(), sizeof(float3) * mesh->get_verts().size());
@@ -1680,30 +1694,12 @@ void AlembicProcedural::read_subd(Scene *scene,
(mesh->subd_start_corner_is_modified()) ||
(mesh->subd_face_corners_is_modified());
mesh->tag_update(scene, need_rebuild);
mesh->tag_update(scene_, need_rebuild);
}
}
void AlembicProcedural::read_curves(Scene *scene,
AlembicObject *abc_object,
Abc::chrono_t frame_time,
Progress &progress)
void AlembicProcedural::read_curves(AlembicObject *abc_object, Abc::chrono_t frame_time)
{
ICurves curves(abc_object->iobject, Alembic::Abc::kWrapExisting);
Hair *hair = static_cast<Hair *>(abc_object->get_object()->get_geometry());
ICurvesSchema schema = curves.getSchema();
if (!abc_object->has_data_loaded() || default_radius_is_modified() ||
abc_object->radius_scale_is_modified()) {
abc_object->load_all_data(this, schema, scale, progress, default_radius);
}
else {
if (scale_is_modified()) {
abc_object->setup_transform_cache(scale);
}
}
CachedData &cached_data = abc_object->get_cached_data();
/* update sockets */
@@ -1711,6 +1707,17 @@ void AlembicProcedural::read_curves(Scene *scene,
Object *object = abc_object->get_object();
cached_data.transforms.copy_to_socket(frame_time, object, object->get_tfm_socket());
if (object->is_modified()) {
object->tag_update(scene_);
}
/* Only update sockets for the original Geometry. */
if (abc_object->instance_of) {
return;
}
Hair *hair = static_cast<Hair *>(object->get_geometry());
cached_data.curve_keys.copy_to_socket(frame_time, hair, hair->get_curve_keys_socket());
cached_data.curve_radius.copy_to_socket(frame_time, hair, hair->get_curve_radius_socket());
@@ -1725,7 +1732,7 @@ void AlembicProcedural::read_curves(Scene *scene,
/* we don't yet support arbitrary attributes, for now add first keys as generated coordinates if
* requested */
if (hair->need_attribute(scene, ATTR_STD_GENERATED)) {
if (hair->need_attribute(scene_, ATTR_STD_GENERATED)) {
Attribute *attr_generated = hair->attributes.add(ATTR_STD_GENERATED);
float3 *generated = attr_generated->data_float3();
@@ -1735,13 +1742,13 @@ void AlembicProcedural::read_curves(Scene *scene,
}
const bool rebuild = (hair->curve_keys_is_modified() || hair->curve_radius_is_modified());
hair->tag_update(scene, rebuild);
hair->tag_update(scene_, rebuild);
}
void AlembicProcedural::walk_hierarchy(
IObject parent,
const ObjectHeader &header,
MatrixSampleMap *xform_samples,
MatrixSamplesData matrix_samples_data,
const unordered_map<std::string, AlembicObject *> &object_map,
Progress &progress)
{
@@ -1763,7 +1770,7 @@ void AlembicProcedural::walk_hierarchy(
MatrixSampleMap local_xform_samples;
MatrixSampleMap *temp_xform_samples = nullptr;
if (xform_samples == nullptr) {
if (matrix_samples_data.samples == nullptr) {
/* If there are no parent transforms, fill the map directly. */
temp_xform_samples = &concatenated_xform_samples;
}
@@ -1778,11 +1785,13 @@ void AlembicProcedural::walk_hierarchy(
temp_xform_samples->insert({sample_time, sample.getMatrix()});
}
if (xform_samples != nullptr) {
concatenate_xform_samples(*xform_samples, local_xform_samples, concatenated_xform_samples);
if (matrix_samples_data.samples != nullptr) {
concatenate_xform_samples(
*matrix_samples_data.samples, local_xform_samples, concatenated_xform_samples);
}
xform_samples = &concatenated_xform_samples;
matrix_samples_data.samples = &concatenated_xform_samples;
matrix_samples_data.time_sampling = ts;
}
next_object = xform;
@@ -1798,8 +1807,9 @@ void AlembicProcedural::walk_hierarchy(
abc_object->iobject = subd;
abc_object->schema_type = AlembicObject::SUBD;
if (xform_samples) {
abc_object->xform_samples = *xform_samples;
if (matrix_samples_data.samples) {
abc_object->xform_samples = *matrix_samples_data.samples;
abc_object->xform_time_sampling = matrix_samples_data.time_sampling;
}
}
@@ -1816,8 +1826,9 @@ void AlembicProcedural::walk_hierarchy(
abc_object->iobject = mesh;
abc_object->schema_type = AlembicObject::POLY_MESH;
if (xform_samples) {
abc_object->xform_samples = *xform_samples;
if (matrix_samples_data.samples) {
abc_object->xform_samples = *matrix_samples_data.samples;
abc_object->xform_time_sampling = matrix_samples_data.time_sampling;
}
}
@@ -1834,8 +1845,9 @@ void AlembicProcedural::walk_hierarchy(
abc_object->iobject = curves;
abc_object->schema_type = AlembicObject::CURVES;
if (xform_samples) {
abc_object->xform_samples = *xform_samples;
if (matrix_samples_data.samples) {
abc_object->xform_samples = *matrix_samples_data.samples;
abc_object->xform_time_sampling = matrix_samples_data.time_sampling;
}
}
@@ -1844,15 +1856,92 @@ void AlembicProcedural::walk_hierarchy(
else if (IFaceSet::matches(header)) {
// ignore the face set, it will be read along with the data
}
else if (IPoints::matches(header)) {
// unsupported for now
}
else if (INuPatch::matches(header)) {
// unsupported for now
}
else {
// unsupported type for now (Points, NuPatch)
next_object = parent.getChild(header.getName());
if (next_object.isInstanceRoot()) {
unordered_map<std::string, AlembicObject *>::const_iterator iter;
/* Was this object asked to be rendered? */
iter = object_map.find(next_object.getFullName());
if (iter != object_map.end()) {
AlembicObject *abc_object = iter->second;
/* Only try to render an instance if the original object is also rendered. */
iter = object_map.find(next_object.instanceSourcePath());
if (iter != object_map.end()) {
abc_object->iobject = next_object;
abc_object->instance_of = iter->second;
if (matrix_samples_data.samples) {
abc_object->xform_samples = *matrix_samples_data.samples;
abc_object->xform_time_sampling = matrix_samples_data.time_sampling;
}
}
}
}
}
if (next_object.valid()) {
for (size_t i = 0; i < next_object.getNumChildren(); ++i) {
walk_hierarchy(
next_object, next_object.getChildHeader(i), xform_samples, object_map, progress);
next_object, next_object.getChildHeader(i), matrix_samples_data, object_map, progress);
}
}
}
void AlembicProcedural::build_caches(Progress &progress)
{
for (Node *node : objects) {
AlembicObject *object = static_cast<AlembicObject *>(node);
if (progress.get_cancel()) {
return;
}
if (object->schema_type == AlembicObject::POLY_MESH) {
if (!object->has_data_loaded()) {
IPolyMesh polymesh(object->iobject, Alembic::Abc::kWrapExisting);
IPolyMeshSchema schema = polymesh.getSchema();
object->load_all_data(this, schema, progress);
}
else if (object->need_shader_update) {
IPolyMesh polymesh(object->iobject, Alembic::Abc::kWrapExisting);
IPolyMeshSchema schema = polymesh.getSchema();
object->update_shader_attributes(schema.getArbGeomParams(), progress);
}
}
else if (object->schema_type == AlembicObject::CURVES) {
if (!object->has_data_loaded() || default_radius_is_modified() ||
object->radius_scale_is_modified()) {
ICurves curves(object->iobject, Alembic::Abc::kWrapExisting);
ICurvesSchema schema = curves.getSchema();
object->load_all_data(this, schema, progress, default_radius);
}
}
else if (object->schema_type == AlembicObject::SUBD) {
if (!object->has_data_loaded()) {
ISubD subd_mesh(object->iobject, Alembic::Abc::kWrapExisting);
ISubDSchema schema = subd_mesh.getSchema();
object->load_all_data(this, schema, progress);
}
else if (object->need_shader_update) {
ISubD subd_mesh(object->iobject, Alembic::Abc::kWrapExisting);
ISubDSchema schema = subd_mesh.getSchema();
object->update_shader_attributes(schema.getArbGeomParams(), progress);
}
}
if (scale_is_modified() || object->get_cached_data().transforms.size() == 0) {
object->setup_transform_cache(scale);
}
}
}

View File

@@ -38,6 +38,11 @@ class Shader;
using MatrixSampleMap = std::map<Alembic::Abc::chrono_t, Alembic::Abc::M44d>;
struct MatrixSamplesData {
MatrixSampleMap *samples = nullptr;
Alembic::AbcCoreAbstract::TimeSamplingPtr time_sampling;
};
/* Helpers to detect if some type is a ccl::array. */
template<typename> struct is_array : public std::false_type {
};
@@ -45,6 +50,78 @@ template<typename> struct is_array : public std::false_type {
template<typename T> struct is_array<array<T>> : public std::true_type {
};
/* Holds the data for a cache lookup at a given time, as well as information to
* help disambiguate successes or failures to get data from the cache. */
template<typename T> class CacheLookupResult {
enum class State {
NEW_DATA,
ALREADY_LOADED,
NO_DATA_FOR_TIME,
};
T *data;
State state;
protected:
/* Prevent default construction outside of the class: for a valid result, we
* should use the static functions below. */
CacheLookupResult() = default;
public:
static CacheLookupResult new_data(T *data_)
{
CacheLookupResult result;
result.data = data_;
result.state = State::NEW_DATA;
return result;
}
static CacheLookupResult no_data_found_for_time()
{
CacheLookupResult result;
result.data = nullptr;
result.state = State::NO_DATA_FOR_TIME;
return result;
}
static CacheLookupResult already_loaded()
{
CacheLookupResult result;
result.data = nullptr;
result.state = State::ALREADY_LOADED;
return result;
}
/* This should only be called if new data is available. */
const T &get_data() const
{
assert(state == State::NEW_DATA);
assert(data != nullptr);
return *data;
}
T *get_data_or_null() const
{
// data should already be null if there is no new data, so there is no need to check
return data;
}
bool has_new_data() const
{
return state == State::NEW_DATA;
}
bool has_already_loaded() const
{
return state == State::ALREADY_LOADED;
}
bool has_no_data_for_time() const
{
return state == State::NO_DATA_FOR_TIME;
}
};
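For reference, a minimal usage sketch (not part of the patch) of how calling code is expected to branch on the three lookup states; the vertices store and frame_time value are hypothetical stand-ins for a DataStore<array<float3>> as defined below:

CacheLookupResult<array<float3>> result = vertices.data_for_time(frame_time);
if (result.has_new_data()) {
  /* New data for this time: copy it to the corresponding Cycles socket. */
  const array<float3> &verts = result.get_data();
  (void)verts;
}
else if (result.has_already_loaded()) {
  /* Same time index as the previous lookup: nothing to update. */
}
else {
  /* has_no_data_for_time(): the store holds no data at all for this attribute. */
}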
/* Store the data set for an animation at every time point, or at the beginning of the animation
* for constant data.
*
@@ -74,10 +151,10 @@ template<typename T> class DataStore {
/* Get the data for the specified time.
* Return nullptr if there is no data or if the data for this time was already loaded. */
T *data_for_time(double time)
CacheLookupResult<T> data_for_time(double time)
{
if (size() == 0) {
return nullptr;
return CacheLookupResult<T>::no_data_found_for_time();
}
std::pair<size_t, Alembic::Abc::chrono_t> index_pair;
@@ -85,26 +162,26 @@ template<typename T> class DataStore {
DataTimePair &data_pair = data[index_pair.first];
if (last_loaded_time == data_pair.time) {
return nullptr;
return CacheLookupResult<T>::already_loaded();
}
last_loaded_time = data_pair.time;
return &data_pair.data;
return CacheLookupResult<T>::new_data(&data_pair.data);
}
/* Get the data for the specified time, but do not check whether the data was already loaded for this
 * time. Return no data if the store is empty. */
T *data_for_time_no_check(double time)
CacheLookupResult<T> data_for_time_no_check(double time)
{
if (size() == 0) {
return nullptr;
return CacheLookupResult<T>::no_data_found_for_time();
}
std::pair<size_t, Alembic::Abc::chrono_t> index_pair;
index_pair = time_sampling.getNearIndex(time, data.size());
DataTimePair &data_pair = data[index_pair.first];
return &data_pair.data;
return CacheLookupResult<T>::new_data(&data_pair.data);
}
void add_data(T &data_, double time)
@@ -144,15 +221,15 @@ template<typename T> class DataStore {
* data for this time or it was already loaded, do nothing. */
void copy_to_socket(double time, Node *node, const SocketType *socket)
{
T *data_ = data_for_time(time);
CacheLookupResult<T> result = data_for_time(time);
if (data_ == nullptr) {
if (!result.has_new_data()) {
return;
}
/* TODO(kevindietrich): arrays are emptied when passed to the sockets, so for now we copy the
* arrays to avoid reloading the data */
T value = *data_;
T value = result.get_data();
node->set(*socket, value);
}
};
@@ -249,15 +326,12 @@ class AlembicObject : public Node {
void load_all_data(AlembicProcedural *proc,
Alembic::AbcGeom::IPolyMeshSchema &schema,
float scale,
Progress &progress);
void load_all_data(AlembicProcedural *proc,
Alembic::AbcGeom::ISubDSchema &schema,
float scale,
Progress &progress);
void load_all_data(AlembicProcedural *proc,
const Alembic::AbcGeom::ICurvesSchema &schema,
float scale,
Progress &progress,
float default_radius);
@@ -274,6 +348,9 @@ class AlembicObject : public Node {
bool need_shader_update = true;
AlembicObject *instance_of = nullptr;
Alembic::AbcCoreAbstract::TimeSamplingPtr xform_time_sampling;
MatrixSampleMap xform_samples;
Alembic::AbcGeom::IObject iobject;
@@ -384,30 +461,23 @@ class AlembicProcedural : public Procedural {
* way for each IObject. */
void walk_hierarchy(Alembic::AbcGeom::IObject parent,
const Alembic::AbcGeom::ObjectHeader &ohead,
MatrixSampleMap *xform_samples,
MatrixSamplesData matrix_samples_data,
const unordered_map<string, AlembicObject *> &object_map,
Progress &progress);
/* Read the data for an IPolyMesh at the specified frame_time. Creates corresponding Geometry and
* Object Nodes in the Cycles scene if none exist yet. */
void read_mesh(Scene *scene,
AlembicObject *abc_object,
Alembic::AbcGeom::Abc::chrono_t frame_time,
Progress &progress);
void read_mesh(AlembicObject *abc_object, Alembic::AbcGeom::Abc::chrono_t frame_time);
/* Read the data for an ICurves at the specified frame_time. Creates corresponding Geometry and
* Object Nodes in the Cycles scene if none exist yet. */
void read_curves(Scene *scene,
AlembicObject *abc_object,
Alembic::AbcGeom::Abc::chrono_t frame_time,
Progress &progress);
void read_curves(AlembicObject *abc_object, Alembic::AbcGeom::Abc::chrono_t frame_time);
/* Read the data for an ISubD at the specified frame_time. Creates corresponding Geometry and
* Object Nodes in the Cycles scene if none exist yet. */
void read_subd(Scene *scene,
AlembicObject *abc_object,
Alembic::AbcGeom::Abc::chrono_t frame_time,
Progress &progress);
void read_subd(AlembicObject *abc_object, Alembic::AbcGeom::Abc::chrono_t frame_time);
void build_caches(Progress &progress);
};
CCL_NAMESPACE_END

View File

@@ -1367,7 +1367,7 @@ void GeometryManager::device_update_bvh(Device *device,
dscene->data.bvh.use_bvh_steps = (scene->params.num_bvh_time_steps != 0);
dscene->data.bvh.curve_subdivisions = scene->params.curve_subdivisions();
/* The scene handle is set in 'CPUDevice::const_copy_to' and 'OptiXDevice::const_copy_to' */
dscene->data.bvh.scene = NULL;
dscene->data.bvh.scene = 0;
}
/* Set of flags used to help determine what data has been modified or needs reallocation, so we

View File

@@ -831,7 +831,8 @@ static bool to_scene_linear_transform(OCIO::ConstConfigRcPtr &config,
void ShaderManager::init_xyz_transforms()
{
/* Default to ITU-BT.709 in case no appropriate transform found. */
/* Default to ITU-BT.709 in case no appropriate transform is found.
* Note XYZ here is defined as having a D65 white point. */
xyz_to_r = make_float3(3.2404542f, -1.5371385f, -0.4985314f);
xyz_to_g = make_float3(-0.9692660f, 1.8760108f, 0.0415560f);
xyz_to_b = make_float3(0.0556434f, -0.2040259f, 1.0572252f);
@@ -848,24 +849,27 @@ void ShaderManager::init_xyz_transforms()
if (config->hasRole("aces_interchange")) {
/* Standard OpenColorIO role, defined as ACES2065-1. */
const Transform xyz_to_aces = make_transform(1.0498110175f,
0.0f,
-0.0000974845f,
0.0f,
-0.4959030231f,
1.3733130458f,
0.0982400361f,
0.0f,
0.0f,
0.0f,
0.9912520182f,
0.0f);
const Transform xyz_E_to_aces = make_transform(1.0498110175f,
0.0f,
-0.0000974845f,
0.0f,
-0.4959030231f,
1.3733130458f,
0.0982400361f,
0.0f,
0.0f,
0.0f,
0.9912520182f,
0.0f);
const Transform xyz_D65_to_E = make_transform(
1.0521111f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.9184170f, 0.0f);
Transform aces_to_rgb;
if (!to_scene_linear_transform(config, "aces_interchange", aces_to_rgb)) {
return;
}
xyz_to_rgb = aces_to_rgb * xyz_to_aces;
xyz_to_rgb = aces_to_rgb * xyz_E_to_aces * xyz_D65_to_E;
}
else if (config->hasRole("XYZ")) {
/* Custom role used before the standard existed. */
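The composition in the aces_interchange branch above applies right to left: an XYZ color with a D65 white point is first adapted to an equal-energy (E) white point, then converted to ACES2065-1, and finally to the config's scene linear space. A standalone sketch of that chain (not part of the patch; aces_to_rgb is a placeholder identity here, since the real matrix comes from the OCIO config):

#include <array>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

static Vec3 mul(const Mat3 &m, const Vec3 &v)
{
  Vec3 r = {0.0, 0.0, 0.0};
  for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
      r[i] += m[i][j] * v[j];
    }
  }
  return r;
}

int main()
{
  /* Adapt a D65 white point to an equal-energy (E) white point (values from the patch). */
  const Mat3 xyz_D65_to_E = {{{1.0521111, 0.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 0.0, 0.9184170}}};
  /* XYZ (E white) to ACES2065-1 (values from the patch). */
  const Mat3 xyz_E_to_aces = {{{1.0498110175, 0.0, -0.0000974845},
                               {-0.4959030231, 1.3733130458, 0.0982400361},
                               {0.0, 0.0, 0.9912520182}}};
  /* Placeholder: in Cycles this comes from the OCIO "aces_interchange" role. */
  const Mat3 aces_to_rgb = {{{1.0, 0.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 0.0, 1.0}}};

  const Vec3 xyz_d65_white = {0.95047, 1.0, 1.08883}; /* CIE XYZ of a D65 white. */

  /* Same as multiplying by xyz_to_rgb = aces_to_rgb * xyz_E_to_aces * xyz_D65_to_E. */
  const Vec3 rgb = mul(aces_to_rgb, mul(xyz_E_to_aces, mul(xyz_D65_to_E, xyz_d65_white)));
  std::printf("%f %f %f\n", rgb[0], rgb[1], rgb[2]);
  return 0;
}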

View File

@@ -386,13 +386,11 @@ extern "C" int GHOST_HACK_getFirstFile(char buf[FIRSTFILEBUFLG])
- (id)init
{
self = [super init];
if (self) {
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver:self
selector:@selector(windowWillClose:)
name:NSWindowWillCloseNotification
object:nil];
}
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver:self
selector:@selector(windowWillClose:)
name:NSWindowWillCloseNotification
object:nil];
return self;
}
@@ -563,97 +561,96 @@ GHOST_TSuccess GHOST_SystemCocoa::init()
SetFrontProcess(&psn);
}*/
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
[NSApplication sharedApplication]; // initializes NSApp
@autoreleasepool {
[NSApplication sharedApplication]; // initializes NSApp
if ([NSApp mainMenu] == nil) {
NSMenu *mainMenubar = [[NSMenu alloc] init];
NSMenuItem *menuItem;
NSMenu *windowMenu;
NSMenu *appMenu;
if ([NSApp mainMenu] == nil) {
NSMenu *mainMenubar = [[NSMenu alloc] init];
NSMenuItem *menuItem;
NSMenu *windowMenu;
NSMenu *appMenu;
// Create the application menu
appMenu = [[NSMenu alloc] initWithTitle:@"Blender"];
// Create the application menu
appMenu = [[NSMenu alloc] initWithTitle:@"Blender"];
[appMenu addItemWithTitle:@"About Blender"
action:@selector(orderFrontStandardAboutPanel:)
keyEquivalent:@""];
[appMenu addItem:[NSMenuItem separatorItem]];
[appMenu addItemWithTitle:@"About Blender"
action:@selector(orderFrontStandardAboutPanel:)
keyEquivalent:@""];
[appMenu addItem:[NSMenuItem separatorItem]];
menuItem = [appMenu addItemWithTitle:@"Hide Blender"
action:@selector(hide:)
keyEquivalent:@"h"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [appMenu addItemWithTitle:@"Hide Blender"
action:@selector(hide:)
keyEquivalent:@"h"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [appMenu addItemWithTitle:@"Hide Others"
action:@selector(hideOtherApplications:)
keyEquivalent:@"h"];
[menuItem
setKeyEquivalentModifierMask:(NSEventModifierFlagOption | NSEventModifierFlagCommand)];
menuItem = [appMenu addItemWithTitle:@"Hide Others"
action:@selector(hideOtherApplications:)
keyEquivalent:@"h"];
[menuItem
setKeyEquivalentModifierMask:(NSEventModifierFlagOption | NSEventModifierFlagCommand)];
[appMenu addItemWithTitle:@"Show All"
action:@selector(unhideAllApplications:)
keyEquivalent:@""];
[appMenu addItemWithTitle:@"Show All"
action:@selector(unhideAllApplications:)
keyEquivalent:@""];
menuItem = [appMenu addItemWithTitle:@"Quit Blender"
action:@selector(terminate:)
keyEquivalent:@"q"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [appMenu addItemWithTitle:@"Quit Blender"
action:@selector(terminate:)
keyEquivalent:@"q"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [[NSMenuItem alloc] init];
[menuItem setSubmenu:appMenu];
menuItem = [[NSMenuItem alloc] init];
[menuItem setSubmenu:appMenu];
[mainMenubar addItem:menuItem];
[menuItem release];
[NSApp performSelector:@selector(setAppleMenu:) withObject:appMenu]; // Needed for 10.5
[appMenu release];
[mainMenubar addItem:menuItem];
[menuItem release];
[NSApp performSelector:@selector(setAppleMenu:) withObject:appMenu]; // Needed for 10.5
[appMenu release];
// Create the window menu
windowMenu = [[NSMenu alloc] initWithTitle:@"Window"];
// Create the window menu
windowMenu = [[NSMenu alloc] initWithTitle:@"Window"];
menuItem = [windowMenu addItemWithTitle:@"Minimize"
action:@selector(performMiniaturize:)
keyEquivalent:@"m"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [windowMenu addItemWithTitle:@"Minimize"
action:@selector(performMiniaturize:)
keyEquivalent:@"m"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
[windowMenu addItemWithTitle:@"Zoom" action:@selector(performZoom:) keyEquivalent:@""];
[windowMenu addItemWithTitle:@"Zoom" action:@selector(performZoom:) keyEquivalent:@""];
menuItem = [windowMenu addItemWithTitle:@"Enter Full Screen"
action:@selector(toggleFullScreen:)
keyEquivalent:@"f"];
[menuItem
setKeyEquivalentModifierMask:NSEventModifierFlagControl | NSEventModifierFlagCommand];
menuItem = [windowMenu addItemWithTitle:@"Enter Full Screen"
action:@selector(toggleFullScreen:)
keyEquivalent:@"f"];
[menuItem
setKeyEquivalentModifierMask:NSEventModifierFlagControl | NSEventModifierFlagCommand];
menuItem = [windowMenu addItemWithTitle:@"Close"
action:@selector(performClose:)
keyEquivalent:@"w"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [windowMenu addItemWithTitle:@"Close"
action:@selector(performClose:)
keyEquivalent:@"w"];
[menuItem setKeyEquivalentModifierMask:NSEventModifierFlagCommand];
menuItem = [[NSMenuItem alloc] init];
[menuItem setSubmenu:windowMenu];
menuItem = [[NSMenuItem alloc] init];
[menuItem setSubmenu:windowMenu];
[mainMenubar addItem:menuItem];
[menuItem release];
[mainMenubar addItem:menuItem];
[menuItem release];
[NSApp setMainMenu:mainMenubar];
[NSApp setWindowsMenu:windowMenu];
[windowMenu release];
[NSApp setMainMenu:mainMenubar];
[NSApp setWindowsMenu:windowMenu];
[windowMenu release];
}
if ([NSApp delegate] == nil) {
CocoaAppDelegate *appDelegate = [[CocoaAppDelegate alloc] init];
[appDelegate setSystemCocoa:this];
[NSApp setDelegate:appDelegate];
}
// AppKit provides automatic window tabbing. Blender is a single-tabbed application
// without a macOS tab bar, and should explicitly opt-out of this. This is also
// controlled by the macOS user default #NSWindowTabbingEnabled.
NSWindow.allowsAutomaticWindowTabbing = NO;
[NSApp finishLaunching];
}
if ([NSApp delegate] == nil) {
CocoaAppDelegate *appDelegate = [[CocoaAppDelegate alloc] init];
[appDelegate setSystemCocoa:this];
[NSApp setDelegate:appDelegate];
}
// AppKit provides automatic window tabbing. Blender is a single-tabbed application without a
// macOS tab bar, and should explicitly opt-out of this. This is also controlled by the macOS
// user default #NSWindowTabbingEnabled.
NSWindow.allowsAutomaticWindowTabbing = NO;
[NSApp finishLaunching];
[pool drain];
}
return success;
}
@@ -676,30 +673,26 @@ GHOST_TUns8 GHOST_SystemCocoa::getNumDisplays() const
{
// Note that OS X supports monitor hot plug
// We do not support multiple monitors at the moment
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
GHOST_TUns8 count = [[NSScreen screens] count];
[pool drain];
return count;
@autoreleasepool {
return NSScreen.screens.count;
}
}
void GHOST_SystemCocoa::getMainDisplayDimensions(GHOST_TUns32 &width, GHOST_TUns32 &height) const
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
// Get visible frame, that is frame excluding dock and top menu bar
NSRect frame = [[NSScreen mainScreen] visibleFrame];
@autoreleasepool {
// Get visible frame, that is frame excluding dock and top menu bar
NSRect frame = [[NSScreen mainScreen] visibleFrame];
// Returns max window contents (excluding title bar...)
NSRect contentRect = [NSWindow
contentRectForFrameRect:frame
styleMask:(NSWindowStyleMaskTitled | NSWindowStyleMaskClosable |
NSWindowStyleMaskMiniaturizable)];
// Returns max window contents (excluding title bar...)
NSRect contentRect = [NSWindow
contentRectForFrameRect:frame
styleMask:(NSWindowStyleMaskTitled | NSWindowStyleMaskClosable |
NSWindowStyleMaskMiniaturizable)];
width = contentRect.size.width;
height = contentRect.size.height;
[pool drain];
width = contentRect.size.width;
height = contentRect.size.height;
}
}
void GHOST_SystemCocoa::getAllDisplayDimensions(GHOST_TUns32 &width, GHOST_TUns32 &height) const
@@ -720,53 +713,52 @@ GHOST_IWindow *GHOST_SystemCocoa::createWindow(const char *title,
const bool is_dialog,
const GHOST_IWindow *parentWindow)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
GHOST_IWindow *window = NULL;
@autoreleasepool {
// Get the available rect for including window contents
NSRect frame = [[NSScreen mainScreen] visibleFrame];
NSRect contentRect = [NSWindow
contentRectForFrameRect:frame
styleMask:(NSWindowStyleMaskTitled | NSWindowStyleMaskClosable |
NSWindowStyleMaskMiniaturizable)];
// Get the available rect for including window contents
NSRect frame = [[NSScreen mainScreen] visibleFrame];
NSRect contentRect = [NSWindow
contentRectForFrameRect:frame
styleMask:(NSWindowStyleMaskTitled | NSWindowStyleMaskClosable |
NSWindowStyleMaskMiniaturizable)];
GHOST_TInt32 bottom = (contentRect.size.height - 1) - height - top;
GHOST_TInt32 bottom = (contentRect.size.height - 1) - height - top;
// Ensures window top left is inside this available rect
left = left > contentRect.origin.x ? left : contentRect.origin.x;
// Add contentRect.origin.y to respect docksize
bottom = bottom > contentRect.origin.y ? bottom + contentRect.origin.y : contentRect.origin.y;
// Ensures window top left is inside this available rect
left = left > contentRect.origin.x ? left : contentRect.origin.x;
// Add contentRect.origin.y to respect docksize
bottom = bottom > contentRect.origin.y ? bottom + contentRect.origin.y : contentRect.origin.y;
window = new GHOST_WindowCocoa(this,
title,
left,
bottom,
width,
height,
state,
type,
glSettings.flags & GHOST_glStereoVisual,
glSettings.flags & GHOST_glDebugContext,
is_dialog,
(GHOST_WindowCocoa *)parentWindow);
window = new GHOST_WindowCocoa(this,
title,
left,
bottom,
width,
height,
state,
type,
glSettings.flags & GHOST_glStereoVisual,
glSettings.flags & GHOST_glDebugContext,
is_dialog,
(GHOST_WindowCocoa *)parentWindow);
if (window->getValid()) {
// Store the pointer to the window
GHOST_ASSERT(m_windowManager, "m_windowManager not initialized");
m_windowManager->addWindow(window);
m_windowManager->setActiveWindow(window);
/* Need to tell window manager the new window is the active one
* (Cocoa does not send the event activate upon window creation). */
pushEvent(new GHOST_Event(getMilliSeconds(), GHOST_kEventWindowActivate, window));
pushEvent(new GHOST_Event(getMilliSeconds(), GHOST_kEventWindowSize, window));
if (window->getValid()) {
// Store the pointer to the window
GHOST_ASSERT(m_windowManager, "m_windowManager not initialized");
m_windowManager->addWindow(window);
m_windowManager->setActiveWindow(window);
/* Need to tell window manager the new window is the active one
* (Cocoa does not send the event activate upon window creation). */
pushEvent(new GHOST_Event(getMilliSeconds(), GHOST_kEventWindowActivate, window));
pushEvent(new GHOST_Event(getMilliSeconds(), GHOST_kEventWindowSize, window));
}
else {
GHOST_PRINT("GHOST_SystemCocoa::createWindow(): window invalid\n");
delete window;
window = NULL;
}
}
else {
GHOST_PRINT("GHOST_SystemCocoa::createWindow(): window invalid\n");
delete window;
window = NULL;
}
[pool drain];
return window;
}
@@ -841,29 +833,28 @@ GHOST_TSuccess GHOST_SystemCocoa::setMouseCursorPosition(GHOST_TInt32 x, GHOST_T
if (!window)
return GHOST_kFailure;
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
NSScreen *windowScreen = window->getScreen();
NSRect screenRect = [windowScreen frame];
@autoreleasepool {
NSScreen *windowScreen = window->getScreen();
NSRect screenRect = [windowScreen frame];
// Set position relative to current screen
xf -= screenRect.origin.x;
yf -= screenRect.origin.y;
// Set position relative to current screen
xf -= screenRect.origin.x;
yf -= screenRect.origin.y;
// Quartz Display Services uses the old coordinates (top left origin)
yf = screenRect.size.height - yf;
// Quartz Display Services uses the old coordinates (top left origin)
yf = screenRect.size.height - yf;
CGDisplayMoveCursorToPoint((CGDirectDisplayID)[[[windowScreen deviceDescription]
objectForKey:@"NSScreenNumber"] unsignedIntValue],
CGPointMake(xf, yf));
CGDisplayMoveCursorToPoint((CGDirectDisplayID)[[[windowScreen deviceDescription]
objectForKey:@"NSScreenNumber"] unsignedIntValue],
CGPointMake(xf, yf));
// See https://stackoverflow.com/a/17559012. By default, hardware events
// will be suppressed for 500ms after a synthetic mouse event. For unknown
// reasons CGEventSourceSetLocalEventsSuppressionInterval does not work,
// however calling CGAssociateMouseAndMouseCursorPosition also removes the
// delay, even if this is undocumented.
CGAssociateMouseAndMouseCursorPosition(true);
[pool drain];
// See https://stackoverflow.com/a/17559012. By default, hardware events
// will be suppressed for 500ms after a synthetic mouse event. For unknown
// reasons CGEventSourceSetLocalEventsSuppressionInterval does not work,
// however calling CGAssociateMouseAndMouseCursorPosition also removes the
// delay, even if this is undocumented.
CGAssociateMouseAndMouseCursorPosition(true);
}
return GHOST_kSuccess;
}
@@ -928,42 +919,40 @@ bool GHOST_SystemCocoa::processEvents(bool waitForEvent)
}
#endif
do {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
event = [NSApp nextEventMatchingMask:NSEventMaskAny
untilDate:[NSDate distantPast]
inMode:NSDefaultRunLoopMode
dequeue:YES];
if (event == nil) {
[pool drain];
break;
}
@autoreleasepool {
event = [NSApp nextEventMatchingMask:NSEventMaskAny
untilDate:[NSDate distantPast]
inMode:NSDefaultRunLoopMode
dequeue:YES];
if (event == nil) {
break;
}
anyProcessed = true;
anyProcessed = true;
// Send event to NSApp to ensure Mac wide events are handled,
// this will send events to CocoaWindow which will call back
// to handleKeyEvent, handleMouseEvent and handleTabletEvent
// Send event to NSApp to ensure Mac wide events are handled,
// this will send events to CocoaWindow which will call back
// to handleKeyEvent, handleMouseEvent and handleTabletEvent
// There is one special exception for ctrl+(shift)+tab. We do not
// get keyDown events delivered to the view because they are
// special hotkeys to switch between views, so override directly
// There is one special exception for ctrl+(shift)+tab. We do not
// get keyDown events delivered to the view because they are
// special hotkeys to switch between views, so override directly
if ([event type] == NSEventTypeKeyDown && [event keyCode] == kVK_Tab &&
([event modifierFlags] & NSEventModifierFlagControl)) {
handleKeyEvent(event);
}
else {
// For some reason NSApp is swallowing the key up events when modifier
// key is pressed, even if there seems to be no apparent reason to do
// so, as a workaround we always handle these up events.
if ([event type] == NSEventTypeKeyUp &&
([event modifierFlags] & (NSEventModifierFlagCommand | NSEventModifierFlagOption)))
if ([event type] == NSEventTypeKeyDown && [event keyCode] == kVK_Tab &&
([event modifierFlags] & NSEventModifierFlagControl)) {
handleKeyEvent(event);
}
else {
// For some reason NSApp is swallowing the key up events when modifier
// key is pressed, even if there seems to be no apparent reason to do
// so, as a workaround we always handle these up events.
if ([event type] == NSEventTypeKeyUp &&
([event modifierFlags] & (NSEventModifierFlagCommand | NSEventModifierFlagOption)))
handleKeyEvent(event);
[NSApp sendEvent:event];
[NSApp sendEvent:event];
}
}
[pool drain];
} while (event != nil);
#if 0
} while (waitForEvent && !anyProcessed); // Needed only for timer implementation
@@ -1677,10 +1666,8 @@ GHOST_TSuccess GHOST_SystemCocoa::handleMouseEvent(void *eventPtr)
NSEventPhase momentumPhase = NSEventPhaseNone;
NSEventPhase phase = NSEventPhaseNone;
if ([event respondsToSelector:@selector(momentumPhase)])
momentumPhase = [event momentumPhase];
if ([event respondsToSelector:@selector(phase)])
phase = [event phase];
momentumPhase = [event momentumPhase];
phase = [event phase];
/* when pressing a key while momentum scrolling continues after
* lifting fingers off the trackpad, the action can unexpectedly
@@ -1953,78 +1940,48 @@ GHOST_TUns8 *GHOST_SystemCocoa::getClipboard(bool selection) const
GHOST_TUns8 *temp_buff;
size_t pastedTextSize;
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
@autoreleasepool {
NSPasteboard *pasteBoard = [NSPasteboard generalPasteboard];
NSPasteboard *pasteBoard = [NSPasteboard generalPasteboard];
if (pasteBoard == nil) {
[pool drain];
return NULL;
}
NSString *textPasted = [pasteBoard stringForType:NSStringPboardType];
NSArray *supportedTypes = [NSArray arrayWithObjects:NSStringPboardType, nil];
if (textPasted == nil) {
return NULL;
}
NSString *bestType = [[NSPasteboard generalPasteboard] availableTypeFromArray:supportedTypes];
pastedTextSize = [textPasted lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
if (bestType == nil) {
[pool drain];
return NULL;
}
temp_buff = (GHOST_TUns8 *)malloc(pastedTextSize + 1);
NSString *textPasted = [pasteBoard stringForType:NSStringPboardType];
if (temp_buff == NULL) {
return NULL;
}
if (textPasted == nil) {
[pool drain];
return NULL;
}
strncpy(
(char *)temp_buff, [textPasted cStringUsingEncoding:NSUTF8StringEncoding], pastedTextSize);
pastedTextSize = [textPasted lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
temp_buff[pastedTextSize] = '\0';
temp_buff = (GHOST_TUns8 *)malloc(pastedTextSize + 1);
if (temp_buff == NULL) {
[pool drain];
return NULL;
}
strncpy(
(char *)temp_buff, [textPasted cStringUsingEncoding:NSUTF8StringEncoding], pastedTextSize);
temp_buff[pastedTextSize] = '\0';
[pool drain];
if (temp_buff) {
return temp_buff;
}
else {
return NULL;
if (temp_buff) {
return temp_buff;
}
else {
return NULL;
}
}
}
void GHOST_SystemCocoa::putClipboard(GHOST_TInt8 *buffer, bool selection) const
{
NSString *textToCopy;
if (selection)
return; // for copying the selection, used on X11
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
@autoreleasepool {
NSPasteboard *pasteBoard = [NSPasteboard generalPasteboard];
if (pasteBoard == nil) {
[pool drain];
return;
NSPasteboard *pasteBoard = NSPasteboard.generalPasteboard;
[pasteBoard declareTypes:@[ NSStringPboardType ] owner:nil];
NSString *textToCopy = [NSString stringWithCString:buffer encoding:NSUTF8StringEncoding];
[pasteBoard setString:textToCopy forType:NSStringPboardType];
}
NSArray *supportedTypes = [NSArray arrayWithObject:NSStringPboardType];
[pasteBoard declareTypes:supportedTypes owner:nil];
textToCopy = [NSString stringWithCString:buffer encoding:NSUTF8StringEncoding];
[pasteBoard setString:textToCopy forType:NSStringPboardType];
[pool drain];
}

View File

@@ -1,2 +1,34 @@
DisableFormat: true
SortIncludes: false
BasedOnStyle: Google
ColumnLimit: 80
Standard: Cpp11
# Indent nested preprocessor directives.
# #ifdef Foo
# # include <nested>
# #endif
IndentPPDirectives: AfterHash
# For cases where a namespace is closed with a wrong comment.
FixNamespaceComments: true
AllowShortFunctionsOnASingleLine: InlineOnly
AllowShortBlocksOnASingleLine: false
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: true
# No bin packing, every argument is on its own line.
BinPackArguments: false
BinPackParameters: false
# Ensure pointer alignment.
# ObjectType* object;
PointerAlignment: Left
DerivePointerAlignment: false
AlignEscapedNewlines: Right
IncludeBlocks: Preserve
SortIncludes: true
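As a hypothetical illustration (not part of the patch) of what ColumnLimit: 80, BinPackParameters: false and PointerAlignment: Left produce together, a long declaration gets one parameter per line with the pointer star attached to the type, matching the reformatted libmv sources below:

struct FrameAccessor {};
struct Marker {};
struct TrackRegionOptions {};
struct TrackRegionResult {};

// Each parameter on its own line once the declaration exceeds 80 columns,
// aligned under the first parameter, with Type* name pointer alignment.
int exampleTrackMarker(FrameAccessor* frame_accessor,
                       Marker* tracked_marker,
                       TrackRegionOptions* track_region_options,
                       TrackRegionResult* track_region_result);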

View File

@@ -22,15 +22,15 @@
#include "intern/utildefines.h"
#include "libmv/autotrack/autotrack.h"
using libmv::TrackRegionOptions;
using libmv::TrackRegionResult;
using mv::AutoTrack;
using mv::FrameAccessor;
using mv::Marker;
using libmv::TrackRegionOptions;
using libmv::TrackRegionResult;
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor *frame_accessor) {
return (libmv_AutoTrack*) LIBMV_OBJECT_NEW(AutoTrack,
(FrameAccessor*) frame_accessor);
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor* frame_accessor) {
return (libmv_AutoTrack*)LIBMV_OBJECT_NEW(AutoTrack,
(FrameAccessor*)frame_accessor);
}
void libmv_autoTrackDestroy(libmv_AutoTrack* libmv_autotrack) {
@@ -39,7 +39,7 @@ void libmv_autoTrackDestroy(libmv_AutoTrack* libmv_autotrack) {
void libmv_autoTrackSetOptions(libmv_AutoTrack* libmv_autotrack,
const libmv_AutoTrackOptions* options) {
AutoTrack *autotrack = ((AutoTrack*) libmv_autotrack);
AutoTrack* autotrack = ((AutoTrack*)libmv_autotrack);
libmv_configureTrackRegionOptions(options->track_region,
&autotrack->options.track_region);
@@ -51,18 +51,15 @@ void libmv_autoTrackSetOptions(libmv_AutoTrack* libmv_autotrack,
int libmv_autoTrackMarker(libmv_AutoTrack* libmv_autotrack,
const libmv_TrackRegionOptions* libmv_options,
libmv_Marker *libmv_tracked_marker,
libmv_Marker* libmv_tracked_marker,
libmv_TrackRegionResult* libmv_result) {
Marker tracked_marker;
TrackRegionOptions options;
TrackRegionResult result;
libmv_apiMarkerToMarker(*libmv_tracked_marker, &tracked_marker);
libmv_configureTrackRegionOptions(*libmv_options,
&options);
bool ok = (((AutoTrack*) libmv_autotrack)->TrackMarker(&tracked_marker,
&result,
&options));
libmv_configureTrackRegionOptions(*libmv_options, &options);
bool ok = (((AutoTrack*)libmv_autotrack)
->TrackMarker(&tracked_marker, &result, &options));
libmv_markerToApiMarker(tracked_marker, libmv_tracked_marker);
libmv_regionTrackergetResult(result, libmv_result);
return ok && result.is_usable();
@@ -72,7 +69,7 @@ void libmv_autoTrackAddMarker(libmv_AutoTrack* libmv_autotrack,
const libmv_Marker* libmv_marker) {
Marker marker;
libmv_apiMarkerToMarker(*libmv_marker, &marker);
((AutoTrack*) libmv_autotrack)->AddMarker(marker);
((AutoTrack*)libmv_autotrack)->AddMarker(marker);
}
void libmv_autoTrackSetMarkers(libmv_AutoTrack* libmv_autotrack,
@@ -87,19 +84,17 @@ void libmv_autoTrackSetMarkers(libmv_AutoTrack* libmv_autotrack,
for (size_t i = 0; i < num_markers; ++i) {
libmv_apiMarkerToMarker(libmv_marker[i], &markers[i]);
}
((AutoTrack*) libmv_autotrack)->SetMarkers(&markers);
((AutoTrack*)libmv_autotrack)->SetMarkers(&markers);
}
int libmv_autoTrackGetMarker(libmv_AutoTrack* libmv_autotrack,
int clip,
int frame,
int track,
libmv_Marker *libmv_marker) {
libmv_Marker* libmv_marker) {
Marker marker;
int ok = ((AutoTrack*) libmv_autotrack)->GetMarker(clip,
frame,
track,
&marker);
int ok =
((AutoTrack*)libmv_autotrack)->GetMarker(clip, frame, track, &marker);
if (ok) {
libmv_markerToApiMarker(marker, libmv_marker);
}

View File

@@ -21,9 +21,9 @@
#define LIBMV_C_API_AUTOTRACK_H_
#include "intern/frame_accessor.h"
#include "intern/tracksN.h"
#include "intern/track_region.h"
#include "intern/region.h"
#include "intern/track_region.h"
#include "intern/tracksN.h"
#ifdef __cplusplus
extern "C" {
@@ -36,7 +36,7 @@ typedef struct libmv_AutoTrackOptions {
libmv_Region search_region;
} libmv_AutoTrackOptions;
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor *frame_accessor);
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor* frame_accessor);
void libmv_autoTrackDestroy(libmv_AutoTrack* libmv_autotrack);
@@ -45,7 +45,7 @@ void libmv_autoTrackSetOptions(libmv_AutoTrack* libmv_autotrack,
int libmv_autoTrackMarker(libmv_AutoTrack* libmv_autotrack,
const libmv_TrackRegionOptions* libmv_options,
libmv_Marker *libmv_tracker_marker,
libmv_Marker* libmv_tracker_marker,
libmv_TrackRegionResult* libmv_result);
void libmv_autoTrackAddMarker(libmv_AutoTrack* libmv_autotrack,
@@ -59,7 +59,7 @@ int libmv_autoTrackGetMarker(libmv_AutoTrack* libmv_autotrack,
int clip,
int frame,
int track,
libmv_Marker *libmv_marker);
libmv_Marker* libmv_marker);
#ifdef __cplusplus
}

View File

@@ -21,62 +21,56 @@
#include "intern/utildefines.h"
#include "libmv/simple_pipeline/camera_intrinsics.h"
using libmv::BrownCameraIntrinsics;
using libmv::CameraIntrinsics;
using libmv::DivisionCameraIntrinsics;
using libmv::PolynomialCameraIntrinsics;
using libmv::NukeCameraIntrinsics;
using libmv::BrownCameraIntrinsics;
using libmv::PolynomialCameraIntrinsics;
libmv_CameraIntrinsics *libmv_cameraIntrinsicsNew(
libmv_CameraIntrinsics* libmv_cameraIntrinsicsNew(
const libmv_CameraIntrinsicsOptions* libmv_camera_intrinsics_options) {
CameraIntrinsics *camera_intrinsics =
libmv_cameraIntrinsicsCreateFromOptions(libmv_camera_intrinsics_options);
return (libmv_CameraIntrinsics *) camera_intrinsics;
CameraIntrinsics* camera_intrinsics =
libmv_cameraIntrinsicsCreateFromOptions(libmv_camera_intrinsics_options);
return (libmv_CameraIntrinsics*)camera_intrinsics;
}
libmv_CameraIntrinsics *libmv_cameraIntrinsicsCopy(
libmv_CameraIntrinsics* libmv_cameraIntrinsicsCopy(
const libmv_CameraIntrinsics* libmv_intrinsics) {
const CameraIntrinsics *orig_intrinsics =
(const CameraIntrinsics *) libmv_intrinsics;
const CameraIntrinsics* orig_intrinsics =
(const CameraIntrinsics*)libmv_intrinsics;
CameraIntrinsics *new_intrinsics = NULL;
CameraIntrinsics* new_intrinsics = NULL;
switch (orig_intrinsics->GetDistortionModelType()) {
case libmv::DISTORTION_MODEL_POLYNOMIAL:
{
const PolynomialCameraIntrinsics *polynomial_intrinsics =
case libmv::DISTORTION_MODEL_POLYNOMIAL: {
const PolynomialCameraIntrinsics* polynomial_intrinsics =
static_cast<const PolynomialCameraIntrinsics*>(orig_intrinsics);
new_intrinsics = LIBMV_OBJECT_NEW(PolynomialCameraIntrinsics,
*polynomial_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_DIVISION:
{
const DivisionCameraIntrinsics *division_intrinsics =
new_intrinsics =
LIBMV_OBJECT_NEW(PolynomialCameraIntrinsics, *polynomial_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_DIVISION: {
const DivisionCameraIntrinsics* division_intrinsics =
static_cast<const DivisionCameraIntrinsics*>(orig_intrinsics);
new_intrinsics = LIBMV_OBJECT_NEW(DivisionCameraIntrinsics,
*division_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_NUKE:
{
const NukeCameraIntrinsics *nuke_intrinsics =
new_intrinsics =
LIBMV_OBJECT_NEW(DivisionCameraIntrinsics, *division_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_NUKE: {
const NukeCameraIntrinsics* nuke_intrinsics =
static_cast<const NukeCameraIntrinsics*>(orig_intrinsics);
new_intrinsics = LIBMV_OBJECT_NEW(NukeCameraIntrinsics,
*nuke_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_BROWN:
{
const BrownCameraIntrinsics *brown_intrinsics =
new_intrinsics = LIBMV_OBJECT_NEW(NukeCameraIntrinsics, *nuke_intrinsics);
break;
}
case libmv::DISTORTION_MODEL_BROWN: {
const BrownCameraIntrinsics* brown_intrinsics =
static_cast<const BrownCameraIntrinsics*>(orig_intrinsics);
new_intrinsics = LIBMV_OBJECT_NEW(BrownCameraIntrinsics,
*brown_intrinsics);
break;
}
default:
assert(!"Unknown distortion model");
new_intrinsics =
LIBMV_OBJECT_NEW(BrownCameraIntrinsics, *brown_intrinsics);
break;
}
default: assert(!"Unknown distortion model");
}
return (libmv_CameraIntrinsics *) new_intrinsics;
return (libmv_CameraIntrinsics*)new_intrinsics;
}
void libmv_cameraIntrinsicsDestroy(libmv_CameraIntrinsics* libmv_intrinsics) {
@@ -86,7 +80,7 @@ void libmv_cameraIntrinsicsDestroy(libmv_CameraIntrinsics* libmv_intrinsics) {
void libmv_cameraIntrinsicsUpdate(
const libmv_CameraIntrinsicsOptions* libmv_camera_intrinsics_options,
libmv_CameraIntrinsics* libmv_intrinsics) {
CameraIntrinsics *camera_intrinsics = (CameraIntrinsics *) libmv_intrinsics;
CameraIntrinsics* camera_intrinsics = (CameraIntrinsics*)libmv_intrinsics;
double focal_length = libmv_camera_intrinsics_options->focal_length;
double principal_x = libmv_camera_intrinsics_options->principal_point_x;
@@ -115,191 +109,173 @@ void libmv_cameraIntrinsicsUpdate(
}
switch (libmv_camera_intrinsics_options->distortion_model) {
case LIBMV_DISTORTION_MODEL_POLYNOMIAL:
{
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_POLYNOMIAL);
case LIBMV_DISTORTION_MODEL_POLYNOMIAL: {
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_POLYNOMIAL);
PolynomialCameraIntrinsics *polynomial_intrinsics =
(PolynomialCameraIntrinsics *) camera_intrinsics;
PolynomialCameraIntrinsics* polynomial_intrinsics =
(PolynomialCameraIntrinsics*)camera_intrinsics;
double k1 = libmv_camera_intrinsics_options->polynomial_k1;
double k2 = libmv_camera_intrinsics_options->polynomial_k2;
double k3 = libmv_camera_intrinsics_options->polynomial_k3;
double k1 = libmv_camera_intrinsics_options->polynomial_k1;
double k2 = libmv_camera_intrinsics_options->polynomial_k2;
double k3 = libmv_camera_intrinsics_options->polynomial_k3;
if (polynomial_intrinsics->k1() != k1 ||
polynomial_intrinsics->k2() != k2 ||
polynomial_intrinsics->k3() != k3) {
polynomial_intrinsics->SetRadialDistortion(k1, k2, k3);
}
break;
if (polynomial_intrinsics->k1() != k1 ||
polynomial_intrinsics->k2() != k2 ||
polynomial_intrinsics->k3() != k3) {
polynomial_intrinsics->SetRadialDistortion(k1, k2, k3);
}
break;
}
case LIBMV_DISTORTION_MODEL_DIVISION: {
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_DIVISION);
DivisionCameraIntrinsics* division_intrinsics =
(DivisionCameraIntrinsics*)camera_intrinsics;
double k1 = libmv_camera_intrinsics_options->division_k1;
double k2 = libmv_camera_intrinsics_options->division_k2;
if (division_intrinsics->k1() != k1 || division_intrinsics->k2() != k2) {
division_intrinsics->SetDistortion(k1, k2);
}
case LIBMV_DISTORTION_MODEL_DIVISION:
{
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_DIVISION);
break;
}
DivisionCameraIntrinsics *division_intrinsics =
(DivisionCameraIntrinsics *) camera_intrinsics;
case LIBMV_DISTORTION_MODEL_NUKE: {
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_NUKE);
double k1 = libmv_camera_intrinsics_options->division_k1;
double k2 = libmv_camera_intrinsics_options->division_k2;
NukeCameraIntrinsics* nuke_intrinsics =
(NukeCameraIntrinsics*)camera_intrinsics;
if (division_intrinsics->k1() != k1 ||
division_intrinsics->k2() != k2) {
division_intrinsics->SetDistortion(k1, k2);
}
double k1 = libmv_camera_intrinsics_options->nuke_k1;
double k2 = libmv_camera_intrinsics_options->nuke_k2;
break;
if (nuke_intrinsics->k1() != k1 || nuke_intrinsics->k2() != k2) {
nuke_intrinsics->SetDistortion(k1, k2);
}
case LIBMV_DISTORTION_MODEL_NUKE:
{
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_NUKE);
break;
}
NukeCameraIntrinsics *nuke_intrinsics =
(NukeCameraIntrinsics *) camera_intrinsics;
case LIBMV_DISTORTION_MODEL_BROWN: {
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_BROWN);
double k1 = libmv_camera_intrinsics_options->nuke_k1;
double k2 = libmv_camera_intrinsics_options->nuke_k2;
BrownCameraIntrinsics* brown_intrinsics =
(BrownCameraIntrinsics*)camera_intrinsics;
if (nuke_intrinsics->k1() != k1 ||
nuke_intrinsics->k2() != k2) {
nuke_intrinsics->SetDistortion(k1, k2);
}
double k1 = libmv_camera_intrinsics_options->brown_k1;
double k2 = libmv_camera_intrinsics_options->brown_k2;
double k3 = libmv_camera_intrinsics_options->brown_k3;
double k4 = libmv_camera_intrinsics_options->brown_k4;
break;
if (brown_intrinsics->k1() != k1 || brown_intrinsics->k2() != k2 ||
brown_intrinsics->k3() != k3 || brown_intrinsics->k4() != k4) {
brown_intrinsics->SetRadialDistortion(k1, k2, k3, k4);
}
case LIBMV_DISTORTION_MODEL_BROWN:
{
assert(camera_intrinsics->GetDistortionModelType() ==
libmv::DISTORTION_MODEL_BROWN);
double p1 = libmv_camera_intrinsics_options->brown_p1;
double p2 = libmv_camera_intrinsics_options->brown_p2;
BrownCameraIntrinsics *brown_intrinsics =
(BrownCameraIntrinsics *) camera_intrinsics;
double k1 = libmv_camera_intrinsics_options->brown_k1;
double k2 = libmv_camera_intrinsics_options->brown_k2;
double k3 = libmv_camera_intrinsics_options->brown_k3;
double k4 = libmv_camera_intrinsics_options->brown_k4;
if (brown_intrinsics->k1() != k1 ||
brown_intrinsics->k2() != k2 ||
brown_intrinsics->k3() != k3 ||
brown_intrinsics->k4() != k4) {
brown_intrinsics->SetRadialDistortion(k1, k2, k3, k4);
}
double p1 = libmv_camera_intrinsics_options->brown_p1;
double p2 = libmv_camera_intrinsics_options->brown_p2;
if (brown_intrinsics->p1() != p1 || brown_intrinsics->p2() != p2) {
brown_intrinsics->SetTangentialDistortion(p1, p2);
}
break;
if (brown_intrinsics->p1() != p1 || brown_intrinsics->p2() != p2) {
brown_intrinsics->SetTangentialDistortion(p1, p2);
}
break;
}
default:
assert(!"Unknown distortion model");
default: assert(!"Unknown distortion model");
}
}
void libmv_cameraIntrinsicsSetThreads(libmv_CameraIntrinsics* libmv_intrinsics,
int threads) {
CameraIntrinsics *camera_intrinsics = (CameraIntrinsics *) libmv_intrinsics;
CameraIntrinsics* camera_intrinsics = (CameraIntrinsics*)libmv_intrinsics;
camera_intrinsics->SetThreads(threads);
}
void libmv_cameraIntrinsicsExtractOptions(
const libmv_CameraIntrinsics* libmv_intrinsics,
libmv_CameraIntrinsicsOptions* camera_intrinsics_options) {
const CameraIntrinsics *camera_intrinsics =
(const CameraIntrinsics *) libmv_intrinsics;
const CameraIntrinsics* camera_intrinsics =
(const CameraIntrinsics*)libmv_intrinsics;
// Fill in options which are common for all distortion models.
camera_intrinsics_options->focal_length = camera_intrinsics->focal_length();
camera_intrinsics_options->principal_point_x =
camera_intrinsics->principal_point_x();
camera_intrinsics->principal_point_x();
camera_intrinsics_options->principal_point_y =
camera_intrinsics->principal_point_y();
camera_intrinsics->principal_point_y();
camera_intrinsics_options->image_width = camera_intrinsics->image_width();
camera_intrinsics_options->image_height = camera_intrinsics->image_height();
switch (camera_intrinsics->GetDistortionModelType()) {
case libmv::DISTORTION_MODEL_POLYNOMIAL: {
const PolynomialCameraIntrinsics* polynomial_intrinsics =
static_cast<const PolynomialCameraIntrinsics*>(camera_intrinsics);
camera_intrinsics_options->distortion_model =
LIBMV_DISTORTION_MODEL_POLYNOMIAL;
camera_intrinsics_options->polynomial_k1 = polynomial_intrinsics->k1();
camera_intrinsics_options->polynomial_k2 = polynomial_intrinsics->k2();
camera_intrinsics_options->polynomial_k3 = polynomial_intrinsics->k3();
camera_intrinsics_options->polynomial_p1 = polynomial_intrinsics->p1();
camera_intrinsics_options->polynomial_p2 = polynomial_intrinsics->p2();
break;
}
case libmv::DISTORTION_MODEL_DIVISION: {
const DivisionCameraIntrinsics* division_intrinsics =
static_cast<const DivisionCameraIntrinsics*>(camera_intrinsics);
camera_intrinsics_options->distortion_model =
LIBMV_DISTORTION_MODEL_DIVISION;
camera_intrinsics_options->division_k1 = division_intrinsics->k1();
camera_intrinsics_options->division_k2 = division_intrinsics->k2();
break;
}
case libmv::DISTORTION_MODEL_NUKE: {
const NukeCameraIntrinsics* nuke_intrinsics =
static_cast<const NukeCameraIntrinsics*>(camera_intrinsics);
camera_intrinsics_options->distortion_model = LIBMV_DISTORTION_MODEL_NUKE;
camera_intrinsics_options->nuke_k1 = nuke_intrinsics->k1();
camera_intrinsics_options->nuke_k2 = nuke_intrinsics->k2();
break;
}
case libmv::DISTORTION_MODEL_BROWN: {
const BrownCameraIntrinsics* brown_intrinsics =
static_cast<const BrownCameraIntrinsics*>(camera_intrinsics);
camera_intrinsics_options->distortion_model =
LIBMV_DISTORTION_MODEL_BROWN;
camera_intrinsics_options->brown_k1 = brown_intrinsics->k1();
camera_intrinsics_options->brown_k2 = brown_intrinsics->k2();
camera_intrinsics_options->brown_k3 = brown_intrinsics->k3();
camera_intrinsics_options->brown_k4 = brown_intrinsics->k4();
camera_intrinsics_options->brown_p1 = brown_intrinsics->p1();
camera_intrinsics_options->brown_p2 = brown_intrinsics->p2();
break;
}
default: assert(!"Unknown distortion model");
}
}
void libmv_cameraIntrinsicsUndistortByte(
const libmv_CameraIntrinsics* libmv_intrinsics,
const unsigned char* source_image,
int width,
int height,
float overscan,
int channels,
unsigned char* destination_image) {
CameraIntrinsics* camera_intrinsics = (CameraIntrinsics*)libmv_intrinsics;
camera_intrinsics->UndistortBuffer(
source_image, width, height, overscan, channels, destination_image);
}
void libmv_cameraIntrinsicsUndistortFloat(
@@ -310,28 +286,22 @@ void libmv_cameraIntrinsicsUndistortFloat(
float overscan,
int channels,
float* destination_image) {
CameraIntrinsics* intrinsics = (CameraIntrinsics*)libmv_intrinsics;
intrinsics->UndistortBuffer(
source_image, width, height, overscan, channels, destination_image);
}
void libmv_cameraIntrinsicsDistortByte(
const struct libmv_CameraIntrinsics* libmv_intrinsics,
const unsigned char* source_image,
int width,
int height,
float overscan,
int channels,
unsigned char* destination_image) {
CameraIntrinsics* intrinsics = (CameraIntrinsics*)libmv_intrinsics;
intrinsics->DistortBuffer(
source_image, width, height, overscan, channels, destination_image);
}
void libmv_cameraIntrinsicsDistortFloat(
@@ -342,12 +312,9 @@ void libmv_cameraIntrinsicsDistortFloat(
float overscan,
int channels,
float* destination_image) {
CameraIntrinsics* intrinsics = (CameraIntrinsics*)libmv_intrinsics;
intrinsics->DistortBuffer(
source_image, width, height, overscan, channels, destination_image);
}
void libmv_cameraIntrinsicsApply(
@@ -356,7 +323,7 @@ void libmv_cameraIntrinsicsApply(
double y,
double* x1,
double* y1) {
CameraIntrinsics *intrinsics = (CameraIntrinsics *) libmv_intrinsics;
CameraIntrinsics* intrinsics = (CameraIntrinsics*)libmv_intrinsics;
intrinsics->ApplyIntrinsics(x, y, x1, y1);
}
@@ -366,7 +333,7 @@ void libmv_cameraIntrinsicsInvert(
double y,
double* x1,
double* y1) {
CameraIntrinsics *intrinsics = (CameraIntrinsics *) libmv_intrinsics;
CameraIntrinsics* intrinsics = (CameraIntrinsics*)libmv_intrinsics;
intrinsics->InvertIntrinsics(x, y, x1, y1);
}
@@ -381,69 +348,63 @@ static void libmv_cameraIntrinsicsFillFromOptions(
camera_intrinsics_options->principal_point_y);
camera_intrinsics->SetImageSize(camera_intrinsics_options->image_width,
camera_intrinsics_options->image_height);
camera_intrinsics_options->image_height);
switch (camera_intrinsics_options->distortion_model) {
case LIBMV_DISTORTION_MODEL_POLYNOMIAL: {
PolynomialCameraIntrinsics* polynomial_intrinsics =
static_cast<PolynomialCameraIntrinsics*>(camera_intrinsics);
polynomial_intrinsics->SetRadialDistortion(
camera_intrinsics_options->polynomial_k1,
camera_intrinsics_options->polynomial_k2,
camera_intrinsics_options->polynomial_k3);
break;
}
case LIBMV_DISTORTION_MODEL_DIVISION: {
DivisionCameraIntrinsics* division_intrinsics =
static_cast<DivisionCameraIntrinsics*>(camera_intrinsics);
division_intrinsics->SetDistortion(
camera_intrinsics_options->division_k1,
camera_intrinsics_options->division_k2);
break;
}
case LIBMV_DISTORTION_MODEL_NUKE: {
NukeCameraIntrinsics* nuke_intrinsics =
static_cast<NukeCameraIntrinsics*>(camera_intrinsics);
nuke_intrinsics->SetDistortion(camera_intrinsics_options->nuke_k1,
camera_intrinsics_options->nuke_k2);
break;
}
case LIBMV_DISTORTION_MODEL_BROWN: {
BrownCameraIntrinsics* brown_intrinsics =
static_cast<BrownCameraIntrinsics*>(camera_intrinsics);
brown_intrinsics->SetRadialDistortion(
camera_intrinsics_options->brown_k1,
camera_intrinsics_options->brown_k2,
camera_intrinsics_options->brown_k3,
camera_intrinsics_options->brown_k4);
brown_intrinsics->SetTangentialDistortion(
camera_intrinsics_options->brown_p1,
camera_intrinsics_options->brown_p2);
break;
}
default: assert(!"Unknown distortion model");
}
}
CameraIntrinsics* libmv_cameraIntrinsicsCreateFromOptions(
const libmv_CameraIntrinsicsOptions* camera_intrinsics_options) {
CameraIntrinsics *camera_intrinsics = NULL;
CameraIntrinsics* camera_intrinsics = NULL;
switch (camera_intrinsics_options->distortion_model) {
case LIBMV_DISTORTION_MODEL_POLYNOMIAL:
camera_intrinsics = LIBMV_OBJECT_NEW(PolynomialCameraIntrinsics);
@@ -457,8 +418,7 @@ CameraIntrinsics* libmv_cameraIntrinsicsCreateFromOptions(
case LIBMV_DISTORTION_MODEL_BROWN:
camera_intrinsics = LIBMV_OBJECT_NEW(BrownCameraIntrinsics);
break;
default:
assert(!"Unknown distortion model");
default: assert(!"Unknown distortion model");
}
libmv_cameraIntrinsicsFillFromOptions(camera_intrinsics_options,
camera_intrinsics);
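
The two switches above are mirror images: libmv_cameraIntrinsicsExtractOptions copies coefficients from a libmv intrinsics object into the C options struct, while libmv_cameraIntrinsicsFillFromOptions writes them back, keyed by distortion_model in both directions. A minimal caller-side sketch, assuming only the fields and functions visible in this diff (the numbers are placeholders, and principal_point_x is assumed by symmetry with principal_point_y):

/* Sketch: describe a polynomial camera and build libmv intrinsics from it. */
libmv_CameraIntrinsicsOptions options = {};
options.distortion_model = LIBMV_DISTORTION_MODEL_POLYNOMIAL;
options.focal_length = 1000.0;
options.principal_point_x = 960.0;  /* assumed field, see principal_point_y above */
options.principal_point_y = 540.0;
options.image_width = 1920;
options.image_height = 1080;
options.polynomial_k1 = -0.1;
options.polynomial_k2 = 0.01;
options.polynomial_k3 = 0.0;

libmv::CameraIntrinsics* intrinsics =
    libmv_cameraIntrinsicsCreateFromOptions(&options);
/* ... use the intrinsics, then release them the way the reconstruction code
 * does, via LIBMV_OBJECT_DELETE(intrinsics, CameraIntrinsics). */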


@@ -56,10 +56,10 @@ typedef struct libmv_CameraIntrinsicsOptions {
double brown_p1, brown_p2;
} libmv_CameraIntrinsicsOptions;
libmv_CameraIntrinsics *libmv_cameraIntrinsicsNew(
libmv_CameraIntrinsics* libmv_cameraIntrinsicsNew(
const libmv_CameraIntrinsicsOptions* libmv_camera_intrinsics_options);
libmv_CameraIntrinsics *libmv_cameraIntrinsicsCopy(
libmv_CameraIntrinsics* libmv_cameraIntrinsicsCopy(
const libmv_CameraIntrinsics* libmv_intrinsics);
void libmv_cameraIntrinsicsDestroy(libmv_CameraIntrinsics* libmv_intrinsics);
@@ -76,7 +76,7 @@ void libmv_cameraIntrinsicsExtractOptions(
void libmv_cameraIntrinsicsUndistortByte(
const libmv_CameraIntrinsics* libmv_intrinsics,
const unsigned char *source_image,
const unsigned char* source_image,
int width,
int height,
float overscan,
@@ -94,12 +94,12 @@ void libmv_cameraIntrinsicsUndistortFloat(
void libmv_cameraIntrinsicsDistortByte(
const struct libmv_CameraIntrinsics* libmv_intrinsics,
const unsigned char *source_image,
const unsigned char* source_image,
int width,
int height,
float overscan,
int channels,
unsigned char *destination_image);
unsigned char* destination_image);
void libmv_cameraIntrinsicsDistortFloat(
const libmv_CameraIntrinsics* libmv_intrinsics,
@@ -131,7 +131,7 @@ void libmv_cameraIntrinsicsInvert(
#ifdef __cplusplus
namespace libmv {
class CameraIntrinsics;
class CameraIntrinsics;
}
libmv::CameraIntrinsics* libmv_cameraIntrinsicsCreateFromOptions(


@@ -34,7 +34,7 @@ struct libmv_Features {
namespace {
libmv_Features *libmv_featuresFromVector(
libmv_Features* libmv_featuresFromVector(
const libmv::vector<Feature>& features) {
libmv_Features* libmv_features = LIBMV_STRUCT_NEW(libmv_Features, 1);
int count = features.size();
@@ -50,12 +50,12 @@ libmv_Features *libmv_featuresFromVector(
return libmv_features;
}
void libmv_convertDetectorOptions(libmv_DetectOptions *options,
DetectOptions *detector_options) {
void libmv_convertDetectorOptions(libmv_DetectOptions* options,
DetectOptions* detector_options) {
switch (options->detector) {
#define LIBMV_CONVERT(the_detector) \
case LIBMV_DETECTOR_ ## the_detector: \
detector_options->type = DetectOptions::the_detector; \
#define LIBMV_CONVERT(the_detector) \
case LIBMV_DETECTOR_##the_detector: \
detector_options->type = DetectOptions::the_detector; \
break;
LIBMV_CONVERT(FAST)
LIBMV_CONVERT(MORAVEC)
@@ -72,7 +72,7 @@ void libmv_convertDetectorOptions(libmv_DetectOptions *options,
} // namespace
libmv_Features *libmv_detectFeaturesByte(const unsigned char* image_buffer,
libmv_Features* libmv_detectFeaturesByte(const unsigned char* image_buffer,
int width,
int height,
int channels,
@@ -133,7 +133,7 @@ void libmv_getFeature(const libmv_Features* libmv_features,
double* y,
double* score,
double* size) {
Feature &feature = libmv_features->features[number];
Feature& feature = libmv_features->features[number];
*x = feature.x;
*y = feature.y;
*score = feature.score;


@@ -38,7 +38,7 @@ typedef struct libmv_DetectOptions {
int min_distance;
int fast_min_trackness;
int moravec_max_count;
unsigned char *moravec_pattern;
unsigned char* moravec_pattern;
double harris_threshold;
} libmv_DetectOptions;
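
libmv_DetectOptions above is translated into libmv's DetectOptions by libmv_convertDetectorOptions(), with the detector chosen through the LIBMV_DETECTOR_* values expanded by the LIBMV_CONVERT macro. A rough sketch of byte-buffer detection, assuming image_buffer, width and height come from the caller and that LIBMV_DETECTOR_FAST is the value implied by LIBMV_CONVERT(FAST):

/* Sketch: FAST detection on a single-channel byte image. Detector field and
 * value names are assumptions based on the LIBMV_CONVERT(FAST) expansion. */
libmv_DetectOptions options = {};
options.detector = LIBMV_DETECTOR_FAST;
options.min_distance = 120;
options.fast_min_trackness = 128;

libmv_Features* features =
    libmv_detectFeaturesByte(image_buffer, width, height, 1, &options);
for (int i = 0; i < libmv_countFeatures(features); i++) {
  double x, y, score, size;
  libmv_getFeature(features, i, &x, &y, &score, &size);
  /* ... use the feature ... */
}
libmv_featuresDestroy(features);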


@@ -36,20 +36,18 @@ struct LibmvFrameAccessor : public FrameAccessor {
libmv_ReleaseImageCallback release_image_callback,
libmv_GetMaskForTrackCallback get_mask_for_track_callback,
libmv_ReleaseMaskCallback release_mask_callback)
: user_data_(user_data),
get_image_callback_(get_image_callback),
release_image_callback_(release_image_callback),
get_mask_for_track_callback_(get_mask_for_track_callback),
release_mask_callback_(release_mask_callback) { }
: user_data_(user_data),
get_image_callback_(get_image_callback),
release_image_callback_(release_image_callback),
get_mask_for_track_callback_(get_mask_for_track_callback),
release_mask_callback_(release_mask_callback) {}
virtual ~LibmvFrameAccessor() {
}
virtual ~LibmvFrameAccessor() {}
libmv_InputMode get_libmv_input_mode(InputMode input_mode) {
switch (input_mode) {
#define CHECK_INPUT_MODE(mode) \
case mode: \
return LIBMV_IMAGE_MODE_ ## mode;
#define CHECK_INPUT_MODE(mode) \
case mode: return LIBMV_IMAGE_MODE_##mode;
CHECK_INPUT_MODE(MONO)
CHECK_INPUT_MODE(RGBA)
#undef CHECK_INPUT_MODE
@@ -59,8 +57,7 @@ struct LibmvFrameAccessor : public FrameAccessor {
return LIBMV_IMAGE_MODE_MONO;
}
void get_libmv_region(const Region& region,
libmv_Region* libmv_region) {
void get_libmv_region(const Region& region, libmv_Region* libmv_region) {
libmv_region->min[0] = region.min(0);
libmv_region->min[1] = region.min(1);
libmv_region->max[0] = region.max(0);
@@ -74,7 +71,7 @@ struct LibmvFrameAccessor : public FrameAccessor {
const Region* region,
const Transform* transform,
FloatImage* destination) {
float *float_buffer;
float* float_buffer;
int width, height, channels;
libmv_Region libmv_region;
if (region) {
@@ -86,46 +83,41 @@ struct LibmvFrameAccessor : public FrameAccessor {
get_libmv_input_mode(input_mode),
downscale,
region != NULL ? &libmv_region : NULL,
(libmv_FrameTransform*) transform,
(libmv_FrameTransform*)transform,
&float_buffer,
&width,
&height,
&channels);
// TODO(sergey): Dumb code for until we can set data directly.
FloatImage temp_image(float_buffer,
height,
width,
channels);
FloatImage temp_image(float_buffer, height, width, channels);
destination->CopyFrom(temp_image);
return cache_key;
}
void ReleaseImage(Key cache_key) {
release_image_callback_(cache_key);
}
void ReleaseImage(Key cache_key) { release_image_callback_(cache_key); }
Key GetMaskForTrack(int clip,
int frame,
int track,
const Region* region,
FloatImage* destination) {
float *float_buffer;
float* float_buffer;
int width, height;
libmv_Region libmv_region;
if (region) {
get_libmv_region(*region, &libmv_region);
}
Key cache_key = get_mask_for_track_callback_(
user_data_,
clip,
frame,
track,
region != NULL ? &libmv_region : NULL,
&float_buffer,
&width,
&height);
Key cache_key =
get_mask_for_track_callback_(user_data_,
clip,
frame,
track,
region != NULL ? &libmv_region : NULL,
&float_buffer,
&width,
&height);
if (cache_key == NULL) {
// No mask for the given track.
@@ -133,30 +125,21 @@ struct LibmvFrameAccessor : public FrameAccessor {
}
// TODO(sergey): Dumb code for until we can set data directly.
FloatImage temp_image(float_buffer,
height,
width,
1);
FloatImage temp_image(float_buffer, height, width, 1);
destination->CopyFrom(temp_image);
return cache_key;
}
void ReleaseMask(Key key) {
release_mask_callback_(key);
}
void ReleaseMask(Key key) { release_mask_callback_(key); }
bool GetClipDimensions(int /*clip*/, int * /*width*/, int * /*height*/) {
bool GetClipDimensions(int /*clip*/, int* /*width*/, int* /*height*/) {
return false;
}
int NumClips() {
return 1;
}
int NumClips() { return 1; }
int NumFrames(int /*clip*/) {
return 0;
}
int NumFrames(int /*clip*/) { return 0; }
libmv_FrameAccessorUserData* user_data_;
libmv_GetImageCallback get_image_callback_;
@@ -173,35 +156,35 @@ libmv_FrameAccessor* libmv_FrameAccessorNew(
libmv_ReleaseImageCallback release_image_callback,
libmv_GetMaskForTrackCallback get_mask_for_track_callback,
libmv_ReleaseMaskCallback release_mask_callback) {
return (libmv_FrameAccessor*) LIBMV_OBJECT_NEW(LibmvFrameAccessor,
user_data,
get_image_callback,
release_image_callback,
get_mask_for_track_callback,
release_mask_callback);
return (libmv_FrameAccessor*)LIBMV_OBJECT_NEW(LibmvFrameAccessor,
user_data,
get_image_callback,
release_image_callback,
get_mask_for_track_callback,
release_mask_callback);
}
void libmv_FrameAccessorDestroy(libmv_FrameAccessor* frame_accessor) {
LIBMV_OBJECT_DELETE(frame_accessor, LibmvFrameAccessor);
}
int64_t libmv_frameAccessorgetTransformKey(const libmv_FrameTransform *transform) {
return ((FrameAccessor::Transform*) transform)->key();
int64_t libmv_frameAccessorgetTransformKey(
const libmv_FrameTransform* transform) {
return ((FrameAccessor::Transform*)transform)->key();
}
void libmv_frameAccessorgetTransformRun(const libmv_FrameTransform *transform,
const libmv_FloatImage *input_image,
libmv_FloatImage *output_image) {
void libmv_frameAccessorgetTransformRun(const libmv_FrameTransform* transform,
const libmv_FloatImage* input_image,
libmv_FloatImage* output_image) {
const FloatImage input(input_image->buffer,
input_image->height,
input_image->width,
input_image->channels);
FloatImage output;
((FrameAccessor::Transform*) transform)->run(input,
&output);
((FrameAccessor::Transform*)transform)->run(input, &output);
int num_pixels = output.Width() *output.Height() * output.Depth();
int num_pixels = output.Width() * output.Height() * output.Depth();
output_image->buffer = new float[num_pixels];
memcpy(output_image->buffer, output.Data(), num_pixels * sizeof(float));
output_image->width = output.Width();


@@ -32,14 +32,14 @@ extern "C" {
typedef struct libmv_FrameAccessor libmv_FrameAccessor;
typedef struct libmv_FrameTransform libmv_FrameTransform;
typedef struct libmv_FrameAccessorUserData libmv_FrameAccessorUserData;
typedef void *libmv_CacheKey;
typedef void* libmv_CacheKey;
typedef enum {
LIBMV_IMAGE_MODE_MONO,
LIBMV_IMAGE_MODE_RGBA,
} libmv_InputMode;
typedef libmv_CacheKey (*libmv_GetImageCallback) (
typedef libmv_CacheKey (*libmv_GetImageCallback)(
libmv_FrameAccessorUserData* user_data,
int clip,
int frame,
@@ -52,9 +52,9 @@ typedef libmv_CacheKey (*libmv_GetImageCallback) (
int* height,
int* channels);
typedef void (*libmv_ReleaseImageCallback) (libmv_CacheKey cache_key);
typedef void (*libmv_ReleaseImageCallback)(libmv_CacheKey cache_key);
typedef libmv_CacheKey (*libmv_GetMaskForTrackCallback) (
typedef libmv_CacheKey (*libmv_GetMaskForTrackCallback)(
libmv_FrameAccessorUserData* user_data,
int clip,
int frame,
@@ -63,7 +63,7 @@ typedef libmv_CacheKey (*libmv_GetMaskForTrackCallback) (
float** destination,
int* width,
int* height);
typedef void (*libmv_ReleaseMaskCallback) (libmv_CacheKey cache_key);
typedef void (*libmv_ReleaseMaskCallback)(libmv_CacheKey cache_key);
libmv_FrameAccessor* libmv_FrameAccessorNew(
libmv_FrameAccessorUserData* user_data,
@@ -73,11 +73,12 @@ libmv_FrameAccessor* libmv_FrameAccessorNew(
libmv_ReleaseMaskCallback release_mask_callback);
void libmv_FrameAccessorDestroy(libmv_FrameAccessor* frame_accessor);
int64_t libmv_frameAccessorgetTransformKey(const libmv_FrameTransform *transform);
int64_t libmv_frameAccessorgetTransformKey(
const libmv_FrameTransform* transform);
void libmv_frameAccessorgetTransformRun(const libmv_FrameTransform *transform,
const libmv_FloatImage *input_image,
libmv_FloatImage *output_image);
void libmv_frameAccessorgetTransformRun(const libmv_FrameTransform* transform,
const libmv_FloatImage* input_image,
libmv_FloatImage* output_image);
#ifdef __cplusplus
}
#endif


@@ -41,10 +41,8 @@ void libmv_homography2DFromCorrespondencesEuc(/* const */ double (*x1)[2],
LG << "x2: " << x2_mat;
libmv::EstimateHomographyOptions options;
libmv::EstimateHomography2DFromCorrespondences(x1_mat,
x2_mat,
options,
&H_mat);
libmv::EstimateHomography2DFromCorrespondences(
x1_mat, x2_mat, options, &H_mat);
LG << "H: " << H_mat;


@@ -21,14 +21,14 @@
#include "intern/utildefines.h"
#include "libmv/tracking/track_region.h"
#include <cassert>
#include <png.h>
#include <cassert>
using libmv::FloatImage;
using libmv::SamplePlanarPatch;
void libmv_floatImageDestroy(libmv_FloatImage *image) {
delete [] image->buffer;
void libmv_floatImageDestroy(libmv_FloatImage* image) {
delete[] image->buffer;
}
/* Image <-> buffers conversion */
@@ -63,8 +63,7 @@ void libmv_floatBufferToFloatImage(const float* buffer,
}
}
void libmv_floatImageToFloatBuffer(const FloatImage &image,
float* buffer) {
void libmv_floatImageToFloatBuffer(const FloatImage& image, float* buffer) {
for (int y = 0, a = 0; y < image.Height(); y++) {
for (int x = 0; x < image.Width(); x++) {
for (int k = 0; k < image.Depth(); k++) {
@@ -74,9 +73,9 @@ void libmv_floatImageToFloatBuffer(const FloatImage &image,
}
}
void libmv_floatImageToByteBuffer(const libmv::FloatImage &image,
void libmv_floatImageToByteBuffer(const libmv::FloatImage& image,
unsigned char* buffer) {
for (int y = 0, a= 0; y < image.Height(); y++) {
for (int y = 0, a = 0; y < image.Height(); y++) {
for (int x = 0; x < image.Width(); x++) {
for (int k = 0; k < image.Depth(); k++) {
buffer[a++] = image(y, x, k) * 255.0f;
@@ -93,7 +92,7 @@ static bool savePNGImage(png_bytep* row_pointers,
const char* file_name) {
png_infop info_ptr;
png_structp png_ptr;
FILE *fp = fopen(file_name, "wb");
FILE* fp = fopen(file_name, "wb");
if (fp == NULL) {
return false;
@@ -153,7 +152,7 @@ bool libmv_saveImage(const FloatImage& image,
int x0,
int y0) {
int x, y;
png_bytep *row_pointers;
png_bytep* row_pointers;
assert(image.Depth() == 1);
@@ -180,9 +179,8 @@ bool libmv_saveImage(const FloatImage& image,
static int image_counter = 0;
char file_name[128];
snprintf(file_name, sizeof(file_name),
"%s_%02d.png",
prefix, ++image_counter);
snprintf(
file_name, sizeof(file_name), "%s_%02d.png", prefix, ++image_counter);
bool result = savePNGImage(row_pointers,
image.Width(),
image.Height(),
@@ -191,9 +189,9 @@ bool libmv_saveImage(const FloatImage& image,
file_name);
for (y = 0; y < image.Height(); y++) {
delete [] row_pointers[y];
delete[] row_pointers[y];
}
delete [] row_pointers;
delete[] row_pointers;
return result;
}
@@ -211,7 +209,7 @@ void libmv_samplePlanarPatchFloat(const float* image,
double* warped_position_x,
double* warped_position_y) {
FloatImage libmv_image, libmv_patch, libmv_mask;
FloatImage *libmv_mask_for_sample = NULL;
FloatImage* libmv_mask_for_sample = NULL;
libmv_floatBufferToFloatImage(image, width, height, channels, &libmv_image);
@@ -221,8 +219,10 @@ void libmv_samplePlanarPatchFloat(const float* image,
}
SamplePlanarPatch(libmv_image,
xs, ys,
num_samples_x, num_samples_y,
xs,
ys,
num_samples_x,
num_samples_y,
libmv_mask_for_sample,
&libmv_patch,
warped_position_x,
@@ -232,19 +232,19 @@ void libmv_samplePlanarPatchFloat(const float* image,
}
void libmv_samplePlanarPatchByte(const unsigned char* image,
int width,
int height,
int channels,
const double* xs,
const double* ys,
int num_samples_x,
int num_samples_y,
const float* mask,
unsigned char* patch,
double* warped_position_x,
double* warped_position_y) {
libmv::FloatImage libmv_image, libmv_patch, libmv_mask;
libmv::FloatImage *libmv_mask_for_sample = NULL;
libmv::FloatImage* libmv_mask_for_sample = NULL;
libmv_byteBufferToFloatImage(image, width, height, channels, &libmv_image);
@@ -254,8 +254,10 @@ void libmv_samplePlanarPatchByte(const unsigned char* image,
}
libmv::SamplePlanarPatch(libmv_image,
xs, ys,
num_samples_x, num_samples_y,
xs,
ys,
num_samples_x,
num_samples_y,
libmv_mask_for_sample,
&libmv_patch,
warped_position_x,


@@ -35,7 +35,7 @@ void libmv_floatBufferToFloatImage(const float* buffer,
libmv::FloatImage* image);
void libmv_floatImageToFloatBuffer(const libmv::FloatImage& image,
float *buffer);
float* buffer);
void libmv_floatImageToByteBuffer(const libmv::FloatImage& image,
unsigned char* buffer);
@@ -51,13 +51,13 @@ extern "C" {
#endif
typedef struct libmv_FloatImage {
float *buffer;
float* buffer;
int width;
int height;
int channels;
} libmv_FloatImage;
void libmv_floatImageDestroy(libmv_FloatImage *image);
void libmv_floatImageDestroy(libmv_FloatImage* image);
void libmv_samplePlanarPatchFloat(const float* image,
int width,
@@ -72,18 +72,18 @@ void libmv_samplePlanarPatchFloat(const float* image,
double* warped_position_x,
double* warped_position_y);
void libmv_samplePlanarPatchByte(const unsigned char* image,
int width,
int height,
int channels,
const double* xs,
const double* ys,
int num_samples_x,
int num_samples_y,
const float* mask,
unsigned char* patch,
double* warped_position_x,
double* warped_position_y);
#ifdef __cplusplus
}
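
The conversion helpers in these image files walk buffers in y, then x, then channel order, so a libmv_FloatImage wraps a flat, interleaved, row-major buffer of width * height * channels floats. A small sketch of the implied addressing; the helper name here is hypothetical:

/* Index of channel k of pixel (x, y) in a flat buffer laid out the way the
 * loops in libmv_floatBufferToFloatImage/libmv_floatImageToFloatBuffer read it. */
static inline int example_pixel_index(int width, int channels, int x, int y, int k) {
  return (y * width + x) * channels + k;
}
/* e.g. image.buffer[example_pixel_index(image.width, image.channels, x, y, 0)] */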


@@ -24,8 +24,8 @@
#include "libmv/logging/logging.h"
#include "libmv/simple_pipeline/bundle.h"
#include "libmv/simple_pipeline/keyframe_selection.h"
#include "libmv/simple_pipeline/initialize_reconstruction.h"
#include "libmv/simple_pipeline/keyframe_selection.h"
#include "libmv/simple_pipeline/modal_solver.h"
#include "libmv/simple_pipeline/pipeline.h"
#include "libmv/simple_pipeline/reconstruction_scale.h"
@@ -39,19 +39,19 @@ using libmv::EuclideanScaleToUnity;
using libmv::Marker;
using libmv::ProgressUpdateCallback;
using libmv::PolynomialCameraIntrinsics;
using libmv::Tracks;
using libmv::EuclideanBundle;
using libmv::EuclideanCompleteReconstruction;
using libmv::EuclideanReconstructTwoFrames;
using libmv::EuclideanReprojectionError;
using libmv::PolynomialCameraIntrinsics;
using libmv::Tracks;
struct libmv_Reconstruction {
EuclideanReconstruction reconstruction;
/* Used for per-track average error calculation after reconstruction */
Tracks tracks;
CameraIntrinsics *intrinsics;
CameraIntrinsics* intrinsics;
double error;
bool is_valid;
@@ -63,7 +63,7 @@ class ReconstructUpdateCallback : public ProgressUpdateCallback {
public:
ReconstructUpdateCallback(
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata) {
void* callback_customdata) {
progress_update_callback_ = progress_update_callback;
callback_customdata_ = callback_customdata;
}
@@ -73,13 +73,14 @@ class ReconstructUpdateCallback : public ProgressUpdateCallback {
progress_update_callback_(callback_customdata_, progress, message);
}
}
protected:
reconstruct_progress_update_cb progress_update_callback_;
void* callback_customdata_;
};
void libmv_solveRefineIntrinsics(
const Tracks &tracks,
const Tracks& tracks,
const int refine_intrinsics,
const int bundle_constraints,
reconstruct_progress_update_cb progress_update_callback,
@@ -96,11 +97,11 @@ void libmv_solveRefineIntrinsics(
bundle_intrinsics |= libmv::BUNDLE_PRINCIPAL_POINT;
}
#define SET_DISTORTION_FLAG_CHECKED(type, coefficient) \
do { \
if (refine_intrinsics & LIBMV_REFINE_ ## type ##_DISTORTION_ ## coefficient) { \
bundle_intrinsics |= libmv::BUNDLE_ ## type ## _ ## coefficient; \
} \
#define SET_DISTORTION_FLAG_CHECKED(type, coefficient) \
do { \
if (refine_intrinsics & LIBMV_REFINE_##type##_DISTORTION_##coefficient) { \
bundle_intrinsics |= libmv::BUNDLE_##type##_##coefficient; \
} \
} while (0)
SET_DISTORTION_FLAG_CHECKED(RADIAL, K1);
@@ -123,20 +124,19 @@ void libmv_solveRefineIntrinsics(
}
void finishReconstruction(
const Tracks &tracks,
const CameraIntrinsics &camera_intrinsics,
libmv_Reconstruction *libmv_reconstruction,
const Tracks& tracks,
const CameraIntrinsics& camera_intrinsics,
libmv_Reconstruction* libmv_reconstruction,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata) {
EuclideanReconstruction &reconstruction =
libmv_reconstruction->reconstruction;
void* callback_customdata) {
EuclideanReconstruction& reconstruction =
libmv_reconstruction->reconstruction;
/* Reprojection error calculation. */
progress_update_callback(callback_customdata, 1.0, "Finishing solution");
libmv_reconstruction->tracks = tracks;
libmv_reconstruction->error = EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
libmv_reconstruction->error =
EuclideanReprojectionError(tracks, reconstruction, camera_intrinsics);
}
bool selectTwoKeyframesBasedOnGRICAndVariance(
@@ -148,9 +148,8 @@ bool selectTwoKeyframesBasedOnGRICAndVariance(
libmv::vector<int> keyframes;
/* Get list of all keyframe candidates first. */
SelectKeyframesBasedOnGRICAndVariance(normalized_tracks,
camera_intrinsics,
keyframes);
SelectKeyframesBasedOnGRICAndVariance(
normalized_tracks, camera_intrinsics, keyframes);
if (keyframes.size() < 2) {
LG << "Not enough keyframes detected by GRIC";
@@ -175,24 +174,20 @@ bool selectTwoKeyframesBasedOnGRICAndVariance(
EuclideanReconstruction reconstruction;
int current_keyframe = keyframes[i];
libmv::vector<Marker> keyframe_markers =
normalized_tracks.MarkersForTracksInBothImages(previous_keyframe,
current_keyframe);
normalized_tracks.MarkersForTracksInBothImages(previous_keyframe,
current_keyframe);
Tracks keyframe_tracks(keyframe_markers);
/* get a solution from two keyframes only */
EuclideanReconstructTwoFrames(keyframe_markers, &reconstruction);
EuclideanBundle(keyframe_tracks, &reconstruction);
EuclideanCompleteReconstruction(keyframe_tracks,
&reconstruction,
NULL);
EuclideanCompleteReconstruction(keyframe_tracks, &reconstruction, NULL);
double current_error = EuclideanReprojectionError(tracks,
reconstruction,
camera_intrinsics);
double current_error =
EuclideanReprojectionError(tracks, reconstruction, camera_intrinsics);
LG << "Error between " << previous_keyframe
<< " and " << current_keyframe
LG << "Error between " << previous_keyframe << " and " << current_keyframe
<< ": " << current_error;
if (current_error < best_error) {
@@ -214,53 +209,49 @@ Marker libmv_projectMarker(const EuclideanPoint& point,
projected /= projected(2);
libmv::Marker reprojected_marker;
intrinsics.ApplyIntrinsics(projected(0), projected(1),
&reprojected_marker.x,
&reprojected_marker.y);
intrinsics.ApplyIntrinsics(
projected(0), projected(1), &reprojected_marker.x, &reprojected_marker.y);
reprojected_marker.image = camera.image;
reprojected_marker.track = point.track;
return reprojected_marker;
}
void libmv_getNormalizedTracks(const Tracks &tracks,
const CameraIntrinsics &camera_intrinsics,
Tracks *normalized_tracks) {
void libmv_getNormalizedTracks(const Tracks& tracks,
const CameraIntrinsics& camera_intrinsics,
Tracks* normalized_tracks) {
libmv::vector<Marker> markers = tracks.AllMarkers();
for (int i = 0; i < markers.size(); ++i) {
Marker &marker = markers[i];
camera_intrinsics.InvertIntrinsics(marker.x, marker.y,
&marker.x, &marker.y);
normalized_tracks->Insert(marker.image,
marker.track,
marker.x, marker.y,
marker.weight);
Marker& marker = markers[i];
camera_intrinsics.InvertIntrinsics(
marker.x, marker.y, &marker.x, &marker.y);
normalized_tracks->Insert(
marker.image, marker.track, marker.x, marker.y, marker.weight);
}
}
} // namespace
libmv_Reconstruction *libmv_solveReconstruction(
libmv_Reconstruction* libmv_solveReconstruction(
const libmv_Tracks* libmv_tracks,
const libmv_CameraIntrinsicsOptions* libmv_camera_intrinsics_options,
libmv_ReconstructionOptions* libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void* callback_customdata) {
libmv_Reconstruction *libmv_reconstruction =
LIBMV_OBJECT_NEW(libmv_Reconstruction);
libmv_Reconstruction* libmv_reconstruction =
LIBMV_OBJECT_NEW(libmv_Reconstruction);
Tracks &tracks = *((Tracks *) libmv_tracks);
EuclideanReconstruction &reconstruction =
libmv_reconstruction->reconstruction;
Tracks& tracks = *((Tracks*)libmv_tracks);
EuclideanReconstruction& reconstruction =
libmv_reconstruction->reconstruction;
ReconstructUpdateCallback update_callback =
ReconstructUpdateCallback(progress_update_callback,
callback_customdata);
ReconstructUpdateCallback(progress_update_callback, callback_customdata);
/* Retrieve reconstruction options from C-API to libmv API. */
CameraIntrinsics *camera_intrinsics;
CameraIntrinsics* camera_intrinsics;
camera_intrinsics = libmv_reconstruction->intrinsics =
libmv_cameraIntrinsicsCreateFromOptions(libmv_camera_intrinsics_options);
libmv_cameraIntrinsicsCreateFromOptions(libmv_camera_intrinsics_options);
/* Invert the camera intrinsics. */
Tracks normalized_tracks;
@@ -276,10 +267,10 @@ libmv_Reconstruction *libmv_solveReconstruction(
update_callback.invoke(0, "Selecting keyframes");
if (selectTwoKeyframesBasedOnGRICAndVariance(tracks,
normalized_tracks,
*camera_intrinsics,
keyframe1,
keyframe2)) {
normalized_tracks,
*camera_intrinsics,
keyframe1,
keyframe2)) {
/* so keyframes in the interface would be updated */
libmv_reconstruction_options->keyframe1 = keyframe1;
libmv_reconstruction_options->keyframe2 = keyframe2;
@@ -290,7 +281,7 @@ libmv_Reconstruction *libmv_solveReconstruction(
LG << "frames to init from: " << keyframe1 << " " << keyframe2;
libmv::vector<Marker> keyframe_markers =
normalized_tracks.MarkersForTracksInBothImages(keyframe1, keyframe2);
normalized_tracks.MarkersForTracksInBothImages(keyframe1, keyframe2);
LG << "number of markers for init: " << keyframe_markers.size();
@@ -309,14 +300,12 @@ libmv_Reconstruction *libmv_solveReconstruction(
}
EuclideanBundle(normalized_tracks, &reconstruction);
EuclideanCompleteReconstruction(normalized_tracks,
&reconstruction,
&update_callback);
EuclideanCompleteReconstruction(
normalized_tracks, &reconstruction, &update_callback);
/* Refinement. */
if (libmv_reconstruction_options->refine_intrinsics) {
libmv_solveRefineIntrinsics(
tracks,
libmv_solveRefineIntrinsics(tracks,
libmv_reconstruction_options->refine_intrinsics,
libmv::BUNDLE_NO_CONSTRAINTS,
progress_update_callback,
@@ -336,31 +325,29 @@ libmv_Reconstruction *libmv_solveReconstruction(
callback_customdata);
libmv_reconstruction->is_valid = true;
return (libmv_Reconstruction *) libmv_reconstruction;
return (libmv_Reconstruction*)libmv_reconstruction;
}
libmv_Reconstruction *libmv_solveModal(
const libmv_Tracks *libmv_tracks,
const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
const libmv_ReconstructionOptions *libmv_reconstruction_options,
libmv_Reconstruction* libmv_solveModal(
const libmv_Tracks* libmv_tracks,
const libmv_CameraIntrinsicsOptions* libmv_camera_intrinsics_options,
const libmv_ReconstructionOptions* libmv_reconstruction_options,
reconstruct_progress_update_cb progress_update_callback,
void *callback_customdata) {
libmv_Reconstruction *libmv_reconstruction =
LIBMV_OBJECT_NEW(libmv_Reconstruction);
void* callback_customdata) {
libmv_Reconstruction* libmv_reconstruction =
LIBMV_OBJECT_NEW(libmv_Reconstruction);
Tracks &tracks = *((Tracks *) libmv_tracks);
EuclideanReconstruction &reconstruction =
libmv_reconstruction->reconstruction;
Tracks& tracks = *((Tracks*)libmv_tracks);
EuclideanReconstruction& reconstruction =
libmv_reconstruction->reconstruction;
ReconstructUpdateCallback update_callback =
ReconstructUpdateCallback(progress_update_callback,
callback_customdata);
ReconstructUpdateCallback(progress_update_callback, callback_customdata);
/* Retrieve reconstruction options from C-API to libmv API. */
CameraIntrinsics *camera_intrinsics;
CameraIntrinsics* camera_intrinsics;
camera_intrinsics = libmv_reconstruction->intrinsics =
libmv_cameraIntrinsicsCreateFromOptions(
libmv_camera_intrinsics_options);
libmv_cameraIntrinsicsCreateFromOptions(libmv_camera_intrinsics_options);
/* Invert the camera intrinsics. */
Tracks normalized_tracks;
@@ -378,11 +365,11 @@ libmv_Reconstruction *libmv_solveModal(
/* Refinement. */
if (libmv_reconstruction_options->refine_intrinsics) {
libmv_solveRefineIntrinsics(
tracks,
libmv_solveRefineIntrinsics(tracks,
libmv_reconstruction_options->refine_intrinsics,
libmv::BUNDLE_NO_TRANSLATION,
progress_update_callback, callback_customdata,
progress_update_callback,
callback_customdata,
&reconstruction,
camera_intrinsics);
}
@@ -395,26 +382,25 @@ libmv_Reconstruction *libmv_solveModal(
callback_customdata);
libmv_reconstruction->is_valid = true;
return (libmv_Reconstruction *) libmv_reconstruction;
return (libmv_Reconstruction*)libmv_reconstruction;
}
int libmv_reconstructionIsValid(libmv_Reconstruction *libmv_reconstruction) {
int libmv_reconstructionIsValid(libmv_Reconstruction* libmv_reconstruction) {
return libmv_reconstruction->is_valid;
}
void libmv_reconstructionDestroy(libmv_Reconstruction *libmv_reconstruction) {
void libmv_reconstructionDestroy(libmv_Reconstruction* libmv_reconstruction) {
LIBMV_OBJECT_DELETE(libmv_reconstruction->intrinsics, CameraIntrinsics);
LIBMV_OBJECT_DELETE(libmv_reconstruction, libmv_Reconstruction);
}
int libmv_reprojectionPointForTrack(
const libmv_Reconstruction *libmv_reconstruction,
const libmv_Reconstruction* libmv_reconstruction,
int track,
double pos[3]) {
const EuclideanReconstruction *reconstruction =
&libmv_reconstruction->reconstruction;
const EuclideanPoint *point =
reconstruction->PointForTrack(track);
const EuclideanReconstruction* reconstruction =
&libmv_reconstruction->reconstruction;
const EuclideanPoint* point = reconstruction->PointForTrack(track);
if (point) {
pos[0] = point->X[0];
pos[1] = point->X[2];
@@ -425,23 +411,22 @@ int libmv_reprojectionPointForTrack(
}
double libmv_reprojectionErrorForTrack(
const libmv_Reconstruction *libmv_reconstruction,
int track) {
const EuclideanReconstruction *reconstruction =
&libmv_reconstruction->reconstruction;
const CameraIntrinsics *intrinsics = libmv_reconstruction->intrinsics;
const libmv_Reconstruction* libmv_reconstruction, int track) {
const EuclideanReconstruction* reconstruction =
&libmv_reconstruction->reconstruction;
const CameraIntrinsics* intrinsics = libmv_reconstruction->intrinsics;
libmv::vector<Marker> markers =
libmv_reconstruction->tracks.MarkersForTrack(track);
libmv_reconstruction->tracks.MarkersForTrack(track);
int num_reprojected = 0;
double total_error = 0.0;
for (int i = 0; i < markers.size(); ++i) {
double weight = markers[i].weight;
const EuclideanCamera *camera =
reconstruction->CameraForImage(markers[i].image);
const EuclideanPoint *point =
reconstruction->PointForTrack(markers[i].track);
const EuclideanCamera* camera =
reconstruction->CameraForImage(markers[i].image);
const EuclideanPoint* point =
reconstruction->PointForTrack(markers[i].track);
if (!camera || !point || weight == 0.0) {
continue;
@@ -450,7 +435,7 @@ double libmv_reprojectionErrorForTrack(
num_reprojected++;
Marker reprojected_marker =
libmv_projectMarker(*point, *camera, *intrinsics);
libmv_projectMarker(*point, *camera, *intrinsics);
double ex = (reprojected_marker.x - markers[i].x) * weight;
double ey = (reprojected_marker.y - markers[i].y) * weight;
@@ -461,14 +446,13 @@ double libmv_reprojectionErrorForTrack(
}
double libmv_reprojectionErrorForImage(
const libmv_Reconstruction *libmv_reconstruction,
int image) {
const EuclideanReconstruction *reconstruction =
&libmv_reconstruction->reconstruction;
const CameraIntrinsics *intrinsics = libmv_reconstruction->intrinsics;
const libmv_Reconstruction* libmv_reconstruction, int image) {
const EuclideanReconstruction* reconstruction =
&libmv_reconstruction->reconstruction;
const CameraIntrinsics* intrinsics = libmv_reconstruction->intrinsics;
libmv::vector<Marker> markers =
libmv_reconstruction->tracks.MarkersInImage(image);
const EuclideanCamera *camera = reconstruction->CameraForImage(image);
libmv_reconstruction->tracks.MarkersInImage(image);
const EuclideanCamera* camera = reconstruction->CameraForImage(image);
int num_reprojected = 0;
double total_error = 0.0;
@@ -477,8 +461,8 @@ double libmv_reprojectionErrorForImage(
}
for (int i = 0; i < markers.size(); ++i) {
const EuclideanPoint *point =
reconstruction->PointForTrack(markers[i].track);
const EuclideanPoint* point =
reconstruction->PointForTrack(markers[i].track);
if (!point) {
continue;
@@ -487,7 +471,7 @@ double libmv_reprojectionErrorForImage(
num_reprojected++;
Marker reprojected_marker =
libmv_projectMarker(*point, *camera, *intrinsics);
libmv_projectMarker(*point, *camera, *intrinsics);
double ex = (reprojected_marker.x - markers[i].x) * markers[i].weight;
double ey = (reprojected_marker.y - markers[i].y) * markers[i].weight;
@@ -498,13 +482,12 @@ double libmv_reprojectionErrorForImage(
}
int libmv_reprojectionCameraForImage(
const libmv_Reconstruction *libmv_reconstruction,
const libmv_Reconstruction* libmv_reconstruction,
int image,
double mat[4][4]) {
const EuclideanReconstruction *reconstruction =
&libmv_reconstruction->reconstruction;
const EuclideanCamera *camera =
reconstruction->CameraForImage(image);
const EuclideanReconstruction* reconstruction =
&libmv_reconstruction->reconstruction;
const EuclideanCamera* camera = reconstruction->CameraForImage(image);
if (camera) {
for (int j = 0; j < 3; ++j) {
@@ -541,11 +524,11 @@ int libmv_reprojectionCameraForImage(
}
double libmv_reprojectionError(
const libmv_Reconstruction *libmv_reconstruction) {
const libmv_Reconstruction* libmv_reconstruction) {
return libmv_reconstruction->error;
}
libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(
libmv_Reconstruction *libmv_reconstruction) {
return (libmv_CameraIntrinsics *) libmv_reconstruction->intrinsics;
libmv_CameraIntrinsics* libmv_reconstructionExtractIntrinsics(
libmv_Reconstruction* libmv_reconstruction) {
return (libmv_CameraIntrinsics*)libmv_reconstruction->intrinsics;
}


@@ -31,17 +31,16 @@ struct libmv_CameraIntrinsicsOptions;
typedef struct libmv_Reconstruction libmv_Reconstruction;
enum {
LIBMV_REFINE_FOCAL_LENGTH = (1 << 0),
LIBMV_REFINE_PRINCIPAL_POINT = (1 << 1),
LIBMV_REFINE_FOCAL_LENGTH = (1 << 0),
LIBMV_REFINE_PRINCIPAL_POINT = (1 << 1),
LIBMV_REFINE_RADIAL_DISTORTION_K1 = (1 << 2),
LIBMV_REFINE_RADIAL_DISTORTION_K2 = (1 << 3),
LIBMV_REFINE_RADIAL_DISTORTION_K3 = (1 << 4),
LIBMV_REFINE_RADIAL_DISTORTION_K4 = (1 << 5),
LIBMV_REFINE_RADIAL_DISTORTION = (LIBMV_REFINE_RADIAL_DISTORTION_K1 |
LIBMV_REFINE_RADIAL_DISTORTION_K2 |
LIBMV_REFINE_RADIAL_DISTORTION_K3 |
LIBMV_REFINE_RADIAL_DISTORTION_K4),
LIBMV_REFINE_RADIAL_DISTORTION_K1 = (1 << 2),
LIBMV_REFINE_RADIAL_DISTORTION_K2 = (1 << 3),
LIBMV_REFINE_RADIAL_DISTORTION_K3 = (1 << 4),
LIBMV_REFINE_RADIAL_DISTORTION_K4 = (1 << 5),
LIBMV_REFINE_RADIAL_DISTORTION =
(LIBMV_REFINE_RADIAL_DISTORTION_K1 | LIBMV_REFINE_RADIAL_DISTORTION_K2 |
LIBMV_REFINE_RADIAL_DISTORTION_K3 | LIBMV_REFINE_RADIAL_DISTORTION_K4),
LIBMV_REFINE_TANGENTIAL_DISTORTION_P1 = (1 << 6),
LIBMV_REFINE_TANGENTIAL_DISTORTION_P2 = (1 << 7),
@@ -55,9 +54,9 @@ typedef struct libmv_ReconstructionOptions {
int refine_intrinsics;
} libmv_ReconstructionOptions;
typedef void (*reconstruct_progress_update_cb) (void* customdata,
double progress,
const char* message);
typedef void (*reconstruct_progress_update_cb)(void* customdata,
double progress,
const char* message);
libmv_Reconstruction* libmv_solveReconstruction(
const struct libmv_Tracks* libmv_tracks,
@@ -73,35 +72,32 @@ libmv_Reconstruction* libmv_solveModal(
reconstruct_progress_update_cb progress_update_callback,
void* callback_customdata);
int libmv_reconstructionIsValid(libmv_Reconstruction *libmv_reconstruction);
int libmv_reconstructionIsValid(libmv_Reconstruction* libmv_reconstruction);
void libmv_reconstructionDestroy(libmv_Reconstruction* libmv_reconstruction);
int libmv_reprojectionPointForTrack(
const libmv_Reconstruction* libmv_reconstruction,
int track,
double pos[3]);
const libmv_Reconstruction* libmv_reconstruction, int track, double pos[3]);
double libmv_reprojectionErrorForTrack(
const libmv_Reconstruction* libmv_reconstruction,
int track);
const libmv_Reconstruction* libmv_reconstruction, int track);
double libmv_reprojectionErrorForImage(
const libmv_Reconstruction* libmv_reconstruction,
int image);
const libmv_Reconstruction* libmv_reconstruction, int image);
int libmv_reprojectionCameraForImage(
const libmv_Reconstruction* libmv_reconstruction,
int image,
double mat[4][4]);
double libmv_reprojectionError(const libmv_Reconstruction* libmv_reconstruction);
double libmv_reprojectionError(
const libmv_Reconstruction* libmv_reconstruction);
struct libmv_CameraIntrinsics* libmv_reconstructionExtractIntrinsics(
libmv_Reconstruction *libmv_Reconstruction);
libmv_Reconstruction* libmv_Reconstruction);
#ifdef __cplusplus
}
#endif
#endif // LIBMV_C_API_RECONSTRUCTION_H_
#endif // LIBMV_C_API_RECONSTRUCTION_H_
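
The LIBMV_REFINE_* values above are bit flags, so a caller ORs together whatever should be bundle-adjusted and stores the mask in the refine_intrinsics field of libmv_ReconstructionOptions. A sketch of driving the solver with it, assuming tracks, camera intrinsics options and a progress callback are set up elsewhere:

/* Sketch: solve and refine focal length plus the first two radial terms.
 * tracks, camera_options, progress_cb and callback_customdata are assumed
 * to be provided by the caller. */
libmv_ReconstructionOptions options = {};
options.keyframe1 = 1;
options.keyframe2 = 30;
options.refine_intrinsics = LIBMV_REFINE_FOCAL_LENGTH |
                            LIBMV_REFINE_RADIAL_DISTORTION_K1 |
                            LIBMV_REFINE_RADIAL_DISTORTION_K2;

libmv_Reconstruction* reconstruction = libmv_solveReconstruction(
    tracks, &camera_options, &options, progress_cb, callback_customdata);
if (libmv_reconstructionIsValid(reconstruction)) {
  double error = libmv_reprojectionError(reconstruction);
  /* ... read cameras and points back via the reprojection queries ... */
}
libmv_reconstructionDestroy(reconstruction);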


@@ -24,7 +24,7 @@
/* ************ Logging ************ */
void libmv_initLogging(const char * /*argv0*/) {
void libmv_initLogging(const char* /*argv0*/) {
}
void libmv_startDebugLogging(void) {
@@ -36,18 +36,18 @@ void libmv_setLoggingVerbosity(int /*verbosity*/) {
/* ************ Planar tracker ************ */
/* TrackRegion (new planar tracker) */
int libmv_trackRegion(const libmv_TrackRegionOptions * /*options*/,
const float * /*image1*/,
int libmv_trackRegion(const libmv_TrackRegionOptions* /*options*/,
const float* /*image1*/,
int /*image1_width*/,
int /*image1_height*/,
const float * /*image2*/,
const float* /*image2*/,
int /*image2_width*/,
int /*image2_height*/,
const double *x1,
const double *y1,
libmv_TrackRegionResult *result,
double *x2,
double *y2) {
const double* x1,
const double* y1,
libmv_TrackRegionResult* result,
double* x2,
double* y2) {
/* Convert to doubles for the libmv api. The four corners and the center. */
for (int i = 0; i < 5; ++i) {
x2[i] = x1[i];
@@ -61,46 +61,46 @@ int libmv_trackRegion(const libmv_TrackRegionOptions * /*options*/,
return false;
}
void libmv_samplePlanarPatchFloat(const float * /*image*/,
void libmv_samplePlanarPatchFloat(const float* /*image*/,
int /*width*/,
int /*height*/,
int /*channels*/,
const double * /*xs*/,
const double * /*ys*/,
const double* /*xs*/,
const double* /*ys*/,
int /*num_samples_x*/,
int /*num_samples_y*/,
const float * /*mask*/,
float * /*patch*/,
double * /*warped_position_x*/,
double * /*warped_position_y*/) {
const float* /*mask*/,
float* /*patch*/,
double* /*warped_position_x*/,
double* /*warped_position_y*/) {
/* TODO(sergey): implement */
}
void libmv_samplePlanarPatchByte(const unsigned char * /*image*/,
void libmv_samplePlanarPatchByte(const unsigned char* /*image*/,
int /*width*/,
int /*height*/,
int /*channels*/,
const double * /*xs*/,
const double * /*ys*/,
int /*num_samples_x*/, int /*num_samples_y*/,
const float * /*mask*/,
unsigned char * /*patch*/,
double * /*warped_position_x*/,
double * /*warped_position_y*/) {
const double* /*xs*/,
const double* /*ys*/,
int /*num_samples_x*/,
int /*num_samples_y*/,
const float* /*mask*/,
unsigned char* /*patch*/,
double* /*warped_position_x*/,
double* /*warped_position_y*/) {
/* TODO(sergey): implement */
}
void libmv_floatImageDestroy(libmv_FloatImage* /*image*/)
{
void libmv_floatImageDestroy(libmv_FloatImage* /*image*/) {
}
/* ************ Tracks ************ */
libmv_Tracks *libmv_tracksNew(void) {
libmv_Tracks* libmv_tracksNew(void) {
return NULL;
}
void libmv_tracksInsert(libmv_Tracks * /*libmv_tracks*/,
void libmv_tracksInsert(libmv_Tracks* /*libmv_tracks*/,
int /*image*/,
int /*track*/,
double /*x*/,
@@ -108,152 +108,152 @@ void libmv_tracksInsert(libmv_Tracks * /*libmv_tracks*/,
double /*weight*/) {
}
void libmv_tracksDestroy(libmv_Tracks * /*libmv_tracks*/) {
void libmv_tracksDestroy(libmv_Tracks* /*libmv_tracks*/) {
}
/* ************ Reconstruction solver ************ */
libmv_Reconstruction *libmv_solveReconstruction(
const libmv_Tracks * /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/,
libmv_ReconstructionOptions * /*libmv_reconstruction_options*/,
libmv_Reconstruction* libmv_solveReconstruction(
const libmv_Tracks* /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions* /*libmv_camera_intrinsics_options*/,
libmv_ReconstructionOptions* /*libmv_reconstruction_options*/,
reconstruct_progress_update_cb /*progress_update_callback*/,
void * /*callback_customdata*/) {
void* /*callback_customdata*/) {
return NULL;
}
libmv_Reconstruction *libmv_solveModal(
const libmv_Tracks * /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/,
const libmv_ReconstructionOptions * /*libmv_reconstruction_options*/,
libmv_Reconstruction* libmv_solveModal(
const libmv_Tracks* /*libmv_tracks*/,
const libmv_CameraIntrinsicsOptions* /*libmv_camera_intrinsics_options*/,
const libmv_ReconstructionOptions* /*libmv_reconstruction_options*/,
reconstruct_progress_update_cb /*progress_update_callback*/,
void * /*callback_customdata*/) {
void* /*callback_customdata*/) {
return NULL;
}
int libmv_reconstructionIsValid(libmv_Reconstruction * /*libmv_reconstruction*/) {
int libmv_reconstructionIsValid(
libmv_Reconstruction* /*libmv_reconstruction*/) {
return 0;
}
int libmv_reprojectionPointForTrack(
const libmv_Reconstruction * /*libmv_reconstruction*/,
const libmv_Reconstruction* /*libmv_reconstruction*/,
int /*track*/,
double /*pos*/[3]) {
return 0;
}
double libmv_reprojectionErrorForTrack(
const libmv_Reconstruction * /*libmv_reconstruction*/,
int /*track*/) {
const libmv_Reconstruction* /*libmv_reconstruction*/, int /*track*/) {
return 0.0;
}
double libmv_reprojectionErrorForImage(
const libmv_Reconstruction * /*libmv_reconstruction*/,
int /*image*/) {
const libmv_Reconstruction* /*libmv_reconstruction*/, int /*image*/) {
return 0.0;
}
int libmv_reprojectionCameraForImage(
const libmv_Reconstruction * /*libmv_reconstruction*/,
const libmv_Reconstruction* /*libmv_reconstruction*/,
int /*image*/,
double /*mat*/[4][4]) {
return 0;
}
double libmv_reprojectionError(
const libmv_Reconstruction * /*libmv_reconstruction*/) {
const libmv_Reconstruction* /*libmv_reconstruction*/) {
return 0.0;
}
void libmv_reconstructionDestroy(
struct libmv_Reconstruction * /*libmv_reconstruction*/) {
struct libmv_Reconstruction* /*libmv_reconstruction*/) {
}
/* ************ Feature detector ************ */
libmv_Features *libmv_detectFeaturesByte(const unsigned char * /*image_buffer*/,
libmv_Features* libmv_detectFeaturesByte(const unsigned char* /*image_buffer*/,
int /*width*/,
int /*height*/,
int /*channels*/,
libmv_DetectOptions * /*options*/) {
libmv_DetectOptions* /*options*/) {
return NULL;
}
struct libmv_Features *libmv_detectFeaturesFloat(
const float * /*image_buffer*/,
struct libmv_Features* libmv_detectFeaturesFloat(
const float* /*image_buffer*/,
int /*width*/,
int /*height*/,
int /*channels*/,
libmv_DetectOptions * /*options*/) {
libmv_DetectOptions* /*options*/) {
return NULL;
}
int libmv_countFeatures(const libmv_Features * /*libmv_features*/) {
int libmv_countFeatures(const libmv_Features* /*libmv_features*/) {
return 0;
}
void libmv_getFeature(const libmv_Features * /*libmv_features*/,
void libmv_getFeature(const libmv_Features* /*libmv_features*/,
int /*number*/,
double *x,
double *y,
double *score,
double *size) {
double* x,
double* y,
double* score,
double* size) {
*x = 0.0;
*y = 0.0;
*score = 0.0;
*size = 0.0;
}
void libmv_featuresDestroy(struct libmv_Features * /*libmv_features*/) {
void libmv_featuresDestroy(struct libmv_Features* /*libmv_features*/) {
}
/* ************ Camera intrinsics ************ */
libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(
libmv_Reconstruction * /*libmv_reconstruction*/) {
libmv_CameraIntrinsics* libmv_reconstructionExtractIntrinsics(
libmv_Reconstruction* /*libmv_reconstruction*/) {
return NULL;
}
libmv_CameraIntrinsics *libmv_cameraIntrinsicsNew(
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/) {
libmv_CameraIntrinsics* libmv_cameraIntrinsicsNew(
const libmv_CameraIntrinsicsOptions* /*libmv_camera_intrinsics_options*/) {
return NULL;
}
libmv_CameraIntrinsics *libmv_cameraIntrinsicsCopy(
const libmv_CameraIntrinsics * /*libmvIntrinsics*/) {
libmv_CameraIntrinsics* libmv_cameraIntrinsicsCopy(
const libmv_CameraIntrinsics* /*libmvIntrinsics*/) {
return NULL;
}
void libmv_cameraIntrinsicsDestroy(
libmv_CameraIntrinsics * /*libmvIntrinsics*/) {
libmv_CameraIntrinsics* /*libmvIntrinsics*/) {
}
void libmv_cameraIntrinsicsUpdate(
const libmv_CameraIntrinsicsOptions * /*libmv_camera_intrinsics_options*/,
libmv_CameraIntrinsics * /*libmv_intrinsics*/) {
const libmv_CameraIntrinsicsOptions* /*libmv_camera_intrinsics_options*/,
libmv_CameraIntrinsics* /*libmv_intrinsics*/) {
}
void libmv_cameraIntrinsicsSetThreads(
libmv_CameraIntrinsics * /*libmv_intrinsics*/,
int /*threads*/) {
libmv_CameraIntrinsics* /*libmv_intrinsics*/, int /*threads*/) {
}
void libmv_cameraIntrinsicsExtractOptions(
const libmv_CameraIntrinsics * /*libmv_intrinsics*/,
libmv_CameraIntrinsicsOptions *camera_intrinsics_options) {
const libmv_CameraIntrinsics* /*libmv_intrinsics*/,
libmv_CameraIntrinsicsOptions* camera_intrinsics_options) {
memset(camera_intrinsics_options, 0, sizeof(libmv_CameraIntrinsicsOptions));
camera_intrinsics_options->focal_length = 1.0;
}
void libmv_cameraIntrinsicsUndistortByte(
const libmv_CameraIntrinsics * /*libmv_intrinsics*/,
const unsigned char *source_image,
int width, int height,
const libmv_CameraIntrinsics* /*libmv_intrinsics*/,
const unsigned char* source_image,
int width,
int height,
float /*overscan*/,
int channels,
unsigned char *destination_image) {
memcpy(destination_image, source_image,
unsigned char* destination_image) {
memcpy(destination_image,
source_image,
channels * width * height * sizeof(unsigned char));
}
@@ -265,19 +265,21 @@ void libmv_cameraIntrinsicsUndistortFloat(
float /*overscan*/,
int channels,
float* destination_image) {
memcpy(destination_image, source_image,
memcpy(destination_image,
source_image,
channels * width * height * sizeof(float));
}
void libmv_cameraIntrinsicsDistortByte(
const struct libmv_CameraIntrinsics* /*libmv_intrinsics*/,
const unsigned char *source_image,
const unsigned char* source_image,
int width,
int height,
float /*overscan*/,
int channels,
unsigned char *destination_image) {
memcpy(destination_image, source_image,
unsigned char* destination_image) {
memcpy(destination_image,
source_image,
channels * width * height * sizeof(unsigned char));
}
@@ -289,7 +291,8 @@ void libmv_cameraIntrinsicsDistortFloat(
float /*overscan*/,
int channels,
float* destination_image) {
memcpy(destination_image, source_image,
memcpy(destination_image,
source_image,
channels * width * height * sizeof(float));
}
@@ -315,8 +318,8 @@ void libmv_cameraIntrinsicsInvert(
*y1 = 0.0;
}
void libmv_homography2DFromCorrespondencesEuc(/* const */ double (* /*x1*/)[2],
/* const */ double (* /*x2*/)[2],
void libmv_homography2DFromCorrespondencesEuc(/* const */ double (*/*x1*/)[2],
/* const */ double (*/*x2*/)[2],
int /*num_points*/,
double H[3][3]) {
memset(H, 0, sizeof(double[3][3]));
@@ -327,45 +330,38 @@ void libmv_homography2DFromCorrespondencesEuc(/* const */ double (* /*x1*/)[2],
/* ************ autotrack ************ */
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor* /*frame_accessor*/)
{
libmv_AutoTrack* libmv_autoTrackNew(libmv_FrameAccessor* /*frame_accessor*/) {
return NULL;
}
void libmv_autoTrackDestroy(libmv_AutoTrack* /*libmv_autotrack*/)
{
void libmv_autoTrackDestroy(libmv_AutoTrack* /*libmv_autotrack*/) {
}
void libmv_autoTrackSetOptions(libmv_AutoTrack* /*libmv_autotrack*/,
const libmv_AutoTrackOptions* /*options*/)
{
const libmv_AutoTrackOptions* /*options*/) {
}
int libmv_autoTrackMarker(libmv_AutoTrack* /*libmv_autotrack*/,
const libmv_TrackRegionOptions* /*libmv_options*/,
libmv_Marker * /*libmv_tracker_marker*/,
libmv_TrackRegionResult* /*libmv_result*/)
{
libmv_Marker* /*libmv_tracker_marker*/,
libmv_TrackRegionResult* /*libmv_result*/) {
return 0;
}
void libmv_autoTrackAddMarker(libmv_AutoTrack* /*libmv_autotrack*/,
const libmv_Marker* /*libmv_marker*/)
{
const libmv_Marker* /*libmv_marker*/) {
}
void libmv_autoTrackSetMarkers(libmv_AutoTrack* /*libmv_autotrack*/,
const libmv_Marker* /*libmv_marker-*/,
size_t /*num_markers*/)
{
size_t /*num_markers*/) {
}
int libmv_autoTrackGetMarker(libmv_AutoTrack* /*libmv_autotrack*/,
int /*clip*/,
int /*frame*/,
int /*track*/,
libmv_Marker* /*libmv_marker*/)
{
libmv_Marker* /*libmv_marker*/) {
return 0;
}
@@ -376,24 +372,20 @@ libmv_FrameAccessor* libmv_FrameAccessorNew(
libmv_GetImageCallback /*get_image_callback*/,
libmv_ReleaseImageCallback /*release_image_callback*/,
libmv_GetMaskForTrackCallback /*get_mask_for_track_callback*/,
libmv_ReleaseMaskCallback /*release_mask_callback*/)
{
libmv_ReleaseMaskCallback /*release_mask_callback*/) {
return NULL;
}
void libmv_FrameAccessorDestroy(libmv_FrameAccessor* /*frame_accessor*/)
{
void libmv_FrameAccessorDestroy(libmv_FrameAccessor* /*frame_accessor*/) {
}
int64_t libmv_frameAccessorgetTransformKey(
const libmv_FrameTransform * /*transform*/)
{
const libmv_FrameTransform* /*transform*/) {
return 0;
}
void libmv_frameAccessorgetTransformRun(const libmv_FrameTransform* /*transform*/,
const libmv_FloatImage* /*input_image*/,
libmv_FloatImage* /*output_image*/)
{
void libmv_frameAccessorgetTransformRun(
const libmv_FrameTransform* /*transform*/,
const libmv_FloatImage* /*input_image*/,
libmv_FloatImage* /*output_image*/) {
}


@@ -32,17 +32,17 @@
#undef DUMP_ALWAYS
using libmv::FloatImage;
using libmv::TrackRegion;
using libmv::TrackRegionOptions;
using libmv::TrackRegionResult;
using libmv::TrackRegion;
void libmv_configureTrackRegionOptions(
const libmv_TrackRegionOptions& options,
TrackRegionOptions* track_region_options) {
switch (options.motion_model) {
#define LIBMV_CONVERT(the_model) \
case TrackRegionOptions::the_model: \
track_region_options->mode = TrackRegionOptions::the_model; \
#define LIBMV_CONVERT(the_model) \
case TrackRegionOptions::the_model: \
track_region_options->mode = TrackRegionOptions::the_model; \
break;
LIBMV_CONVERT(TRANSLATION)
LIBMV_CONVERT(TRANSLATION_ROTATION)
@@ -66,7 +66,8 @@ void libmv_configureTrackRegionOptions(
* so disabling for now for until proper prediction model is landed.
*
* The thing is, currently blender sends input coordinates as the guess to
* region tracker and in case of fast motion such an early out ruins the track.
* region tracker and in case of fast motion such an early out ruins the
* track.
*/
track_region_options->attempt_refine_before_brute = false;
track_region_options->use_normalized_intensities = options.use_normalization;
@@ -74,7 +75,7 @@ void libmv_configureTrackRegionOptions(
void libmv_regionTrackergetResult(const TrackRegionResult& track_region_result,
libmv_TrackRegionResult* result) {
result->termination = (int) track_region_result.termination;
result->termination = (int)track_region_result.termination;
result->termination_reason = "";
result->correlation = track_region_result.correlation;
}
@@ -108,33 +109,27 @@ int libmv_trackRegion(const libmv_TrackRegionOptions* options,
libmv_configureTrackRegionOptions(*options, &track_region_options);
if (options->image1_mask) {
libmv_floatBufferToFloatImage(options->image1_mask,
image1_width,
image1_height,
1,
&image1_mask);
libmv_floatBufferToFloatImage(
options->image1_mask, image1_width, image1_height, 1, &image1_mask);
track_region_options.image1_mask = &image1_mask;
}
// Convert from raw float buffers to libmv's FloatImage.
FloatImage old_patch, new_patch;
libmv_floatBufferToFloatImage(image1,
image1_width,
image1_height,
1,
&old_patch);
libmv_floatBufferToFloatImage(image2,
image2_width,
image2_height,
1,
&new_patch);
libmv_floatBufferToFloatImage(
image1, image1_width, image1_height, 1, &old_patch);
libmv_floatBufferToFloatImage(
image2, image2_width, image2_height, 1, &new_patch);
TrackRegionResult track_region_result;
TrackRegion(old_patch, new_patch,
xx1, yy1,
TrackRegion(old_patch,
new_patch,
xx1,
yy1,
track_region_options,
xx2, yy2,
xx2,
yy2,
&track_region_result);
// Convert to floats for the blender api.


@@ -31,7 +31,7 @@ typedef struct libmv_TrackRegionOptions {
int use_normalization;
double minimum_correlation;
double sigma;
float *image1_mask;
float* image1_mask;
} libmv_TrackRegionOptions;
typedef struct libmv_TrackRegionResult {
@@ -42,9 +42,9 @@ typedef struct libmv_TrackRegionResult {
#ifdef __cplusplus
namespace libmv {
struct TrackRegionOptions;
struct TrackRegionResult;
}
struct TrackRegionOptions;
struct TrackRegionResult;
} // namespace libmv
void libmv_configureTrackRegionOptions(
const libmv_TrackRegionOptions& options,
libmv::TrackRegionOptions* track_region_options);


@@ -28,18 +28,18 @@ using libmv::Tracks;
libmv_Tracks* libmv_tracksNew(void) {
Tracks* tracks = LIBMV_OBJECT_NEW(Tracks);
return (libmv_Tracks*) tracks;
return (libmv_Tracks*)tracks;
}
void libmv_tracksDestroy(libmv_Tracks* libmv_tracks) {
LIBMV_OBJECT_DELETE(libmv_tracks, Tracks);
}
void libmv_tracksInsert(libmv_Tracks *libmv_tracks,
void libmv_tracksInsert(libmv_Tracks* libmv_tracks,
int image,
int track,
double x,
double y,
double weight) {
((Tracks *) libmv_tracks)->Insert(image, track, x, y, weight);
((Tracks*)libmv_tracks)->Insert(image, track, x, y, weight);
}


@@ -25,8 +25,7 @@
using mv::Marker;
using mv::Tracks;
void libmv_apiMarkerToMarker(const libmv_Marker& libmv_marker,
Marker *marker) {
void libmv_apiMarkerToMarker(const libmv_Marker& libmv_marker, Marker* marker) {
marker->clip = libmv_marker.clip;
marker->frame = libmv_marker.frame;
marker->track = libmv_marker.track;
@@ -41,17 +40,16 @@ void libmv_apiMarkerToMarker(const libmv_Marker& libmv_marker,
marker->search_region.max(0) = libmv_marker.search_region_max[0];
marker->search_region.max(1) = libmv_marker.search_region_max[1];
marker->weight = libmv_marker.weight;
marker->source = (Marker::Source) libmv_marker.source;
marker->status = (Marker::Status) libmv_marker.status;
marker->source = (Marker::Source)libmv_marker.source;
marker->status = (Marker::Status)libmv_marker.status;
marker->reference_clip = libmv_marker.reference_clip;
marker->reference_frame = libmv_marker.reference_frame;
marker->model_type = (Marker::ModelType) libmv_marker.model_type;
marker->model_type = (Marker::ModelType)libmv_marker.model_type;
marker->model_id = libmv_marker.model_id;
marker->disabled_channels = libmv_marker.disabled_channels;
}
void libmv_markerToApiMarker(const Marker& marker,
libmv_Marker *libmv_marker) {
void libmv_markerToApiMarker(const Marker& marker, libmv_Marker* libmv_marker) {
libmv_marker->clip = marker.clip;
libmv_marker->frame = marker.frame;
libmv_marker->track = marker.track;
@@ -66,11 +64,11 @@ void libmv_markerToApiMarker(const Marker& marker,
libmv_marker->search_region_max[0] = marker.search_region.max(0);
libmv_marker->search_region_max[1] = marker.search_region.max(1);
libmv_marker->weight = marker.weight;
libmv_marker->source = (libmv_MarkerSource) marker.source;
libmv_marker->status = (libmv_MarkerStatus) marker.status;
libmv_marker->source = (libmv_MarkerSource)marker.source;
libmv_marker->status = (libmv_MarkerStatus)marker.status;
libmv_marker->reference_clip = marker.reference_clip;
libmv_marker->reference_frame = marker.reference_frame;
libmv_marker->model_type = (libmv_MarkerModelType) marker.model_type;
libmv_marker->model_type = (libmv_MarkerModelType)marker.model_type;
libmv_marker->model_id = marker.model_id;
libmv_marker->disabled_channels = marker.disabled_channels;
}
@@ -78,7 +76,7 @@ void libmv_markerToApiMarker(const Marker& marker,
libmv_TracksN* libmv_tracksNewN(void) {
Tracks* tracks = LIBMV_OBJECT_NEW(Tracks);
return (libmv_TracksN*) tracks;
return (libmv_TracksN*)tracks;
}
void libmv_tracksDestroyN(libmv_TracksN* libmv_tracks) {
@@ -89,7 +87,7 @@ void libmv_tracksAddMarkerN(libmv_TracksN* libmv_tracks,
const libmv_Marker* libmv_marker) {
Marker marker;
libmv_apiMarkerToMarker(*libmv_marker, &marker);
((Tracks*) libmv_tracks)->AddMarker(marker);
((Tracks*)libmv_tracks)->AddMarker(marker);
}
void libmv_tracksGetMarkerN(libmv_TracksN* libmv_tracks,
@@ -98,7 +96,7 @@ void libmv_tracksGetMarkerN(libmv_TracksN* libmv_tracks,
int track,
libmv_Marker* libmv_marker) {
Marker marker;
((Tracks*) libmv_tracks)->GetMarker(clip, frame, track, &marker);
((Tracks*)libmv_tracks)->GetMarker(clip, frame, track, &marker);
libmv_markerToApiMarker(marker, libmv_marker);
}
@@ -106,26 +104,25 @@ void libmv_tracksRemoveMarkerN(libmv_TracksN* libmv_tracks,
int clip,
int frame,
int track) {
((Tracks *) libmv_tracks)->RemoveMarker(clip, frame, track);
((Tracks*)libmv_tracks)->RemoveMarker(clip, frame, track);
}
void libmv_tracksRemoveMarkersForTrack(libmv_TracksN* libmv_tracks,
int track) {
((Tracks *) libmv_tracks)->RemoveMarkersForTrack(track);
void libmv_tracksRemoveMarkersForTrack(libmv_TracksN* libmv_tracks, int track) {
((Tracks*)libmv_tracks)->RemoveMarkersForTrack(track);
}
int libmv_tracksMaxClipN(libmv_TracksN* libmv_tracks) {
return ((Tracks*) libmv_tracks)->MaxClip();
return ((Tracks*)libmv_tracks)->MaxClip();
}
int libmv_tracksMaxFrameN(libmv_TracksN* libmv_tracks, int clip) {
return ((Tracks*) libmv_tracks)->MaxFrame(clip);
return ((Tracks*)libmv_tracks)->MaxFrame(clip);
}
int libmv_tracksMaxTrackN(libmv_TracksN* libmv_tracks) {
return ((Tracks*) libmv_tracks)->MaxTrack();
return ((Tracks*)libmv_tracks)->MaxTrack();
}
int libmv_tracksNumMarkersN(libmv_TracksN* libmv_tracks) {
return ((Tracks*) libmv_tracks)->NumMarkers();
return ((Tracks*)libmv_tracks)->NumMarkers();
}


@@ -79,20 +79,19 @@ typedef struct libmv_Marker {
#ifdef __cplusplus
namespace mv {
struct Marker;
struct Marker;
}
void libmv_apiMarkerToMarker(const libmv_Marker& libmv_marker,
mv::Marker *marker);
mv::Marker* marker);
void libmv_markerToApiMarker(const mv::Marker& marker,
libmv_Marker *libmv_marker);
libmv_Marker* libmv_marker);
#endif
libmv_TracksN* libmv_tracksNewN(void);
void libmv_tracksDestroyN(libmv_TracksN* libmv_tracks);
void libmv_tracksAddMarkerN(libmv_TracksN* libmv_tracks,
const libmv_Marker* libmv_marker);
@@ -107,8 +106,7 @@ void libmv_tracksRemoveMarkerN(libmv_TracksN* libmv_tracks,
int frame,
int track);
void libmv_tracksRemoveMarkersForTrack(libmv_TracksN* libmv_tracks,
int track);
void libmv_tracksRemoveMarkersForTrack(libmv_TracksN* libmv_tracks, int track);
int libmv_tracksMaxClipN(libmv_TracksN* libmv_tracks);
int libmv_tracksMaxFrameN(libmv_TracksN* libmv_tracks, int clip);


@@ -30,27 +30,33 @@
# define LIBMV_OBJECT_NEW OBJECT_GUARDED_NEW
# define LIBMV_OBJECT_DELETE OBJECT_GUARDED_DELETE
# define LIBMV_OBJECT_DELETE OBJECT_GUARDED_DELETE
# define LIBMV_STRUCT_NEW(type, count) \
(type*)MEM_mallocN(sizeof(type) * count, __func__)
# define LIBMV_STRUCT_NEW(type, count) \
(type*)MEM_mallocN(sizeof(type) * count, __func__)
# define LIBMV_STRUCT_DELETE(what) MEM_freeN(what)
#else
// Need this to keep libmv-capi potentially standalone.
# if defined __GNUC__ || defined __sun
# define LIBMV_OBJECT_NEW(type, args ...) \
new(malloc(sizeof(type))) type(args)
# define LIBMV_OBJECT_NEW(type, args...) \
new (malloc(sizeof(type))) type(args)
# else
# define LIBMV_OBJECT_NEW(type, ...) \
new(malloc(sizeof(type))) type(__VA_ARGS__)
#endif
# define LIBMV_OBJECT_DELETE(what, type) \
{ \
if (what) { \
((type*)(what))->~type(); \
free(what); \
} \
} (void)0
# define LIBMV_OBJECT_NEW(type, ...) \
new (malloc(sizeof(type))) type(__VA_ARGS__)
# endif
# define LIBMV_OBJECT_DELETE(what, type) \
{ \
if (what) { \
((type*)(what))->~type(); \
free(what); \
} \
} \
(void)0
# define LIBMV_STRUCT_NEW(type, count) (type*)malloc(sizeof(type) * count)
# define LIBMV_STRUCT_DELETE(what) { if (what) free(what); } (void)0
# define LIBMV_STRUCT_DELETE(what) \
{ \
if (what) \
free(what); \
} \
(void)0
#endif
#endif // LIBMV_C_API_UTILDEFINES_H_
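The standalone fallback above builds objects with placement new over malloc'd storage and tears them down with an explicit destructor call before free(), as the "keep libmv-capi potentially standalone" comment explains. A minimal self-contained sketch of that pattern (illustrative type and values, not code from the patch):

#include <cstdlib>
#include <new>

struct Example {
  explicit Example(int v) : value(v) {}
  int value;
};

int main() {
  // LIBMV_OBJECT_NEW(Example, 42) style: construct into malloc'd storage.
  void* storage = std::malloc(sizeof(Example));
  Example* object = new (storage) Example(42);

  // LIBMV_OBJECT_DELETE(object, Example) style: run the destructor, then free.
  if (object) {
    object->~Example();
    std::free(object);
  }
  return 0;
}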


@@ -21,9 +21,9 @@
// Author: mierle@gmail.com (Keir Mierle)
#include "libmv/autotrack/autotrack.h"
#include "libmv/autotrack/quad.h"
#include "libmv/autotrack/frame_accessor.h"
#include "libmv/autotrack/predict_tracks.h"
#include "libmv/autotrack/quad.h"
#include "libmv/base/scoped_ptr.h"
#include "libmv/logging/logging.h"
#include "libmv/numeric/numeric.h"
@@ -35,34 +35,30 @@ namespace {
class DisableChannelsTransform : public FrameAccessor::Transform {
public:
DisableChannelsTransform(int disabled_channels)
: disabled_channels_(disabled_channels) { }
: disabled_channels_(disabled_channels) {}
int64_t key() const {
return disabled_channels_;
}
int64_t key() const { return disabled_channels_; }
void run(const FloatImage& input, FloatImage* output) const {
bool disable_red = (disabled_channels_ & Marker::CHANNEL_R) != 0,
bool disable_red = (disabled_channels_ & Marker::CHANNEL_R) != 0,
disable_green = (disabled_channels_ & Marker::CHANNEL_G) != 0,
disable_blue = (disabled_channels_ & Marker::CHANNEL_B) != 0;
disable_blue = (disabled_channels_ & Marker::CHANNEL_B) != 0;
LG << "Disabling channels: "
<< (disable_red ? "R " : "")
<< (disable_green ? "G " : "")
<< (disable_blue ? "B" : "");
LG << "Disabling channels: " << (disable_red ? "R " : "")
<< (disable_green ? "G " : "") << (disable_blue ? "B" : "");
// It's important to rescale the result appropriately so that e.g. if only
// blue is selected, it's not zeroed out.
float scale = (disable_red ? 0.0f : 0.2126f) +
float scale = (disable_red ? 0.0f : 0.2126f) +
(disable_green ? 0.0f : 0.7152f) +
(disable_blue ? 0.0f : 0.0722f);
(disable_blue ? 0.0f : 0.0722f);
output->Resize(input.Height(), input.Width(), 1);
for (int y = 0; y < input.Height(); y++) {
for (int x = 0; x < input.Width(); x++) {
float r = disable_red ? 0.0f : input(y, x, 0);
float r = disable_red ? 0.0f : input(y, x, 0);
float g = disable_green ? 0.0f : input(y, x, 1);
float b = disable_blue ? 0.0f : input(y, x, 2);
float b = disable_blue ? 0.0f : input(y, x, 2);
(*output)(y, x, 0) = (0.2126f * r + 0.7152f * g + 0.0722f * b) / scale;
}
}
@@ -73,7 +69,7 @@ class DisableChannelsTransform : public FrameAccessor::Transform {
int disabled_channels_;
};
template<typename QuadT, typename ArrayT>
template <typename QuadT, typename ArrayT>
void QuadToArrays(const QuadT& quad, ArrayT* x, ArrayT* y) {
for (int i = 0; i < 4; ++i) {
x[i] = quad.coordinates(i, 0);
@@ -115,11 +111,8 @@ FrameAccessor::Key GetMaskForMarker(const Marker& marker,
FrameAccessor* frame_accessor,
FloatImage* mask) {
Region region = marker.search_region.Rounded();
return frame_accessor->GetMaskForTrack(marker.clip,
marker.frame,
marker.track,
&region,
mask);
return frame_accessor->GetMaskForTrack(
marker.clip, marker.frame, marker.track, &region, mask);
}
} // namespace
@@ -152,23 +145,20 @@ bool AutoTrack::TrackMarker(Marker* tracked_marker,
// TODO(keir): Technically this could take a smaller slice from the source
// image instead of taking one the size of the search window.
FloatImage reference_image;
FrameAccessor::Key reference_key = GetImageForMarker(reference_marker,
frame_accessor_,
&reference_image);
FrameAccessor::Key reference_key =
GetImageForMarker(reference_marker, frame_accessor_, &reference_image);
if (!reference_key) {
LG << "Couldn't get frame for reference marker: " << reference_marker;
return false;
}
FloatImage reference_mask;
FrameAccessor::Key reference_mask_key = GetMaskForMarker(reference_marker,
frame_accessor_,
&reference_mask);
FrameAccessor::Key reference_mask_key =
GetMaskForMarker(reference_marker, frame_accessor_, &reference_mask);
FloatImage tracked_image;
FrameAccessor::Key tracked_key = GetImageForMarker(*tracked_marker,
frame_accessor_,
&tracked_image);
FrameAccessor::Key tracked_key =
GetImageForMarker(*tracked_marker, frame_accessor_, &tracked_image);
if (!tracked_key) {
frame_accessor_->ReleaseImage(reference_key);
LG << "Couldn't get frame for tracked marker: " << tracked_marker;
@@ -191,9 +181,11 @@ bool AutoTrack::TrackMarker(Marker* tracked_marker,
local_track_region_options.attempt_refine_before_brute = predicted_position;
TrackRegion(reference_image,
tracked_image,
x1, y1,
x1,
y1,
local_track_region_options,
x2, y2,
x2,
y2,
result);
// Copy results over the tracked marker.
@@ -208,7 +200,7 @@ bool AutoTrack::TrackMarker(Marker* tracked_marker,
tracked_marker->search_region.Offset(delta);
tracked_marker->source = Marker::TRACKED;
tracked_marker->status = Marker::UNKNOWN;
tracked_marker->reference_clip = reference_marker.clip;
tracked_marker->reference_clip = reference_marker.clip;
tracked_marker->reference_frame = reference_marker.frame;
// Release the images and masks from the accessor cache.
@@ -230,7 +222,9 @@ void AutoTrack::SetMarkers(vector<Marker>* markers) {
tracks_.SetMarkers(markers);
}
bool AutoTrack::GetMarker(int clip, int frame, int track,
bool AutoTrack::GetMarker(int clip,
int frame,
int track,
Marker* markers) const {
return tracks_.GetMarker(clip, frame, track, markers);
}
@@ -242,7 +236,8 @@ void AutoTrack::DetectAndTrack(const DetectAndTrackOptions& options) {
vector<Marker> previous_frame_markers;
// Q: How to decide track #s when detecting?
// Q: How to match markers from previous frame? set of prev frame tracks?
// Q: How to decide what markers should get tracked and which ones should not?
// Q: How to decide what markers should get tracked and which ones should
// not?
for (int frame = 0; frame < num_frames; ++frame) {
if (Cancelled()) {
LG << "Got cancel message while detecting and tracking...";
@@ -271,8 +266,7 @@ void AutoTrack::DetectAndTrack(const DetectAndTrackOptions& options) {
for (int i = 0; i < this_frame_markers.size(); ++i) {
tracks_in_this_frame.push_back(this_frame_markers[i].track);
}
std::sort(tracks_in_this_frame.begin(),
tracks_in_this_frame.end());
std::sort(tracks_in_this_frame.begin(), tracks_in_this_frame.end());
// Find tracks in the previous frame that are not in this one.
vector<Marker*> previous_frame_markers_to_track;
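As a side note on DisableChannelsTransform::run() above: the per-channel luminance weights are dropped from both the weighted sum and the normalizer when a channel is disabled, so a blue-only image still spans a sensible range instead of being crushed toward zero. A tiny standalone sketch of that rescaling with one illustrative pixel (values assumed, not from the patch):

#include <cstdio>

int main() {
  const bool disable_red = true, disable_green = true, disable_blue = false;
  const float wr = disable_red ? 0.0f : 0.2126f;
  const float wg = disable_green ? 0.0f : 0.7152f;
  const float wb = disable_blue ? 0.0f : 0.0722f;
  const float scale = wr + wg + wb;  // 0.0722 when only blue survives.

  const float r = 0.9f, g = 0.8f, b = 0.5f;  // Example pixel.
  const float gray = (wr * r + wg * g + wb * b) / scale;  // 0.5, not 0.0361.
  std::printf("gray = %f\n", gray);
  return 0;
}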


@@ -23,8 +23,8 @@
#ifndef LIBMV_AUTOTRACK_AUTOTRACK_H_
#define LIBMV_AUTOTRACK_AUTOTRACK_H_
#include "libmv/autotrack/tracks.h"
#include "libmv/autotrack/region.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/tracking/track_region.h"
namespace libmv {
@@ -74,15 +74,14 @@ class AutoTrack {
Region search_region;
};
AutoTrack(FrameAccessor* frame_accessor)
: frame_accessor_(frame_accessor) {}
AutoTrack(FrameAccessor* frame_accessor) : frame_accessor_(frame_accessor) {}
// Marker manipulation.
// Clip manipulation.
// Set the number of clips. These clips will get accessed from the frame
// accessor, matches between frames found, and a reconstruction created.
//void SetNumFrames(int clip, int num_frames);
// void SetNumFrames(int clip, int num_frames);
// Tracking & Matching
@@ -90,7 +89,7 @@ class AutoTrack {
// Caller maintains ownership of *result and *tracked_marker.
bool TrackMarker(Marker* tracked_marker,
TrackRegionResult* result,
const TrackRegionOptions* track_options=NULL);
const TrackRegionOptions* track_options = NULL);
// Wrapper around Tracks API; however these may add additional processing.
void AddMarker(const Marker& tracked_marker);
@@ -99,36 +98,36 @@ class AutoTrack {
// TODO(keir): Implement frame matching! This could be very cool for loop
// closing and connecting across clips.
//void MatchFrames(int clip1, int frame1, int clip2, int frame2) {}
// void MatchFrames(int clip1, int frame1, int clip2, int frame2) {}
// Wrapper around the Reconstruction API.
// Returns the new ID.
int AddCameraIntrinsics(CameraIntrinsics* intrinsics) {
(void) intrinsics;
(void)intrinsics;
return 0;
} // XXX
int SetClipIntrinsics(int clip, int intrinsics) {
(void) clip;
(void) intrinsics;
(void)clip;
(void)intrinsics;
return 0;
} // XXX
} // XXX
enum Motion {
GENERAL_CAMERA_MOTION,
TRIPOD_CAMERA_MOTION,
};
int SetClipMotion(int clip, Motion motion) {
(void) clip;
(void) motion;
(void)clip;
(void)motion;
return 0;
} // XXX
} // XXX
// Decide what to refine for the given intrinsics. bundle_options is from
// bundle.h (e.g. BUNDLE_FOCAL_LENGTH | BUNDLE_RADIAL_K1).
void SetIntrinsicsRefine(int intrinsics, int bundle_options) {
(void) intrinsics;
(void) bundle_options;
} // XXX
(void)intrinsics;
(void)bundle_options;
} // XXX
// Keyframe read/write.
struct ClipFrame {
@@ -150,20 +149,19 @@ class AutoTrack {
};
void DetectAndTrack(const DetectAndTrackOptions& options);
struct DetectFeaturesInFrameOptions {
};
void DetectFeaturesInFrame(int clip, int frame,
const DetectFeaturesInFrameOptions* options=NULL) {
(void) clip;
(void) frame;
(void) options;
} // XXX
struct DetectFeaturesInFrameOptions {};
void DetectFeaturesInFrame(
int clip, int frame, const DetectFeaturesInFrameOptions* options = NULL) {
(void)clip;
(void)frame;
(void)options;
} // XXX
// Does not take ownership of the given listener, but keeps a reference to it.
void AddListener(OperationListener* listener) {(void) listener;} // XXX
void AddListener(OperationListener* listener) { (void)listener; } // XXX
// Create the initial reconstruction,
//void FindInitialReconstruction();
// void FindInitialReconstruction();
// State machine
//
@@ -202,17 +200,17 @@ class AutoTrack {
bool Cancelled() { return false; }
Tracks tracks_; // May be normalized camera coordinates or raw pixels.
//Reconstruction reconstruction_;
// Reconstruction reconstruction_;
// TODO(keir): Add the motion models here.
//vector<MotionModel> motion_models_;
// vector<MotionModel> motion_models_;
// TODO(keir): Should num_clips and num_frames get moved to FrameAccessor?
// TODO(keir): What about masking for clips and frames to prevent various
// things like reconstruction or tracking from happening on certain frames?
FrameAccessor* frame_accessor_;
//int num_clips_;
//vector<int> num_frames_; // Indexed by clip.
// int num_clips_;
// vector<int> num_frames_; // Indexed by clip.
// The intrinsics for each clip, assuming each clip has fixed intrinsics.
// TODO(keir): Decide what the semantics should be for varying focal length.


@@ -41,7 +41,7 @@ using libmv::FloatImage;
// implementations to cache filtered image pieces).
struct FrameAccessor {
struct Transform {
virtual ~Transform() { }
virtual ~Transform() {}
// The key should depend on the transform arguments. Must be non-zero.
virtual int64_t key() const = 0;
@@ -50,10 +50,7 @@ struct FrameAccessor {
virtual void run(const FloatImage& input, FloatImage* output) const = 0;
};
enum InputMode {
MONO,
RGBA
};
enum InputMode { MONO, RGBA };
typedef void* Key;
@@ -100,6 +97,6 @@ struct FrameAccessor {
virtual int NumFrames(int clip) = 0;
};
} // namespace libmv
} // namespace mv
#endif // LIBMV_AUTOTRACK_FRAME_ACCESSOR_H_


@@ -57,23 +57,19 @@ struct Marker {
float weight;
enum Source {
MANUAL, // The user placed this marker manually.
DETECTED, // A keypoint detector found this point.
TRACKED, // The tracking algorithm placed this marker.
MATCHED, // A matching algorithm (e.g. SIFT or SURF or ORB) found this.
PREDICTED, // A motion model predicted this marker. This is needed for
// handling occlusions in some cases where an imaginary marker
// is placed to keep camera motion smooth.
MANUAL, // The user placed this marker manually.
DETECTED, // A keypoint detector found this point.
TRACKED, // The tracking algorithm placed this marker.
MATCHED, // A matching algorithm (e.g. SIFT or SURF or ORB) found this.
PREDICTED, // A motion model predicted this marker. This is needed for
// handling occlusions in some cases where an imaginary marker
// is placed to keep camera motion smooth.
};
Source source;
// Markers may be inliers or outliers if the tracking fails; this allows
// visualizing the markers in the image.
enum Status {
UNKNOWN,
INLIER,
OUTLIER
};
enum Status { UNKNOWN, INLIER, OUTLIER };
Status status;
// When doing correlation tracking, where to search in the current frame for
@@ -90,12 +86,7 @@ struct Marker {
// another primitive (a rectangular prism). This captures the information
// needed to say that for example a collection of markers belongs to model #2
// (and model #2 is a plane).
enum ModelType {
POINT,
PLANE,
LINE,
CUBE
};
enum ModelType { POINT, PLANE, LINE, CUBE };
ModelType model_type;
// The model ID this track (e.g. the second model, which is a plane).
@@ -114,7 +105,7 @@ struct Marker {
int disabled_channels;
// Offset everything (center, patch, search) by the given delta.
template<typename T>
template <typename T>
void Offset(const T& offset) {
center += offset.template cast<float>();
patch.coordinates.rowwise() += offset.template cast<int>();
@@ -122,19 +113,15 @@ struct Marker {
}
// Shift the center to the given new position (and patch, search).
template<typename T>
template <typename T>
void SetPosition(const T& new_center) {
Offset(new_center - center);
}
};
inline std::ostream& operator<<(std::ostream& out, const Marker& marker) {
out << "{"
<< marker.clip << ", "
<< marker.frame << ", "
<< marker.track << ", ("
<< marker.center.x() << ", "
<< marker.center.y() << ")"
out << "{" << marker.clip << ", " << marker.frame << ", " << marker.track
<< ", (" << marker.center.x() << ", " << marker.center.y() << ")"
<< "}";
return out;
}
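Marker::Offset() above applies the same delta to the center, every patch corner, and the search region. A small Eigen sketch of that rowwise shift (illustrative types and values, not the Marker struct itself):

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Vector2f center(10.0f, 20.0f);
  Eigen::Matrix<float, 4, 2> patch;  // One quad corner per row.
  patch << 9, 19, 11, 19, 11, 21, 9, 21;

  Eigen::Vector2f delta(2.5f, -1.0f);
  center += delta;
  patch.rowwise() += delta.transpose();  // Shift every corner by the delta.

  std::cout << "center: " << center.transpose() << "\n" << patch << "\n";
  return 0;
}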


@@ -23,18 +23,13 @@
#ifndef LIBMV_AUTOTRACK_MODEL_H_
#define LIBMV_AUTOTRACK_MODEL_H_
#include "libmv/numeric/numeric.h"
#include "libmv/autotrack/quad.h"
#include "libmv/numeric/numeric.h"
namespace mv {
struct Model {
enum ModelType {
POINT,
PLANE,
LINE,
CUBE
};
enum ModelType { POINT, PLANE, LINE, CUBE };
// ???
};


@@ -20,8 +20,8 @@
//
// Author: mierle@gmail.com (Keir Mierle)
#include "libmv/autotrack/marker.h"
#include "libmv/autotrack/predict_tracks.h"
#include "libmv/autotrack/marker.h"
#include "libmv/autotrack/tracks.h"
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
@@ -31,8 +31,8 @@ namespace mv {
namespace {
using libmv::vector;
using libmv::Vec2;
using libmv::vector;
// Implied time delta between steps. Set empirically by tweaking and seeing
// what numbers did best at prediction.
@@ -57,6 +57,8 @@ const double dt = 3.8;
// For a typical system having constant velocity. This gives smooth-appearing
// predictions, but they are not always as accurate.
//
// clang-format off
const double velocity_state_transition_data[] = {
1, dt, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0,
@@ -65,10 +67,13 @@ const double velocity_state_transition_data[] = {
0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 1
};
// clang-format on
#if 0
// This 3rd-order system also models acceleration. This makes for "jerky"
// predictions, but they tend to be more accurate.
//
// clang-format off
const double acceleration_state_transition_data[] = {
1, dt, dt*dt/2, 0, 0, 0,
0, 1, dt, 0, 0, 0,
@@ -77,9 +82,12 @@ const double acceleration_state_transition_data[] = {
0, 0, 0, 0, 1, dt,
0, 0, 0, 0, 0, 1
};
// clang-format on
// This system (attempts) to add an angular velocity component. However, it's
// total junk.
//
// clang-format off
const double angular_state_transition_data[] = {
1, dt, -dt, 0, 0, 0, // Position x
0, 1, 0, 0, 0, 0, // Velocity x
@@ -88,17 +96,22 @@ const double angular_state_transition_data[] = {
0, 0, 0, 0, 1, 0, // Velocity y
0, 0, 0, 0, 0, 1 // Ignored
};
// clang-format on
#endif
const double* state_transition_data = velocity_state_transition_data;
// Observation matrix.
// clang-format off
const double observation_data[] = {
1., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0.
};
// clang-format on
// Process covariance.
//
// clang-format off
const double process_covariance_data[] = {
35, 0, 0, 0, 0, 0,
0, 5, 0, 0, 0, 0,
@@ -107,14 +120,19 @@ const double process_covariance_data[] = {
0, 0, 0, 0, 5, 0,
0, 0, 0, 0, 0, 5
};
// clang-format on
// Measurement covariance.
const double measurement_covariance_data[] = {
0.01, 0.00,
0.00, 0.01,
0.01,
0.00,
0.00,
0.01,
};
// Initial covariance.
//
// clang-format off
const double initial_covariance_data[] = {
10, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0,
@@ -123,6 +141,7 @@ const double initial_covariance_data[] = {
0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 1
};
// clang-format on
typedef mv::KalmanFilter<double, 6, 2> TrackerKalman;
@@ -138,7 +157,7 @@ bool OrderByFrameLessThan(const Marker* a, const Marker* b) {
}
return a->clip < b->clip;
}
return a->frame < b-> frame;
return a->frame < b->frame;
}
// Predicted must be after the previous markers (in the frame numbering sense).
@@ -146,9 +165,9 @@ void RunPrediction(const vector<Marker*> previous_markers,
Marker* predicted_marker) {
TrackerKalman::State state;
state.mean << previous_markers[0]->center.x(), 0, 0,
previous_markers[0]->center.y(), 0, 0;
state.covariance = Eigen::Matrix<double, 6, 6, Eigen::RowMajor>(
initial_covariance_data);
previous_markers[0]->center.y(), 0, 0;
state.covariance =
Eigen::Matrix<double, 6, 6, Eigen::RowMajor>(initial_covariance_data);
int current_frame = previous_markers[0]->frame;
int target_frame = predicted_marker->frame;
@@ -159,19 +178,18 @@ void RunPrediction(const vector<Marker*> previous_markers,
for (int i = 1; i < previous_markers.size(); ++i) {
// Step forward predicting the state until it is on the current marker.
int predictions = 0;
for (;
current_frame != previous_markers[i]->frame;
for (; current_frame != previous_markers[i]->frame;
current_frame += frame_delta) {
filter.Step(&state);
predictions++;
LG << "Predicted point (frame " << current_frame << "): "
<< state.mean(0) << ", " << state.mean(3);
LG << "Predicted point (frame " << current_frame << "): " << state.mean(0)
<< ", " << state.mean(3);
}
// Log the error -- not actually used, but interesting.
Vec2 error = previous_markers[i]->center.cast<double>() -
Vec2(state.mean(0), state.mean(3));
LG << "Prediction error for " << predictions << " steps: ("
<< error.x() << ", " << error.y() << "); norm: " << error.norm();
LG << "Prediction error for " << predictions << " steps: (" << error.x()
<< ", " << error.y() << "); norm: " << error.norm();
// Now that the state is predicted in the current frame, update the state
// based on the measurement from the current frame.
filter.Update(previous_markers[i]->center.cast<double>(),
@@ -184,8 +202,8 @@ void RunPrediction(const vector<Marker*> previous_markers,
// predict until the target frame.
for (; current_frame != target_frame; current_frame += frame_delta) {
filter.Step(&state);
LG << "Final predicted point (frame " << current_frame << "): "
<< state.mean(0) << ", " << state.mean(3);
LG << "Final predicted point (frame " << current_frame
<< "): " << state.mean(0) << ", " << state.mean(3);
}
// The x and y positions are at 0 and 3; ignore acceleration and velocity.
@@ -253,13 +271,13 @@ bool PredictMarkerPosition(const Tracks& tracks, Marker* marker) {
} else if (insert_at != -1) {
// Found existing marker; scan before and after it.
forward_scan_begin = insert_at + 1;
forward_scan_end = markers.size() - 1;;
forward_scan_end = markers.size() - 1;
backward_scan_begin = insert_at - 1;
backward_scan_end = 0;
} else {
// Didn't find existing marker but found an insertion point.
forward_scan_begin = insert_before;
forward_scan_end = markers.size() - 1;;
forward_scan_end = markers.size() - 1;
backward_scan_begin = insert_before - 1;
backward_scan_end = 0;
}
@@ -301,9 +319,8 @@ bool PredictMarkerPosition(const Tracks& tracks, Marker* marker) {
return false;
}
LG << "Predicting backward";
int predict_begin =
std::min(forward_scan_begin + max_frames_to_predict_from,
forward_scan_end);
int predict_begin = std::min(
forward_scan_begin + max_frames_to_predict_from, forward_scan_end);
int predict_end = forward_scan_begin;
vector<Marker*> previous_markers;
for (int i = predict_begin; i >= predict_end; --i) {
@@ -312,7 +329,6 @@ bool PredictMarkerPosition(const Tracks& tracks, Marker* marker) {
RunPrediction(previous_markers, marker);
return false;
}
}
} // namespace mv
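For orientation, velocity_state_transition_data above encodes a plain constant-velocity model with the state laid out as (x, vx, ..., y, vy, ...): each predict step advances position by dt times velocity, with dt = 3.8 as noted in the file. A minimal Eigen sketch of what one step does to the state mean (covariance handling omitted, values illustrative):

#include <Eigen/Dense>
#include <iostream>

int main() {
  const double dt = 3.8;
  Eigen::Matrix<double, 6, 6> F = Eigen::Matrix<double, 6, 6>::Identity();
  F(0, 1) = dt;  // x += dt * vx
  F(3, 4) = dt;  // y += dt * vy

  Eigen::Matrix<double, 6, 1> state;
  state << 10.0, 1.0, 0.0, 20.0, -2.0, 0.0;  // x=10, vx=1, y=20, vy=-2.

  state = F * state;  // Predicted marker position: x=13.8, y=12.4.
  std::cout << state(0) << ", " << state(3) << "\n";
  return 0;
}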


@@ -35,17 +35,15 @@ static void AddMarker(int frame, float x, float y, Tracks* tracks) {
marker.frame = frame;
marker.center.x() = x;
marker.center.y() = y;
marker.patch.coordinates << x - 1, y - 1,
x + 1, y - 1,
x + 1, y + 1,
x - 1, y + 1;
marker.patch.coordinates << x - 1, y - 1, x + 1, y - 1, x + 1, y + 1, x - 1,
y + 1;
tracks->AddMarker(marker);
}
TEST(PredictMarkerPosition, EasyLinearMotion) {
Tracks tracks;
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(2, 3.0, 10.0, &tracks);
AddMarker(3, 4.0, 15.0, &tracks);
AddMarker(4, 5.0, 20.0, &tracks);
@@ -66,10 +64,8 @@ TEST(PredictMarkerPosition, EasyLinearMotion) {
// Check the patch coordinates as well.
double x = 9, y = 40.0;
Quad2Df expected_patch;
expected_patch.coordinates << x - 1, y - 1,
x + 1, y - 1,
x + 1, y + 1,
x - 1, y + 1;
expected_patch.coordinates << x - 1, y - 1, x + 1, y - 1, x + 1, y + 1, x - 1,
y + 1;
error = (expected_patch.coordinates - predicted.patch.coordinates).norm();
LG << "Patch error: " << error;
@@ -78,8 +74,8 @@ TEST(PredictMarkerPosition, EasyLinearMotion) {
TEST(PredictMarkerPosition, EasyBackwardLinearMotion) {
Tracks tracks;
AddMarker(8, 1.0, 0.0, &tracks);
AddMarker(7, 2.0, 5.0, &tracks);
AddMarker(8, 1.0, 0.0, &tracks);
AddMarker(7, 2.0, 5.0, &tracks);
AddMarker(6, 3.0, 10.0, &tracks);
AddMarker(5, 4.0, 15.0, &tracks);
AddMarker(4, 5.0, 20.0, &tracks);
@@ -101,10 +97,8 @@ TEST(PredictMarkerPosition, EasyBackwardLinearMotion) {
// Check the patch coordinates as well.
double x = 9.0, y = 40.0;
Quad2Df expected_patch;
expected_patch.coordinates << x - 1, y - 1,
x + 1, y - 1,
x + 1, y + 1,
x - 1, y + 1;
expected_patch.coordinates << x - 1, y - 1, x + 1, y - 1, x + 1, y + 1, x - 1,
y + 1;
error = (expected_patch.coordinates - predicted.patch.coordinates).norm();
LG << "Patch error: " << error;
@@ -113,8 +107,8 @@ TEST(PredictMarkerPosition, EasyBackwardLinearMotion) {
TEST(PredictMarkerPosition, TwoFrameGap) {
Tracks tracks;
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(2, 3.0, 10.0, &tracks);
AddMarker(3, 4.0, 15.0, &tracks);
AddMarker(4, 5.0, 20.0, &tracks);
@@ -135,8 +129,8 @@ TEST(PredictMarkerPosition, TwoFrameGap) {
TEST(PredictMarkerPosition, FourFrameGap) {
Tracks tracks;
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(2, 3.0, 10.0, &tracks);
AddMarker(3, 4.0, 15.0, &tracks);
// Missing frames 4, 5, 6, 7.
@@ -154,13 +148,13 @@ TEST(PredictMarkerPosition, FourFrameGap) {
TEST(PredictMarkerPosition, MultipleGaps) {
Tracks tracks;
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(2, 3.0, 10.0, &tracks);
// AddMarker(3, 4.0, 15.0, &tracks); // Note the 3-frame gap.
// AddMarker(4, 5.0, 20.0, &tracks);
// AddMarker(5, 6.0, 25.0, &tracks);
AddMarker(6, 7.0, 30.0, &tracks); // Intermediate measurement.
AddMarker(6, 7.0, 30.0, &tracks); // Intermediate measurement.
// AddMarker(7, 8.0, 35.0, &tracks);
Marker predicted;
@@ -178,14 +172,14 @@ TEST(PredictMarkerPosition, MarkersInRandomOrder) {
Tracks tracks;
// This is the same as the easy, except that the tracks are randomly ordered.
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(0, 1.0, 0.0, &tracks);
AddMarker(2, 3.0, 10.0, &tracks);
AddMarker(7, 8.0, 35.0, &tracks);
AddMarker(5, 6.0, 25.0, &tracks);
AddMarker(4, 5.0, 20.0, &tracks);
AddMarker(3, 4.0, 15.0, &tracks);
AddMarker(6, 7.0, 30.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
AddMarker(1, 2.0, 5.0, &tracks);
Marker predicted;
predicted.clip = 0;


@@ -27,7 +27,7 @@
namespace mv {
template<typename T, int D>
template <typename T, int D>
struct Quad {
// A quad is 4 points; generally in 2D or 3D.
//
@@ -35,7 +35,7 @@ struct Quad {
// |\.
// | \.
// | z (z goes into screen)
// |
// |
// | r0----->r1
// | ^ |
// | | . |
@@ -44,7 +44,7 @@ struct Quad {
// | \.
// | \.
// v normal goes away (right handed).
// y
// y
//
// Each row is one of the corners coordinates; either (x, y) or (x, y, z).
Eigen::Matrix<T, 4, D> coordinates;


@@ -57,17 +57,17 @@ class Reconstruction {
public:
// All methods copy their input reference or take ownership of the pointer.
void AddCameraPose(const CameraPose& pose);
int AddCameraIntrinsics(CameraIntrinsics* intrinsics);
int AddPoint(const Point& point);
int AddModel(Model* model);
int AddCameraIntrinsics(CameraIntrinsics* intrinsics);
int AddPoint(const Point& point);
int AddModel(Model* model);
// Returns the corresponding pose or point or NULL if missing.
CameraPose* CameraPoseForFrame(int clip, int frame);
CameraPose* CameraPoseForFrame(int clip, int frame);
const CameraPose* CameraPoseForFrame(int clip, int frame) const;
Point* PointForTrack(int track);
Point* PointForTrack(int track);
const Point* PointForTrack(int track) const;
const vector<vector<CameraPose> >& camera_poses() const {
const vector<vector<CameraPose>>& camera_poses() const {
return camera_poses_;
}


@@ -46,7 +46,7 @@ struct Region {
Vec2f min;
Vec2f max;
template<typename T>
template <typename T>
void Offset(const T& offset) {
min += offset.template cast<float>();
max += offset.template cast<float>();


@@ -23,8 +23,8 @@
#include "libmv/autotrack/tracks.h"
#include <algorithm>
#include <vector>
#include <iterator>
#include <vector>
#include "libmv/numeric/numeric.h"
@@ -34,12 +34,12 @@ Tracks::Tracks(const Tracks& other) {
markers_ = other.markers_;
}
Tracks::Tracks(const vector<Marker>& markers) : markers_(markers) {}
Tracks::Tracks(const vector<Marker>& markers) : markers_(markers) {
}
bool Tracks::GetMarker(int clip, int frame, int track, Marker* marker) const {
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].clip == clip &&
markers_[i].frame == frame &&
if (markers_[i].clip == clip && markers_[i].frame == frame &&
markers_[i].track == track) {
*marker = markers_[i];
return true;
@@ -60,8 +60,7 @@ void Tracks::GetMarkersForTrackInClip(int clip,
int track,
vector<Marker>* markers) const {
for (int i = 0; i < markers_.size(); ++i) {
if (clip == markers_[i].clip &&
track == markers_[i].track) {
if (clip == markers_[i].clip && track == markers_[i].track) {
markers->push_back(markers_[i]);
}
}
@@ -71,15 +70,16 @@ void Tracks::GetMarkersInFrame(int clip,
int frame,
vector<Marker>* markers) const {
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].clip == clip &&
markers_[i].frame == frame) {
if (markers_[i].clip == clip && markers_[i].frame == frame) {
markers->push_back(markers_[i]);
}
}
}
void Tracks::GetMarkersForTracksInBothImages(int clip1, int frame1,
int clip2, int frame2,
void Tracks::GetMarkersForTracksInBothImages(int clip1,
int frame1,
int clip2,
int frame2,
vector<Marker>* markers) const {
std::vector<int> image1_tracks;
std::vector<int> image2_tracks;
@@ -99,20 +99,19 @@ void Tracks::GetMarkersForTracksInBothImages(int clip1, int frame1,
std::sort(image1_tracks.begin(), image1_tracks.end());
std::sort(image2_tracks.begin(), image2_tracks.end());
std::vector<int> intersection;
std::set_intersection(image1_tracks.begin(), image1_tracks.end(),
image2_tracks.begin(), image2_tracks.end(),
std::set_intersection(image1_tracks.begin(),
image1_tracks.end(),
image2_tracks.begin(),
image2_tracks.end(),
std::back_inserter(intersection));
// Scan through and get the relevant tracks from the two images.
for (int i = 0; i < markers_.size(); ++i) {
// Save markers that are in either frame and are in our candidate set.
if (((markers_[i].clip == clip1 &&
markers_[i].frame == frame1) ||
(markers_[i].clip == clip2 &&
markers_[i].frame == frame2)) &&
std::binary_search(intersection.begin(),
intersection.end(),
markers_[i].track)) {
if (((markers_[i].clip == clip1 && markers_[i].frame == frame1) ||
(markers_[i].clip == clip2 && markers_[i].frame == frame2)) &&
std::binary_search(
intersection.begin(), intersection.end(), markers_[i].track)) {
markers->push_back(markers_[i]);
}
}
@@ -122,8 +121,7 @@ void Tracks::AddMarker(const Marker& marker) {
// TODO(keir): This is quadratic for repeated insertions. Fix this by adding
// a smarter data structure like a set<>.
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].clip == marker.clip &&
markers_[i].frame == marker.frame &&
if (markers_[i].clip == marker.clip && markers_[i].frame == marker.frame &&
markers_[i].track == marker.track) {
markers_[i] = marker;
return;
@@ -139,8 +137,7 @@ void Tracks::SetMarkers(vector<Marker>* markers) {
bool Tracks::RemoveMarker(int clip, int frame, int track) {
int size = markers_.size();
for (int i = 0; i < markers_.size(); ++i) {
if (markers_[i].clip == clip &&
markers_[i].frame == frame &&
if (markers_[i].clip == clip && markers_[i].frame == frame &&
markers_[i].track == track) {
markers_[i] = markers_[size - 1];
markers_.resize(size - 1);
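GetMarkersForTracksInBothImages() above relies on the standard sort / set_intersection / binary_search combination to keep only the markers whose track shows up in both frames. A small standalone sketch of that pattern with made-up track ids:

#include <algorithm>
#include <cstdio>
#include <iterator>
#include <vector>

int main() {
  std::vector<int> image1_tracks = {4, 1, 7, 2};
  std::vector<int> image2_tracks = {2, 9, 4};

  std::sort(image1_tracks.begin(), image1_tracks.end());
  std::sort(image2_tracks.begin(), image2_tracks.end());

  std::vector<int> intersection;
  std::set_intersection(image1_tracks.begin(), image1_tracks.end(),
                        image2_tracks.begin(), image2_tracks.end(),
                        std::back_inserter(intersection));  // {2, 4}

  const int candidate_tracks[] = {1, 2, 4, 9};
  for (int track : candidate_tracks) {
    if (std::binary_search(intersection.begin(), intersection.end(), track)) {
      std::printf("track %d is visible in both frames\n", track);
    }
  }
  return 0;
}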


@@ -23,8 +23,8 @@
#ifndef LIBMV_AUTOTRACK_TRACKS_H_
#define LIBMV_AUTOTRACK_TRACKS_H_
#include "libmv/base/vector.h"
#include "libmv/autotrack/marker.h"
#include "libmv/base/vector.h"
namespace mv {
@@ -33,8 +33,8 @@ using libmv::vector;
// The Tracks container stores correspondences between frames.
class Tracks {
public:
Tracks() { }
Tracks(const Tracks &other);
Tracks() {}
Tracks(const Tracks& other);
// Create a tracks object with markers already initialized. Copies markers.
explicit Tracks(const vector<Marker>& markers);
@@ -51,8 +51,10 @@ class Tracks {
//
// This is not the same as the union of the markers in frame1 and
// frame2; each marker is for a track that appears in both images.
void GetMarkersForTracksInBothImages(int clip1, int frame1,
int clip2, int frame2,
void GetMarkersForTracksInBothImages(int clip1,
int frame1,
int clip2,
int frame2,
vector<Marker>* markers) const;
void AddMarker(const Marker& marker);


@@ -22,8 +22,8 @@
#include "libmv/autotrack/tracks.h"
#include "testing/testing.h"
#include "libmv/logging/logging.h"
#include "testing/testing.h"
namespace mv {


@@ -41,11 +41,11 @@
namespace libmv {
void *aligned_malloc(int size, int alignment) {
void* aligned_malloc(int size, int alignment) {
#ifdef _WIN32
return _aligned_malloc(size, alignment);
#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
void *result;
void* result;
if (posix_memalign(&result, alignment, size)) {
// non-zero means allocation error
@@ -58,7 +58,7 @@ void *aligned_malloc(int size, int alignment) {
#endif
}
void aligned_free(void *ptr) {
void aligned_free(void* ptr) {
#ifdef _WIN32
_aligned_free(ptr);
#else


@@ -24,10 +24,10 @@
namespace libmv {
// Allocate block of size bytes at least aligned to a given value.
void *aligned_malloc(int size, int alignment);
void* aligned_malloc(int size, int alignment);
// Free memory allocated by aligned_malloc.
void aligned_free(void *ptr);
void aligned_free(void* ptr);
} // namespace libmv


@@ -28,6 +28,7 @@ class IdGenerator {
public:
IdGenerator() : next_(0) {}
ID Generate() { return next_++; }
private:
ID next_;
};


@@ -26,8 +26,8 @@
namespace libmv {
using std::map;
using std::make_pair;
using std::map;
} // namespace libmv


@@ -30,44 +30,44 @@ namespace libmv {
* A handle for a heap-allocated resource that should be freed when it goes out
* of scope. This looks similar to the one found in TR1.
*/
template<typename T>
template <typename T>
class scoped_ptr {
public:
scoped_ptr(T *resource) : resource_(resource) {}
scoped_ptr(T* resource) : resource_(resource) {}
~scoped_ptr() { reset(0); }
T *get() const { return resource_; }
T *operator->() const { return resource_; }
T &operator*() const { return *resource_; }
T* get() const { return resource_; }
T* operator->() const { return resource_; }
T& operator*() const { return *resource_; }
void reset(T *new_resource) {
void reset(T* new_resource) {
if (sizeof(T)) {
delete resource_;
}
resource_ = new_resource;
}
T *release() {
T *released_resource = resource_;
T* release() {
T* released_resource = resource_;
resource_ = 0;
return released_resource;
}
private:
// No copying allowed.
T *resource_;
T* resource_;
};
// Same as scoped_ptr but caller must allocate the data
// with new[] and the destructor will free the memory
// using delete[].
template<typename T>
template <typename T>
class scoped_array {
public:
scoped_array(T *array) : array_(array) {}
scoped_array(T* array) : array_(array) {}
~scoped_array() { reset(NULL); }
T *get() const { return array_; }
T* get() const { return array_; }
T& operator[](std::ptrdiff_t i) const {
assert(i >= 0);
@@ -75,25 +75,27 @@ class scoped_array {
return array_[i];
}
void reset(T *new_array) {
void reset(T* new_array) {
if (sizeof(T)) {
delete array_;
}
array_ = new_array;
}
T *release() {
T *released_array = array_;
T* release() {
T* released_array = array_;
array_ = NULL;
return released_array;
}
private:
T *array_;
T* array_;
// Forbid comparison of different scoped_array types.
template <typename T2> bool operator==(scoped_array<T2> const& p2) const;
template <typename T2> bool operator!=(scoped_array<T2> const& p2) const;
template <typename T2>
bool operator==(scoped_array<T2> const& p2) const;
template <typename T2>
bool operator!=(scoped_array<T2> const& p2) const;
// Disallow evil constructors
scoped_array(const scoped_array&);


@@ -25,9 +25,9 @@ namespace libmv {
namespace {
struct FreeMe {
FreeMe(int *freed) : freed(freed) {}
FreeMe(int* freed) : freed(freed) {}
~FreeMe() { (*freed)++; }
int *freed;
int* freed;
};
TEST(ScopedPtr, NullDoesNothing) {
@@ -61,8 +61,8 @@ TEST(ScopedPtr, Reset) {
TEST(ScopedPtr, ReleaseAndGet) {
int frees = 0;
FreeMe *allocated = new FreeMe(&frees);
FreeMe *released = NULL;
FreeMe* allocated = new FreeMe(&frees);
FreeMe* released = NULL;
{
scoped_ptr<FreeMe> scoped(allocated);
EXPECT_EQ(0, frees);


@@ -19,9 +19,9 @@
// IN THE SOFTWARE.
#include "libmv/base/vector.h"
#include <algorithm>
#include "libmv/numeric/numeric.h"
#include "testing/testing.h"
#include <algorithm>
namespace {
using namespace libmv;
@@ -62,7 +62,7 @@ int foo_destruct_calls = 0;
struct Foo {
public:
Foo() : value(5) { foo_construct_calls++; }
~Foo() { foo_destruct_calls++; }
~Foo() { foo_destruct_calls++; }
int value;
};
@@ -150,7 +150,7 @@ TEST_F(VectorTest, CopyConstructor) {
a.push_back(3);
vector<int> b(a);
EXPECT_EQ(a.size(), b.size());
EXPECT_EQ(a.size(), b.size());
for (int i = 0; i < a.size(); ++i) {
EXPECT_EQ(a[i], b[i]);
}
@@ -164,7 +164,7 @@ TEST_F(VectorTest, OperatorEquals) {
b = a;
EXPECT_EQ(a.size(), b.size());
EXPECT_EQ(a.size(), b.size());
for (int i = 0; i < a.size(); ++i) {
EXPECT_EQ(a[i], b[i]);
}


@@ -18,14 +18,13 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#ifndef LIBMV_BASE_VECTOR_UTILS_H_
#define LIBMV_BASE_VECTOR_UTILS_H_
/// Delete the contents of a container.
template <class Array>
void DeleteElements(Array *array) {
for (int i = 0; i < array->size(); ++i) {
void DeleteElements(Array* array) {
for (int i = 0; i < array->size(); ++i) {
delete (*array)[i];
}
array->clear();


@@ -18,18 +18,17 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include "libmv/image/image.h"
#include <iostream>
#include <cmath>
#include <iostream>
#include "libmv/image/image.h"
namespace libmv {
void FloatArrayToScaledByteArray(const Array3Df &float_array,
Array3Du *byte_array,
bool automatic_range_detection
) {
void FloatArrayToScaledByteArray(const Array3Df& float_array,
Array3Du* byte_array,
bool automatic_range_detection) {
byte_array->ResizeLike(float_array);
float minval = HUGE_VAL;
float minval = HUGE_VAL;
float maxval = -HUGE_VAL;
if (automatic_range_detection) {
for (int i = 0; i < float_array.Height(); ++i) {
@@ -54,8 +53,8 @@ void FloatArrayToScaledByteArray(const Array3Df &float_array,
}
}
void ByteArrayToScaledFloatArray(const Array3Du &byte_array,
Array3Df *float_array) {
void ByteArrayToScaledFloatArray(const Array3Du& byte_array,
Array3Df* float_array) {
float_array->ResizeLike(byte_array);
for (int i = 0; i < byte_array.Height(); ++i) {
for (int j = 0; j < byte_array.Width(); ++j) {
@@ -66,10 +65,10 @@ void ByteArrayToScaledFloatArray(const Array3Du &byte_array,
}
}
void SplitChannels(const Array3Df &input,
Array3Df *channel0,
Array3Df *channel1,
Array3Df *channel2) {
void SplitChannels(const Array3Df& input,
Array3Df* channel0,
Array3Df* channel1,
Array3Df* channel2) {
assert(input.Depth() >= 3);
channel0->Resize(input.Height(), input.Width());
channel1->Resize(input.Height(), input.Width());
@@ -83,7 +82,7 @@ void SplitChannels(const Array3Df &input,
}
}
void PrintArray(const Array3Df &array) {
void PrintArray(const Array3Df& array) {
using namespace std;
printf("[\n");


@@ -44,13 +44,13 @@ class ArrayND : public BaseArray {
ArrayND() : data_(NULL), own_data_(true) { Resize(Index(0)); }
/// Create an array with the specified shape.
ArrayND(const Index &shape) : data_(NULL), own_data_(true) { Resize(shape); }
ArrayND(const Index& shape) : data_(NULL), own_data_(true) { Resize(shape); }
/// Create an array with the specified shape.
ArrayND(int *shape) : data_(NULL), own_data_(true) { Resize(shape); }
ArrayND(int* shape) : data_(NULL), own_data_(true) { Resize(shape); }
/// Copy constructor.
ArrayND(const ArrayND<T, N> &b) : data_(NULL), own_data_(true) {
ArrayND(const ArrayND<T, N>& b) : data_(NULL), own_data_(true) {
ResizeLike(b);
std::memcpy(Data(), b.Data(), sizeof(T) * Size());
}
@@ -58,7 +58,7 @@ class ArrayND : public BaseArray {
ArrayND(int s0) : data_(NULL), own_data_(true) { Resize(s0); }
ArrayND(int s0, int s1) : data_(NULL), own_data_(true) { Resize(s0, s1); }
ArrayND(int s0, int s1, int s2) : data_(NULL), own_data_(true) {
Resize(s0, s1, s2);
Resize(s0, s1, s2);
}
ArrayND(T* data, int s0, int s1, int s2)
@@ -69,28 +69,24 @@ class ArrayND : public BaseArray {
/// Destructor deletes pixel data.
~ArrayND() {
if (own_data_) {
delete [] data_;
delete[] data_;
}
}
/// Assignation copies pixel data.
ArrayND &operator=(const ArrayND<T, N> &b) {
ArrayND& operator=(const ArrayND<T, N>& b) {
assert(this != &b);
ResizeLike(b);
std::memcpy(Data(), b.Data(), sizeof(T) * Size());
return *this;
}
const Index &Shapes() const {
return shape_;
}
const Index& Shapes() const { return shape_; }
const Index &Strides() const {
return strides_;
}
const Index& Strides() const { return strides_; }
/// Create an array of shape s.
void Resize(const Index &new_shape) {
void Resize(const Index& new_shape) {
if (data_ != NULL && shape_ == new_shape) {
// Don't bother reallocating if the shapes match.
return;
@@ -101,7 +97,7 @@ class ArrayND : public BaseArray {
strides_(i - 1) = strides_(i) * shape_(i);
}
if (own_data_) {
delete [] data_;
delete[] data_;
data_ = NULL;
if (Size() > 0) {
data_ = new T[Size()];
@@ -109,15 +105,13 @@ class ArrayND : public BaseArray {
}
}
template<typename D>
void ResizeLike(const ArrayND<D, N> &other) {
template <typename D>
void ResizeLike(const ArrayND<D, N>& other) {
Resize(other.Shape());
}
/// Resizes the array to shape s. All data is lost.
void Resize(const int *new_shape_array) {
Resize(Index(new_shape_array));
}
void Resize(const int* new_shape_array) { Resize(Index(new_shape_array)); }
/// Resize a 1D array to length s0.
void Resize(int s0) {
@@ -136,9 +130,7 @@ class ArrayND : public BaseArray {
}
// Match Eigen2's API.
void resize(int rows, int cols) {
Resize(rows, cols);
}
void resize(int rows, int cols) { Resize(rows, cols); }
/// Resize a 3D array to shape (s0,s1,s2).
void Resize(int s0, int s1, int s2) {
@@ -147,11 +139,11 @@ class ArrayND : public BaseArray {
Resize(shape);
}
template<typename D>
void CopyFrom(const ArrayND<D, N> &other) {
template <typename D>
void CopyFrom(const ArrayND<D, N>& other) {
ResizeLike(other);
T *data = Data();
const D *other_data = other.Data();
T* data = Data();
const D* other_data = other.Data();
for (int i = 0; i < Size(); ++i) {
data[i] = T(other_data[i]);
}
@@ -171,19 +163,13 @@ class ArrayND : public BaseArray {
}
/// Return a tuple containing the length of each axis.
const Index &Shape() const {
return shape_;
}
const Index& Shape() const { return shape_; }
/// Return the length of an axis.
int Shape(int axis) const {
return shape_(axis);
}
int Shape(int axis) const { return shape_(axis); }
/// Return the distance between neighboring elements along axis.
int Stride(int axis) const {
return strides_(axis);
}
int Stride(int axis) const { return strides_(axis); }
/// Return the number of elements of the array.
int Size() const {
@@ -194,18 +180,16 @@ class ArrayND : public BaseArray {
}
/// Return the total amount of memory used by the array.
int MemorySizeInBytes() const {
return sizeof(*this) + Size() * sizeof(T);
}
int MemorySizeInBytes() const { return sizeof(*this) + Size() * sizeof(T); }
/// Pointer to the first element of the array.
T *Data() { return data_; }
T* Data() { return data_; }
/// Constant pointer to the first element of the array.
const T *Data() const { return data_; }
const T* Data() const { return data_; }
/// Distance between the first element and the element at position index.
int Offset(const Index &index) const {
int Offset(const Index& index) const {
int offset = 0;
for (int i = 0; i < N; ++i)
offset += index(i) * Stride(i);
@@ -231,25 +215,23 @@ class ArrayND : public BaseArray {
}
/// Return a reference to the element at position index.
T &operator()(const Index &index) {
T& operator()(const Index& index) {
// TODO(pau) Boundary checking in debug mode.
return *( Data() + Offset(index) );
return *(Data() + Offset(index));
}
/// 1D specialization.
T &operator()(int i0) {
return *( Data() + Offset(i0) );
}
T& operator()(int i0) { return *(Data() + Offset(i0)); }
/// 2D specialization.
T &operator()(int i0, int i1) {
T& operator()(int i0, int i1) {
assert(0 <= i0 && i0 < Shape(0));
assert(0 <= i1 && i1 < Shape(1));
return *(Data() + Offset(i0, i1));
}
/// 3D specialization.
T &operator()(int i0, int i1, int i2) {
T& operator()(int i0, int i1, int i2) {
assert(0 <= i0 && i0 < Shape(0));
assert(0 <= i1 && i1 < Shape(1));
assert(0 <= i2 && i2 < Shape(2));
@@ -257,29 +239,27 @@ class ArrayND : public BaseArray {
}
/// Return a constant reference to the element at position index.
const T &operator()(const Index &index) const {
const T& operator()(const Index& index) const {
return *(Data() + Offset(index));
}
/// 1D specialization.
const T &operator()(int i0) const {
return *(Data() + Offset(i0));
}
const T& operator()(int i0) const { return *(Data() + Offset(i0)); }
/// 2D specialization.
const T &operator()(int i0, int i1) const {
const T& operator()(int i0, int i1) const {
assert(0 <= i0 && i0 < Shape(0));
assert(0 <= i1 && i1 < Shape(1));
return *(Data() + Offset(i0, i1));
}
/// 3D specialization.
const T &operator()(int i0, int i1, int i2) const {
const T& operator()(int i0, int i1, int i2) const {
return *(Data() + Offset(i0, i1, i2));
}
/// True if index is inside array.
bool Contains(const Index &index) const {
bool Contains(const Index& index) const {
for (int i = 0; i < N; ++i)
if (index(i) < 0 || index(i) >= Shape(i))
return false;
@@ -287,26 +267,24 @@ class ArrayND : public BaseArray {
}
/// 1D specialization.
bool Contains(int i0) const {
return 0 <= i0 && i0 < Shape(0);
}
bool Contains(int i0) const { return 0 <= i0 && i0 < Shape(0); }
/// 2D specialization.
bool Contains(int i0, int i1) const {
return 0 <= i0 && i0 < Shape(0)
&& 0 <= i1 && i1 < Shape(1);
return 0 <= i0 && i0 < Shape(0) && 0 <= i1 && i1 < Shape(1);
}
/// 3D specialization.
bool Contains(int i0, int i1, int i2) const {
return 0 <= i0 && i0 < Shape(0)
&& 0 <= i1 && i1 < Shape(1)
&& 0 <= i2 && i2 < Shape(2);
return 0 <= i0 && i0 < Shape(0) && 0 <= i1 && i1 < Shape(1) && 0 <= i2 &&
i2 < Shape(2);
}
bool operator==(const ArrayND<T, N> &other) const {
if (shape_ != other.shape_) return false;
if (strides_ != other.strides_) return false;
bool operator==(const ArrayND<T, N>& other) const {
if (shape_ != other.shape_)
return false;
if (strides_ != other.strides_)
return false;
for (int i = 0; i < Size(); ++i) {
if (this->Data()[i] != other.Data()[i])
return false;
@@ -314,11 +292,11 @@ class ArrayND : public BaseArray {
return true;
}
bool operator!=(const ArrayND<T, N> &other) const {
bool operator!=(const ArrayND<T, N>& other) const {
return !(*this == other);
}
ArrayND<T, N> operator*(const ArrayND<T, N> &other) const {
ArrayND<T, N> operator*(const ArrayND<T, N>& other) const {
assert(Shape() == other.Shape());
ArrayND<T, N> res;
res.ResizeLike(*this);
@@ -336,7 +314,7 @@ class ArrayND : public BaseArray {
Index strides_;
/// Pointer to the first element of the array.
T *data_;
T* data_;
/// Flag if this Array either own or reference the data
bool own_data_;
@@ -346,30 +324,20 @@ class ArrayND : public BaseArray {
template <typename T>
class Array3D : public ArrayND<T, 3> {
typedef ArrayND<T, 3> Base;
public:
Array3D()
: Base() {
}
Array3D(int height, int width, int depth = 1)
: Base(height, width, depth) {
}
Array3D() : Base() {}
Array3D(int height, int width, int depth = 1) : Base(height, width, depth) {}
Array3D(T* data, int height, int width, int depth = 1)
: Base(data, height, width, depth) {
}
: Base(data, height, width, depth) {}
void Resize(int height, int width, int depth = 1) {
Base::Resize(height, width, depth);
}
int Height() const {
return Base::Shape(0);
}
int Width() const {
return Base::Shape(1);
}
int Depth() const {
return Base::Shape(2);
}
int Height() const { return Base::Shape(0); }
int Width() const { return Base::Shape(1); }
int Depth() const { return Base::Shape(2); }
// Match Eigen2's API so that Array3D's and Mat*'s can work together via
// template magic.
@@ -377,15 +345,15 @@ class Array3D : public ArrayND<T, 3> {
int cols() const { return Width(); }
int depth() const { return Depth(); }
int Get_Step() const { return Width()*Depth(); }
int Get_Step() const { return Width() * Depth(); }
/// Enable accessing with 2 indices for grayscale images.
T &operator()(int i0, int i1, int i2 = 0) {
T& operator()(int i0, int i1, int i2 = 0) {
assert(0 <= i0 && i0 < Height());
assert(0 <= i1 && i1 < Width());
return Base::operator()(i0, i1, i2);
}
const T &operator()(int i0, int i1, int i2 = 0) const {
const T& operator()(int i0, int i1, int i2 = 0) const {
assert(0 <= i0 && i0 < Height());
assert(0 <= i1 && i1 < Width());
return Base::operator()(i0, i1, i2);
@@ -398,31 +366,29 @@ typedef Array3D<int> Array3Di;
typedef Array3D<float> Array3Df;
typedef Array3D<short> Array3Ds;
void SplitChannels(const Array3Df &input,
Array3Df *channel0,
Array3Df *channel1,
Array3Df *channel2);
void SplitChannels(const Array3Df& input,
Array3Df* channel0,
Array3Df* channel1,
Array3Df* channel2);
void PrintArray(const Array3Df &array);
void PrintArray(const Array3Df& array);
/** Convert a float array into a byte array by scaling values by 255* (max-min).
* where max and min are automatically detected
* where max and min are automatically detected
* (if automatic_range_detection = true)
* \note and TODO this automatic detection only works when the image contains
* at least one pixel of both bounds.
**/
void FloatArrayToScaledByteArray(const Array3Df &float_array,
Array3Du *byte_array,
void FloatArrayToScaledByteArray(const Array3Df& float_array,
Array3Du* byte_array,
bool automatic_range_detection = false);
//! Convert a byte array into a float array by dividing values by 255.
void ByteArrayToScaledFloatArray(const Array3Du &byte_array,
Array3Df *float_array);
void ByteArrayToScaledFloatArray(const Array3Du& byte_array,
Array3Df* float_array);
template <typename AArrayType, typename BArrayType, typename CArrayType>
void MultiplyElements(const AArrayType &a,
const BArrayType &b,
CArrayType *c) {
void MultiplyElements(const AArrayType& a, const BArrayType& b, CArrayType* c) {
// This function does an element-wise multiply between
// the two Arrays A and B, and stores the result in C.
// A and B must have the same dimensions.
@@ -435,7 +401,7 @@ void MultiplyElements(const AArrayType &a,
// The index starts at the maximum value for each dimension
const typename CArrayType::Index& cShape = c->Shape();
for ( int i = 0; i < CArrayType::Index::SIZE; ++i )
for (int i = 0; i < CArrayType::Index::SIZE; ++i)
index(i) = cShape(i) - 1;
// After each multiplication, the highest-dimensional index is reduced.
@@ -443,12 +409,12 @@ void MultiplyElements(const AArrayType &a,
// and decrements the index of the next lower dimension.
// This ripple-action continues until the entire new array has been
// calculated, indicated by dimension zero having a negative index.
while ( index(0) >= 0 ) {
while (index(0) >= 0) {
(*c)(index) = a(index) * b(index);
int dimension = CArrayType::Index::SIZE - 1;
index(dimension) = index(dimension) - 1;
while ( dimension > 0 && index(dimension) < 0 ) {
while (dimension > 0 && index(dimension) < 0) {
index(dimension) = cShape(dimension) - 1;
index(dimension - 1) = index(dimension - 1) - 1;
--dimension;
@@ -457,9 +423,9 @@ void MultiplyElements(const AArrayType &a,
}
template <typename TA, typename TB, typename TC>
void MultiplyElements(const ArrayND<TA, 3> &a,
const ArrayND<TB, 3> &b,
ArrayND<TC, 3> *c) {
void MultiplyElements(const ArrayND<TA, 3>& a,
const ArrayND<TB, 3>& b,
ArrayND<TC, 3>* c) {
// Specialization for N==3
c->ResizeLike(a);
assert(a.Shape(0) == b.Shape(0));
@@ -475,9 +441,9 @@ void MultiplyElements(const ArrayND<TA, 3> &a,
}
template <typename TA, typename TB, typename TC>
void MultiplyElements(const Array3D<TA> &a,
const Array3D<TB> &b,
Array3D<TC> *c) {
void MultiplyElements(const Array3D<TA>& a,
const Array3D<TB>& b,
Array3D<TC>* c) {
// Specialization for N==3
c->ResizeLike(a);
assert(a.Shape(0) == b.Shape(0));
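
A minimal sketch of the element-wise multiply described in the comments above, using only the Array3D and MultiplyElements signatures visible in these hunks (sizes and values are illustrative):

#include "libmv/image/array_nd.h"

// Multiply two same-shaped arrays element by element; c is resized to match a.
void MultiplyElementsSketch() {
  libmv::Array3Df a(2, 2), b(2, 2), c;  // height x width, depth defaults to 1
  for (int i = 0; i < 2; ++i) {
    for (int j = 0; j < 2; ++j) {
      a(i, j) = float(i + j);
      b(i, j) = 2.0f;
    }
  }
  libmv::MultiplyElements(a, b, &c);  // c(i, j) == a(i, j) * b(i, j)
}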

View File

@@ -21,9 +21,9 @@
#include "libmv/image/array_nd.h"
#include "testing/testing.h"
using libmv::ArrayND;
using libmv::Array3D;
using libmv::Array3Df;
using libmv::ArrayND;
namespace {
@@ -100,7 +100,7 @@ TEST(ArrayND, Size) {
int l[] = {0, 1, 2};
ArrayND<int, 3>::Index last(l);
EXPECT_EQ(a.Size(), a.Offset(last)+1);
EXPECT_EQ(a.Size(), a.Offset(last) + 1);
EXPECT_TRUE(a.Contains(last));
EXPECT_FALSE(a.Contains(shape));
}
@@ -120,8 +120,8 @@ TEST(ArrayND, Parenthesis) {
int s[] = {3, 3};
ArrayND<int, 2> a(s);
*(a.Data()+0) = 0;
*(a.Data()+5) = 5;
*(a.Data() + 0) = 0;
*(a.Data() + 5) = 5;
int i1[] = {0, 0};
EXPECT_EQ(0, a(Index(i1)));
@@ -210,7 +210,7 @@ TEST(ArrayND, MultiplyElements) {
b(1, 1, 0) = 3;
ArrayND<int, 3> c;
MultiplyElements(a, b, &c);
EXPECT_FLOAT_EQ(6, c(0, 0, 0));
EXPECT_FLOAT_EQ(6, c(0, 0, 0));
EXPECT_FLOAT_EQ(10, c(0, 1, 0));
EXPECT_FLOAT_EQ(12, c(1, 0, 0));
EXPECT_FLOAT_EQ(12, c(1, 1, 0));

View File

@@ -29,7 +29,7 @@ namespace libmv {
// Compute a Gaussian kernel and derivative, such that you can take the
// derivative of an image by convolving with the kernel horizontally then the
// derivative vertically to get (eg) the y derivative.
void ComputeGaussianKernel(double sigma, Vec *kernel, Vec *derivative) {
void ComputeGaussianKernel(double sigma, Vec* kernel, Vec* derivative) {
assert(sigma >= 0.0);
// 0.004 implies a 3 pixel kernel with 1 pixel sigma.
@@ -37,7 +37,7 @@ void ComputeGaussianKernel(double sigma, Vec *kernel, Vec *derivative) {
// Calculate the kernel size based on sigma such that it is odd.
float precisehalfwidth = GaussianInversePositive(truncation_factor, sigma);
int width = lround(2*precisehalfwidth);
int width = lround(2 * precisehalfwidth);
if (width % 2 == 0) {
width++;
}
@@ -47,7 +47,7 @@ void ComputeGaussianKernel(double sigma, Vec *kernel, Vec *derivative) {
kernel->setZero();
derivative->setZero();
int halfwidth = width / 2;
for (int i = -halfwidth; i <= halfwidth; ++i) {
for (int i = -halfwidth; i <= halfwidth; ++i) {
(*kernel)(i + halfwidth) = Gaussian(i, sigma);
(*derivative)(i + halfwidth) = GaussianDerivative(i, sigma);
}
@@ -57,16 +57,21 @@ void ComputeGaussianKernel(double sigma, Vec *kernel, Vec *derivative) {
// Normalize the derivative differently. See
// www.cs.duke.edu/courses/spring03/cps296.1/handouts/Image%20Processing.pdf
double factor = 0.;
for (int i = -halfwidth; i <= halfwidth; ++i) {
factor -= i*(*derivative)(i+halfwidth);
for (int i = -halfwidth; i <= halfwidth; ++i) {
factor -= i * (*derivative)(i + halfwidth);
}
*derivative /= factor;
}
template <int size, bool vertical>
void FastConvolve(const Vec &kernel, int width, int height,
const float* src, int src_stride, int src_line_stride,
float* dst, int dst_stride) {
void FastConvolve(const Vec& kernel,
int width,
int height,
const float* src,
int src_stride,
int src_line_stride,
float* dst,
int dst_stride) {
double coefficients[2 * size + 1];
for (int k = 0; k < 2 * size + 1; ++k) {
coefficients[k] = kernel(2 * size - k);
@@ -93,14 +98,14 @@ void FastConvolve(const Vec &kernel, int width, int height,
}
}
template<bool vertical>
void Convolve(const Array3Df &in,
const Vec &kernel,
Array3Df *out_pointer,
template <bool vertical>
void Convolve(const Array3Df& in,
const Vec& kernel,
Array3Df* out_pointer,
int plane) {
int width = in.Width();
int height = in.Height();
Array3Df &out = *out_pointer;
Array3Df& out = *out_pointer;
if (plane == -1) {
out.ResizeLike(in);
plane = 0;
@@ -119,61 +124,62 @@ void Convolve(const Array3Df &in,
// fast path.
int half_width = kernel.size() / 2;
switch (half_width) {
#define static_convolution(size) case size: \
FastConvolve<size, vertical>(kernel, width, height, src, src_stride, \
src_line_stride, dst, dst_stride); break;
static_convolution(1)
static_convolution(2)
static_convolution(3)
static_convolution(4)
static_convolution(5)
static_convolution(6)
static_convolution(7)
#define static_convolution(size) \
case size: \
FastConvolve<size, vertical>(kernel, \
width, \
height, \
src, \
src_stride, \
src_line_stride, \
dst, \
dst_stride); \
break;
static_convolution(1) static_convolution(2) static_convolution(3)
static_convolution(4) static_convolution(5) static_convolution(6)
static_convolution(7)
#undef static_convolution
default:
int dynamic_size = kernel.size() / 2;
for (int y = 0; y < height; ++y) {
for (int x = 0; x < width; ++x) {
double sum = 0;
// Slow path: this loop cannot be unrolled.
for (int k = -dynamic_size; k <= dynamic_size; ++k) {
if (vertical) {
if (y + k >= 0 && y + k < height) {
sum += src[k * src_line_stride] *
kernel(2 * dynamic_size - (k + dynamic_size));
}
} else {
if (x + k >= 0 && x + k < width) {
sum += src[k * src_stride] *
kernel(2 * dynamic_size - (k + dynamic_size));
}
default : int dynamic_size = kernel.size() / 2;
for (int y = 0; y < height; ++y) {
for (int x = 0; x < width; ++x) {
double sum = 0;
// Slow path: this loop cannot be unrolled.
for (int k = -dynamic_size; k <= dynamic_size; ++k) {
if (vertical) {
if (y + k >= 0 && y + k < height) {
sum += src[k * src_line_stride] *
kernel(2 * dynamic_size - (k + dynamic_size));
}
} else {
if (x + k >= 0 && x + k < width) {
sum += src[k * src_stride] *
kernel(2 * dynamic_size - (k + dynamic_size));
}
}
dst[0] = static_cast<float>(sum);
src += src_stride;
dst += dst_stride;
}
dst[0] = static_cast<float>(sum);
src += src_stride;
dst += dst_stride;
}
}
}
}
void ConvolveHorizontal(const Array3Df &in,
const Vec &kernel,
Array3Df *out_pointer,
void ConvolveHorizontal(const Array3Df& in,
const Vec& kernel,
Array3Df* out_pointer,
int plane) {
Convolve<false>(in, kernel, out_pointer, plane);
}
void ConvolveVertical(const Array3Df &in,
const Vec &kernel,
Array3Df *out_pointer,
void ConvolveVertical(const Array3Df& in,
const Vec& kernel,
Array3Df* out_pointer,
int plane) {
Convolve<true>(in, kernel, out_pointer, plane);
}
void ConvolveGaussian(const Array3Df &in,
double sigma,
Array3Df *out_pointer) {
void ConvolveGaussian(const Array3Df& in, double sigma, Array3Df* out_pointer) {
Vec kernel, derivative;
ComputeGaussianKernel(sigma, &kernel, &derivative);
@@ -182,10 +188,10 @@ void ConvolveGaussian(const Array3Df &in,
ConvolveHorizontal(tmp, kernel, out_pointer);
}
void ImageDerivatives(const Array3Df &in,
void ImageDerivatives(const Array3Df& in,
double sigma,
Array3Df *gradient_x,
Array3Df *gradient_y) {
Array3Df* gradient_x,
Array3Df* gradient_y) {
Vec kernel, derivative;
ComputeGaussianKernel(sigma, &kernel, &derivative);
Array3Df tmp;
@@ -199,11 +205,11 @@ void ImageDerivatives(const Array3Df &in,
ConvolveVertical(tmp, derivative, gradient_y);
}
void BlurredImageAndDerivatives(const Array3Df &in,
void BlurredImageAndDerivatives(const Array3Df& in,
double sigma,
Array3Df *blurred_image,
Array3Df *gradient_x,
Array3Df *gradient_y) {
Array3Df* blurred_image,
Array3Df* gradient_x,
Array3Df* gradient_y) {
Vec kernel, derivative;
ComputeGaussianKernel(sigma, &kernel, &derivative);
Array3Df tmp;
@@ -224,9 +230,9 @@ void BlurredImageAndDerivatives(const Array3Df &in,
// image, and store the results in three channels. Since the blurred value and
// gradients are closer in memory, this leads to better performance if all
// three values are needed at the same time.
void BlurredImageAndDerivativesChannels(const Array3Df &in,
void BlurredImageAndDerivativesChannels(const Array3Df& in,
double sigma,
Array3Df *blurred_and_gradxy) {
Array3Df* blurred_and_gradxy) {
assert(in.Depth() == 1);
Vec kernel, derivative;
@@ -246,10 +252,10 @@ void BlurredImageAndDerivativesChannels(const Array3Df &in,
ConvolveVertical(tmp, derivative, blurred_and_gradxy, 2);
}
void BoxFilterHorizontal(const Array3Df &in,
void BoxFilterHorizontal(const Array3Df& in,
int window_size,
Array3Df *out_pointer) {
Array3Df &out = *out_pointer;
Array3Df* out_pointer) {
Array3Df& out = *out_pointer;
out.ResizeLike(in);
int half_width = (window_size - 1) / 2;
@@ -266,7 +272,7 @@ void BoxFilterHorizontal(const Array3Df &in,
out(i, j, k) = sum;
}
// Fill interior.
for (int j = half_width + 1; j < in.Width()-half_width; ++j) {
for (int j = half_width + 1; j < in.Width() - half_width; ++j) {
sum -= in(i, j - half_width - 1, k);
sum += in(i, j + half_width, k);
out(i, j, k) = sum;
@@ -280,10 +286,10 @@ void BoxFilterHorizontal(const Array3Df &in,
}
}
void BoxFilterVertical(const Array3Df &in,
void BoxFilterVertical(const Array3Df& in,
int window_size,
Array3Df *out_pointer) {
Array3Df &out = *out_pointer;
Array3Df* out_pointer) {
Array3Df& out = *out_pointer;
out.ResizeLike(in);
int half_width = (window_size - 1) / 2;
@@ -300,7 +306,7 @@ void BoxFilterVertical(const Array3Df &in,
out(i, j, k) = sum;
}
// Fill interior.
for (int i = half_width + 1; i < in.Height()-half_width; ++i) {
for (int i = half_width + 1; i < in.Height() - half_width; ++i) {
sum -= in(i - half_width - 1, j, k);
sum += in(i + half_width, j, k);
out(i, j, k) = sum;
@@ -314,9 +320,7 @@ void BoxFilterVertical(const Array3Df &in,
}
}
void BoxFilter(const Array3Df &in,
int box_width,
Array3Df *out) {
void BoxFilter(const Array3Df& in, int box_width, Array3Df* out) {
Array3Df tmp;
BoxFilterHorizontal(in, box_width, &tmp);
BoxFilterVertical(tmp, box_width, out);
@@ -327,17 +331,17 @@ void LaplaceFilter(unsigned char* src,
int width,
int height,
int strength) {
for (int y = 1; y < height-1; y++)
for (int x = 1; x < width-1; x++) {
const unsigned char* s = &src[y*width+x];
int l = 128 +
s[-width-1] + s[-width] + s[-width+1] +
s[1] - 8*s[0] + s[1] +
s[ width-1] + s[ width] + s[ width+1];
int d = ((256-strength)*s[0] + strength*l) / 256;
if (d < 0) d=0;
if (d > 255) d=255;
dst[y*width+x] = d;
for (int y = 1; y < height - 1; y++)
for (int x = 1; x < width - 1; x++) {
const unsigned char* s = &src[y * width + x];
int l = 128 + s[-width - 1] + s[-width] + s[-width + 1] + s[1] -
8 * s[0] + s[1] + s[width - 1] + s[width] + s[width + 1];
int d = ((256 - strength) * s[0] + strength * l) / 256;
if (d < 0)
d = 0;
if (d > 255)
d = 255;
dst[y * width + x] = d;
}
}
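
A short usage sketch of the separable filtering entry points above, assuming only the signatures shown in these hunks (the sigma value is illustrative):

#include "libmv/image/array_nd.h"
#include "libmv/image/convolve.h"

// Blur with a Gaussian, and take x/y derivatives by convolving with the
// Gaussian kernel along one axis and with its derivative along the other.
void GradientSketch(const libmv::Array3Df& image) {
  libmv::Array3Df blurred, gx, gy;
  libmv::ConvolveGaussian(image, /*sigma=*/0.9, &blurred);
  libmv::ImageDerivatives(image, /*sigma=*/0.9, &gx, &gy);
}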

View File

@@ -30,70 +30,71 @@ namespace libmv {
// Zero mean Gaussian.
inline double Gaussian(double x, double sigma) {
return 1/sqrt(2*M_PI*sigma*sigma) * exp(-(x*x/2/sigma/sigma));
return 1 / sqrt(2 * M_PI * sigma * sigma) * exp(-(x * x / 2 / sigma / sigma));
}
// 2D gaussian (zero mean)
// (9) in http://mathworld.wolfram.com/GaussianFunction.html
inline double Gaussian2D(double x, double y, double sigma) {
return 1.0/(2.0*M_PI*sigma*sigma) * exp( -(x*x+y*y)/(2.0*sigma*sigma));
return 1.0 / (2.0 * M_PI * sigma * sigma) *
exp(-(x * x + y * y) / (2.0 * sigma * sigma));
}
inline double GaussianDerivative(double x, double sigma) {
return -x / sigma / sigma * Gaussian(x, sigma);
}
// Solve the inverse of the Gaussian for positive x.
inline double GaussianInversePositive(double y, double sigma) {
return sqrt(-2 * sigma * sigma * log(y * sigma * sqrt(2*M_PI)));
return sqrt(-2 * sigma * sigma * log(y * sigma * sqrt(2 * M_PI)));
}
void ComputeGaussianKernel(double sigma, Vec *kernel, Vec *derivative);
void ConvolveHorizontal(const FloatImage &in,
const Vec &kernel,
FloatImage *out_pointer,
void ComputeGaussianKernel(double sigma, Vec* kernel, Vec* derivative);
void ConvolveHorizontal(const FloatImage& in,
const Vec& kernel,
FloatImage* out_pointer,
int plane = -1);
void ConvolveVertical(const FloatImage &in,
const Vec &kernel,
FloatImage *out_pointer,
void ConvolveVertical(const FloatImage& in,
const Vec& kernel,
FloatImage* out_pointer,
int plane = -1);
void ConvolveGaussian(const FloatImage &in,
void ConvolveGaussian(const FloatImage& in,
double sigma,
FloatImage *out_pointer);
FloatImage* out_pointer);
void ImageDerivatives(const FloatImage &in,
void ImageDerivatives(const FloatImage& in,
double sigma,
FloatImage *gradient_x,
FloatImage *gradient_y);
FloatImage* gradient_x,
FloatImage* gradient_y);
void BlurredImageAndDerivatives(const FloatImage &in,
void BlurredImageAndDerivatives(const FloatImage& in,
double sigma,
FloatImage *blurred_image,
FloatImage *gradient_x,
FloatImage *gradient_y);
FloatImage* blurred_image,
FloatImage* gradient_x,
FloatImage* gradient_y);
// Blur and take the gradients of an image, storing the results inside the
// three channels of blurred_and_gradxy.
void BlurredImageAndDerivativesChannels(const FloatImage &in,
void BlurredImageAndDerivativesChannels(const FloatImage& in,
double sigma,
FloatImage *blurred_and_gradxy);
FloatImage* blurred_and_gradxy);
void BoxFilterHorizontal(const FloatImage &in,
void BoxFilterHorizontal(const FloatImage& in,
int window_size,
FloatImage *out_pointer);
FloatImage* out_pointer);
void BoxFilterVertical(const FloatImage &in,
void BoxFilterVertical(const FloatImage& in,
int window_size,
FloatImage *out_pointer);
FloatImage* out_pointer);
void BoxFilter(const FloatImage &in,
int box_width,
FloatImage *out);
void BoxFilter(const FloatImage& in, int box_width, FloatImage* out);
/*!
Convolve \a src into \a dst with the discrete laplacian operator.
\a src and \a dst should be \a width x \a height images.
\a strength is an interpolation coefficient (0-256) between original image and the laplacian.
\a strength is an interpolation coefficient (0-256) between original image
and the laplacian.
\note Make sure the search region is filtered with the same strength as the pattern.
\note Make sure the search region is filtered with the same strength as the
pattern.
*/
void LaplaceFilter(unsigned char* src,
unsigned char* dst,
@@ -104,4 +105,3 @@ void LaplaceFilter(unsigned char* src,
} // namespace libmv
#endif // LIBMV_IMAGE_CONVOLVE_H_
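
For reference, the closed form implemented by GaussianInversePositive above follows from solving the zero-mean Gaussian for positive x, which is how ComputeGaussianKernel picks the kernel half-width from the truncation factor:

$$ y = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)} \;\Longrightarrow\; x = \sqrt{-2\sigma^2 \,\ln\!\left(y\,\sigma\sqrt{2\pi}\right)} $$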

View File

@@ -85,26 +85,26 @@ TEST(Convolve, BlurredImageAndDerivativesChannelsHorizontalSlope) {
FloatImage image(10, 10), blurred_and_derivatives;
for (int j = 0; j < 10; ++j) {
for (int i = 0; i < 10; ++i) {
image(j, i) = 2*i;
image(j, i) = 2 * i;
}
}
BlurredImageAndDerivativesChannels(image, 0.9, &blurred_and_derivatives);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 0), 10.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 1), 2.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 2), 0.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 1), 2.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 2), 0.0, 1e-7);
}
TEST(Convolve, BlurredImageAndDerivativesChannelsVerticalSlope) {
FloatImage image(10, 10), blurred_and_derivatives;
for (int j = 0; j < 10; ++j) {
for (int i = 0; i < 10; ++i) {
image(j, i) = 2*j;
image(j, i) = 2 * j;
}
}
BlurredImageAndDerivativesChannels(image, 0.9, &blurred_and_derivatives);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 0), 10.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 1), 0.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 2), 2.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 1), 0.0, 1e-7);
EXPECT_NEAR(blurred_and_derivatives(5, 5, 2), 2.0, 1e-7);
}
} // namespace

View File

@@ -21,14 +21,14 @@
#ifndef LIBMV_IMAGE_CORRELATION_H
#define LIBMV_IMAGE_CORRELATION_H
#include "libmv/logging/logging.h"
#include "libmv/image/image.h"
#include "libmv/logging/logging.h"
namespace libmv {
inline double PearsonProductMomentCorrelation(
const FloatImage &image_and_gradient1_sampled,
const FloatImage &image_and_gradient2_sampled) {
const FloatImage& image_and_gradient1_sampled,
const FloatImage& image_and_gradient2_sampled) {
assert(image_and_gradient1_sampled.Width() ==
image_and_gradient2_sampled.Width());
assert(image_and_gradient1_sampled.Height() ==
@@ -63,9 +63,8 @@ inline double PearsonProductMomentCorrelation(
double covariance_xy = sXY - sX * sY;
double correlation = covariance_xy / sqrt(var_x * var_y);
LG << "Covariance xy: " << covariance_xy
<< ", var 1: " << var_x << ", var 2: " << var_y
<< ", correlation: " << correlation;
LG << "Covariance xy: " << covariance_xy << ", var 1: " << var_x
<< ", var 2: " << var_y << ", correlation: " << correlation;
return correlation;
}
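
The quantity logged above is the standard Pearson product-moment correlation. Assuming sX, sY, sXX, sYY, sXY are the usual per-pixel moments of the two sampled patches (which is what the expression covariance_xy = sXY - sX * sY in this hunk implies), it is:

$$ r = \frac{E[XY] - E[X]\,E[Y]}{\sqrt{\big(E[X^2] - E[X]^2\big)\big(E[Y^2] - E[Y]^2\big)}} $$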

View File

@@ -39,14 +39,11 @@ typedef Array3Ds ShortImage;
// is the best solution after all.
class Image {
public:
// Create an image from an array. The image takes ownership of the array.
Image(Array3Du *array) : array_type_(BYTE), array_(array) {}
Image(Array3Df *array) : array_type_(FLOAT), array_(array) {}
Image(Array3Du* array) : array_type_(BYTE), array_(array) {}
Image(Array3Df* array) : array_type_(FLOAT), array_(array) {}
Image(const Image &img): array_type_(NONE), array_(NULL) {
*this = img;
}
Image(const Image& img) : array_type_(NONE), array_(NULL) { *this = img; }
// Underlying data type.
enum DataType {
@@ -62,20 +59,18 @@ class Image {
int size;
switch (array_type_) {
case BYTE:
size = reinterpret_cast<Array3Du *>(array_)->MemorySizeInBytes();
break;
size = reinterpret_cast<Array3Du*>(array_)->MemorySizeInBytes();
break;
case FLOAT:
size = reinterpret_cast<Array3Df *>(array_)->MemorySizeInBytes();
break;
size = reinterpret_cast<Array3Df*>(array_)->MemorySizeInBytes();
break;
case INT:
size = reinterpret_cast<Array3Di *>(array_)->MemorySizeInBytes();
break;
size = reinterpret_cast<Array3Di*>(array_)->MemorySizeInBytes();
break;
case SHORT:
size = reinterpret_cast<Array3Ds *>(array_)->MemorySizeInBytes();
break;
default :
size = 0;
assert(0);
size = reinterpret_cast<Array3Ds*>(array_)->MemorySizeInBytes();
break;
default: size = 0; assert(0);
}
size += sizeof(*this);
return size;
@@ -83,71 +78,57 @@ class Image {
~Image() {
switch (array_type_) {
case BYTE:
delete reinterpret_cast<Array3Du *>(array_);
break;
case FLOAT:
delete reinterpret_cast<Array3Df *>(array_);
break;
case INT:
delete reinterpret_cast<Array3Di *>(array_);
break;
case SHORT:
delete reinterpret_cast<Array3Ds *>(array_);
break;
default:
assert(0);
}
case BYTE: delete reinterpret_cast<Array3Du*>(array_); break;
case FLOAT: delete reinterpret_cast<Array3Df*>(array_); break;
case INT: delete reinterpret_cast<Array3Di*>(array_); break;
case SHORT: delete reinterpret_cast<Array3Ds*>(array_); break;
default: assert(0);
}
}
Image& operator= (const Image& f) {
Image& operator=(const Image& f) {
if (this != &f) {
array_type_ = f.array_type_;
switch (array_type_) {
case BYTE:
delete reinterpret_cast<Array3Du *>(array_);
array_ = new Array3Du(*(Array3Du *)f.array_);
break;
delete reinterpret_cast<Array3Du*>(array_);
array_ = new Array3Du(*(Array3Du*)f.array_);
break;
case FLOAT:
delete reinterpret_cast<Array3Df *>(array_);
array_ = new Array3Df(*(Array3Df *)f.array_);
break;
delete reinterpret_cast<Array3Df*>(array_);
array_ = new Array3Df(*(Array3Df*)f.array_);
break;
case INT:
delete reinterpret_cast<Array3Di *>(array_);
array_ = new Array3Di(*(Array3Di *)f.array_);
break;
delete reinterpret_cast<Array3Di*>(array_);
array_ = new Array3Di(*(Array3Di*)f.array_);
break;
case SHORT:
delete reinterpret_cast<Array3Ds *>(array_);
array_ = new Array3Ds(*(Array3Ds *)f.array_);
break;
default:
assert(0);
delete reinterpret_cast<Array3Ds*>(array_);
array_ = new Array3Ds(*(Array3Ds*)f.array_);
break;
default: assert(0);
}
}
return *this;
}
Array3Du *AsArray3Du() const {
Array3Du* AsArray3Du() const {
if (array_type_ == BYTE) {
return reinterpret_cast<Array3Du *>(array_);
return reinterpret_cast<Array3Du*>(array_);
}
return NULL;
}
Array3Df *AsArray3Df() const {
Array3Df* AsArray3Df() const {
if (array_type_ == FLOAT) {
return reinterpret_cast<Array3Df *>(array_);
return reinterpret_cast<Array3Df*>(array_);
}
return NULL;
}
private:
DataType array_type_;
BaseArray *array_;
BaseArray* array_;
};
} // namespace libmv

View File

@@ -28,7 +28,7 @@ namespace libmv {
// The factor comes from http://www.easyrgb.com/
// RGB to XYZ : Y is the luminance channel
// var_R * 0.2126 + var_G * 0.7152 + var_B * 0.0722
template<typename T>
template <typename T>
inline T RGB2GRAY(const T r, const T g, const T b) {
return static_cast<T>(r * 0.2126 + g * 0.7152 + b * 0.0722);
}
@@ -42,8 +42,8 @@ inline unsigned char RGB2GRAY<unsigned char>(const unsigned char r,
return (unsigned char)(r * 0.2126 + g * 0.7152 + b * 0.0722 +0.5);
}*/
template<class ImageIn, class ImageOut>
void Rgb2Gray(const ImageIn &imaIn, ImageOut *imaOut) {
template <class ImageIn, class ImageOut>
void Rgb2Gray(const ImageIn& imaIn, ImageOut* imaOut) {
// It is all fine to convert an RGBA image here as well,
// all the additional channels will be nicely ignored.
assert(imaIn.Depth() >= 3);
@@ -52,21 +52,22 @@ void Rgb2Gray(const ImageIn &imaIn, ImageOut *imaOut) {
// Convert each RGB pixel into Gray value (luminance)
for (int j = 0; j < imaIn.Height(); ++j) {
for (int i = 0; i < imaIn.Width(); ++i) {
(*imaOut)(j, i) = RGB2GRAY(imaIn(j, i, 0) , imaIn(j, i, 1), imaIn(j, i, 2));
for (int i = 0; i < imaIn.Width(); ++i) {
(*imaOut)(j, i) =
RGB2GRAY(imaIn(j, i, 0), imaIn(j, i, 1), imaIn(j, i, 2));
}
}
}
// Convert given float image to an unsigned char array of pixels.
template<class Image>
unsigned char *FloatImageToUCharArray(const Image &image) {
unsigned char *buffer =
template <class Image>
unsigned char* FloatImageToUCharArray(const Image& image) {
unsigned char* buffer =
new unsigned char[image.Width() * image.Height() * image.Depth()];
for (int y = 0; y < image.Height(); y++) {
for (int x = 0; x < image.Width(); x++) {
for (int d = 0; d < image.Depth(); d++) {
for (int x = 0; x < image.Width(); x++) {
for (int d = 0; d < image.Depth(); d++) {
int index = (y * image.Width() + x) * image.Depth() + d;
buffer[index] = 255.0 * image(y, x, d);
}
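
A minimal sketch of the grayscale conversion above (the weights in RGB2GRAY are the Rec. 709 luma coefficients); the header path is an assumption, the signatures are as shown in this hunk:

#include "libmv/image/array_nd.h"
#include "libmv/image/image_converter.h"  // assumed header path

// Collapse an RGB(A) float image to a single-channel luminance image.
void ToGraySketch(const libmv::Array3Df& rgb) {
  libmv::Array3Df gray(rgb.Height(), rgb.Width());
  libmv::Rgb2Gray(rgb, &gray);  // per pixel: 0.2126 R + 0.7152 G + 0.0722 B
}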

View File

@@ -34,9 +34,9 @@ namespace libmv {
/// Set the pixel in the image to the given color only if the point (xc,yc)
/// is inside the image.
template <class Image, class Color>
inline void safePutPixel(int yc, int xc, const Color & col, Image *pim) {
inline void safePutPixel(int yc, int xc, const Color& col, Image* pim) {
if (!pim)
return;
return;
if (pim->Contains(yc, xc)) {
(*pim)(yc, xc) = col;
}
@@ -45,9 +45,9 @@ inline void safePutPixel(int yc, int xc, const Color & col, Image *pim) {
/// is inside the image. This function supports multi-channel color
/// \note The color pointer col must have as many entries as the image depth
template <class Image, class Color>
inline void safePutPixel(int yc, int xc, const Color *col, Image *pim) {
inline void safePutPixel(int yc, int xc, const Color* col, Image* pim) {
if (!pim)
return;
return;
if (pim->Contains(yc, xc)) {
for (int i = 0; i < pim->Depth(); ++i)
(*pim)(yc, xc, i) = *(col + i);
@@ -59,19 +59,23 @@ inline void safePutPixel(int yc, int xc, const Color *col, Image *pim) {
// Add the rotation of the ellipse.
// As the algorithm uses symmetry we must use 4 rotations.
template <class Image, class Color>
void DrawEllipse(int xc, int yc, int radiusA, int radiusB,
const Color &col, Image *pim, double angle = 0.0) {
void DrawEllipse(int xc,
int yc,
int radiusA,
int radiusB,
const Color& col,
Image* pim,
double angle = 0.0) {
int a = radiusA;
int b = radiusB;
// Counter Clockwise rotation matrix.
double matXY[4] = { cos(angle), sin(angle),
-sin(angle), cos(angle)};
double matXY[4] = {cos(angle), sin(angle), -sin(angle), cos(angle)};
int x, y;
double d1, d2;
x = 0;
y = b;
d1 = b*b - a*a*b + a*a/4;
d1 = b * b - a * a * b + a * a / 4;
float rotX = (matXY[0] * x + matXY[1] * y);
float rotY = (matXY[2] * x + matXY[3] * y);
@@ -86,12 +90,12 @@ void DrawEllipse(int xc, int yc, int radiusA, int radiusB,
rotY = (-matXY[2] * x + matXY[3] * y);
safePutPixel(yc + rotY, xc + rotX, col, pim);
while (a*a*(y-.5) > b*b*(x+1)) {
while (a * a * (y - .5) > b * b * (x + 1)) {
if (d1 < 0) {
d1 += b*b*(2*x+3);
d1 += b * b * (2 * x + 3);
++x;
} else {
d1 += b*b*(2*x+3) + a*a*(-2*y+2);
d1 += b * b * (2 * x + 3) + a * a * (-2 * y + 2);
++x;
--y;
}
@@ -108,14 +112,14 @@ void DrawEllipse(int xc, int yc, int radiusA, int radiusB,
rotY = (-matXY[2] * x + matXY[3] * y);
safePutPixel(yc + rotY, xc + rotX, col, pim);
}
d2 = b*b*(x+.5)*(x+.5) + a*a*(y-1)*(y-1) - a*a*b*b;
d2 = b * b * (x + .5) * (x + .5) + a * a * (y - 1) * (y - 1) - a * a * b * b;
while (y > 0) {
if (d2 < 0) {
d2 += b*b*(2*x+2) + a*a*(-2*y+3);
d2 += b * b * (2 * x + 2) + a * a * (-2 * y + 3);
--y;
++x;
} else {
d2 += a*a*(-2*y+3);
d2 += a * a * (-2 * y + 3);
--y;
}
rotX = (matXY[0] * x + matXY[1] * y);
@@ -137,23 +141,23 @@ void DrawEllipse(int xc, int yc, int radiusA, int radiusB,
// So it's better to use the Andres method.
// http://fr.wikipedia.org/wiki/Algorithme_de_tracé_de_cercle_d'Andres.
template <class Image, class Color>
void DrawCircle(int x, int y, int radius, const Color &col, Image *pim) {
Image &im = *pim;
if ( im.Contains(y + radius, x + radius)
|| im.Contains(y + radius, x - radius)
|| im.Contains(y - radius, x + radius)
|| im.Contains(y - radius, x - radius)) {
void DrawCircle(int x, int y, int radius, const Color& col, Image* pim) {
Image& im = *pim;
if (im.Contains(y + radius, x + radius) ||
im.Contains(y + radius, x - radius) ||
im.Contains(y - radius, x + radius) ||
im.Contains(y - radius, x - radius)) {
int x1 = 0;
int y1 = radius;
int d = radius - 1;
while (y1 >= x1) {
// Draw the point for each octant.
safePutPixel( y1 + y, x1 + x, col, pim);
safePutPixel( x1 + y, y1 + x, col, pim);
safePutPixel( y1 + y, -x1 + x, col, pim);
safePutPixel( x1 + y, -y1 + x, col, pim);
safePutPixel(-y1 + y, x1 + x, col, pim);
safePutPixel(-x1 + y, y1 + x, col, pim);
safePutPixel(y1 + y, x1 + x, col, pim);
safePutPixel(x1 + y, y1 + x, col, pim);
safePutPixel(y1 + y, -x1 + x, col, pim);
safePutPixel(x1 + y, -y1 + x, col, pim);
safePutPixel(-y1 + y, x1 + x, col, pim);
safePutPixel(-x1 + y, y1 + x, col, pim);
safePutPixel(-y1 + y, -x1 + x, col, pim);
safePutPixel(-x1 + y, -y1 + x, col, pim);
if (d >= 2 * x1) {
@@ -163,7 +167,7 @@ void DrawCircle(int x, int y, int radius, const Color &col, Image *pim) {
if (d <= 2 * (radius - y1)) {
d = d + 2 * y1 - 1;
y1 -= 1;
} else {
} else {
d = d + 2 * (y1 - x1 - 1);
y1 -= 1;
x1 += 1;
@@ -175,8 +179,8 @@ void DrawCircle(int x, int y, int radius, const Color &col, Image *pim) {
// Bresenham algorithm
template <class Image, class Color>
void DrawLine(int xa, int ya, int xb, int yb, const Color &col, Image *pim) {
Image &im = *pim;
void DrawLine(int xa, int ya, int xb, int yb, const Color& col, Image* pim) {
Image& im = *pim;
// If one point is outside the image
// Replace the outside point by the intersection of the line and
@@ -185,35 +189,37 @@ void DrawLine(int xa, int ya, int xb, int yb, const Color &col, Image *pim) {
int width = pim->Width();
int height = pim->Height();
const bool xdir = xa < xb, ydir = ya < yb;
float nx0 = xa, nx1 = xb, ny0 = ya, ny1 = yb,
&xleft = xdir?nx0:nx1, &yleft = xdir?ny0:ny1,
&xright = xdir?nx1:nx0, &yright = xdir?ny1:ny0,
&xup = ydir?nx0:nx1, &yup = ydir?ny0:ny1,
&xdown = ydir?nx1:nx0, &ydown = ydir?ny1:ny0;
float nx0 = xa, nx1 = xb, ny0 = ya, ny1 = yb, &xleft = xdir ? nx0 : nx1,
&yleft = xdir ? ny0 : ny1, &xright = xdir ? nx1 : nx0,
&yright = xdir ? ny1 : ny0, &xup = ydir ? nx0 : nx1,
&yup = ydir ? ny0 : ny1, &xdown = ydir ? nx1 : nx0,
&ydown = ydir ? ny1 : ny0;
if (xright < 0 || xleft >= width) return;
if (xright < 0 || xleft >= width)
return;
if (xleft < 0) {
yleft -= xleft*(yright - yleft)/(xright - xleft);
xleft = 0;
yleft -= xleft * (yright - yleft) / (xright - xleft);
xleft = 0;
}
if (xright >= width) {
yright -= (xright - width)*(yright - yleft)/(xright - xleft);
xright = width - 1;
yright -= (xright - width) * (yright - yleft) / (xright - xleft);
xright = width - 1;
}
if (ydown < 0 || yup >= height) return;
if (ydown < 0 || yup >= height)
return;
if (yup < 0) {
xup -= yup*(xdown - xup)/(ydown - yup);
yup = 0;
xup -= yup * (xdown - xup) / (ydown - yup);
yup = 0;
}
if (ydown >= height) {
xdown -= (ydown - height)*(xdown - xup)/(ydown - yup);
ydown = height - 1;
xdown -= (ydown - height) * (xdown - xup) / (ydown - yup);
ydown = height - 1;
}
xa = (int) xleft;
xb = (int) xright;
ya = (int) yleft;
yb = (int) yright;
xa = (int)xleft;
xb = (int)xright;
ya = (int)yleft;
yb = (int)yright;
}
int xbas, xhaut, ybas, yhaut;
@@ -241,7 +247,7 @@ void DrawLine(int xa, int ya, int xb, int yb, const Color &col, Image *pim) {
}
if (dy > 0) { // Positive slope will increment Y.
incrmY = 1;
} else { // Negative slope.
} else { // Negative slope.
incrmY = -1;
}
if (dx >= dy) {
@@ -255,9 +261,9 @@ void DrawLine(int xa, int ya, int xb, int yb, const Color &col, Image *pim) {
x += incrmX;
if (dp <= 0) { // Go in direction of the South Pixel.
dp += S;
} else { // Go to the North.
} else { // Go to the North.
dp += N;
y+=incrmY;
y += incrmY;
}
}
} else {
@@ -271,7 +277,7 @@ void DrawLine(int xa, int ya, int xb, int yb, const Color &col, Image *pim) {
y += incrmY;
if (dp <= 0) { // Go in direction of the South Pixel.
dp += S;
} else { // Go to the North.
} else { // Go to the North.
dp += N;
x += incrmX;
}
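
A short usage sketch of the drawing helpers above; the header path is an assumption, the template signatures are as shown in these hunks, and the coordinates and float color are illustrative:

#include "libmv/image/array_nd.h"
#include "libmv/image/image_drawing.h"  // assumed header path

// Overlay a few primitives on a single-channel float canvas; pixels that fall
// outside the image are silently skipped by safePutPixel.
void DrawOverlaySketch(libmv::Array3Df* canvas) {
  const float white = 1.0f;
  libmv::DrawCircle(50, 50, 10, white, canvas);
  libmv::DrawLine(0, 0, 99, 99, white, canvas);
  libmv::DrawEllipse(50, 50, 20, 10, white, canvas, /*angle=*/0.3);
}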

View File

@@ -23,20 +23,20 @@
#include "libmv/image/image.h"
#include "testing/testing.h"
using libmv::Image;
using libmv::Array3Df;
using libmv::Image;
namespace {
TEST(Image, SimpleImageAccessors) {
Array3Df *array = new Array3Df(2, 3);
Array3Df* array = new Array3Df(2, 3);
Image image(array);
EXPECT_EQ(array, image.AsArray3Df());
EXPECT_TRUE(NULL == image.AsArray3Du());
}
TEST(Image, MemorySizeInBytes) {
Array3Df *array = new Array3Df(2, 3);
Array3Df* array = new Array3Df(2, 3);
Image image(array);
int size = sizeof(image) + array->MemorySizeInBytes();
EXPECT_EQ(size, image.MemorySizeInBytes());

View File

@@ -26,17 +26,14 @@
namespace libmv {
/// Nearest neighbor interpolation.
template<typename T>
inline T SampleNearest(const Array3D<T> &image,
float y, float x, int v = 0) {
template <typename T>
inline T SampleNearest(const Array3D<T>& image, float y, float x, int v = 0) {
const int i = int(round(y));
const int j = int(round(x));
return image(i, j, v);
}
inline void LinearInitAxis(float x, int size,
int *x1, int *x2,
float *dx) {
inline void LinearInitAxis(float x, int size, int* x1, int* x2, float* dx) {
const int ix = static_cast<int>(x);
if (ix < 0) {
*x1 = 0;
@@ -54,32 +51,32 @@ inline void LinearInitAxis(float x, int size,
}
/// Linear interpolation.
template<typename T>
inline T SampleLinear(const Array3D<T> &image, float y, float x, int v = 0) {
template <typename T>
inline T SampleLinear(const Array3D<T>& image, float y, float x, int v = 0) {
int x1, y1, x2, y2;
float dx, dy;
LinearInitAxis(y, image.Height(), &y1, &y2, &dy);
LinearInitAxis(x, image.Width(), &x1, &x2, &dx);
LinearInitAxis(x, image.Width(), &x1, &x2, &dx);
const T im11 = image(y1, x1, v);
const T im12 = image(y1, x2, v);
const T im21 = image(y2, x1, v);
const T im22 = image(y2, x2, v);
return T( dy * (dx * im11 + (1.0 - dx) * im12) +
return T(dy * (dx * im11 + (1.0 - dx) * im12) +
(1 - dy) * (dx * im21 + (1.0 - dx) * im22));
}
/// Linear interpolation of all channels. The sample is assumed to have the
/// same size as the number of channels in image.
template<typename T>
inline void SampleLinear(const Array3D<T> &image, float y, float x, T *sample) {
template <typename T>
inline void SampleLinear(const Array3D<T>& image, float y, float x, T* sample) {
int x1, y1, x2, y2;
float dx, dy;
LinearInitAxis(y, image.Height(), &y1, &y2, &dy);
LinearInitAxis(x, image.Width(), &x1, &x2, &dx);
LinearInitAxis(x, image.Width(), &x1, &x2, &dx);
for (int i = 0; i < image.Depth(); ++i) {
const T im11 = image(y1, x1, i);
@@ -87,7 +84,7 @@ inline void SampleLinear(const Array3D<T> &image, float y, float x, T *sample) {
const T im21 = image(y2, x1, i);
const T im22 = image(y2, x2, i);
sample[i] = T( dy * (dx * im11 + (1.0 - dx) * im12) +
sample[i] = T(dy * (dx * im11 + (1.0 - dx) * im12) +
(1 - dy) * (dx * im21 + (1.0 - dx) * im22));
}
}
@@ -95,7 +92,7 @@ inline void SampleLinear(const Array3D<T> &image, float y, float x, T *sample) {
// Downsample all channels by 2. If the image has odd width or height, the last
// row or column is ignored.
// FIXME(MatthiasF): this implementation shouldn't be in an interface file
inline void DownsampleChannelsBy2(const Array3Df &in, Array3Df *out) {
inline void DownsampleChannelsBy2(const Array3Df& in, Array3Df* out) {
int height = in.Height() / 2;
int width = in.Width() / 2;
int depth = in.Depth();
@@ -106,10 +103,12 @@ inline void DownsampleChannelsBy2(const Array3Df &in, Array3Df *out) {
for (int r = 0; r < height; ++r) {
for (int c = 0; c < width; ++c) {
for (int k = 0; k < depth; ++k) {
// clang-format off
(*out)(r, c, k) = (in(2 * r, 2 * c, k) +
in(2 * r + 1, 2 * c, k) +
in(2 * r, 2 * c + 1, k) +
in(2 * r + 1, 2 * c + 1, k)) / 4.0f;
// clang-format on
}
}
}
@@ -117,11 +116,12 @@ inline void DownsampleChannelsBy2(const Array3Df &in, Array3Df *out) {
// Sample a region centered at x,y in image with size extending by half_width
// from x,y. Channels specifies the number of channels to sample from.
inline void SamplePattern(const FloatImage &image,
double x, double y,
int half_width,
int channels,
FloatImage *sampled) {
inline void SamplePattern(const FloatImage& image,
double x,
double y,
int half_width,
int channels,
FloatImage* sampled) {
sampled->Resize(2 * half_width + 1, 2 * half_width + 1, channels);
for (int r = -half_width; r <= half_width; ++r) {
for (int c = -half_width; c <= half_width; ++c) {
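
A usage sketch of the sampling helpers above; the header path is an assumption, the signatures are as shown in these hunks, and the subpixel position is illustrative:

#include "libmv/image/array_nd.h"
#include "libmv/image/sample.h"  // assumed header path

// Nearest-neighbour and bilinear lookups at a subpixel (y, x) position.
void SampleSketch(const libmv::Array3Df& image) {
  float nearest = libmv::SampleNearest(image, 10.3f, 20.7f);
  float bilinear = libmv::SampleLinear(image, 10.3f, 20.7f);
  (void)nearest;
  (void)bilinear;
}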

View File

@@ -32,9 +32,9 @@ TEST(Image, Nearest) {
image(1, 0) = 2;
image(1, 1) = 3;
EXPECT_EQ(0, SampleNearest(image, -0.4f, -0.4f));
EXPECT_EQ(0, SampleNearest(image, 0.4f, 0.4f));
EXPECT_EQ(3, SampleNearest(image, 0.6f, 0.6f));
EXPECT_EQ(3, SampleNearest(image, 1.4f, 1.4f));
EXPECT_EQ(0, SampleNearest(image, 0.4f, 0.4f));
EXPECT_EQ(3, SampleNearest(image, 0.6f, 0.6f));
EXPECT_EQ(3, SampleNearest(image, 1.4f, 1.4f));
}
TEST(Image, Linear) {
@@ -57,7 +57,7 @@ TEST(Image, DownsampleBy2) {
ASSERT_EQ(1, resampled_image.Height());
ASSERT_EQ(1, resampled_image.Width());
ASSERT_EQ(1, resampled_image.Depth());
EXPECT_FLOAT_EQ(6./4., resampled_image(0, 0));
EXPECT_FLOAT_EQ(6. / 4., resampled_image(0, 0));
}
TEST(Image, DownsampleBy2MultiChannel) {
@@ -82,8 +82,8 @@ TEST(Image, DownsampleBy2MultiChannel) {
ASSERT_EQ(1, resampled_image.Height());
ASSERT_EQ(1, resampled_image.Width());
ASSERT_EQ(3, resampled_image.Depth());
EXPECT_FLOAT_EQ((0+1+2+3)/4., resampled_image(0, 0, 0));
EXPECT_FLOAT_EQ((5+6+7+8)/4., resampled_image(0, 0, 1));
EXPECT_FLOAT_EQ((9+10+11+12)/4., resampled_image(0, 0, 2));
EXPECT_FLOAT_EQ((0 + 1 + 2 + 3) / 4., resampled_image(0, 0, 0));
EXPECT_FLOAT_EQ((5 + 6 + 7 + 8) / 4., resampled_image(0, 0, 1));
EXPECT_FLOAT_EQ((9 + 10 + 11 + 12) / 4., resampled_image(0, 0, 2));
}
} // namespace

View File

@@ -34,10 +34,14 @@ class Tuple {
Tuple(T initial_value) { Reset(initial_value); }
template <typename D>
Tuple(D *values) { Reset(values); }
Tuple(D* values) {
Reset(values);
}
template <typename D>
Tuple(const Tuple<D, N> &b) { Reset(b); }
Tuple(const Tuple<D, N>& b) {
Reset(b);
}
template <typename D>
Tuple& operator=(const Tuple<D, N>& b) {
@@ -46,30 +50,32 @@ class Tuple {
}
template <typename D>
void Reset(const Tuple<D, N>& b) { Reset(b.Data()); }
void Reset(const Tuple<D, N>& b) {
Reset(b.Data());
}
template <typename D>
void Reset(D *values) {
for (int i = 0;i < N; i++) {
void Reset(D* values) {
for (int i = 0; i < N; i++) {
data_[i] = T(values[i]);
}
}
// Set all tuple values to the same thing.
void Reset(T value) {
for (int i = 0;i < N; i++) {
for (int i = 0; i < N; i++) {
data_[i] = value;
}
}
// Pointer to the first element.
T *Data() { return &data_[0]; }
const T *Data() const { return &data_[0]; }
T* Data() { return &data_[0]; }
const T* Data() const { return &data_[0]; }
T &operator()(int i) { return data_[i]; }
const T &operator()(int i) const { return data_[i]; }
T& operator()(int i) { return data_[i]; }
const T& operator()(int i) const { return data_[i]; }
bool operator==(const Tuple<T, N> &other) const {
bool operator==(const Tuple<T, N>& other) const {
for (int i = 0; i < N; ++i) {
if ((*this)(i) != other(i)) {
return false;
@@ -77,9 +83,7 @@ class Tuple {
}
return true;
}
bool operator!=(const Tuple<T, N> &other) const {
return !(*this == other);
}
bool operator!=(const Tuple<T, N>& other) const { return !(*this == other); }
private:
T data_[N];

View File

@@ -24,7 +24,7 @@
namespace libmv {
// HZ 4.4.4 pag.109: Point conditioning (non isotropic)
void PreconditionerFromPoints(const Mat &points, Mat3 *T) {
void PreconditionerFromPoints(const Mat& points, Mat3* T) {
Vec mean, variance;
MeanAndVarianceAlongRows(points, &mean, &variance);
@@ -38,12 +38,14 @@ void PreconditionerFromPoints(const Mat &points, Mat3 *T) {
if (variance(1) < 1e-8)
yfactor = mean(1) = 1.0;
// clang-format off
*T << xfactor, 0, -xfactor * mean(0),
0, yfactor, -yfactor * mean(1),
0, 0, 1;
// clang-format on
}
// HZ 4.4.4 pag.107: Point conditioning (isotropic)
void IsotropicPreconditionerFromPoints(const Mat &points, Mat3 *T) {
void IsotropicPreconditionerFromPoints(const Mat& points, Mat3* T) {
Vec mean, variance;
MeanAndVarianceAlongRows(points, &mean, &variance);
@@ -57,14 +59,16 @@ void IsotropicPreconditionerFromPoints(const Mat &points, Mat3 *T) {
mean.setOnes();
}
// clang-format off
*T << factor, 0, -factor * mean(0),
0, factor, -factor * mean(1),
0, 0, 1;
// clang-format on
}
void ApplyTransformationToPoints(const Mat &points,
const Mat3 &T,
Mat *transformed_points) {
void ApplyTransformationToPoints(const Mat& points,
const Mat3& T,
Mat* transformed_points) {
int n = points.cols();
transformed_points->resize(2, n);
Mat3X p(3, n);
@@ -73,26 +77,24 @@ void ApplyTransformationToPoints(const Mat &points,
HomogeneousToEuclidean(p, transformed_points);
}
void NormalizePoints(const Mat &points,
Mat *normalized_points,
Mat3 *T) {
void NormalizePoints(const Mat& points, Mat* normalized_points, Mat3* T) {
PreconditionerFromPoints(points, T);
ApplyTransformationToPoints(points, *T, normalized_points);
}
void NormalizeIsotropicPoints(const Mat &points,
Mat *normalized_points,
Mat3 *T) {
void NormalizeIsotropicPoints(const Mat& points,
Mat* normalized_points,
Mat3* T) {
IsotropicPreconditionerFromPoints(points, T);
ApplyTransformationToPoints(points, *T, normalized_points);
}
// Denormalize the results. See HZ page 109.
void UnnormalizerT::Unnormalize(const Mat3 &T1, const Mat3 &T2, Mat3 *H) {
void UnnormalizerT::Unnormalize(const Mat3& T1, const Mat3& T2, Mat3* H) {
*H = T2.transpose() * (*H) * T1;
}
void UnnormalizerI::Unnormalize(const Mat3 &T1, const Mat3 &T2, Mat3 *H) {
void UnnormalizerI::Unnormalize(const Mat3& T1, const Mat3& T2, Mat3* H) {
*H = T2.inverse() * (*H) * T1;
}
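
A minimal sketch of how the conditioning above is applied before estimation; the conditioning header path is an assumption, the signatures are as declared in the header hunk that follows:

#include "libmv/multiview/conditioning.h"  // assumed header path
#include "libmv/numeric/numeric.h"

// Translate the 2xN points to zero mean and rescale them (HZ 4.4.4); T is the
// conditioning transform that produced normalized_points.
void NormalizeSketch(const libmv::Mat& points, libmv::Mat* normalized_points) {
  libmv::Mat3 T;
  libmv::NormalizePoints(points, normalized_points, &T);
}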

View File

@@ -26,32 +26,30 @@
namespace libmv {
// Point conditioning (non isotropic)
void PreconditionerFromPoints(const Mat &points, Mat3 *T);
void PreconditionerFromPoints(const Mat& points, Mat3* T);
// Point conditioning (isotropic)
void IsotropicPreconditionerFromPoints(const Mat &points, Mat3 *T);
void IsotropicPreconditionerFromPoints(const Mat& points, Mat3* T);
void ApplyTransformationToPoints(const Mat &points,
const Mat3 &T,
Mat *transformed_points);
void ApplyTransformationToPoints(const Mat& points,
const Mat3& T,
Mat* transformed_points);
void NormalizePoints(const Mat &points,
Mat *normalized_points,
Mat3 *T);
void NormalizePoints(const Mat& points, Mat* normalized_points, Mat3* T);
void NormalizeIsotropicPoints(const Mat &points,
Mat *normalized_points,
Mat3 *T);
void NormalizeIsotropicPoints(const Mat& points,
Mat* normalized_points,
Mat3* T);
/// Use inverse for unnormalize
struct UnnormalizerI {
// Denormalize the results. See HZ page 109.
static void Unnormalize(const Mat3 &T1, const Mat3 &T2, Mat3 *H);
static void Unnormalize(const Mat3& T1, const Mat3& T2, Mat3* H);
};
/// Use transpose for unnormalize
struct UnnormalizerT {
// Denormalize the results. See HZ page 109.
static void Unnormalize(const Mat3 &T1, const Mat3 &T2, Mat3 *H);
static void Unnormalize(const Mat3& T1, const Mat3& T2, Mat3* H);
};
} // namespace libmv

View File

@@ -23,8 +23,8 @@
#include <cmath>
#include <limits>
#include <Eigen/SVD>
#include <Eigen/Geometry>
#include <Eigen/SVD>
#include "libmv/base/vector.h"
#include "libmv/logging/logging.h"
@@ -35,9 +35,10 @@ namespace euclidean_resection {
typedef unsigned int uint;
bool EuclideanResection(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t,
bool EuclideanResection(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t,
ResectionMethod method) {
switch (method) {
case RESECTION_ANSAR_DANIILIDIS:
@@ -49,20 +50,20 @@ bool EuclideanResection(const Mat2X &x_camera,
case RESECTION_PPNP:
return EuclideanResectionPPnP(x_camera, X_world, R, t);
break;
default:
LOG(FATAL) << "Unknown resection method.";
default: LOG(FATAL) << "Unknown resection method.";
}
return false;
}
bool EuclideanResection(const Mat &x_image,
const Mat3X &X_world,
const Mat3 &K,
Mat3 *R, Vec3 *t,
bool EuclideanResection(const Mat& x_image,
const Mat3X& X_world,
const Mat3& K,
Mat3* R,
Vec3* t,
ResectionMethod method) {
CHECK(x_image.rows() == 2 || x_image.rows() == 3)
<< "Invalid size for x_image: "
<< x_image.rows() << "x" << x_image.cols();
<< "Invalid size for x_image: " << x_image.rows() << "x"
<< x_image.cols();
Mat2X x_camera;
if (x_image.rows() == 2) {
@@ -73,18 +74,15 @@ bool EuclideanResection(const Mat &x_image,
return EuclideanResection(x_camera, X_world, R, t, method);
}
void AbsoluteOrientation(const Mat3X &X,
const Mat3X &Xp,
Mat3 *R,
Vec3 *t) {
void AbsoluteOrientation(const Mat3X& X, const Mat3X& Xp, Mat3* R, Vec3* t) {
int num_points = X.cols();
Vec3 C = X.rowwise().sum() / num_points; // Centroid of X.
Vec3 C = X.rowwise().sum() / num_points; // Centroid of X.
Vec3 Cp = Xp.rowwise().sum() / num_points; // Centroid of Xp.
// Normalize the two point sets.
Mat3X Xn(3, num_points), Xpn(3, num_points);
for (int i = 0; i < num_points; ++i) {
Xn.col(i) = X.col(i) - C;
Xn.col(i) = X.col(i) - C;
Xpn.col(i) = Xp.col(i) - Cp;
}
@@ -100,10 +98,12 @@ void AbsoluteOrientation(const Mat3X &X,
double Szy = Xn.row(2).dot(Xpn.row(1));
Mat4 N;
// clang-format off
N << Sxx + Syy + Szz, Syz - Szy, Szx - Sxz, Sxy - Syx,
Syz - Szy, Sxx - Syy - Szz, Sxy + Syx, Szx + Sxz,
Szx - Sxz, Sxy + Syx, -Sxx + Syy - Szz, Syz + Szy,
Sxy - Syx, Szx + Sxz, Syz + Szy, -Sxx - Syy + Szz;
// clang-format on
// Find the unit quaternion q that maximizes qNq. It is the eigenvector
// corresponding to the largest eigenvalue.
@@ -118,6 +118,7 @@ void AbsoluteOrientation(const Mat3X &X,
double q1q3 = q(1) * q(3);
double q2q3 = q(2) * q(3);
// clang-format off
(*R) << qq(0) + qq(1) - qq(2) - qq(3),
2 * (q1q2 - q0q3),
2 * (q1q3 + q0q2),
@@ -127,6 +128,7 @@ void AbsoluteOrientation(const Mat3X &X,
2 * (q1q3 - q0q2),
2 * (q2q3 + q0q1),
qq(0) - qq(1) - qq(2) + qq(3);
// clang-format on
// Fix the handedness of the R matrix.
if (R->determinant() < 0) {
@@ -176,9 +178,7 @@ static int Sign(double value) {
// Lambda to create the constraints in equation (5) in "Linear Pose Estimation
// from Points or Lines", by Ansar, A. and Daniilidis, PAMI 2003. vol. 25, no.
// 5.
static Vec MatrixToConstraint(const Mat &A,
int num_k_columns,
int num_lambda) {
static Vec MatrixToConstraint(const Mat& A, int num_k_columns, int num_lambda) {
Vec C(num_k_columns);
C.setZero();
int idx = 0;
@@ -195,17 +195,17 @@ static Vec MatrixToConstraint(const Mat &A,
}
// Normalizes the columns of vectors.
static void NormalizeColumnVectors(Mat3X *vectors) {
static void NormalizeColumnVectors(Mat3X* vectors) {
int num_columns = vectors->cols();
for (int i = 0; i < num_columns; ++i) {
vectors->col(i).normalize();
}
}
void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R,
Vec3 *t) {
void EuclideanResectionAnsarDaniilidis(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t) {
CHECK(x_camera.cols() == X_world.cols());
CHECK(x_camera.cols() > 3);
@@ -229,14 +229,14 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
// them into the M matrix (8). Also store the initial (i, j) indices.
int row = 0;
for (int i = 0; i < num_points; ++i) {
for (int j = i+1; j < num_points; ++j) {
for (int j = i + 1; j < num_points; ++j) {
M(row, row) = -2 * x_camera_unit.col(i).dot(x_camera_unit.col(j));
M(row, num_m_rows + i) = x_camera_unit.col(i).dot(x_camera_unit.col(i));
M(row, num_m_rows + j) = x_camera_unit.col(j).dot(x_camera_unit.col(j));
Vec3 Xdiff = X_world.col(i) - X_world.col(j);
double center_to_point_distance = Xdiff.norm();
M(row, num_m_columns - 1) =
- center_to_point_distance * center_to_point_distance;
-center_to_point_distance * center_to_point_distance;
ij_index(row, 0) = i;
ij_index(row, 1) = j;
++row;
@@ -246,17 +246,17 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
}
int num_lambda = num_points + 1; // Dimension of the null space of M.
Mat V = M.jacobiSvd(Eigen::ComputeFullV).matrixV().block(0,
num_m_rows,
num_m_columns,
num_lambda);
Mat V = M.jacobiSvd(Eigen::ComputeFullV)
.matrixV()
.block(0, num_m_rows, num_m_columns, num_lambda);
// TODO(vess): The number of constraint equations in K (num_k_rows) must be
// (num_points + 1) * (num_points + 2)/2. This creates a performance issue
// for more than 4 points. It is fine for 4 points at the moment with 18
// instead of 15 equations.
int num_k_rows = num_m_rows + num_points *
(num_points*(num_points-1)/2 - num_points+1);
int num_k_rows =
num_m_rows +
num_points * (num_points * (num_points - 1) / 2 - num_points + 1);
int num_k_columns = num_lambda * (num_lambda + 1) / 2;
Mat K(num_k_rows, num_k_columns);
K.setZero();
@@ -275,8 +275,8 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
int idx4 = IJToPointIndex(i, k, num_points);
K.row(counter_k_row) =
MatrixToConstraint(V.row(idx1).transpose() * V.row(idx2)-
V.row(idx3).transpose() * V.row(idx4),
MatrixToConstraint(V.row(idx1).transpose() * V.row(idx2) -
V.row(idx3).transpose() * V.row(idx4),
num_k_columns,
num_lambda);
++counter_k_row;
@@ -296,8 +296,8 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
int idx4 = IJToPointIndex(i, k, num_points);
K.row(counter_k_row) =
MatrixToConstraint(V.row(idx1).transpose() * V.row(idx2)-
V.row(idx3).transpose() * V.row(idx4),
MatrixToConstraint(V.row(idx1).transpose() * V.row(idx2) -
V.row(idx3).transpose() * V.row(idx4),
num_k_columns,
num_lambda);
++counter_k_row;
@@ -317,14 +317,12 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
}
}
// Ensure positiveness of the largest value corresponding to lambda_ii.
L_sq = L_sq * Sign(L_sq(IJToIndex(max_L_sq_index,
max_L_sq_index,
num_lambda)));
L_sq =
L_sq * Sign(L_sq(IJToIndex(max_L_sq_index, max_L_sq_index, num_lambda)));
Vec L(num_lambda);
L(max_L_sq_index) = sqrt(L_sq(IJToIndex(max_L_sq_index,
max_L_sq_index,
num_lambda)));
L(max_L_sq_index) =
sqrt(L_sq(IJToIndex(max_L_sq_index, max_L_sq_index, num_lambda)));
for (int i = 0; i < num_lambda; ++i) {
if (i != max_L_sq_index) {
@@ -353,9 +351,9 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
}
// Selects 4 virtual control points using mean and PCA.
static void SelectControlPoints(const Mat3X &X_world,
Mat *X_centered,
Mat34 *X_control_points) {
static void SelectControlPoints(const Mat3X& X_world,
Mat* X_centered,
Mat34* X_control_points) {
size_t num_points = X_world.cols();
// The first virtual control point, C0, is the centroid.
@@ -379,13 +377,13 @@ static void SelectControlPoints(const Mat3X &X_world,
}
// Computes the barycentric coordinates for all real points
static void ComputeBarycentricCoordinates(const Mat3X &X_world_centered,
const Mat34 &X_control_points,
Mat4X *alphas) {
static void ComputeBarycentricCoordinates(const Mat3X& X_world_centered,
const Mat34& X_control_points,
Mat4X* alphas) {
size_t num_points = X_world_centered.cols();
Mat3 C2;
for (size_t c = 1; c < 4; c++) {
C2.col(c-1) = X_control_points.col(c) - X_control_points.col(0);
C2.col(c - 1) = X_control_points.col(c) - X_control_points.col(0);
}
Mat3 C2inv = C2.inverse();
@@ -401,14 +399,15 @@ static void ComputeBarycentricCoordinates(const Mat3X &X_world_centered,
// Estimates the coordinates of all real points in the camera coordinate frame
static void ComputePointsCoordinatesInCameraFrame(
const Mat4X &alphas,
const Vec4 &betas,
const Eigen::Matrix<double, 12, 12> &U,
Mat3X *X_camera) {
const Mat4X& alphas,
const Vec4& betas,
const Eigen::Matrix<double, 12, 12>& U,
Mat3X* X_camera) {
size_t num_points = alphas.cols();
// Estimates the control points in the camera reference frame.
Mat34 C2b; C2b.setZero();
Mat34 C2b;
C2b.setZero();
for (size_t cu = 0; cu < 4; cu++) {
for (size_t c = 0; c < 4; c++) {
C2b.col(c) += betas(cu) * U.block(11 - cu, c * 3, 1, 3).transpose();
@@ -436,9 +435,10 @@ static void ComputePointsCoordinatesInCameraFrame(
}
}
bool EuclideanResectionEPnP(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t) {
bool EuclideanResectionEPnP(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t) {
CHECK(x_camera.cols() == X_world.cols());
CHECK(x_camera.cols() > 3);
size_t num_points = X_world.cols();
@@ -462,6 +462,7 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
double a3 = alphas(3, c);
double ui = x_camera(0, c);
double vi = x_camera(1, c);
// clang-format off
M.block(2*c, 0, 2, 12) << a0, 0,
a0*(-ui), a1, 0,
a1*(-ui), a2, 0,
@@ -471,10 +472,11 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
a1, a1*(-vi), 0,
a2, a2*(-vi), 0,
a3, a3*(-vi);
// clang-format on
}
// TODO(julien): Avoid the transpose by rewriting the u2.block() calls.
Eigen::JacobiSVD<Mat> MtMsvd(M.transpose()*M, Eigen::ComputeFullU);
Eigen::JacobiSVD<Mat> MtMsvd(M.transpose() * M, Eigen::ComputeFullU);
Eigen::Matrix<double, 12, 12> u2 = MtMsvd.matrixU().transpose();
// Estimate the L matrix.
@@ -495,21 +497,22 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
dv2.row(3) = u2.block(10, 3, 1, 3) - u2.block(10, 6, 1, 3);
dv2.row(4) = u2.block(10, 3, 1, 3) - u2.block(10, 9, 1, 3);
dv2.row(5) = u2.block(10, 6, 1, 3) - u2.block(10, 9, 1, 3);
dv3.row(0) = u2.block(9, 0, 1, 3) - u2.block(9, 3, 1, 3);
dv3.row(1) = u2.block(9, 0, 1, 3) - u2.block(9, 6, 1, 3);
dv3.row(2) = u2.block(9, 0, 1, 3) - u2.block(9, 9, 1, 3);
dv3.row(3) = u2.block(9, 3, 1, 3) - u2.block(9, 6, 1, 3);
dv3.row(4) = u2.block(9, 3, 1, 3) - u2.block(9, 9, 1, 3);
dv3.row(5) = u2.block(9, 6, 1, 3) - u2.block(9, 9, 1, 3);
dv4.row(0) = u2.block(8, 0, 1, 3) - u2.block(8, 3, 1, 3);
dv4.row(1) = u2.block(8, 0, 1, 3) - u2.block(8, 6, 1, 3);
dv4.row(2) = u2.block(8, 0, 1, 3) - u2.block(8, 9, 1, 3);
dv4.row(3) = u2.block(8, 3, 1, 3) - u2.block(8, 6, 1, 3);
dv4.row(4) = u2.block(8, 3, 1, 3) - u2.block(8, 9, 1, 3);
dv4.row(5) = u2.block(8, 6, 1, 3) - u2.block(8, 9, 1, 3);
dv3.row(0) = u2.block(9, 0, 1, 3) - u2.block(9, 3, 1, 3);
dv3.row(1) = u2.block(9, 0, 1, 3) - u2.block(9, 6, 1, 3);
dv3.row(2) = u2.block(9, 0, 1, 3) - u2.block(9, 9, 1, 3);
dv3.row(3) = u2.block(9, 3, 1, 3) - u2.block(9, 6, 1, 3);
dv3.row(4) = u2.block(9, 3, 1, 3) - u2.block(9, 9, 1, 3);
dv3.row(5) = u2.block(9, 6, 1, 3) - u2.block(9, 9, 1, 3);
dv4.row(0) = u2.block(8, 0, 1, 3) - u2.block(8, 3, 1, 3);
dv4.row(1) = u2.block(8, 0, 1, 3) - u2.block(8, 6, 1, 3);
dv4.row(2) = u2.block(8, 0, 1, 3) - u2.block(8, 9, 1, 3);
dv4.row(3) = u2.block(8, 3, 1, 3) - u2.block(8, 6, 1, 3);
dv4.row(4) = u2.block(8, 3, 1, 3) - u2.block(8, 9, 1, 3);
dv4.row(5) = u2.block(8, 6, 1, 3) - u2.block(8, 9, 1, 3);
Eigen::Matrix<double, 6, 10> L;
for (size_t r = 0; r < 6; r++) {
// clang-format off
L.row(r) << dv1.row(r).dot(dv1.row(r)),
2.0 * dv1.row(r).dot(dv2.row(r)),
dv2.row(r).dot(dv2.row(r)),
@@ -520,19 +523,23 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
2.0 * dv2.row(r).dot(dv4.row(r)),
2.0 * dv3.row(r).dot(dv4.row(r)),
dv4.row(r).dot(dv4.row(r));
// clang-format on
}
Vec6 rho;
// clang-format off
rho << (X_control_points.col(0) - X_control_points.col(1)).squaredNorm(),
(X_control_points.col(0) - X_control_points.col(2)).squaredNorm(),
(X_control_points.col(0) - X_control_points.col(3)).squaredNorm(),
(X_control_points.col(1) - X_control_points.col(2)).squaredNorm(),
(X_control_points.col(1) - X_control_points.col(3)).squaredNorm(),
(X_control_points.col(2) - X_control_points.col(3)).squaredNorm();
// clang-format on
// There are three possible solutions based on the three approximations of L
// (betas). Below, each one is solved for then the best one is chosen.
Mat3X X_camera;
Mat3 K; K.setIdentity();
Mat3 K;
K.setIdentity();
vector<Mat3> Rs(3);
vector<Vec3> ts(3);
Vec rmse(3);
@@ -546,7 +553,7 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
//
// TODO(keir): Decide if setting this to infinity, effectively disabling the
// check, is the right approach. So far this seems the case.
double kSuccessThreshold = std::numeric_limits<double>::max();
double kSuccessThreshold = std::numeric_limits<double>::max();
// Find the first possible solution for R, t corresponding to:
// Betas = [b00 b01 b11 b02 b12 b22 b03 b13 b23 b33]
@@ -563,7 +570,7 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
if (b4(0) < 0) {
b4 = -b4;
}
b4(0) = std::sqrt(b4(0));
b4(0) = std::sqrt(b4(0));
betas << b4(0), b4(1) / b4(0), b4(2) / b4(0), b4(3) / b4(0);
ComputePointsCoordinatesInCameraFrame(alphas, betas, u2, &X_camera);
AbsoluteOrientation(X_world, X_camera, &Rs[0], &ts[0]);
@@ -669,12 +676,12 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
// TODO(julien): Improve the solutions with non-linear refinement.
return true;
}
/*
Straight from the paper:
http://www.diegm.uniud.it/fusiello/papers/3dimpvt12-b.pdf
function [R T] = ppnp(P,S,tol)
% input
% P : matrix (nx3) image coordinates in camera reference [u v 1]
@@ -708,33 +715,34 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
end
T = -R*c;
end
*/
// TODO(keir): Re-do all the variable names and add comments matching the paper.
// This implementation has too much of the terseness of the original. On the
// other hand, it did work on the first try.
bool EuclideanResectionPPnP(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t) {
bool EuclideanResectionPPnP(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t) {
int n = x_camera.cols();
Mat Z = Mat::Zero(n, n);
Vec e = Vec::Ones(n);
Mat A = Mat::Identity(n, n) - (e * e.transpose() / n);
Vec II = e / n;
Mat P(n, 3);
P.col(0) = x_camera.row(0);
P.col(1) = x_camera.row(1);
P.col(2).setConstant(1.0);
Mat S = X_world.transpose();
double error = std::numeric_limits<double>::infinity();
Mat E_old = 1000 * Mat::Ones(n, 3);
Vec3 c;
Mat E(n, 3);
int iteration = 0;
double tolerance = 1e-5;
// TODO(keir): The limit of 100 can probably be reduced, but this will require
@@ -748,20 +756,21 @@ bool EuclideanResectionPPnP(const Mat2X &x_camera,
s << 1, 1, (U * VT).determinant();
*R = U * s.asDiagonal() * VT;
Mat PR = P * *R; // n x 3
c = (S - Z*PR).transpose() * II;
Mat Y = S - e*c.transpose(); // n x 3
Vec Zmindiag = (PR * Y.transpose()).diagonal()
.cwiseQuotient(P.rowwise().squaredNorm());
c = (S - Z * PR).transpose() * II;
Mat Y = S - e * c.transpose(); // n x 3
Vec Zmindiag = (PR * Y.transpose())
.diagonal()
.cwiseQuotient(P.rowwise().squaredNorm());
for (int i = 0; i < n; ++i) {
Zmindiag[i] = std::max(Zmindiag[i], 0.0);
}
Z = Zmindiag.asDiagonal();
E = Y - Z*PR;
E = Y - Z * PR;
error = (E - E_old).norm();
LG << "PPnP error(" << (iteration++) << "): " << error;
E_old = E;
}
*t = -*R*c;
*t = -*R * c;
// TODO(keir): Figure out what the failure cases are. Is it too many
// iterations? Spend some time going through the math figuring out if there
@@ -769,6 +778,5 @@ bool EuclideanResectionPPnP(const Mat2X &x_camera,
return true;
}
} // namespace resection
} // namespace euclidean_resection
} // namespace libmv
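
For orientation, the quaternion step in AbsoluteOrientation above is Horn's closed-form solution: after centering both point sets, the optimal rotation corresponds to the unit quaternion q maximizing

$$ q^{\top} N\, q, $$

where N is the 4x4 matrix assembled from the correlation sums S_{xx}, S_{xy}, ..., S_{zz} of the centered point sets shown in the hunk; q is taken as the eigenvector of N belonging to its largest eigenvalue, R is reconstructed from the quaternion products, and the determinant check fixes the handedness.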

View File

@@ -21,8 +21,8 @@
#ifndef LIBMV_MULTIVIEW_EUCLIDEAN_RESECTION_H_
#define LIBMV_MULTIVIEW_EUCLIDEAN_RESECTION_H_
#include "libmv/numeric/numeric.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/numeric.h"
namespace libmv {
namespace euclidean_resection {
@@ -33,7 +33,7 @@ enum ResectionMethod {
// The "EPnP" algorithm by Lepetit et al.
// http://cvlab.epfl.ch/~lepetit/papers/lepetit_ijcv08.pdf
RESECTION_EPNP,
// The Procrustes PNP algorithm ("PPnP")
// http://www.diegm.uniud.it/fusiello/papers/3dimpvt12-b.pdf
RESECTION_PPNP
@@ -50,9 +50,10 @@ enum ResectionMethod {
* \param t Solution for the camera translation vector
* \param method The resection method to use.
*/
bool EuclideanResection(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t,
bool EuclideanResection(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t,
ResectionMethod method = RESECTION_EPNP);
/**
@@ -68,10 +69,11 @@ bool EuclideanResection(const Mat2X &x_camera,
* \param t Solution for the camera translation vector
* \param method Resection method
*/
bool EuclideanResection(const Mat &x_image,
const Mat3X &X_world,
const Mat3 &K,
Mat3 *R, Vec3 *t,
bool EuclideanResection(const Mat& x_image,
const Mat3X& X_world,
const Mat3& K,
Mat3* R,
Vec3* t,
ResectionMethod method = RESECTION_EPNP);
/**
@@ -84,10 +86,7 @@ bool EuclideanResection(const Mat &x_image,
* Horn, Hilden, "Closed-form solution of absolute orientation using
* orthonormal matrices"
*/
void AbsoluteOrientation(const Mat3X &X,
const Mat3X &Xp,
Mat3 *R,
Vec3 *t);
void AbsoluteOrientation(const Mat3X& X, const Mat3X& Xp, Mat3* R, Vec3* t);
/**
* Computes the extrinsic parameters, R and t for a calibrated camera from 4 or
@@ -102,9 +101,10 @@ void AbsoluteOrientation(const Mat3X &X,
* This is the algorithm described in: "Linear Pose Estimation from Points or
* Lines", by Ansar, A. and Daniilidis, PAMI 2003. vol. 25, no. 5.
*/
void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t);
void EuclideanResectionAnsarDaniilidis(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t);
/**
* Computes the extrinsic parameters, R and t for a calibrated camera from 4 or
* more 3D points and their images.
@@ -120,9 +120,10 @@ void EuclideanResectionAnsarDaniilidis(const Mat2X &x_camera,
* and F. Moreno-Noguer and P. Fua, IJCV 2009. vol. 81, no. 2
* \note: the non-linear optimization is not implemented here.
*/
bool EuclideanResectionEPnP(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t);
bool EuclideanResectionEPnP(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t);
/**
* Computes the extrinsic parameters, R and t for a calibrated camera from 4 or
@@ -137,12 +138,12 @@ bool EuclideanResectionEPnP(const Mat2X &x_camera,
* Straight from the paper:
* http://www.diegm.uniud.it/fusiello/papers/3dimpvt12-b.pdf
*/
bool EuclideanResectionPPnP(const Mat2X &x_camera,
const Mat3X &X_world,
Mat3 *R, Vec3 *t);
bool EuclideanResectionPPnP(const Mat2X& x_camera,
const Mat3X& X_world,
Mat3* R,
Vec3* t);
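A minimal usage sketch (not part of this patch) may help orient readers of this header: it resects a calibrated camera from already-normalized image points, selecting PPnP through the method enum above. The wrapper name is illustrative and the include path is assumed to be the usual libmv one.

#include "libmv/multiview/euclidean_resection.h"

// Illustrative only: resect a camera with the PPnP solver via the generic
// entry point declared above. x_camera holds 2xN normalized image points,
// X_world the corresponding 3xN world points.
bool ResectWithPPnP(const libmv::Mat2X& x_camera,
                    const libmv::Mat3X& X_world,
                    libmv::Mat3* R,
                    libmv::Vec3* t) {
  using namespace libmv::euclidean_resection;
  return EuclideanResection(x_camera, X_world, R, t, RESECTION_PPNP);
}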
} // namespace euclidean_resection
} // namespace libmv
#endif /* LIBMV_MULTIVIEW_EUCLIDEAN_RESECTION_H_ */

View File

@@ -19,9 +19,9 @@
// IN THE SOFTWARE.
#include "libmv/multiview/euclidean_resection.h"
#include "libmv/numeric/numeric.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/numeric.h"
#include "testing/testing.h"
using namespace libmv::euclidean_resection;
@@ -33,10 +33,10 @@ static void CreateCameraSystem(const Mat3& KK,
const Vec& X_distances,
const Mat3& R_input,
const Vec3& T_input,
Mat2X *x_camera,
Mat3X *X_world,
Mat3 *R_expected,
Vec3 *T_expected) {
Mat2X* x_camera,
Mat3X* X_world,
Mat3* R_expected,
Vec3* T_expected) {
int num_points = x_image.cols();
Mat3X x_unit_cam(3, num_points);
@@ -76,9 +76,9 @@ TEST(AbsoluteOrientation, QuaternionSolution) {
// Create a random translation and rotation.
Mat3 R_input;
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
Vec3 t_input;
t_input.setRandom();
@@ -109,26 +109,29 @@ TEST(EuclideanResection, Points4KnownImagePointsRandomTranslationRotation) {
image_dimensions << 1600, 1200;
Mat3 KK;
// clang-format off
KK << 2796, 0, 804,
0 , 2796, 641,
0, 0, 1;
// clang-format on
// The real image points.
int num_points = 4;
Mat3X x_image(3, num_points);
// clang-format off
x_image << 1164.06, 734.948, 749.599, 430.727,
681.386, 844.59, 496.315, 580.775,
1, 1, 1, 1;
// clang-format on
// A vector of the 4 distances to the 3D points.
Vec X_distances = 100 * Vec::Random(num_points).array().abs();
// Create the random camera motion R and t that resection should recover.
Mat3 R_input;
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
Vec3 T_input;
T_input.setRandom();
@@ -140,15 +143,21 @@ TEST(EuclideanResection, Points4KnownImagePointsRandomTranslationRotation) {
Vec3 T_expected;
Mat3X X_world;
Mat2X x_camera;
CreateCameraSystem(KK, x_image, X_distances, R_input, T_input,
&x_camera, &X_world, &R_expected, &T_expected);
CreateCameraSystem(KK,
x_image,
X_distances,
R_input,
T_input,
&x_camera,
&X_world,
&R_expected,
&T_expected);
// Finally, run the code under test.
Mat3 R_output;
Vec3 T_output;
EuclideanResection(x_camera, X_world,
&R_output, &T_output,
RESECTION_ANSAR_DANIILIDIS);
EuclideanResection(
x_camera, X_world, &R_output, &T_output, RESECTION_ANSAR_DANIILIDIS);
EXPECT_MATRIX_NEAR(T_output, T_expected, 1e-5);
EXPECT_MATRIX_NEAR(R_output, R_expected, 1e-7);
@@ -173,9 +182,11 @@ TEST(EuclideanResection, Points4KnownImagePointsRandomTranslationRotation) {
// TODO(jmichot): Reduce the code duplication here with the code above.
TEST(EuclideanResection, Points6AllRandomInput) {
Mat3 KK;
// clang-format off
KK << 2796, 0, 804,
0 , 2796, 641,
0, 0, 1;
// clang-format on
// Create random image points for a 1600x1200 image.
int w = 1600;
@@ -192,9 +203,9 @@ TEST(EuclideanResection, Points6AllRandomInput) {
// Create the random camera motion R and t that resection should recover.
Mat3 R_input;
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY())
* Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
R_input = Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitY()) *
Eigen::AngleAxisd(rand(), Eigen::Vector3d::UnitZ());
Vec3 T_input;
T_input.setRandom();
@@ -204,33 +215,36 @@ TEST(EuclideanResection, Points6AllRandomInput) {
Mat3 R_expected;
Vec3 T_expected;
Mat3X X_world;
CreateCameraSystem(KK, x_image, X_distances, R_input, T_input,
&x_camera, &X_world, &R_expected, &T_expected);
CreateCameraSystem(KK,
x_image,
X_distances,
R_input,
T_input,
&x_camera,
&X_world,
&R_expected,
&T_expected);
// Test each of the resection methods.
{
Mat3 R_output;
Vec3 T_output;
EuclideanResection(x_camera, X_world,
&R_output, &T_output,
RESECTION_ANSAR_DANIILIDIS);
EuclideanResection(
x_camera, X_world, &R_output, &T_output, RESECTION_ANSAR_DANIILIDIS);
EXPECT_MATRIX_NEAR(T_output, T_expected, 1e-5);
EXPECT_MATRIX_NEAR(R_output, R_expected, 1e-7);
}
{
Mat3 R_output;
Vec3 T_output;
EuclideanResection(x_camera, X_world,
&R_output, &T_output,
RESECTION_EPNP);
EuclideanResection(x_camera, X_world, &R_output, &T_output, RESECTION_EPNP);
EXPECT_MATRIX_NEAR(T_output, T_expected, 1e-5);
EXPECT_MATRIX_NEAR(R_output, R_expected, 1e-7);
}
{
Mat3 R_output;
Vec3 T_output;
EuclideanResection(x_image, X_world, KK,
&R_output, &T_output);
EuclideanResection(x_image, X_world, KK, &R_output, &T_output);
EXPECT_MATRIX_NEAR(T_output, T_expected, 1e-5);
EXPECT_MATRIX_NEAR(R_output, R_expected, 1e-7);
}

View File

@@ -22,15 +22,15 @@
#include "ceres/ceres.h"
#include "libmv/logging/logging.h"
#include "libmv/numeric/numeric.h"
#include "libmv/numeric/poly.h"
#include "libmv/multiview/conditioning.h"
#include "libmv/multiview/projection.h"
#include "libmv/multiview/triangulation.h"
#include "libmv/numeric/numeric.h"
#include "libmv/numeric/poly.h"
namespace libmv {
static void EliminateRow(const Mat34 &P, int row, Mat *X) {
static void EliminateRow(const Mat34& P, int row, Mat* X) {
X->resize(2, 4);
int first_row = (row + 1) % 3;
@@ -42,7 +42,7 @@ static void EliminateRow(const Mat34 &P, int row, Mat *X) {
}
}
void ProjectionsFromFundamental(const Mat3 &F, Mat34 *P1, Mat34 *P2) {
void ProjectionsFromFundamental(const Mat3& F, Mat34* P1, Mat34* P2) {
*P1 << Mat3::Identity(), Vec3::Zero();
Vec3 e2;
Mat3 Ft = F.transpose();
@@ -51,7 +51,7 @@ void ProjectionsFromFundamental(const Mat3 &F, Mat34 *P1, Mat34 *P2) {
}
// Adapted from vgg_F_from_P.
void FundamentalFromProjections(const Mat34 &P1, const Mat34 &P2, Mat3 *F) {
void FundamentalFromProjections(const Mat34& P1, const Mat34& P2, Mat3* F) {
Mat X[3];
Mat Y[3];
Mat XY;
@@ -71,7 +71,7 @@ void FundamentalFromProjections(const Mat34 &P1, const Mat34 &P2, Mat3 *F) {
// HZ 11.1 pag.279 (x1 = x, x2 = x')
// http://www.cs.unc.edu/~marc/tutorial/node54.html
static double EightPointSolver(const Mat &x1, const Mat &x2, Mat3 *F) {
static double EightPointSolver(const Mat& x1, const Mat& x2, Mat3* F) {
DCHECK_EQ(x1.rows(), 2);
DCHECK_GE(x1.cols(), 8);
DCHECK_EQ(x1.rows(), x2.rows());
@@ -98,7 +98,7 @@ static double EightPointSolver(const Mat &x1, const Mat &x2, Mat3 *F) {
}
// HZ 11.1.1 pag.280
void EnforceFundamentalRank2Constraint(Mat3 *F) {
void EnforceFundamentalRank2Constraint(Mat3* F) {
Eigen::JacobiSVD<Mat3> USV(*F, Eigen::ComputeFullU | Eigen::ComputeFullV);
Vec3 d = USV.singularValues();
d(2) = 0.0;
@@ -106,9 +106,7 @@ void EnforceFundamentalRank2Constraint(Mat3 *F) {
}
// HZ 11.2 pag.281 (x1 = x, x2 = x')
double NormalizedEightPointSolver(const Mat &x1,
const Mat &x2,
Mat3 *F) {
double NormalizedEightPointSolver(const Mat& x1, const Mat& x2, Mat3* F) {
DCHECK_EQ(x1.rows(), 2);
DCHECK_GE(x1.cols(), 8);
DCHECK_EQ(x1.rows(), x2.rows());
@@ -135,9 +133,9 @@ double NormalizedEightPointSolver(const Mat &x1,
// Seven-point algorithm.
// http://www.cs.unc.edu/~marc/tutorial/node55.html
double FundamentalFrom7CorrespondencesLinear(const Mat &x1,
const Mat &x2,
std::vector<Mat3> *F) {
double FundamentalFrom7CorrespondencesLinear(const Mat& x1,
const Mat& x2,
std::vector<Mat3>* F) {
DCHECK_EQ(x1.rows(), 2);
DCHECK_EQ(x1.cols(), 7);
DCHECK_EQ(x1.rows(), x2.rows());
@@ -169,25 +167,29 @@ double FundamentalFrom7CorrespondencesLinear(const Mat &x1,
// Then, use the condition det(F) = 0 to determine F. In other words, solve
// det(F1 + a*F2) = 0 for a.
double a = F1(0, 0), j = F2(0, 0),
b = F1(0, 1), k = F2(0, 1),
c = F1(0, 2), l = F2(0, 2),
d = F1(1, 0), m = F2(1, 0),
e = F1(1, 1), n = F2(1, 1),
f = F1(1, 2), o = F2(1, 2),
g = F1(2, 0), p = F2(2, 0),
h = F1(2, 1), q = F2(2, 1),
i = F1(2, 2), r = F2(2, 2);
double a = F1(0, 0), j = F2(0, 0);
double b = F1(0, 1), k = F2(0, 1);
double c = F1(0, 2), l = F2(0, 2);
double d = F1(1, 0), m = F2(1, 0);
double e = F1(1, 1), n = F2(1, 1);
double f = F1(1, 2), o = F2(1, 2);
double g = F1(2, 0), p = F2(2, 0);
double h = F1(2, 1), q = F2(2, 1);
double i = F1(2, 2), r = F2(2, 2);
// Run fundamental_7point_coeffs.py to get the below coefficients.
// The coefficients are in ascending powers of alpha, i.e. P[N]*x^N.
double P[4] = {
a*e*i + b*f*g + c*d*h - a*f*h - b*d*i - c*e*g,
a*e*r + a*i*n + b*f*p + b*g*o + c*d*q + c*h*m + d*h*l + e*i*j + f*g*k -
a*f*q - a*h*o - b*d*r - b*i*m - c*e*p - c*g*n - d*i*k - e*g*l - f*h*j,
a*n*r + b*o*p + c*m*q + d*l*q + e*j*r + f*k*p + g*k*o + h*l*m + i*j*n -
a*o*q - b*m*r - c*n*p - d*k*r - e*l*p - f*j*q - g*l*n - h*j*o - i*k*m,
j*n*r + k*o*p + l*m*q - j*o*q - k*m*r - l*n*p,
a * e * i + b * f * g + c * d * h - a * f * h - b * d * i - c * e * g,
a * e * r + a * i * n + b * f * p + b * g * o + c * d * q + c * h * m +
d * h * l + e * i * j + f * g * k - a * f * q - a * h * o -
b * d * r - b * i * m - c * e * p - c * g * n - d * i * k -
e * g * l - f * h * j,
a * n * r + b * o * p + c * m * q + d * l * q + e * j * r + f * k * p +
g * k * o + h * l * m + i * j * n - a * o * q - b * m * r -
c * n * p - d * k * r - e * l * p - f * j * q - g * l * n -
h * j * o - i * k * m,
j * n * r + k * o * p + l * m * q - j * o * q - k * m * r - l * n * p,
};
// Solve for the roots of P[3]*x^3 + P[2]*x^2 + P[1]*x + P[0] = 0.
@@ -195,15 +197,15 @@ double FundamentalFrom7CorrespondencesLinear(const Mat &x1,
int num_roots = SolveCubicPolynomial(P, roots);
// Build the fundamental matrix for each solution.
for (int kk = 0; kk < num_roots; ++kk) {
for (int kk = 0; kk < num_roots; ++kk) {
F->push_back(F1 + roots[kk] * F2);
}
return s;
}
double FundamentalFromCorrespondences7Point(const Mat &x1,
const Mat &x2,
std::vector<Mat3> *F) {
double FundamentalFromCorrespondences7Point(const Mat& x1,
const Mat& x2,
std::vector<Mat3>* F) {
DCHECK_EQ(x1.rows(), 2);
DCHECK_GE(x1.cols(), 7);
DCHECK_EQ(x1.rows(), x2.rows());
@@ -218,25 +220,25 @@ double FundamentalFromCorrespondences7Point(const Mat &x1,
ApplyTransformationToPoints(x2, T2, &x2_normalized);
// Estimate the fundamental matrix.
double smaller_singular_value =
FundamentalFrom7CorrespondencesLinear(x1_normalized, x2_normalized, &(*F));
double smaller_singular_value = FundamentalFrom7CorrespondencesLinear(
x1_normalized, x2_normalized, &(*F));
for (int k = 0; k < F->size(); ++k) {
Mat3 & Fmat = (*F)[k];
Mat3& Fmat = (*F)[k];
// Denormalize the fundamental matrix.
Fmat = T2.transpose() * Fmat * T1;
}
return smaller_singular_value;
}
void NormalizeFundamental(const Mat3 &F, Mat3 *F_normalized) {
void NormalizeFundamental(const Mat3& F, Mat3* F_normalized) {
*F_normalized = F / FrobeniusNorm(F);
if ((*F_normalized)(2, 2) < 0) {
*F_normalized *= -1;
}
}
double SampsonDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2) {
double SampsonDistance(const Mat& F, const Vec2& x1, const Vec2& x2) {
Vec3 x(x1(0), x1(1), 1.0);
Vec3 y(x2(0), x2(1), 1.0);
@@ -244,11 +246,11 @@ double SampsonDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2) {
Vec3 Ft_y = F.transpose() * y;
double y_F_x = y.dot(F_x);
return Square(y_F_x) / ( F_x.head<2>().squaredNorm()
+ Ft_y.head<2>().squaredNorm());
return Square(y_F_x) /
(F_x.head<2>().squaredNorm() + Ft_y.head<2>().squaredNorm());
}
double SymmetricEpipolarDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2) {
double SymmetricEpipolarDistance(const Mat& F, const Vec2& x1, const Vec2& x2) {
Vec3 x(x1(0), x1(1), 1.0);
Vec3 y(x2(0), x2(1), 1.0);
@@ -256,43 +258,40 @@ double SymmetricEpipolarDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2) {
Vec3 Ft_y = F.transpose() * y;
double y_F_x = y.dot(F_x);
return Square(y_F_x) * ( 1 / F_x.head<2>().squaredNorm()
+ 1 / Ft_y.head<2>().squaredNorm());
return Square(y_F_x) *
(1 / F_x.head<2>().squaredNorm() + 1 / Ft_y.head<2>().squaredNorm());
}
// HZ 9.6 pag 257 (formula 9.12)
void EssentialFromFundamental(const Mat3 &F,
const Mat3 &K1,
const Mat3 &K2,
Mat3 *E) {
void EssentialFromFundamental(const Mat3& F,
const Mat3& K1,
const Mat3& K2,
Mat3* E) {
*E = K2.transpose() * F * K1;
}
// HZ 9.6 pag 257 (formula 9.12)
// Or http://ai.stanford.edu/~birch/projective/node20.html
void FundamentalFromEssential(const Mat3 &E,
const Mat3 &K1,
const Mat3 &K2,
Mat3 *F) {
void FundamentalFromEssential(const Mat3& E,
const Mat3& K1,
const Mat3& K2,
Mat3* F) {
*F = K2.inverse().transpose() * E * K1.inverse();
}
void RelativeCameraMotion(const Mat3 &R1,
const Vec3 &t1,
const Mat3 &R2,
const Vec3 &t2,
Mat3 *R,
Vec3 *t) {
void RelativeCameraMotion(const Mat3& R1,
const Vec3& t1,
const Mat3& R2,
const Vec3& t2,
Mat3* R,
Vec3* t) {
*R = R2 * R1.transpose();
*t = t2 - (*R) * t1;
}
// HZ 9.6 pag 257
void EssentialFromRt(const Mat3 &R1,
const Vec3 &t1,
const Mat3 &R2,
const Vec3 &t2,
Mat3 *E) {
void EssentialFromRt(
const Mat3& R1, const Vec3& t1, const Mat3& R2, const Vec3& t2, Mat3* E) {
Mat3 R;
Vec3 t;
RelativeCameraMotion(R1, t1, R2, t2, &R, &t);
@@ -301,11 +300,11 @@ void EssentialFromRt(const Mat3 &R1,
}
// HZ 9.6 pag 259 (Result 9.19)
void MotionFromEssential(const Mat3 &E,
std::vector<Mat3> *Rs,
std::vector<Vec3> *ts) {
void MotionFromEssential(const Mat3& E,
std::vector<Mat3>* Rs,
std::vector<Vec3>* ts) {
Eigen::JacobiSVD<Mat3> USV(E, Eigen::ComputeFullU | Eigen::ComputeFullV);
Mat3 U = USV.matrixU();
Mat3 U = USV.matrixU();
Mat3 Vt = USV.matrixV().transpose();
// Last column of U is undetermined since d = (a a 0).
@@ -318,9 +317,11 @@ void MotionFromEssential(const Mat3 &E,
}
Mat3 W;
// clang-format off
W << 0, -1, 0,
1, 0, 0,
0, 0, 1;
// clang-format on
Mat3 U_W_Vt = U * W * Vt;
Mat3 U_Wt_Vt = U * W.transpose() * Vt;
@@ -332,18 +333,18 @@ void MotionFromEssential(const Mat3 &E,
(*Rs)[3] = U_Wt_Vt;
ts->resize(4);
(*ts)[0] = U.col(2);
(*ts)[0] = U.col(2);
(*ts)[1] = -U.col(2);
(*ts)[2] = U.col(2);
(*ts)[2] = U.col(2);
(*ts)[3] = -U.col(2);
}
int MotionFromEssentialChooseSolution(const std::vector<Mat3> &Rs,
const std::vector<Vec3> &ts,
const Mat3 &K1,
const Vec2 &x1,
const Mat3 &K2,
const Vec2 &x2) {
int MotionFromEssentialChooseSolution(const std::vector<Mat3>& Rs,
const std::vector<Vec3>& ts,
const Mat3& K1,
const Vec2& x1,
const Mat3& K2,
const Vec2& x2) {
DCHECK_EQ(4, Rs.size());
DCHECK_EQ(4, ts.size());
@@ -354,8 +355,8 @@ int MotionFromEssentialChooseSolution(const std::vector<Mat3> &Rs,
t1.setZero();
P_From_KRt(K1, R1, t1, &P1);
for (int i = 0; i < 4; ++i) {
const Mat3 &R2 = Rs[i];
const Vec3 &t2 = ts[i];
const Mat3& R2 = Rs[i];
const Vec3& t2 = ts[i];
P_From_KRt(K2, R2, t2, &P2);
Vec3 X;
TriangulateDLT(P1, x1, P2, x2, &X);
@@ -369,13 +370,13 @@ int MotionFromEssentialChooseSolution(const std::vector<Mat3> &Rs,
return -1;
}
bool MotionFromEssentialAndCorrespondence(const Mat3 &E,
const Mat3 &K1,
const Vec2 &x1,
const Mat3 &K2,
const Vec2 &x2,
Mat3 *R,
Vec3 *t) {
bool MotionFromEssentialAndCorrespondence(const Mat3& E,
const Mat3& K1,
const Vec2& x1,
const Mat3& K2,
const Vec2& x2,
Mat3* R,
Vec3* t) {
std::vector<Mat3> Rs;
std::vector<Vec3> ts;
MotionFromEssential(E, &Rs, &ts);
@@ -389,7 +390,7 @@ bool MotionFromEssentialAndCorrespondence(const Mat3 &E,
}
}
void FundamentalToEssential(const Mat3 &F, Mat3 *E) {
void FundamentalToEssential(const Mat3& F, Mat3* E) {
Eigen::JacobiSVD<Mat3> svd(F, Eigen::ComputeFullU | Eigen::ComputeFullV);
// See Hartley & Zisserman page 294, result 11.1, which shows how to get the
@@ -399,8 +400,8 @@ void FundamentalToEssential(const Mat3 &F, Mat3 *E) {
double s = (a + b) / 2.0;
LG << "Initial reconstruction's rotation is non-euclidean by "
<< (((a - b) / std::max(a, b)) * 100) << "%; singular values:"
<< svd.singularValues().transpose();
<< (((a - b) / std::max(a, b)) * 100)
<< "%; singular values:" << svd.singularValues().transpose();
Vec3 diag;
diag << s, s, 0;
@@ -410,9 +411,8 @@ void FundamentalToEssential(const Mat3 &F, Mat3 *E) {
// Default settings for fundamental estimation which should be suitable
// for a wide range of use cases.
EstimateFundamentalOptions::EstimateFundamentalOptions(void) :
max_num_iterations(50),
expected_average_symmetric_distance(1e-16) {
EstimateFundamentalOptions::EstimateFundamentalOptions(void)
: max_num_iterations(50), expected_average_symmetric_distance(1e-16) {
}
namespace {
@@ -420,12 +420,11 @@ namespace {
// used for fundamental matrix refinement.
class FundamentalSymmetricEpipolarCostFunctor {
public:
FundamentalSymmetricEpipolarCostFunctor(const Vec2 &x,
const Vec2 &y)
: x_(x), y_(y) {}
FundamentalSymmetricEpipolarCostFunctor(const Vec2& x, const Vec2& y)
: x_(x), y_(y) {}
template<typename T>
bool operator()(const T *fundamental_parameters, T *residuals) const {
template <typename T>
bool operator()(const T* fundamental_parameters, T* residuals) const {
typedef Eigen::Matrix<T, 3, 3> Mat3;
typedef Eigen::Matrix<T, 3, 1> Vec3;
@@ -454,9 +453,10 @@ class FundamentalSymmetricEpipolarCostFunctor {
// average value.
class TerminationCheckingCallback : public ceres::IterationCallback {
public:
TerminationCheckingCallback(const Mat &x1, const Mat &x2,
const EstimateFundamentalOptions &options,
Mat3 *F)
TerminationCheckingCallback(const Mat& x1,
const Mat& x2,
const EstimateFundamentalOptions& options,
Mat3* F)
: options_(options), x1_(x1), x2_(x2), F_(F) {}
virtual ceres::CallbackReturnType operator()(
@@ -469,9 +469,7 @@ class TerminationCheckingCallback : public ceres::IterationCallback {
// Calculate average of symmetric epipolar distance.
double average_distance = 0.0;
for (int i = 0; i < x1_.cols(); i++) {
average_distance = SymmetricEpipolarDistance(*F_,
x1_.col(i),
x2_.col(i));
average_distance = SymmetricEpipolarDistance(*F_, x1_.col(i), x2_.col(i));
}
average_distance /= x1_.cols();
@@ -483,19 +481,19 @@ class TerminationCheckingCallback : public ceres::IterationCallback {
}
private:
const EstimateFundamentalOptions &options_;
const Mat &x1_;
const Mat &x2_;
Mat3 *F_;
const EstimateFundamentalOptions& options_;
const Mat& x1_;
const Mat& x2_;
Mat3* F_;
};
} // namespace
/* Fundamental transformation estimation. */
bool EstimateFundamentalFromCorrespondences(
const Mat &x1,
const Mat &x2,
const EstimateFundamentalOptions &options,
Mat3 *F) {
const Mat& x1,
const Mat& x2,
const EstimateFundamentalOptions& options,
Mat3* F) {
// Step 1: Algebraic fundamental estimation.
// Assume algebraic estimation always succeeds,
@@ -506,16 +504,15 @@ bool EstimateFundamentalFromCorrespondences(
// Step 2: Refine matrix using Ceres minimizer.
ceres::Problem problem;
for (int i = 0; i < x1.cols(); i++) {
FundamentalSymmetricEpipolarCostFunctor
*fundamental_symmetric_epipolar_cost_function =
new FundamentalSymmetricEpipolarCostFunctor(x1.col(i),
x2.col(i));
FundamentalSymmetricEpipolarCostFunctor*
fundamental_symmetric_epipolar_cost_function =
new FundamentalSymmetricEpipolarCostFunctor(x1.col(i), x2.col(i));
problem.AddResidualBlock(
new ceres::AutoDiffCostFunction<
FundamentalSymmetricEpipolarCostFunctor,
2, // num_residuals
9>(fundamental_symmetric_epipolar_cost_function),
new ceres::AutoDiffCostFunction<FundamentalSymmetricEpipolarCostFunctor,
2, // num_residuals
9>(
fundamental_symmetric_epipolar_cost_function),
NULL,
F->data());
}

View File

@@ -27,36 +27,34 @@
namespace libmv {
void ProjectionsFromFundamental(const Mat3 &F, Mat34 *P1, Mat34 *P2);
void FundamentalFromProjections(const Mat34 &P1, const Mat34 &P2, Mat3 *F);
void ProjectionsFromFundamental(const Mat3& F, Mat34* P1, Mat34* P2);
void FundamentalFromProjections(const Mat34& P1, const Mat34& P2, Mat3* F);
/**
* 7 points (minimal case, points coordinates must be normalized before):
*/
double FundamentalFrom7CorrespondencesLinear(const Mat &x1,
const Mat &x2,
std::vector<Mat3> *F);
double FundamentalFrom7CorrespondencesLinear(const Mat& x1,
const Mat& x2,
std::vector<Mat3>* F);
/**
* 7 points (points coordinates must be in image space):
*/
double FundamentalFromCorrespondences7Point(const Mat &x1,
const Mat &x2,
std::vector<Mat3> *F);
double FundamentalFromCorrespondences7Point(const Mat& x1,
const Mat& x2,
std::vector<Mat3>* F);
/**
* 8 points (points coordinates must be in image space):
*/
double NormalizedEightPointSolver(const Mat &x1,
const Mat &x2,
Mat3 *F);
double NormalizedEightPointSolver(const Mat& x1, const Mat& x2, Mat3* F);
/**
* Fundamental matrix utility function:
*/
void EnforceFundamentalRank2Constraint(Mat3 *F);
void EnforceFundamentalRank2Constraint(Mat3* F);
void NormalizeFundamental(const Mat3 &F, Mat3 *F_normalized);
void NormalizeFundamental(const Mat3& F, Mat3* F_normalized);
/**
* Approximate squared reprojection error.
@@ -64,14 +62,14 @@ void NormalizeFundamental(const Mat3 &F, Mat3 *F_normalized);
* See page 287 of HZ equation 11.9. This avoids triangulating the point,
* relying only on the entries in F.
*/
double SampsonDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2);
double SampsonDistance(const Mat& F, const Vec2& x1, const Vec2& x2);
/**
* Calculates the sum of the distances from the points to the epipolar lines.
*
* See page 288 of HZ equation 11.10.
*/
double SymmetricEpipolarDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2);
double SymmetricEpipolarDistance(const Mat& F, const Vec2& x1, const Vec2& x2);
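For reference, the two distances declared above reduce to the closed forms implemented in fundamental.cc earlier in this diff; writing x and y for x1 and x2 lifted to homogeneous coordinates:

  SampsonDistance(F, x1, x2)           = (y^T F x)^2 / ((F x)_1^2 + (F x)_2^2 + (F^T y)_1^2 + (F^T y)_2^2)
  SymmetricEpipolarDistance(F, x1, x2) = (y^T F x)^2 * (1 / ((F x)_1^2 + (F x)_2^2) + 1 / ((F^T y)_1^2 + (F^T y)_2^2))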
/**
* Compute the relative camera motion between two cameras.
@@ -81,32 +79,29 @@ double SymmetricEpipolarDistance(const Mat &F, const Vec2 &x1, const Vec2 &x2);
* If T1 and T2 are the camera motions, the computed relative motion is
* T = T2 T1^{-1}
*/
void RelativeCameraMotion(const Mat3 &R1,
const Vec3 &t1,
const Mat3 &R2,
const Vec3 &t2,
Mat3 *R,
Vec3 *t);
void RelativeCameraMotion(const Mat3& R1,
const Vec3& t1,
const Mat3& R2,
const Vec3& t2,
Mat3* R,
Vec3* t);
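A one-line derivation of the formula implemented in fundamental.cc above: if camera i maps a world point X to R_i * X + t_i, then requiring x_cam2 = R * x_cam1 + t for every X gives

  R = R2 * R1^T
  t = t2 - R * t1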
void EssentialFromFundamental(const Mat3 &F,
const Mat3 &K1,
const Mat3 &K2,
Mat3 *E);
void EssentialFromFundamental(const Mat3& F,
const Mat3& K1,
const Mat3& K2,
Mat3* E);
void FundamentalFromEssential(const Mat3 &E,
const Mat3 &K1,
const Mat3 &K2,
Mat3 *F);
void FundamentalFromEssential(const Mat3& E,
const Mat3& K1,
const Mat3& K2,
Mat3* F);
void EssentialFromRt(const Mat3 &R1,
const Vec3 &t1,
const Mat3 &R2,
const Vec3 &t2,
Mat3 *E);
void EssentialFromRt(
const Mat3& R1, const Vec3& t1, const Mat3& R2, const Vec3& t2, Mat3* E);
void MotionFromEssential(const Mat3 &E,
std::vector<Mat3> *Rs,
std::vector<Vec3> *ts);
void MotionFromEssential(const Mat3& E,
std::vector<Mat3>* Rs,
std::vector<Vec3>* ts);
/**
* Choose one of the four possible motion solutions from an essential matrix.
@@ -117,25 +112,25 @@ void MotionFromEssential(const Mat3 &E,
*
* \return index of the right solution or -1 if no solution.
*/
int MotionFromEssentialChooseSolution(const std::vector<Mat3> &Rs,
const std::vector<Vec3> &ts,
const Mat3 &K1,
const Vec2 &x1,
const Mat3 &K2,
const Vec2 &x2);
int MotionFromEssentialChooseSolution(const std::vector<Mat3>& Rs,
const std::vector<Vec3>& ts,
const Mat3& K1,
const Vec2& x1,
const Mat3& K2,
const Vec2& x2);
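A hedged sketch of how the two functions above combine, mirroring what MotionFromEssentialAndCorrespondence (declared next) does internally in fundamental.cc; the wrapper name and include path are assumptions.

#include <vector>
#include "libmv/multiview/fundamental.h"

// Illustrative only: enumerate the four (R, t) candidates from E and keep
// the one that places the triangulated correspondence in front of both
// cameras. A return value of -1 from the chooser means no valid solution.
bool PickMotionFromEssential(const libmv::Mat3& E,
                             const libmv::Mat3& K1, const libmv::Vec2& x1,
                             const libmv::Mat3& K2, const libmv::Vec2& x2,
                             libmv::Mat3* R, libmv::Vec3* t) {
  std::vector<libmv::Mat3> Rs;
  std::vector<libmv::Vec3> ts;
  libmv::MotionFromEssential(E, &Rs, &ts);
  int solution =
      libmv::MotionFromEssentialChooseSolution(Rs, ts, K1, x1, K2, x2);
  if (solution < 0) {
    return false;
  }
  *R = Rs[solution];
  *t = ts[solution];
  return true;
}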
bool MotionFromEssentialAndCorrespondence(const Mat3 &E,
const Mat3 &K1,
const Vec2 &x1,
const Mat3 &K2,
const Vec2 &x2,
Mat3 *R,
Vec3 *t);
bool MotionFromEssentialAndCorrespondence(const Mat3& E,
const Mat3& K1,
const Vec2& x1,
const Mat3& K2,
const Vec2& x2,
Mat3* R,
Vec3* t);
/**
* Find closest essential matrix E to fundamental F
*/
void FundamentalToEssential(const Mat3 &F, Mat3 *E);
void FundamentalToEssential(const Mat3& F, Mat3* E);
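Per the implementation in fundamental.cc shown earlier in this diff, the projection onto the essential manifold keeps the singular vectors of F and averages its two largest singular values:

  F = U * diag(s1, s2, s3) * V^T,   s = (s1 + s2) / 2
  E = U * diag(s,  s,  0 ) * V^T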
/**
* This structure contains options that control how the fundamental
@@ -170,10 +165,10 @@ struct EstimateFundamentalOptions {
* refinement.
*/
bool EstimateFundamentalFromCorrespondences(
const Mat &x1,
const Mat &x2,
const EstimateFundamentalOptions &options,
Mat3 *F);
const Mat& x1,
const Mat& x2,
const EstimateFundamentalOptions& options,
Mat3* F);
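A minimal, hedged usage sketch of the estimator declared above (not part of this patch); the defaults correspond to the values set in fundamental.cc (50 iterations, 1e-16 expected average symmetric distance), and the include path and wrapper name are assumptions.

#include "libmv/multiview/fundamental.h"

// Illustrative only: estimate F from 2xN matched image points.
bool EstimateF(const libmv::Mat& x1, const libmv::Mat& x2, libmv::Mat3* F) {
  libmv::EstimateFundamentalOptions options;  // defaults described above
  return libmv::EstimateFundamentalFromCorrespondences(x1, x2, options, F);
}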
} // namespace libmv

View File

@@ -34,12 +34,14 @@ using namespace libmv;
TEST(Fundamental, FundamentalFromProjections) {
Mat34 P1_gt, P2_gt;
// clang-format off
P1_gt << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0;
P2_gt << 1, 1, 1, 3,
0, 2, 0, 3,
0, 1, 1, 0;
// clang-format on
Mat3 F_gt;
FundamentalFromProjections(P1_gt, P2_gt, &F_gt);
@@ -55,8 +57,10 @@ TEST(Fundamental, FundamentalFromProjections) {
TEST(Fundamental, PreconditionerFromPoints) {
int n = 4;
Mat points(2, n);
// clang-format off
points << 0, 0, 1, 1,
0, 2, 1, 3;
// clang-format on
Mat3 T;
PreconditionerFromPoints(points, &T);
@@ -152,8 +156,8 @@ TEST(Fundamental, MotionFromEssentialAndCorrespondence) {
Mat3 R_estimated;
Vec3 t_estimated;
MotionFromEssentialAndCorrespondence(E, d.K1, x1, d.K2, x2,
&R_estimated, &t_estimated);
MotionFromEssentialAndCorrespondence(
E, d.K1, x1, d.K2, x2, &R_estimated, &t_estimated);
EXPECT_LE(FrobeniusDistance(R_estimated, R), 1e-8);
EXPECT_LE(DistanceL2(t_estimated, t), 1e-8);

View File

@@ -26,7 +26,7 @@
#include "libmv/multiview/homography_parameterization.h"
namespace libmv {
/** 2D Homography transformation estimation in the case that points are in
/** 2D Homography transformation estimation in the case that points are in
* euclidean coordinates.
*
* x = H y
@@ -44,10 +44,7 @@ namespace libmv {
* (-x2*a+x1*d)*y1 + (-x2*b+x1*e)*y2 + -x2*c+x1*f |0|
*/
static bool Homography2DFromCorrespondencesLinearEuc(
const Mat &x1,
const Mat &x2,
Mat3 *H,
double expected_precision) {
const Mat& x1, const Mat& x2, Mat3* H, double expected_precision) {
assert(2 == x1.rows());
assert(4 <= x1.cols());
assert(x1.rows() == x2.rows());
@@ -58,27 +55,27 @@ static bool Homography2DFromCorrespondencesLinearEuc(
Mat b = Mat::Zero(n * 3, 1);
for (int i = 0; i < n; ++i) {
int j = 3 * i;
L(j, 0) = x1(0, i); // a
L(j, 1) = x1(1, i); // b
L(j, 2) = 1.0; // c
L(j, 0) = x1(0, i); // a
L(j, 1) = x1(1, i); // b
L(j, 2) = 1.0; // c
L(j, 6) = -x2(0, i) * x1(0, i); // g
L(j, 7) = -x2(0, i) * x1(1, i); // h
b(j, 0) = x2(0, i); // i
b(j, 0) = x2(0, i); // i
++j;
L(j, 3) = x1(0, i); // d
L(j, 4) = x1(1, i); // e
L(j, 5) = 1.0; // f
L(j, 3) = x1(0, i); // d
L(j, 4) = x1(1, i); // e
L(j, 5) = 1.0; // f
L(j, 6) = -x2(1, i) * x1(0, i); // g
L(j, 7) = -x2(1, i) * x1(1, i); // h
b(j, 0) = x2(1, i); // i
b(j, 0) = x2(1, i); // i
// This ensures better stability
// TODO(julien) make a lite version without this 3rd set
++j;
L(j, 0) = x2(1, i) * x1(0, i); // a
L(j, 1) = x2(1, i) * x1(1, i); // b
L(j, 2) = x2(1, i); // c
L(j, 0) = x2(1, i) * x1(0, i); // a
L(j, 1) = x2(1, i) * x1(1, i); // b
L(j, 2) = x2(1, i); // c
L(j, 3) = -x2(0, i) * x1(0, i); // d
L(j, 4) = -x2(0, i) * x1(1, i); // e
L(j, 5) = -x2(0, i); // f
@@ -86,14 +83,15 @@ static bool Homography2DFromCorrespondencesLinearEuc(
// Solve Lx=B
Vec h = L.fullPivLu().solve(b);
Homography2DNormalizedParameterization<double>::To(h, H);
if ((L * h).isApprox(b, expected_precision)) {
if ((L * h).isApprox(b, expected_precision)) {
return true;
} else {
return false;
}
}
/** 2D Homography transformation estimation in the case that points are in
// clang-format off
/** 2D Homography transformation estimation in the case that points are in
* homogeneous coordinates.
*
* | 0 -x3 x2| |a b c| |y1| -x3*d+x2*g -x3*e+x2*h -x3*f+x2*1 |y1| (-x3*d+x2*g)*y1 (-x3*e+x2*h)*y2 (-x3*f+x2*1)*y3 |0|
@@ -101,13 +99,14 @@ static bool Homography2DFromCorrespondencesLinearEuc(
* |-x2 x1 0| |g h 1| |y3| -x2*a+x1*d -x2*b+x1*e -x2*c+x1*f |y3| (-x2*a+x1*d)*y1 (-x2*b+x1*e)*y2 (-x2*c+x1*f)*y3 |0|
* X = |a b c d e f g h|^t
*/
bool Homography2DFromCorrespondencesLinear(const Mat &x1,
const Mat &x2,
Mat3 *H,
// clang-format on
bool Homography2DFromCorrespondencesLinear(const Mat& x1,
const Mat& x2,
Mat3* H,
double expected_precision) {
if (x1.rows() == 2) {
return Homography2DFromCorrespondencesLinearEuc(x1, x2, H,
expected_precision);
return Homography2DFromCorrespondencesLinearEuc(
x1, x2, H, expected_precision);
}
assert(3 == x1.rows());
assert(4 <= x1.cols());
@@ -122,33 +121,33 @@ bool Homography2DFromCorrespondencesLinear(const Mat &x1,
Mat b = Mat::Zero(n * 3, 1);
for (int i = 0; i < n; ++i) {
int j = 3 * i;
L(j, 0) = x2(w, i) * x1(x, i); // a
L(j, 1) = x2(w, i) * x1(y, i); // b
L(j, 2) = x2(w, i) * x1(w, i); // c
L(j, 0) = x2(w, i) * x1(x, i); // a
L(j, 1) = x2(w, i) * x1(y, i); // b
L(j, 2) = x2(w, i) * x1(w, i); // c
L(j, 6) = -x2(x, i) * x1(x, i); // g
L(j, 7) = -x2(x, i) * x1(y, i); // h
b(j, 0) = x2(x, i) * x1(w, i);
b(j, 0) = x2(x, i) * x1(w, i);
++j;
L(j, 3) = x2(w, i) * x1(x, i); // d
L(j, 4) = x2(w, i) * x1(y, i); // e
L(j, 5) = x2(w, i) * x1(w, i); // f
L(j, 3) = x2(w, i) * x1(x, i); // d
L(j, 4) = x2(w, i) * x1(y, i); // e
L(j, 5) = x2(w, i) * x1(w, i); // f
L(j, 6) = -x2(y, i) * x1(x, i); // g
L(j, 7) = -x2(y, i) * x1(y, i); // h
b(j, 0) = x2(y, i) * x1(w, i);
b(j, 0) = x2(y, i) * x1(w, i);
// This ensures better stability
++j;
L(j, 0) = x2(y, i) * x1(x, i); // a
L(j, 1) = x2(y, i) * x1(y, i); // b
L(j, 2) = x2(y, i) * x1(w, i); // c
L(j, 0) = x2(y, i) * x1(x, i); // a
L(j, 1) = x2(y, i) * x1(y, i); // b
L(j, 2) = x2(y, i) * x1(w, i); // c
L(j, 3) = -x2(x, i) * x1(x, i); // d
L(j, 4) = -x2(x, i) * x1(y, i); // e
L(j, 5) = -x2(x, i) * x1(w, i); // f
}
// Solve Lx=B
Vec h = L.fullPivLu().solve(b);
if ((L * h).isApprox(b, expected_precision)) {
if ((L * h).isApprox(b, expected_precision)) {
Homography2DNormalizedParameterization<double>::To(h, H);
return true;
} else {
@@ -158,32 +157,30 @@ bool Homography2DFromCorrespondencesLinear(const Mat &x1,
// Default settings for homography estimation which should be suitable
// for a wide range of use cases.
EstimateHomographyOptions::EstimateHomographyOptions(void) :
use_normalization(true),
max_num_iterations(50),
expected_average_symmetric_distance(1e-16) {
EstimateHomographyOptions::EstimateHomographyOptions(void)
: use_normalization(true),
max_num_iterations(50),
expected_average_symmetric_distance(1e-16) {
}
namespace {
void GetNormalizedPoints(const Mat &original_points,
Mat *normalized_points,
Mat3 *normalization_matrix) {
void GetNormalizedPoints(const Mat& original_points,
Mat* normalized_points,
Mat3* normalization_matrix) {
IsotropicPreconditionerFromPoints(original_points, normalization_matrix);
ApplyTransformationToPoints(original_points,
*normalization_matrix,
normalized_points);
ApplyTransformationToPoints(
original_points, *normalization_matrix, normalized_points);
}
// Cost functor which computes symmetric geometric distance
// used for homography matrix refinement.
class HomographySymmetricGeometricCostFunctor {
public:
HomographySymmetricGeometricCostFunctor(const Vec2 &x,
const Vec2 &y)
: x_(x), y_(y) { }
HomographySymmetricGeometricCostFunctor(const Vec2& x, const Vec2& y)
: x_(x), y_(y) {}
template<typename T>
bool operator()(const T *homography_parameters, T *residuals) const {
template <typename T>
bool operator()(const T* homography_parameters, T* residuals) const {
typedef Eigen::Matrix<T, 3, 3> Mat3;
typedef Eigen::Matrix<T, 3, 1> Vec3;
@@ -221,9 +218,10 @@ class HomographySymmetricGeometricCostFunctor {
// average value.
class TerminationCheckingCallback : public ceres::IterationCallback {
public:
TerminationCheckingCallback(const Mat &x1, const Mat &x2,
const EstimateHomographyOptions &options,
Mat3 *H)
TerminationCheckingCallback(const Mat& x1,
const Mat& x2,
const EstimateHomographyOptions& options,
Mat3* H)
: options_(options), x1_(x1), x2_(x2), H_(H) {}
virtual ceres::CallbackReturnType operator()(
@@ -236,9 +234,8 @@ class TerminationCheckingCallback : public ceres::IterationCallback {
// Calculate average of symmetric geometric distance.
double average_distance = 0.0;
for (int i = 0; i < x1_.cols(); i++) {
average_distance = SymmetricGeometricDistance(*H_,
x1_.col(i),
x2_.col(i));
average_distance =
SymmetricGeometricDistance(*H_, x1_.col(i), x2_.col(i));
}
average_distance /= x1_.cols();
@@ -250,10 +247,10 @@ class TerminationCheckingCallback : public ceres::IterationCallback {
}
private:
const EstimateHomographyOptions &options_;
const Mat &x1_;
const Mat &x2_;
Mat3 *H_;
const EstimateHomographyOptions& options_;
const Mat& x1_;
const Mat& x2_;
Mat3* H_;
};
} // namespace
@@ -261,10 +258,10 @@ class TerminationCheckingCallback : public ceres::IterationCallback {
* euclidean coordinates.
*/
bool EstimateHomography2DFromCorrespondences(
const Mat &x1,
const Mat &x2,
const EstimateHomographyOptions &options,
Mat3 *H) {
const Mat& x1,
const Mat& x2,
const EstimateHomographyOptions& options,
Mat3* H) {
// TODO(sergey): Support homogeneous coordinates, not just euclidean.
assert(2 == x1.rows());
@@ -272,8 +269,7 @@ bool EstimateHomography2DFromCorrespondences(
assert(x1.rows() == x2.rows());
assert(x1.cols() == x2.cols());
Mat3 T1 = Mat3::Identity(),
T2 = Mat3::Identity();
Mat3 T1 = Mat3::Identity(), T2 = Mat3::Identity();
// Step 1: Algebraic homography estimation.
Mat x1_normalized, x2_normalized;
@@ -300,16 +296,15 @@ bool EstimateHomography2DFromCorrespondences(
// Step 2: Refine matrix using Ceres minimizer.
ceres::Problem problem;
for (int i = 0; i < x1.cols(); i++) {
HomographySymmetricGeometricCostFunctor
*homography_symmetric_geometric_cost_function =
new HomographySymmetricGeometricCostFunctor(x1.col(i),
x2.col(i));
HomographySymmetricGeometricCostFunctor*
homography_symmetric_geometric_cost_function =
new HomographySymmetricGeometricCostFunctor(x1.col(i), x2.col(i));
problem.AddResidualBlock(
new ceres::AutoDiffCostFunction<
HomographySymmetricGeometricCostFunctor,
4, // num_residuals
9>(homography_symmetric_geometric_cost_function),
new ceres::AutoDiffCostFunction<HomographySymmetricGeometricCostFunctor,
4, // num_residuals
9>(
homography_symmetric_geometric_cost_function),
NULL,
H->data());
}
@@ -335,15 +330,16 @@ bool EstimateHomography2DFromCorrespondences(
return summary.IsSolutionUsable();
}
// clang-format off
/**
* x2 ~ A * x1
* x2^t * Hi * A *x1 = 0
* H1 = H2 = H3 =
* H1 = H2 = H3 =
* | 0 0 0 1| |-x2w| |0 0 0 0| | 0 | | 0 0 1 0| |-x2z|
* | 0 0 0 0| -> | 0 | |0 0 1 0| -> |-x2z| | 0 0 0 0| -> | 0 |
* | 0 0 0 0| | 0 | |0-1 0 0| | x2y| |-1 0 0 0| | x2x|
* |-1 0 0 0| | x2x| |0 0 0 0| | 0 | | 0 0 0 0| | 0 |
* H4 = H5 = H6 =
* H4 = H5 = H6 =
* |0 0 0 0| | 0 | | 0 1 0 0| |-x2y| |0 0 0 0| | 0 |
* |0 0 0 1| -> |-x2w| |-1 0 0 0| -> | x2x| |0 0 0 0| -> | 0 |
* |0 0 0 0| | 0 | | 0 0 0 0| | 0 | |0 0 0 1| |-x2w|
@@ -361,10 +357,11 @@ bool EstimateHomography2DFromCorrespondences(
* x2^t * H6 * A *x1 = (-x2w*i +x2z*m )*x1x + (-x2w*j +x2z*n )*x1y + (-x2w*k +x2z*o )*x1z + (-x2w*l +x2z*1 )*x1w = 0
*
* X = |a b c d e f g h i j k l m n o|^t
*/
bool Homography3DFromCorrespondencesLinear(const Mat &x1,
const Mat &x2,
Mat4 *H,
*/
// clang-format on
bool Homography3DFromCorrespondencesLinear(const Mat& x1,
const Mat& x2,
Mat4* H,
double expected_precision) {
assert(4 == x1.rows());
assert(5 <= x1.cols());
@@ -379,68 +376,68 @@ bool Homography3DFromCorrespondencesLinear(const Mat &x1,
Mat b = Mat::Zero(n * 6, 1);
for (int i = 0; i < n; ++i) {
int j = 6 * i;
L(j, 0) = -x2(w, i) * x1(x, i); // a
L(j, 1) = -x2(w, i) * x1(y, i); // b
L(j, 2) = -x2(w, i) * x1(z, i); // c
L(j, 3) = -x2(w, i) * x1(w, i); // d
L(j, 12) = x2(x, i) * x1(x, i); // m
L(j, 13) = x2(x, i) * x1(y, i); // n
L(j, 14) = x2(x, i) * x1(z, i); // o
b(j, 0) = -x2(x, i) * x1(w, i);
L(j, 0) = -x2(w, i) * x1(x, i); // a
L(j, 1) = -x2(w, i) * x1(y, i); // b
L(j, 2) = -x2(w, i) * x1(z, i); // c
L(j, 3) = -x2(w, i) * x1(w, i); // d
L(j, 12) = x2(x, i) * x1(x, i); // m
L(j, 13) = x2(x, i) * x1(y, i); // n
L(j, 14) = x2(x, i) * x1(z, i); // o
b(j, 0) = -x2(x, i) * x1(w, i);
++j;
L(j, 4) = -x2(z, i) * x1(x, i); // e
L(j, 5) = -x2(z, i) * x1(y, i); // f
L(j, 6) = -x2(z, i) * x1(z, i); // g
L(j, 7) = -x2(z, i) * x1(w, i); // h
L(j, 8) = x2(y, i) * x1(x, i); // i
L(j, 9) = x2(y, i) * x1(y, i); // j
L(j, 10) = x2(y, i) * x1(z, i); // k
L(j, 11) = x2(y, i) * x1(w, i); // l
L(j, 4) = -x2(z, i) * x1(x, i); // e
L(j, 5) = -x2(z, i) * x1(y, i); // f
L(j, 6) = -x2(z, i) * x1(z, i); // g
L(j, 7) = -x2(z, i) * x1(w, i); // h
L(j, 8) = x2(y, i) * x1(x, i); // i
L(j, 9) = x2(y, i) * x1(y, i); // j
L(j, 10) = x2(y, i) * x1(z, i); // k
L(j, 11) = x2(y, i) * x1(w, i); // l
++j;
L(j, 0) = -x2(z, i) * x1(x, i); // a
L(j, 1) = -x2(z, i) * x1(y, i); // b
L(j, 2) = -x2(z, i) * x1(z, i); // c
L(j, 3) = -x2(z, i) * x1(w, i); // d
L(j, 8) = x2(x, i) * x1(x, i); // i
L(j, 9) = x2(x, i) * x1(y, i); // j
L(j, 10) = x2(x, i) * x1(z, i); // k
L(j, 11) = x2(x, i) * x1(w, i); // l
L(j, 0) = -x2(z, i) * x1(x, i); // a
L(j, 1) = -x2(z, i) * x1(y, i); // b
L(j, 2) = -x2(z, i) * x1(z, i); // c
L(j, 3) = -x2(z, i) * x1(w, i); // d
L(j, 8) = x2(x, i) * x1(x, i); // i
L(j, 9) = x2(x, i) * x1(y, i); // j
L(j, 10) = x2(x, i) * x1(z, i); // k
L(j, 11) = x2(x, i) * x1(w, i); // l
++j;
L(j, 4) = -x2(w, i) * x1(x, i); // e
L(j, 5) = -x2(w, i) * x1(y, i); // f
L(j, 6) = -x2(w, i) * x1(z, i); // g
L(j, 7) = -x2(w, i) * x1(w, i); // h
L(j, 12) = x2(y, i) * x1(x, i); // m
L(j, 13) = x2(y, i) * x1(y, i); // n
L(j, 14) = x2(y, i) * x1(z, i); // o
b(j, 0) = -x2(y, i) * x1(w, i);
L(j, 4) = -x2(w, i) * x1(x, i); // e
L(j, 5) = -x2(w, i) * x1(y, i); // f
L(j, 6) = -x2(w, i) * x1(z, i); // g
L(j, 7) = -x2(w, i) * x1(w, i); // h
L(j, 12) = x2(y, i) * x1(x, i); // m
L(j, 13) = x2(y, i) * x1(y, i); // n
L(j, 14) = x2(y, i) * x1(z, i); // o
b(j, 0) = -x2(y, i) * x1(w, i);
++j;
L(j, 0) = -x2(y, i) * x1(x, i); // a
L(j, 1) = -x2(y, i) * x1(y, i); // b
L(j, 2) = -x2(y, i) * x1(z, i); // c
L(j, 3) = -x2(y, i) * x1(w, i); // d
L(j, 4) = x2(x, i) * x1(x, i); // e
L(j, 5) = x2(x, i) * x1(y, i); // f
L(j, 6) = x2(x, i) * x1(z, i); // g
L(j, 7) = x2(x, i) * x1(w, i); // h
L(j, 4) = x2(x, i) * x1(x, i); // e
L(j, 5) = x2(x, i) * x1(y, i); // f
L(j, 6) = x2(x, i) * x1(z, i); // g
L(j, 7) = x2(x, i) * x1(w, i); // h
++j;
L(j, 8) = -x2(w, i) * x1(x, i); // i
L(j, 9) = -x2(w, i) * x1(y, i); // j
L(j, 8) = -x2(w, i) * x1(x, i); // i
L(j, 9) = -x2(w, i) * x1(y, i); // j
L(j, 10) = -x2(w, i) * x1(z, i); // k
L(j, 11) = -x2(w, i) * x1(w, i); // l
L(j, 12) = x2(z, i) * x1(x, i); // m
L(j, 13) = x2(z, i) * x1(y, i); // n
L(j, 14) = x2(z, i) * x1(z, i); // o
b(j, 0) = -x2(z, i) * x1(w, i);
L(j, 12) = x2(z, i) * x1(x, i); // m
L(j, 13) = x2(z, i) * x1(y, i); // n
L(j, 14) = x2(z, i) * x1(z, i); // o
b(j, 0) = -x2(z, i) * x1(w, i);
}
// Solve Lx=B
Vec h = L.fullPivLu().solve(b);
if ((L * h).isApprox(b, expected_precision)) {
if ((L * h).isApprox(b, expected_precision)) {
Homography3DNormalizedParameterization<double>::To(h, H);
return true;
} else {
@@ -448,9 +445,9 @@ bool Homography3DFromCorrespondencesLinear(const Mat &x1,
}
}
double SymmetricGeometricDistance(const Mat3 &H,
const Vec2 &x1,
const Vec2 &x2) {
double SymmetricGeometricDistance(const Mat3& H,
const Vec2& x1,
const Vec2& x2) {
Vec3 x(x1(0), x1(1), 1.0);
Vec3 y(x2(0), x2(1), 1.0);

View File

@@ -49,11 +49,11 @@ namespace libmv {
* \return True if the transformation estimation has succeeded.
* \note There must be at least 4 non-collinear points.
*/
bool Homography2DFromCorrespondencesLinear(const Mat &x1,
const Mat &x2,
Mat3 *H,
double expected_precision =
EigenDouble::dummy_precision());
bool Homography2DFromCorrespondencesLinear(
const Mat& x1,
const Mat& x2,
Mat3* H,
double expected_precision = EigenDouble::dummy_precision());
/**
* This structure contains options that control how the homography
@@ -101,10 +101,10 @@ struct EstimateHomographyOptions {
* refinement.
*/
bool EstimateHomography2DFromCorrespondences(
const Mat &x1,
const Mat &x2,
const EstimateHomographyOptions &options,
Mat3 *H);
const Mat& x1,
const Mat& x2,
const EstimateHomographyOptions& options,
Mat3* H);
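A matching hedged sketch for the homography estimator declared above; the default option values are taken from the constructor shown in homography.cc in this diff, while the include path, wrapper name, and override value are illustrative assumptions.

#include "libmv/multiview/homography.h"

// Illustrative only: estimate H from 2xN euclidean point matches.
bool EstimateH(const libmv::Mat& x1, const libmv::Mat& x2, libmv::Mat3* H) {
  libmv::EstimateHomographyOptions options;  // normalization on, 50 iterations
  options.max_num_iterations = 100;          // assumed public field, illustrative override
  return libmv::EstimateHomography2DFromCorrespondences(x1, x2, options, H);
}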
/**
* 3D Homography transformation estimation.
@@ -129,20 +129,20 @@ bool EstimateHomography2DFromCorrespondences(
* \note Need at least 5 non-coplanar points
* \note Points coordinates must be in homogeneous coordinates
*/
bool Homography3DFromCorrespondencesLinear(const Mat &x1,
const Mat &x2,
Mat4 *H,
double expected_precision =
EigenDouble::dummy_precision());
bool Homography3DFromCorrespondencesLinear(
const Mat& x1,
const Mat& x2,
Mat4* H,
double expected_precision = EigenDouble::dummy_precision());
/**
* Calculate symmetric geometric cost:
*
* D(H * x1, x2)^2 + D(H^-1 * x2, x1)^2
*/
double SymmetricGeometricDistance(const Mat3 &H,
const Vec2 &x1,
const Vec2 &x2);
double SymmetricGeometricDistance(const Mat3& H,
const Vec2& x1,
const Vec2& x2);
} // namespace libmv

View File

@@ -27,18 +27,18 @@ namespace libmv {
namespace homography {
namespace homography2D {
/**
* Structure for estimating the asymmetric error between a vector x2 and the
* transformed x1 such that
* Error = ||x2 - Psi(H * x1)||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
* \note It should be distributed as Chi-squared with k = 2.
*/
/**
* Structure for estimating the asymmetric error between a vector x2 and the
* transformed x1 such that
* Error = ||x2 - Psi(H * x1)||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
* \note It should be distributed as Chi-squared with k = 2.
*/
struct AsymmetricError {
/**
* Computes the asymmetric residuals between a set of 2D points x2 and the
* Computes the asymmetric residuals between a set of 2D points x2 and the
* transformed 2D point set x1 such that
* Residuals_i = x2_i - Psi(H * x1_i)
* Residuals_i = x2_i - Psi(H * x1_i)
* where Psi is the function that transforms homogeneous to euclidean coords.
*
* \param[in] H The 3x3 homography matrix.
@@ -47,8 +47,7 @@ struct AsymmetricError {
* \param[in] x2 A set of 2D points (2xN or 3xN matrix of column vectors).
* \param[out] dx A 2xN matrix of column vectors of residual errors
*/
static void Residuals(const Mat &H, const Mat &x1,
const Mat &x2, Mat2X *dx) {
static void Residuals(const Mat& H, const Mat& x1, const Mat& x2, Mat2X* dx) {
dx->resize(2, x1.cols());
Mat3X x2h_est;
if (x1.rows() == 2)
@@ -63,19 +62,18 @@ struct AsymmetricError {
*dx = HomogeneousToEuclidean(static_cast<Mat3X>(x2)) - *dx;
}
/**
* Computes the asymmetric residuals between a 2D point x2 and the transformed
* Computes the asymmetric residuals between a 2D point x2 and the transformed
* 2D point x1 such that
* Residuals = x2 - Psi(H * x1)
* Residuals = x2 - Psi(H * x1)
* where Psi is the function that transforms homogeneous to euclidean coords.
*
* \param[in] H The 3x3 homography matrix.
* The estimated homography should approximately hold the condition y = H x.
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[out] dx A vector of size 2 of the residual error
*/
static void Residuals(const Mat &H, const Vec &x1,
const Vec &x2, Vec2 *dx) {
static void Residuals(const Mat& H, const Vec& x1, const Vec& x2, Vec2* dx) {
Vec3 x2h_est;
if (x1.rows() == 2)
x2h_est = H * EuclideanToHomogeneous(static_cast<Vec2>(x1));
@@ -85,10 +83,10 @@ struct AsymmetricError {
*dx = x2 - x2h_est.head<2>() / x2h_est[2];
else
*dx = HomogeneousToEuclidean(static_cast<Vec3>(x2)) -
x2h_est.head<2>() / x2h_est[2];
x2h_est.head<2>() / x2h_est[2];
}
/**
* Computes the squared norm of the residuals between a set of 2D points x2
* Computes the squared norm of the residuals between a set of 2D points x2
* and the transformed 2D point set x1 such that
* Error = || x2 - Psi(H * x1) ||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
@@ -99,70 +97,70 @@ struct AsymmetricError {
* \param[in] x2 A set of 2D points (2xN or 3xN matrix of column vectors).
* \return The squared norm of the asymmetric residual errors
*/
static double Error(const Mat &H, const Mat &x1, const Mat &x2) {
static double Error(const Mat& H, const Mat& x1, const Mat& x2) {
Mat2X dx;
Residuals(H, x1, x2, &dx);
return dx.squaredNorm();
}
/**
* Computes the squared norm of the residuals between a 2D point x2 and the
* transformed 2D point x1 such that rms = || x2 - Psi(H * x1) ||^2
* Computes the squared norm of the residuals between a 2D point x2 and the
* transformed 2D point x1 such that rms = || x2 - Psi(H * x1) ||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
*
* \param[in] H The 3x3 homography matrix.
* The estimated homography should approximately hold the condition y = H x.
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \return The squared norm of the asymmetric residual error
*/
static double Error(const Mat &H, const Vec &x1, const Vec &x2) {
static double Error(const Mat& H, const Vec& x1, const Vec& x2) {
Vec2 dx;
Residuals(H, x1, x2, &dx);
return dx.squaredNorm();
}
};
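A short, hedged example of scoring a single correspondence with the per-point overloads above; the wrapper name and include path are assumptions, and the fixed-size Eigen types are converted explicitly to the dynamic Mat/Vec parameter types.

#include "libmv/multiview/homography_error.h"

// Illustrative only: squared asymmetric residual for one 2D match.
double ScoreMatch(const libmv::Mat3& H,
                  const libmv::Vec2& x1,
                  const libmv::Vec2& x2) {
  using libmv::homography::homography2D::AsymmetricError;
  libmv::Vec2 dx;
  AsymmetricError::Residuals(libmv::Mat(H), libmv::Vec(x1), libmv::Vec(x2), &dx);
  return dx.squaredNorm();  // same value as AsymmetricError::Error(H, x1, x2)
}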
/**
* Structure for estimating the symmetric error
* between a vector x2 and the transformed x1 such that
* Error = ||x2 - Psi(H * x1)||^2 + ||x1 - Psi(H^-1 * x2)||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
* \note It should be distributed as Chi-squared with k = 4.
*/
/**
* Structure for estimating the symmetric error
* between a vector x2 and the transformed x1 such that
* Error = ||x2 - Psi(H * x1)||^2 + ||x1 - Psi(H^-1 * x2)||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
* \note It should be distributed as Chi-squared with k = 4.
*/
struct SymmetricError {
/**
* Computes the squared norm of the residuals between x2 and the
* transformed x1 such that
* Computes the squared norm of the residuals between x2 and the
* transformed x1 such that
* Error = ||x2 - Psi(H * x1)||^2 + ||x1 - Psi(H^-1 * x2)||^2
* where Psi is the function that transforms homogeneous to euclidean coords.
*
* \param[in] H The 3x3 homography matrix.
* The estimated homography should approximately hold the condition y = H x.
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \return The squared norm of the symmetric residual errors
*/
static double Error(const Mat &H, const Vec &x1, const Vec &x2) {
static double Error(const Mat& H, const Vec& x1, const Vec& x2) {
// TODO(keir): This is awesomely inefficient because it does a 3x3
// inversion for each evaluation.
Mat3 Hinv = H.inverse();
return AsymmetricError::Error(H, x1, x2) +
return AsymmetricError::Error(H, x1, x2) +
AsymmetricError::Error(Hinv, x2, x1);
}
// TODO(julien) Add residuals function \see AsymmetricError
};
/**
* Structure for estimating the algebraic error (cross product)
* between a vector x2 and the transformed x1 such that
* Error = ||[x2] * H * x1||^2
* where [x2] is the skew matrix of x2.
*/
/**
* Structure for estimating the algebraic error (cross product)
* between a vector x2 and the transformed x1 such that
* Error = ||[x2] * H * x1||^2
* where [x2] is the skew matrix of x2.
*/
struct AlgebraicError {
// TODO(julien) Make an AlgebraicError2Rows and AlgebraicError3Rows
/**
* Computes the algebraic residuals (cross product) between a set of 2D
* points x2 and the transformed 2D point set x1 such that
* Computes the algebraic residuals (cross product) between a set of 2D
* points x2 and the transformed 2D point set x1 such that
* [x2] * H * x1 where [x2] is the skew matrix of x2.
*
* \param[in] H The 3x3 homography matrix.
@@ -171,8 +169,7 @@ struct AlgebraicError {
* \param[in] x2 A set of 2D points (2xN or 3xN matrix of column vectors).
* \param[out] dx A 3xN matrix of column vectors of residual errors
*/
static void Residuals(const Mat &H, const Mat &x1,
const Mat &x2, Mat3X *dx) {
static void Residuals(const Mat& H, const Mat& x1, const Mat& x2, Mat3X* dx) {
dx->resize(3, x1.cols());
Vec3 col;
for (int i = 0; i < x1.cols(); ++i) {
@@ -181,18 +178,17 @@ struct AlgebraicError {
}
}
/**
* Computes the algebraic residuals (cross product) between a 2D point x2
* and the transformed 2D point x1 such that
* Computes the algebraic residuals (cross product) between a 2D point x2
* and the transformed 2D point x1 such that
* [x2] * H * x1 where [x2] is the skew matrix of x2.
*
* \param[in] H The 3x3 homography matrix.
* The estimated homography should approximately hold the condition y = H x.
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[out] dx A vector of size 3 of the residual error
*/
static void Residuals(const Mat &H, const Vec &x1,
const Vec &x2, Vec3 *dx) {
static void Residuals(const Mat& H, const Vec& x1, const Vec& x2, Vec3* dx) {
Vec3 x2h_est;
if (x1.rows() == 2)
x2h_est = H * EuclideanToHomogeneous(static_cast<Vec2>(x1));
@@ -206,8 +202,8 @@ struct AlgebraicError {
// identical 3x3 skew matrix for each evaluation.
}
/**
* Computes the squared norm of the algebraic residuals between a set of 2D
* points x2 and the transformed 2D point set x1 such that
* Computes the squared norm of the algebraic residuals between a set of 2D
* points x2 and the transformed 2D point set x1 such that
* [x2] * H * x1 where [x2] is the skew matrix of x2.
*
* \param[in] H The 3x3 homography matrix.
@@ -216,23 +212,23 @@ struct AlgebraicError {
* \param[in] x2 A set of 2D points (2xN or 3xN matrix of column vectors).
* \return The squared norm of the asymmetric residual errors
*/
static double Error(const Mat &H, const Mat &x1, const Mat &x2) {
static double Error(const Mat& H, const Mat& x1, const Mat& x2) {
Mat3X dx;
Residuals(H, x1, x2, &dx);
return dx.squaredNorm();
}
/**
* Computes the squared norm of the algebraic residuals between a 2D point x2
* and the transformed 2D point x1 such that
* Computes the squared norm of the algebraic residuals between a 2D point x2
* and the transformed 2D point x1 such that
* [x2] * H * x1 where [x2] is the skew matrix of x2.
*
* \param[in] H The 3x3 homography matrix.
* The estimated homography should approximately hold the condition y = H x.
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x1 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \param[in] x2 A 2D point (vector of size 2 or 3 (euclidean/homogeneous))
* \return The squared norm of the asymmetric residual error
*/
static double Error(const Mat &H, const Vec &x1, const Vec &x2) {
static double Error(const Mat& H, const Vec& x1, const Vec& x2) {
Vec3 dx;
Residuals(H, x1, x2, &dx);
return dx.squaredNorm();

View File

@@ -25,64 +25,72 @@
namespace libmv {
/** A parameterization of the 2D homography matrix that uses 8 parameters so
/** A parameterization of the 2D homography matrix that uses 8 parameters so
* that the matrix is normalized (H(2,2) == 1).
* The homography matrix H is built from a list of 8 parameters (a, b,...g, h)
* as follows
* |a b c|
* |a b c|
* H = |d e f|
* |g h 1|
* |g h 1|
*/
template<typename T = double>
template <typename T = double>
class Homography2DNormalizedParameterization {
public:
typedef Eigen::Matrix<T, 8, 1> Parameters; // a, b, ... g, h
typedef Eigen::Matrix<T, 3, 3> Parameterized; // H
/// Convert from the 8 parameters to a H matrix.
static void To(const Parameters &p, Parameterized *h) {
static void To(const Parameters& p, Parameterized* h) {
// clang-format off
*h << p(0), p(1), p(2),
p(3), p(4), p(5),
p(6), p(7), 1.0;
// clang-format on
}
/// Convert from a H matrix to the 8 parameters.
static void From(const Parameterized &h, Parameters *p) {
static void From(const Parameterized& h, Parameters* p) {
// clang-format off
*p << h(0, 0), h(0, 1), h(0, 2),
h(1, 0), h(1, 1), h(1, 2),
h(2, 0), h(2, 1);
// clang-format on
}
};
/** A parameterization of the 2D homography matrix that uses 15 parameters so
/** A parameterization of the 2D homography matrix that uses 15 parameters so
* that the matrix is normalized (H(3,3) == 1).
* The homography matrix H is built from a list of 15 parameters (a, b,...n, o)
* as follows
* |a b c d|
* |a b c d|
* H = |e f g h|
* |i j k l|
* |m n o 1|
* |m n o 1|
*/
template<typename T = double>
template <typename T = double>
class Homography3DNormalizedParameterization {
public:
typedef Eigen::Matrix<T, 15, 1> Parameters; // a, b, ... n, o
typedef Eigen::Matrix<T, 4, 4> Parameterized; // H
typedef Eigen::Matrix<T, 15, 1> Parameters; // a, b, ... n, o
typedef Eigen::Matrix<T, 4, 4> Parameterized; // H
/// Convert from the 15 parameters to a H matrix.
static void To(const Parameters &p, Parameterized *h) {
static void To(const Parameters& p, Parameterized* h) {
// clang-format off
*h << p(0), p(1), p(2), p(3),
p(4), p(5), p(6), p(7),
p(8), p(9), p(10), p(11),
p(12), p(13), p(14), 1.0;
// clang-format on
}
/// Convert from a H matrix to the 15 parameters.
static void From(const Parameterized &h, Parameters *p) {
static void From(const Parameterized& h, Parameters* p) {
// clang-format off
*p << h(0, 0), h(0, 1), h(0, 2), h(0, 3),
h(1, 0), h(1, 1), h(1, 2), h(1, 3),
h(2, 0), h(2, 1), h(2, 2), h(2, 3),
h(3, 0), h(3, 1), h(3, 2);
// clang-format on
}
};
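A hedged round-trip example for the 2D parameterization above (not part of this patch); the include path is assumed and the parameter values are arbitrary.

#include "libmv/multiview/homography_parameterization.h"

// Illustrative only: pack 8 parameters into a normalized H and unpack again.
void RoundTrip2D() {
  typedef libmv::Homography2DNormalizedParameterization<double> Param2D;
  Param2D::Parameters p;
  p << 1, 0, 2,   // a b c
       0, 1, 3,   // d e f
       0, 0;      // g h  (H(2,2) is fixed to 1)
  libmv::Mat3 H;
  Param2D::To(p, &H);    // build the 3x3 matrix
  Param2D::From(H, &p);  // recover the 8 parameters
}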

View File

@@ -18,10 +18,10 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include "testing/testing.h"
#include "libmv/multiview/homography.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/projection.h"
#include "libmv/multiview/homography.h"
#include "testing/testing.h"
namespace {
using namespace libmv;
@@ -34,9 +34,7 @@ namespace {
// TODO(sergey): Consider using this in all tests since possible homography
// matrix is not fixed to a single value and different-looking matrices
// might actually correspond to the same exact transform.
void CheckHomography2DTransform(const Mat3 &H,
const Mat &x1,
const Mat &x2) {
void CheckHomography2DTransform(const Mat3& H, const Mat& x1, const Mat& x2) {
for (int i = 0; i < x2.cols(); ++i) {
Vec3 x2_expected = x2.col(i);
Vec3 x2_observed = H * x1.col(i);
@@ -49,15 +47,19 @@ void CheckHomography2DTransform(const Mat3 &H,
TEST(Homography2DTest, Rotation45AndTranslationXY) {
Mat x1(3, 4);
// clang-format off
x1 << 0, 1, 0, 5,
0, 0, 2, 3,
1, 1, 1, 1;
// clang-format on
double angle = 45.0;
Mat3 m;
// clang-format off
m << cos(angle), -sin(angle), -2,
sin(angle), cos(angle), 5,
0, 0, 1;
// clang-format on
Mat x2 = x1;
// Transform point from ground truth matrix
@@ -76,13 +78,17 @@ TEST(Homography2DTest, Rotation45AndTranslationXY) {
TEST(Homography2DTest, AffineGeneral4) {
// TODO(julien) find why it doesn't work with 4 points!!!
Mat x1(3, 4);
// clang-format off
x1 << 0, 1, 0, 2,
0, 0, 1, 2,
1, 1, 1, 1;
// clang-format on
Mat3 m;
// clang-format off
m << 3, -1, 4,
6, -2, -3,
0, 0, 1;
// clang-format on
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i) {
@@ -109,13 +115,17 @@ TEST(Homography2DTest, AffineGeneral4) {
TEST(Homography2DTest, AffineGeneral5) {
Mat x1(3, 5);
// clang-format off
x1 << 0, 1, 0, 2, 5,
0, 0, 1, 2, 2,
1, 1, 1, 1, 1;
// clang-format on
Mat3 m;
// clang-format off
m << 3, -1, 4,
6, -2, -3,
0, 0, 1;
// clang-format on
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i)
@@ -142,13 +152,17 @@ TEST(Homography2DTest, AffineGeneral5) {
TEST(Homography2DTest, HomographyGeneral) {
Mat x1(3, 4);
// clang-format off
x1 << 0, 1, 0, 5,
0, 0, 2, 3,
1, 1, 1, 1;
// clang-format on
Mat3 m;
// clang-format off
m << 3, -1, 4,
6, -2, -3,
1, -3, 1;
// clang-format on
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i)
@@ -164,10 +178,12 @@ TEST(Homography2DTest, HomographyGeneral) {
TEST(Homography3DTest, RotationAndTranslationXYZ) {
Mat x1(4, 5);
// clang-format off
x1 << 0, 0, 1, 5, 2,
0, 1, 2, 3, 5,
0, 2, 0, 1, 5,
1, 1, 1, 1, 1;
// clang-format on
Mat4 M;
M.setIdentity();
/*
@@ -178,24 +194,30 @@ TEST(Homography3DTest, RotationAndTranslationXYZ) {
// Rotation on x + translation
double angle = 45.0;
Mat4 rot;
// clang-format off
rot << 1, 0, 0, 1,
0, cos(angle), -sin(angle), 3,
0, sin(angle), cos(angle), -2,
0, 0, 0, 1;
// clang-format on
M *= rot;
// Rotation on y
angle = 25.0;
// clang-format off
rot << cos(angle), 0, sin(angle), 0,
0, 1, 0, 0,
-sin(angle), 0, cos(angle), 0,
0, 0, 0, 1;
// clang-format on
M *= rot;
// Rotation on z
angle = 5.0;
// clang-format off
rot << cos(angle), -sin(angle), 0, 0,
sin(angle), cos(angle), 0, 0,
0, 0, 1, 0,
0, 0, 0, 1;
// clang-format on
M *= rot;
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i) {
@@ -212,15 +234,19 @@ TEST(Homography3DTest, RotationAndTranslationXYZ) {
TEST(Homography3DTest, AffineGeneral) {
Mat x1(4, 5);
// clang-format off
x1 << 0, 0, 1, 5, 2,
0, 1, 2, 3, 5,
0, 2, 0, 1, 5,
1, 1, 1, 1, 1;
// clang-format on
Mat4 m;
// clang-format off
m << 3, -1, 4, 1,
6, -2, -3, -6,
1, 0, 1, 2,
0, 0, 0, 1;
// clang-format on
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i) {
@@ -236,15 +262,19 @@ TEST(Homography3DTest, AffineGeneral) {
TEST(Homography3DTest, HomographyGeneral) {
Mat x1(4, 5);
// clang-format off
x1 << 0, 0, 1, 5, 2,
0, 1, 2, 3, 5,
0, 2, 0, 1, 5,
1, 1, 1, 1, 1;
// clang-format on
Mat4 m;
// clang-format off
m << 3, -1, 4, 1,
6, -2, -3, -6,
1, 0, 1, 2,
-3, 1, 0, 1;
// clang-format on
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i) {

View File

@@ -34,22 +34,22 @@ namespace libmv {
// x's are 2D coordinates (x,y,1) in each image; Ps are projective cameras. The
// output, X, is a homogeneous four-vector.
template<typename T>
void NViewTriangulate(const Matrix<T, 2, Dynamic> &x,
const vector<Matrix<T, 3, 4> > &Ps,
Matrix<T, 4, 1> *X) {
template <typename T>
void NViewTriangulate(const Matrix<T, 2, Dynamic>& x,
const vector<Matrix<T, 3, 4>>& Ps,
Matrix<T, 4, 1>* X) {
int nviews = x.cols();
assert(nviews == Ps.size());
Matrix<T, Dynamic, Dynamic> design(3*nviews, 4 + nviews);
Matrix<T, Dynamic, Dynamic> design(3 * nviews, 4 + nviews);
design.setConstant(0.0);
for (int i = 0; i < nviews; i++) {
design.template block<3, 4>(3*i, 0) = -Ps[i];
design(3*i + 0, 4 + i) = x(0, i);
design(3*i + 1, 4 + i) = x(1, i);
design(3*i + 2, 4 + i) = 1.0;
design.template block<3, 4>(3 * i, 0) = -Ps[i];
design(3 * i + 0, 4 + i) = x(0, i);
design(3 * i + 1, 4 + i) = x(1, i);
design(3 * i + 2, 4 + i) = 1.0;
}
Matrix<T, Dynamic, 1> X_and_alphas;
Nullspace(&design, &X_and_alphas);
X->resize(4);
*X = X_and_alphas.head(4);
@@ -60,16 +60,16 @@ void NViewTriangulate(const Matrix<T, 2, Dynamic> &x,
// This method uses the algebraic distance approximation.
// Note that this method works better when the 2D points are normalized
// with an isotropic normalization.
template<typename T>
void NViewTriangulateAlgebraic(const Matrix<T, 2, Dynamic> &x,
const vector<Matrix<T, 3, 4> > &Ps,
Matrix<T, 4, 1> *X) {
template <typename T>
void NViewTriangulateAlgebraic(const Matrix<T, 2, Dynamic>& x,
const vector<Matrix<T, 3, 4>>& Ps,
Matrix<T, 4, 1>* X) {
int nviews = x.cols();
assert(nviews == Ps.size());
Matrix<T, Dynamic, 4> design(2*nviews, 4);
Matrix<T, Dynamic, 4> design(2 * nviews, 4);
for (int i = 0; i < nviews; i++) {
design.template block<2, 4>(2*i, 0) = SkewMatMinimal(x.col(i)) * Ps[i];
design.template block<2, 4>(2 * i, 0) = SkewMatMinimal(x.col(i)) * Ps[i];
}
X->resize(4);
Nullspace(&design, X);
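
A hedged usage sketch for the triangulation routines above (not part of the patch; the header path is an assumption): given one 2D observation per view and the corresponding 3x4 cameras, the homogeneous 3D point comes out of a single call:

    // Sketch only; header path and setup are assumptions.
    #include "libmv/multiview/nviewtriangulation.h"

    void TriangulateExample(const Eigen::Matrix<double, 2, Eigen::Dynamic>& x,  // one column per view
                            const libmv::vector<Eigen::Matrix<double, 3, 4>>& Ps) {
      Eigen::Matrix<double, 4, 1> X;        // homogeneous 3D point
      libmv::NViewTriangulate(x, Ps, &X);   // nullspace (DLT) solution
      Eigen::Vector3d X_euclidean = X.head<3>() / X(3);
      (void)X_euclidean;
    }

NViewTriangulateAlgebraic has the same signature and, as the comment above notes, behaves best when the 2D points are isotropically normalized first.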

View File

@@ -54,7 +54,7 @@ TEST(NViewTriangulate, FiveViews) {
// Check reprojection error. Should be nearly zero.
for (int j = 0; j < nviews; ++j) {
Vec3 x_reprojected = Ps[j]*X;
Vec3 x_reprojected = Ps[j] * X;
x_reprojected /= x_reprojected(2);
double error = (x_reprojected.head(2) - xs.col(j)).norm();
EXPECT_NEAR(error, 0.0, 1e-9);
@@ -84,7 +84,7 @@ TEST(NViewTriangulateAlgebraic, FiveViews) {
// Check reprojection error. Should be nearly zero.
for (int j = 0; j < nviews; ++j) {
Vec3 x_reprojected = Ps[j]*X;
Vec3 x_reprojected = Ps[j] * X;
x_reprojected /= x_reprojected(2);
double error = (x_reprojected.head<2>() - xs.col(j)).norm();
EXPECT_NEAR(error, 0.0, 1e-9);

View File

@@ -24,8 +24,9 @@
namespace libmv {
static bool Build_Minimal2Point_PolynomialFactor(
const Mat & x1, const Mat & x2,
double * P) { // P must be a double[4]
const Mat& x1,
const Mat& x2,
double* P) { // P must be a double[4]
assert(2 == x1.rows());
assert(2 == x1.cols());
assert(x1.rows() == x2.rows());
@@ -40,11 +41,11 @@ static bool Build_Minimal2Point_PolynomialFactor(
Vec yx2 = (x2.col(1)).transpose();
double b12 = xx2.dot(yx2);
double a1 = xx1.squaredNorm();
double a2 = yx1.squaredNorm();
double b1 = xx2.squaredNorm();
double b2 = yx2.squaredNorm();
// Build the 3rd degree polynomial in F^2.
//
@@ -52,10 +53,12 @@ static bool Build_Minimal2Point_PolynomialFactor(
//
// Coefficients in ascending powers of alpha, i.e. P[N]*x^N.
// Run panography_coeffs.py to get the below coefficients.
P[0] = b1*b2*a12*a12-a1*a2*b12*b12;
P[1] = -2*a1*a2*b12+2*a12*b1*b2+b1*a12*a12+b2*a12*a12-a1*b12*b12-a2*b12*b12;
P[2] = b1*b2-a1*a2-2*a1*b12-2*a2*b12+2*a12*b1+2*a12*b2+a12*a12-b12*b12;
P[3] = b1+b2-2*b12-a1-a2+2*a12;
P[0] = b1 * b2 * a12 * a12 - a1 * a2 * b12 * b12;
P[1] = -2 * a1 * a2 * b12 + 2 * a12 * b1 * b2 + b1 * a12 * a12 +
b2 * a12 * a12 - a1 * b12 * b12 - a2 * b12 * b12;
P[2] = b1 * b2 - a1 * a2 - 2 * a1 * b12 - 2 * a2 * b12 + 2 * a12 * b1 +
2 * a12 * b2 + a12 * a12 - b12 * b12;
P[3] = b1 + b2 - 2 * b12 - a1 - a2 + 2 * a12;
// If P[3] is equal to 0 we get ill-conditioned data
return (P[3] != 0.0);
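
In other words (an explanatory note, not part of the patch): the coefficients above define a cubic in alpha = F^2,

    P[0] + P[1]*alpha + P[2]*alpha^2 + P[3]*alpha^3 = 0,

and every positive real root alpha found by SolveCubicPolynomial below yields a focal-length candidate f = sqrt(alpha), which is exactly what F_FromCorrespondance_2points pushes into fs.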
@@ -67,8 +70,9 @@ static bool Build_Minimal2Point_PolynomialFactor(
//
// [1] M. Brown and R. Hartley and D. Nister. Minimal Solutions for Panoramic
// Stitching. CVPR07.
void F_FromCorrespondance_2points(const Mat &x1, const Mat &x2,
vector<double> *fs) {
void F_FromCorrespondance_2points(const Mat& x1,
const Mat& x2,
vector<double>* fs) {
// Build Polynomial factor to get squared focal value.
double P[4];
Build_Minimal2Point_PolynomialFactor(x1, x2, &P[0]);
@@ -79,8 +83,8 @@ void F_FromCorrespondance_2points(const Mat &x1, const Mat &x2,
//
double roots[3];
int num_roots = SolveCubicPolynomial(P, roots);
for (int i = 0; i < num_roots; ++i) {
if (roots[i] > 0.0) {
fs->push_back(sqrt(roots[i]));
}
}
@@ -92,17 +96,18 @@ void F_FromCorrespondance_2points(const Mat &x1, const Mat &x2,
// K. Arun, T. Huang and D. Blostein. Least-squares fitting of two 3-D point
// sets. IEEE Transactions on Pattern Analysis and Machine Intelligence,
// 9:698-700, 1987.
void GetR_FixedCameraCenter(const Mat &x1, const Mat &x2,
void GetR_FixedCameraCenter(const Mat& x1,
const Mat& x2,
const double focal,
Mat3 *R) {
Mat3* R) {
assert(3 == x1.rows());
assert(2 <= x1.cols());
assert(x1.rows() == x2.rows());
assert(x1.cols() == x2.cols());
// Build simplified K matrix
Mat3 K(Mat3::Identity() * 1.0/focal);
K(2, 2)= 1.0;
Mat3 K(Mat3::Identity() * 1.0 / focal);
K(2, 2) = 1.0;
// Build the correlation matrix; equation (22) in [1].
Mat3 C = Mat3::Zero();
@@ -115,9 +120,9 @@ void GetR_FixedCameraCenter(const Mat &x1, const Mat &x2,
// Solve for rotation. Equations (24) and (25) in [1].
Eigen::JacobiSVD<Mat> svd(C, Eigen::ComputeThinU | Eigen::ComputeThinV);
Mat3 scale = Mat3::Identity();
scale(2, 2) = ((svd.matrixU() * svd.matrixV().transpose()).determinant() > 0.0)
? 1.0
: -1.0;
scale(2, 2) =
((svd.matrixU() * svd.matrixV().transpose()).determinant() > 0.0) ? 1.0
: -1.0;
(*R) = svd.matrixU() * scale * svd.matrixV().transpose();
}

View File

@@ -22,9 +22,9 @@
#ifndef LIBMV_MULTIVIEW_PANOGRAPHY_H
#define LIBMV_MULTIVIEW_PANOGRAPHY_H
#include "libmv/base/vector.h"
#include "libmv/numeric/numeric.h"
#include "libmv/numeric/poly.h"
#include "libmv/base/vector.h"
namespace libmv {
@@ -53,8 +53,9 @@ namespace libmv {
// K = [0 f 0]
// [0 0 1]
//
void F_FromCorrespondance_2points(const Mat &x1, const Mat &x2,
vector<double> *fs);
void F_FromCorrespondance_2points(const Mat& x1,
const Mat& x2,
vector<double>* fs);
// Compute the 3x3 rotation matrix that fits two 3D point clouds in the least
// square sense. The method is from:
@@ -90,9 +91,10 @@ void F_FromCorrespondance_2points(const Mat &x1, const Mat &x2,
//
// R = arg min || X2 - R * x1 ||
//
void GetR_FixedCameraCenter(const Mat &x1, const Mat &x2,
void GetR_FixedCameraCenter(const Mat& x1,
const Mat& x2,
const double focal,
Mat3 *R);
Mat3* R);
} // namespace libmv
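
A hedged end-to-end sketch of how these two declarations combine (not part of the patch; it mirrors TwoPointSolver::Solve shown later in this diff and glosses over the point-normalization conventions, which are an assumption here):

    // Sketch only; normalization of the input points is glossed over.
    #include "libmv/multiview/panography.h"
    #include "libmv/multiview/projection.h"

    void EstimateFocalAndRotation(const libmv::Mat& x1,   // 2xN points, image 1
                                  const libmv::Mat& x2) { // 2xN points, image 2
      // 1) Focal-length candidates from two correspondences.
      libmv::vector<double> fs;
      libmv::F_FromCorrespondance_2points(x1, x2, &fs);

      // 2) Fit the rotation for one candidate, with the camera center fixed.
      if (fs.size() > 0) {
        libmv::Mat x1h, x2h;
        libmv::EuclideanToHomogeneous(x1, &x1h);   // 3xN
        libmv::EuclideanToHomogeneous(x2, &x2h);
        libmv::Mat3 R;
        libmv::GetR_FixedCameraCenter(x1h, x2h, fs[0], &R);
      }
    }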

View File

@@ -25,7 +25,7 @@ namespace libmv {
namespace panography {
namespace kernel {
void TwoPointSolver::Solve(const Mat &x1, const Mat &x2, vector<Mat3> *Hs) {
void TwoPointSolver::Solve(const Mat& x1, const Mat& x2, vector<Mat3>* Hs) {
// Solve for the focal lengths.
vector<double> fs;
F_FromCorrespondance_2points(x1, x2, &fs);
@@ -34,7 +34,7 @@ void TwoPointSolver::Solve(const Mat &x1, const Mat &x2, vector<Mat3> *Hs) {
Mat x1h, x2h;
EuclideanToHomogeneous(x1, &x1h);
EuclideanToHomogeneous(x2, &x2h);
for (int i = 0; i < fs.size(); ++i) {
Mat3 K1 = Mat3::Identity() * fs[i];
K1(2, 2) = 1.0;

View File

@@ -23,9 +23,9 @@
#include "libmv/base/vector.h"
#include "libmv/multiview/conditioning.h"
#include "libmv/multiview/homography_error.h"
#include "libmv/multiview/projection.h"
#include "libmv/multiview/two_view_kernel.h"
#include "libmv/multiview/homography_error.h"
#include "libmv/numeric/numeric.h"
namespace libmv {
@@ -34,18 +34,18 @@ namespace kernel {
struct TwoPointSolver {
enum { MINIMUM_SAMPLES = 2 };
static void Solve(const Mat &x1, const Mat &x2, vector<Mat3> *Hs);
static void Solve(const Mat& x1, const Mat& x2, vector<Mat3>* Hs);
};
typedef two_view::kernel::Kernel<
TwoPointSolver, homography::homography2D::AsymmetricError, Mat3>
UnnormalizedKernel;
typedef two_view::kernel::
Kernel<TwoPointSolver, homography::homography2D::AsymmetricError, Mat3>
UnnormalizedKernel;
typedef two_view::kernel::Kernel<
two_view::kernel::NormalizedSolver<TwoPointSolver, UnnormalizerI>,
homography::homography2D::AsymmetricError,
Mat3>
Kernel;
two_view::kernel::NormalizedSolver<TwoPointSolver, UnnormalizerI>,
homography::homography2D::AsymmetricError,
Mat3>
Kernel;
} // namespace kernel
} // namespace panography

View File

@@ -18,8 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
#include "libmv/logging/logging.h"
#include "libmv/multiview/panography.h"
#include "libmv/logging/logging.h"
#include "libmv/multiview/panography_kernel.h"
#include "libmv/multiview/projection.h"
#include "libmv/numeric/numeric.h"
@@ -30,18 +30,16 @@ namespace {
TEST(Panography, PrintSomeSharedFocalEstimationValues) {
Mat x1(2, 2), x2(2, 2);
x1<< 158, 78,
124, 113;
x2<< 300, 214,
125, 114;
x1 << 158, 78, 124, 113;
x2 << 300, 214, 125, 114;
// Normalize data (set principal point 0,0 and image border to 1.0).
x1.block<1, 2>(0, 0) /= 320;
x1.block<1, 2>(1, 0) /= 240;
x2.block<1, 2>(0, 0) /= 320;
x2.block<1, 2>(1, 0) /= 240;
x1+=Mat2::Constant(0.5);
x2+=Mat2::Constant(0.5);
x1 += Mat2::Constant(0.5);
x2 += Mat2::Constant(0.5);
vector<double> fs;
F_FromCorrespondance_2points(x1, x2, &fs);
@@ -53,9 +51,11 @@ TEST(Panography, PrintSomeSharedFocalEstimationValues) {
TEST(Panography, GetR_FixedCameraCenterWithIdentity) {
Mat x1(3, 3);
// clang-format off
x1 << 0.5, 0.6, 0.7,
0.5, 0.5, 0.4,
10.0, 10.0, 10.0;
// clang-format on
Mat3 R;
GetR_FixedCameraCenter(x1, x1, 1.0, &R);
@@ -68,16 +68,20 @@ TEST(Panography, Homography_GetR_Test_PitchY30) {
int n = 3;
Mat x1(3, n);
// clang-format off
x1 << 0.5, 0.6, 0.7,
0.5, 0.5, 0.4,
10, 10, 10;
// clang-format on
Mat x2 = x1;
const double alpha = 30.0 * M_PI / 180.0;
Mat3 rotY;
// clang-format off
rotY << cos(alpha), 0, -sin(alpha),
0, 1, 0,
sin(alpha), 0, cos(alpha);
// clang-format on
for (int i = 0; i < n; ++i) {
x2.block<3, 1>(0, i) = rotY * x1.col(i);
@@ -101,17 +105,23 @@ TEST(Panography, Homography_GetR_Test_PitchY30) {
TEST(MinimalPanoramic, Real_Case_Kernel) {
const int n = 2;
Mat x1(2, n); // From image 0.jpg
// clang-format off
x1<< 158, 78,
124, 113;
// clang-format on
Mat x2(2, n); // From image 3.jpg
// clang-format off
x2<< 300, 214,
125, 114;
// clang-format on
Mat3 Ground_TruthHomography;
// clang-format off
Ground_TruthHomography<< 1, 0.02, 129.83,
-0.02, 1.012, 0.07823,
0, 0, 1;
// clang-format on
vector<Mat3> Hs;
@@ -130,7 +140,7 @@ TEST(MinimalPanoramic, Real_Case_Kernel) {
// Assert that residuals are small enough
for (int i = 0; i < n; ++i) {
Vec x1p = H * x1h.col(i);
Vec residuals = x1p/x1p(2) - x2h.col(i);
Vec residuals = x1p / x1p(2) - x2h.col(i);
EXPECT_MATRIX_NEAR_ZERO(residuals, 1e-5);
}
}

View File

@@ -23,13 +23,13 @@
namespace libmv {
void P_From_KRt(const Mat3 &K, const Mat3 &R, const Vec3 &t, Mat34 *P) {
void P_From_KRt(const Mat3& K, const Mat3& R, const Vec3& t, Mat34* P) {
P->block<3, 3>(0, 0) = R;
P->col(3) = t;
(*P) = K * (*P);
}
void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
void KRt_From_P(const Mat34& P, Mat3* Kp, Mat3* Rp, Vec3* tp) {
// Decompose using the RQ decomposition HZ A4.1.1 pag.579.
Mat3 K = P.block(0, 0, 3, 3);
@@ -44,9 +44,11 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
c /= l;
s /= l;
Mat3 Qx;
// clang-format off
Qx << 1, 0, 0,
0, c, -s,
0, s, c;
// clang-format on
K = K * Qx;
Q = Qx.transpose() * Q;
}
@@ -58,9 +60,11 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
c /= l;
s /= l;
Mat3 Qy;
// clang-format off
Qy << c, 0, s,
0, 1, 0,
-s, 0, c;
// clang-format on
K = K * Qy;
Q = Qy.transpose() * Q;
}
@@ -72,9 +76,11 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
c /= l;
s /= l;
Mat3 Qz;
// clang-format off
Qz << c, -s, 0,
s, c, 0,
0, 0, 1;
// clang-format on
K = K * Qz;
Q = Qz.transpose() * Q;
}
@@ -92,17 +98,21 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
}
if (K(1, 1) < 0) {
Mat3 S;
// clang-format off
S << 1, 0, 0,
0, -1, 0,
0, 0, 1;
// clang-format on
K = K * S;
R = S * R;
}
if (K(0, 0) < 0) {
Mat3 S;
// clang-format off
S << -1, 0, 0,
0, 1, 0,
0, 0, 1;
// clang-format on
K = K * S;
R = S * R;
}
@@ -122,26 +132,30 @@ void KRt_From_P(const Mat34 &P, Mat3 *Kp, Mat3 *Rp, Vec3 *tp) {
*tp = t;
}
void ProjectionShiftPrincipalPoint(const Mat34 &P,
const Vec2 &principal_point,
const Vec2 &principal_point_new,
Mat34 *P_new) {
void ProjectionShiftPrincipalPoint(const Mat34& P,
const Vec2& principal_point,
const Vec2& principal_point_new,
Mat34* P_new) {
Mat3 T;
// clang-format off
T << 1, 0, principal_point_new(0) - principal_point(0),
0, 1, principal_point_new(1) - principal_point(1),
0, 0, 1;
// clang-format on
*P_new = T * P;
}
void ProjectionChangeAspectRatio(const Mat34 &P,
const Vec2 &principal_point,
void ProjectionChangeAspectRatio(const Mat34& P,
const Vec2& principal_point,
double aspect_ratio,
double aspect_ratio_new,
Mat34 *P_new) {
Mat34* P_new) {
Mat3 T;
// clang-format off
T << 1, 0, 0,
0, aspect_ratio_new / aspect_ratio, 0,
0, 0, 1;
// clang-format on
Mat34 P_temp;
ProjectionShiftPrincipalPoint(P, principal_point, Vec2(0, 0), &P_temp);
@@ -149,7 +163,7 @@ void ProjectionChangeAspectRatio(const Mat34 &P,
ProjectionShiftPrincipalPoint(P_temp, Vec2(0, 0), principal_point, P_new);
}
void HomogeneousToEuclidean(const Mat &H, Mat *X) {
void HomogeneousToEuclidean(const Mat& H, Mat* X) {
int d = H.rows() - 1;
int n = H.cols();
X->resize(d, n);
@@ -161,29 +175,29 @@ void HomogeneousToEuclidean(const Mat &H, Mat *X) {
}
}
void HomogeneousToEuclidean(const Mat3X &h, Mat2X *e) {
void HomogeneousToEuclidean(const Mat3X& h, Mat2X* e) {
e->resize(2, h.cols());
e->row(0) = h.row(0).array() / h.row(2).array();
e->row(1) = h.row(1).array() / h.row(2).array();
}
void HomogeneousToEuclidean(const Mat4X &h, Mat3X *e) {
void HomogeneousToEuclidean(const Mat4X& h, Mat3X* e) {
e->resize(3, h.cols());
e->row(0) = h.row(0).array() / h.row(3).array();
e->row(1) = h.row(1).array() / h.row(3).array();
e->row(2) = h.row(2).array() / h.row(3).array();
}
void HomogeneousToEuclidean(const Vec3 &H, Vec2 *X) {
void HomogeneousToEuclidean(const Vec3& H, Vec2* X) {
double w = H(2);
*X << H(0) / w, H(1) / w;
}
void HomogeneousToEuclidean(const Vec4 &H, Vec3 *X) {
void HomogeneousToEuclidean(const Vec4& H, Vec3* X) {
double w = H(3);
*X << H(0) / w, H(1) / w, H(2) / w;
}
void EuclideanToHomogeneous(const Mat &X, Mat *H) {
void EuclideanToHomogeneous(const Mat& X, Mat* H) {
int d = X.rows();
int n = X.cols();
H->resize(d + 1, n);
@@ -191,32 +205,32 @@ void EuclideanToHomogeneous(const Mat &X, Mat *H) {
H->row(d).setOnes();
}
void EuclideanToHomogeneous(const Vec2 &X, Vec3 *H) {
void EuclideanToHomogeneous(const Vec2& X, Vec3* H) {
*H << X(0), X(1), 1;
}
void EuclideanToHomogeneous(const Vec3 &X, Vec4 *H) {
void EuclideanToHomogeneous(const Vec3& X, Vec4* H) {
*H << X(0), X(1), X(2), 1;
}
// TODO(julien) Call conditioning.h/ApplyTransformationToPoints ?
void EuclideanToNormalizedCamera(const Mat2X &x, const Mat3 &K, Mat2X *n) {
void EuclideanToNormalizedCamera(const Mat2X& x, const Mat3& K, Mat2X* n) {
Mat3X x_image_h;
EuclideanToHomogeneous(x, &x_image_h);
Mat3X x_camera_h = K.inverse() * x_image_h;
HomogeneousToEuclidean(x_camera_h, n);
}
void HomogeneousToNormalizedCamera(const Mat3X &x, const Mat3 &K, Mat2X *n) {
void HomogeneousToNormalizedCamera(const Mat3X& x, const Mat3& K, Mat2X* n) {
Mat3X x_camera_h = K.inverse() * x;
HomogeneousToEuclidean(x_camera_h, n);
}
double Depth(const Mat3 &R, const Vec3 &t, const Vec3 &X) {
return (R*X)(2) + t(2);
double Depth(const Mat3& R, const Vec3& t, const Vec3& X) {
return (R * X)(2) + t(2);
}
double Depth(const Mat3 &R, const Vec3 &t, const Vec4 &X) {
double Depth(const Mat3& R, const Vec3& t, const Vec4& X) {
Vec3 Xe = X.head<3>() / X(3);
return Depth(R, t, Xe);
}
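
As a usage sketch (not part of the patch; values are arbitrary), composing a camera from K, R and t, projecting a point, and decomposing the camera back looks like this; Project() comes from the header shown next:

    // Sketch only; arbitrary intrinsics and pose.
    #include "libmv/multiview/projection.h"

    void CameraRoundTrip() {
      libmv::Mat3 K, R;
      libmv::Vec3 t;
      K << 800,   0, 320,
             0, 800, 240,
             0,   0,   1;
      R.setIdentity();
      t << 0, 0, 5;

      libmv::Mat34 P;
      libmv::P_From_KRt(K, R, t, &P);                 // P = K [R | t]

      libmv::Vec2 x = libmv::Project(P, libmv::Vec3(0.1, -0.2, 2.0));
      (void)x;                                        // pixel coordinates of the 3D point

      libmv::Mat3 K_out, R_out;
      libmv::Vec3 t_out;
      libmv::KRt_From_P(P, &K_out, &R_out, &t_out);   // RQ-based decomposition
    }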

View File

@@ -25,108 +25,108 @@
namespace libmv {
void P_From_KRt(const Mat3 &K, const Mat3 &R, const Vec3 &t, Mat34 *P);
void KRt_From_P(const Mat34 &P, Mat3 *K, Mat3 *R, Vec3 *t);
void P_From_KRt(const Mat3& K, const Mat3& R, const Vec3& t, Mat34* P);
void KRt_From_P(const Mat34& P, Mat3* K, Mat3* R, Vec3* t);
// Applies a change of basis to the image coordinates of the projection matrix
// so that the principal point becomes principal_point_new.
void ProjectionShiftPrincipalPoint(const Mat34 &P,
const Vec2 &principal_point,
const Vec2 &principal_point_new,
Mat34 *P_new);
void ProjectionShiftPrincipalPoint(const Mat34& P,
const Vec2& principal_point,
const Vec2& principal_point_new,
Mat34* P_new);
// Applies a change of basis to the image coordinates of the projection matrix
// so that the aspect ratio becomes aspect_ratio_new. This is done by
// stretching the y axis. The aspect ratio is defined as the quotient between
// the focal length of the y and the x axis.
void ProjectionChangeAspectRatio(const Mat34 &P,
const Vec2 &principal_point,
void ProjectionChangeAspectRatio(const Mat34& P,
const Vec2& principal_point,
double aspect_ratio,
double aspect_ratio_new,
Mat34 *P_new);
Mat34* P_new);
void HomogeneousToEuclidean(const Mat &H, Mat *X);
void HomogeneousToEuclidean(const Mat3X &h, Mat2X *e);
void HomogeneousToEuclidean(const Mat4X &h, Mat3X *e);
void HomogeneousToEuclidean(const Vec3 &H, Vec2 *X);
void HomogeneousToEuclidean(const Vec4 &H, Vec3 *X);
inline Vec2 HomogeneousToEuclidean(const Vec3 &h) {
void HomogeneousToEuclidean(const Mat& H, Mat* X);
void HomogeneousToEuclidean(const Mat3X& h, Mat2X* e);
void HomogeneousToEuclidean(const Mat4X& h, Mat3X* e);
void HomogeneousToEuclidean(const Vec3& H, Vec2* X);
void HomogeneousToEuclidean(const Vec4& H, Vec3* X);
inline Vec2 HomogeneousToEuclidean(const Vec3& h) {
return h.head<2>() / h(2);
}
inline Vec3 HomogeneousToEuclidean(const Vec4 &h) {
inline Vec3 HomogeneousToEuclidean(const Vec4& h) {
return h.head<3>() / h(3);
}
inline Mat2X HomogeneousToEuclidean(const Mat3X &h) {
inline Mat2X HomogeneousToEuclidean(const Mat3X& h) {
Mat2X e(2, h.cols());
e.row(0) = h.row(0).array() / h.row(2).array();
e.row(1) = h.row(1).array() / h.row(2).array();
return e;
}
void EuclideanToHomogeneous(const Mat &X, Mat *H);
inline Mat3X EuclideanToHomogeneous(const Mat2X &x) {
void EuclideanToHomogeneous(const Mat& X, Mat* H);
inline Mat3X EuclideanToHomogeneous(const Mat2X& x) {
Mat3X h(3, x.cols());
h.block(0, 0, 2, x.cols()) = x;
h.row(2).setOnes();
return h;
}
inline void EuclideanToHomogeneous(const Mat2X &x, Mat3X *h) {
inline void EuclideanToHomogeneous(const Mat2X& x, Mat3X* h) {
h->resize(3, x.cols());
h->block(0, 0, 2, x.cols()) = x;
h->row(2).setOnes();
}
inline Mat4X EuclideanToHomogeneous(const Mat3X &x) {
inline Mat4X EuclideanToHomogeneous(const Mat3X& x) {
Mat4X h(4, x.cols());
h.block(0, 0, 3, x.cols()) = x;
h.row(3).setOnes();
return h;
}
inline void EuclideanToHomogeneous(const Mat3X &x, Mat4X *h) {
inline void EuclideanToHomogeneous(const Mat3X& x, Mat4X* h) {
h->resize(4, x.cols());
h->block(0, 0, 3, x.cols()) = x;
h->row(3).setOnes();
}
void EuclideanToHomogeneous(const Vec2 &X, Vec3 *H);
void EuclideanToHomogeneous(const Vec3 &X, Vec4 *H);
inline Vec3 EuclideanToHomogeneous(const Vec2 &x) {
void EuclideanToHomogeneous(const Vec2& X, Vec3* H);
void EuclideanToHomogeneous(const Vec3& X, Vec4* H);
inline Vec3 EuclideanToHomogeneous(const Vec2& x) {
return Vec3(x(0), x(1), 1);
}
inline Vec4 EuclideanToHomogeneous(const Vec3 &x) {
inline Vec4 EuclideanToHomogeneous(const Vec3& x) {
return Vec4(x(0), x(1), x(2), 1);
}
// Conversion from image coordinates to normalized camera coordinates
void EuclideanToNormalizedCamera(const Mat2X &x, const Mat3 &K, Mat2X *n);
void HomogeneousToNormalizedCamera(const Mat3X &x, const Mat3 &K, Mat2X *n);
void EuclideanToNormalizedCamera(const Mat2X& x, const Mat3& K, Mat2X* n);
void HomogeneousToNormalizedCamera(const Mat3X& x, const Mat3& K, Mat2X* n);
inline Vec2 Project(const Mat34 &P, const Vec3 &X) {
inline Vec2 Project(const Mat34& P, const Vec3& X) {
Vec4 HX;
HX << X, 1.0;
Vec3 hx = P * HX;
return hx.head<2>() / hx(2);
}
inline void Project(const Mat34 &P, const Vec4 &X, Vec3 *x) {
inline void Project(const Mat34& P, const Vec4& X, Vec3* x) {
*x = P * X;
}
inline void Project(const Mat34 &P, const Vec4 &X, Vec2 *x) {
inline void Project(const Mat34& P, const Vec4& X, Vec2* x) {
Vec3 hx = P * X;
*x = hx.head<2>() / hx(2);
}
inline void Project(const Mat34 &P, const Vec3 &X, Vec3 *x) {
inline void Project(const Mat34& P, const Vec3& X, Vec3* x) {
Vec4 HX;
HX << X, 1.0;
Project(P, HX, x);
}
inline void Project(const Mat34 &P, const Vec3 &X, Vec2 *x) {
inline void Project(const Mat34& P, const Vec3& X, Vec2* x) {
Vec3 hx;
Project(P, X, &hx);
*x = hx.head<2>() / hx(2);
}
inline void Project(const Mat34 &P, const Mat4X &X, Mat2X *x) {
inline void Project(const Mat34& P, const Mat4X& X, Mat2X* x) {
x->resize(2, X.cols());
for (int c = 0; c < X.cols(); ++c) {
Vec3 hx = P * X.col(c);
@@ -134,13 +134,13 @@ inline void Project(const Mat34 &P, const Mat4X &X, Mat2X *x) {
}
}
inline Mat2X Project(const Mat34 &P, const Mat4X &X) {
inline Mat2X Project(const Mat34& P, const Mat4X& X) {
Mat2X x;
Project(P, X, &x);
return x;
}
inline void Project(const Mat34 &P, const Mat3X &X, Mat2X *x) {
inline void Project(const Mat34& P, const Mat3X& X, Mat2X* x) {
x->resize(2, X.cols());
for (int c = 0; c < X.cols(); ++c) {
Vec4 HX;
@@ -150,7 +150,7 @@ inline void Project(const Mat34 &P, const Mat3X &X, Mat2X *x) {
}
}
inline void Project(const Mat34 &P, const Mat3X &X, const Vecu &ids, Mat2X *x) {
inline void Project(const Mat34& P, const Mat3X& X, const Vecu& ids, Mat2X* x) {
x->resize(2, ids.size());
Vec4 HX;
Vec3 hx;
@@ -161,26 +161,26 @@ inline void Project(const Mat34 &P, const Mat3X &X, const Vecu &ids, Mat2X *x) {
}
}
inline Mat2X Project(const Mat34 &P, const Mat3X &X) {
inline Mat2X Project(const Mat34& P, const Mat3X& X) {
Mat2X x(2, X.cols());
Project(P, X, &x);
return x;
}
inline Mat2X Project(const Mat34 &P, const Mat3X &X, const Vecu &ids) {
inline Mat2X Project(const Mat34& P, const Mat3X& X, const Vecu& ids) {
Mat2X x(2, ids.size());
Project(P, X, ids, &x);
return x;
}
double Depth(const Mat3 &R, const Vec3 &t, const Vec3 &X);
double Depth(const Mat3 &R, const Vec3 &t, const Vec4 &X);
double Depth(const Mat3& R, const Vec3& t, const Vec3& X);
double Depth(const Mat3& R, const Vec3& t, const Vec4& X);
/**
* Returns true if the homogeneous 3D point X is in front of
* the camera P.
*/
inline bool isInFrontOfCamera(const Mat34 &P, const Vec4 &X) {
inline bool isInFrontOfCamera(const Mat34& P, const Vec4& X) {
double condition_1 = P.row(2).dot(X) * X[3];
double condition_2 = X[2] * X[3];
if (condition_1 > 0 && condition_2 > 0)
@@ -189,37 +189,37 @@ inline bool isInFrontOfCamera(const Mat34 &P, const Vec4 &X) {
return false;
}
inline bool isInFrontOfCamera(const Mat34 &P, const Vec3 &X) {
inline bool isInFrontOfCamera(const Mat34& P, const Vec3& X) {
Vec4 X_homo;
X_homo.segment<3>(0) = X;
X_homo(3) = 1;
return isInFrontOfCamera( P, X_homo);
return isInFrontOfCamera(P, X_homo);
}
/**
* Transforms a 2D point from pixel image coordinates to a 2D point in
* normalized image coordinates.
*/
inline Vec2 ImageToNormImageCoordinates(Mat3 &Kinverse, Vec2 &x) {
Vec3 x_h = Kinverse*EuclideanToHomogeneous(x);
return HomogeneousToEuclidean( x_h );
inline Vec2 ImageToNormImageCoordinates(Mat3& Kinverse, Vec2& x) {
Vec3 x_h = Kinverse * EuclideanToHomogeneous(x);
return HomogeneousToEuclidean(x_h);
}
/// Estimates the root mean square error (2D)
inline double RootMeanSquareError(const Mat2X &x_image,
const Mat4X &X_world,
const Mat34 &P) {
inline double RootMeanSquareError(const Mat2X& x_image,
const Mat4X& X_world,
const Mat34& P) {
size_t num_points = x_image.cols();
Mat2X dx = Project(P, X_world) - x_image;
return dx.norm() / num_points;
}
/// Estimates the root mean square error (2D)
inline double RootMeanSquareError(const Mat2X &x_image,
const Mat3X &X_world,
const Mat3 &K,
const Mat3 &R,
const Vec3 &t) {
inline double RootMeanSquareError(const Mat2X& x_image,
const Mat3X& X_world,
const Mat3& K,
const Mat3& R,
const Vec3& t) {
Mat34 P;
P_From_KRt(K, R, t, &P);
size_t num_points = x_image.cols();

View File

@@ -29,14 +29,18 @@ using namespace libmv;
TEST(Projection, P_From_KRt) {
Mat3 K, Kp;
// clang-format off
K << 10, 1, 30,
0, 20, 40,
0, 0, 1;
// clang-format on
Mat3 R, Rp;
// clang-format off
R << 1, 0, 0,
0, 1, 0,
0, 0, 1;
// clang-format on
Vec3 t, tp;
t << 1, 2, 3;
@@ -62,16 +66,18 @@ Vec4 GetRandomPoint() {
TEST(Projection, isInFrontOfCamera) {
Mat34 P;
// clang-format off
P << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0;
// clang-format on
Vec4 X_front = GetRandomPoint();
Vec4 X_back = GetRandomPoint();
X_front(2) = 10; /* Any point in the positive Z direction
* where Z > 1 is in front of the camera. */
X_back(2) = -10; /* Any point in the negative Z direction
* is behind the camera. */
bool res_front = isInFrontOfCamera(P, X_front);
bool res_back = isInFrontOfCamera(P, X_back);
@@ -82,12 +88,14 @@ TEST(Projection, isInFrontOfCamera) {
TEST(AutoCalibration, ProjectionShiftPrincipalPoint) {
Mat34 P1, P2;
// clang-format off
P1 << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0;
P2 << 1, 0, 3, 0,
0, 1, 4, 0,
0, 0, 1, 0;
// clang-format on
Mat34 P1_computed, P2_computed;
ProjectionShiftPrincipalPoint(P1, Vec2(0, 0), Vec2(3, 4), &P2_computed);
ProjectionShiftPrincipalPoint(P2, Vec2(3, 4), Vec2(0, 0), &P1_computed);
@@ -98,12 +106,14 @@ TEST(AutoCalibration, ProjectionShiftPrincipalPoint) {
TEST(AutoCalibration, ProjectionChangeAspectRatio) {
Mat34 P1, P2;
// clang-format off
P1 << 1, 0, 3, 0,
0, 1, 4, 0,
0, 0, 1, 0;
P2 << 1, 0, 3, 0,
0, 2, 4, 0,
0, 0, 1, 0;
// clang-format on
Mat34 P1_computed, P2_computed;
ProjectionChangeAspectRatio(P1, Vec2(3, 4), 1, 2, &P2_computed);
ProjectionChangeAspectRatio(P2, Vec2(3, 4), 2, 1, &P1_computed);

View File

@@ -33,23 +33,23 @@ namespace libmv {
namespace resection {
// x's are 2D image coordinates, (x,y,1), and X's are homogeneous four vectors.
template<typename T>
void Resection(const Matrix<T, 2, Dynamic> &x,
const Matrix<T, 4, Dynamic> &X,
Matrix<T, 3, 4> *P) {
template <typename T>
void Resection(const Matrix<T, 2, Dynamic>& x,
const Matrix<T, 4, Dynamic>& X,
Matrix<T, 3, 4>* P) {
int N = x.cols();
assert(X.cols() == N);
Matrix<T, Dynamic, 12> design(2*N, 12);
Matrix<T, Dynamic, 12> design(2 * N, 12);
design.setZero();
for (int i = 0; i < N; i++) {
T xi = x(0, i);
T yi = x(1, i);
// See equation (7.2) on page 179 of H&Z.
design.template block<1, 4>(2*i, 4) = -X.col(i).transpose();
design.template block<1, 4>(2*i, 8) = yi*X.col(i).transpose();
design.template block<1, 4>(2*i + 1, 0) = X.col(i).transpose();
design.template block<1, 4>(2*i + 1, 8) = -xi*X.col(i).transpose();
design.template block<1, 4>(2 * i, 4) = -X.col(i).transpose();
design.template block<1, 4>(2 * i, 8) = yi * X.col(i).transpose();
design.template block<1, 4>(2 * i + 1, 0) = X.col(i).transpose();
design.template block<1, 4>(2 * i + 1, 8) = -xi * X.col(i).transpose();
}
Matrix<T, 12, 1> p;
Nullspace(&design, &p);

Some files were not shown because too many files have changed in this diff.