Compare commits

...

537 Commits

Author SHA1 Message Date
be37172a61 GPencil: Fix edit curve not being written to file 2020-11-13 19:46:57 +01:00
15e79aeb18 Merge branch 'master' into greasepencil-edit-curve 2020-11-13 18:12:28 +01:00
e9b955b99c GPencil: Remove ID from operators to fix T82597
Instead of using the ID of the object, the parameter is now an enum with the options Selected Object or New.

If Selected mode is used, the first selected grease pencil object is used. If none of the selected objects is a grease pencil object, a new object is created.

Small cleanup changes to the original patch.

Differential Revision: https://developer.blender.org/D9529
2020-11-13 15:21:15 +01:00
50ccf346f0 LibOverride: Adjust PointCache operators to properly handle overrides.
LibOverrides only support a small sub-set of PointCache features for now
(one cannot add new caches, baking in memory is not supported...).

Part of first step of T82503: support disk cache in liboverrides.
2020-11-13 14:21:27 +01:00
7e210e68ba LibOverride: Do not tag overrides for complete recalc.
This was done as some sort of safety measure, but should not actually be
needed, and including tags like `ID_RECALC_POINT_CACHE` makes usage of
point caches impossible with liboverrides (since it would systematically
invalidate all caches on file load).

In theory we should not have to tag anything here at all, since RNA
accessors are supposed to take care of it, but for now we keep the
`ID_RECALC_COPY_ON_WRITE` one.

Part of first step of T82503: support disk cache in liboverrides.
2020-11-13 14:21:27 +01:00
59910f7217 LibOverride: Make PointCache RNA properties overridable.
Note that due to the convoluted layout of point caches in RNA (the active
one also stores the list of all available ones), we will often have the
point cache override rules twice. This should not be a huge problem in
practice.

Part of first step of T82503: support disk cache in liboverrides.
2020-11-13 14:21:27 +01:00
75ea4b8a1f Ceres: Update to upstream version 2.0.0
We were already using one of the earlier release candidates of the
library, so no big changes are expected. This just makes the update
official, using the official version and stating it in the readme file.
2020-11-13 11:52:59 +01:00
7146e9696e CMake: Extend strict flags cancellation flags
It becomes rather annoying to duplicate them across the C/C++ and
GCC/Clang sets; the test should arguably cover both C and C++, and do so
for all compilers.

Solves a strict warning in the upstream Ceres library.
2020-11-13 11:51:49 +01:00
a2e00236d0 Revert "Codesign: Versioning code to support older branches"
This reverts commit 9d172f007e.

Had second thoughts and remembered why it was not done in the first place.
The issue here is that the server needs to communicate the codesign result
back, and that must happen within the new protocol. So if the client talks
the old protocol it is possible to receive data from it, but it is not
possible to communicate the result back to it.
2020-11-13 11:35:04 +01:00
a8f9a24939 Cleanup: use IMB_FTYPE_NONE instead of 0 for imbuf format comparison
Image format code checked the file type against an enum, except for
zero, which is used when the format can't be detected.

Also add doc-strings to some of the image file type callbacks.
2020-11-13 20:32:15 +11:00
4a3b26dd5e Fix building after 2e53b646f6f02ab112e0823b9577ff2e1920faae 2020-11-13 20:32:15 +11:00
2e53b646f6 GPencil: Remove "angle_split" from Multiply modifier. 2020-11-13 17:20:20 +08:00
5035057281 Object: show preview plane for add-object tool
The orientation & depth settings are used to show the preview plane
that is used when adding the object.
2020-11-13 20:15:03 +11:00
9d172f007e Codesign: Versioning code to support older branches
Turns out it is easier to have suboptimal versioning code on the server
side than to deal with branches into which changes are to be merged.
2020-11-13 09:49:02 +01:00
12f394ece7 Refactor vec_roll_to_mat3_normalized() for clarity
The function vec_roll_to_mat3_normalized() has a bug, as described in T82455. This differential only refactors the code so that it becomes clearer what the function does and how the bug can be fixed. It is not supposed to introduce any functional changes.

Reviewed By: sybren

Differential Revision: https://developer.blender.org/D9410
2020-11-13 09:46:18 +01:00
Ivan Perevala
4efd87d56b UI: Adaptive HDRI preview resolution
The HDRI preview should have a resolution dependent on DPI, viewport scale and the HDRI gizmo size.
This patch uses an LOD to render a rounder sphere.

Reviewed By: Jeroen Bakker

Differential Revision: https://developer.blender.org/D9382
2020-11-13 08:26:27 +01:00
YimingWu
b35b8c8849 Adding 3D_POLYLINE_UNIFORM_COLOR to PyGPU shader API
This allows Python scripts to access the `lineWidth` uniform when drawing lines
without using `glLineWidth`.

Reviewed By: Jeroen Bakker

Differential Revision: https://developer.blender.org/D9518
2020-11-13 08:23:04 +01:00
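A minimal usage sketch for the shader name added above (not part of the patch; the `viewportSize` and `color` uniform names and the draw-handler setup are assumptions based on how the polyline shader family is typically driven):

```python
import bpy
import gpu
from gpu_extras.batch import batch_for_shader

coords = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
shader = gpu.shader.from_builtin('3D_POLYLINE_UNIFORM_COLOR')
batch = batch_for_shader(shader, 'LINES', {"pos": coords})

def draw():
    region = bpy.context.region
    shader.bind()
    # The line width in pixels is passed as a uniform instead of calling
    # glLineWidth(); viewportSize is assumed to be required as well.
    shader.uniform_float("viewportSize", (region.width, region.height))
    shader.uniform_float("lineWidth", 4.0)
    shader.uniform_float("color", (1.0, 0.3, 0.1, 1.0))
    batch.draw(shader)

bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_VIEW')
```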
Manuel Castilla
4dc5920525 Fix CalculateStandardDeviationOperation incorrect results for R G B channels
The standard deviation formula wasn't being applied correctly when selecting
the R, G or B channels. The issue has been there since Blender 2.64, as it was
incorrectly ported over from the previous compositor.

Reviewed By: Sergey Sharybin, Jeroen Bakker

Differential Revision: https://developer.blender.org/D9384
2020-11-13 08:20:15 +01:00
Jun Mizutani
db7d8281c5 Add An Opacity Slider to Overlay Wireframe
This patch adds an opacity slider to the wireframe overlay. The previous
wireframe could be too dark in dense geometry scenes, and sometimes the
user just wants an impression of the geometry during modelling.

Reviewed By: Jeroen Bakker

Differential Revision: https://developer.blender.org/D7622
2020-11-13 08:14:56 +01:00
f00ebd4dba UI: make add object tool experimental
Some changes here are planned which need feedback from users before
declaring this ready for the next release.
2020-11-13 17:27:39 +11:00
ccf8df66fe BLI_math: add floor_power_of_10, ceil_power_of_10
Add utility functions to get the floor/ceiling of a float value
to the next power of 10.
2020-11-13 17:05:46 +11:00
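The commit adds C functions; a minimal Python sketch of the equivalent math (assuming a positive, non-zero input) is:

```python
import math

def floor_power_of_10(f):
    # Largest power of 10 that is <= f, e.g. 0.049 -> 0.01, 1500.0 -> 1000.0.
    return 10.0 ** math.floor(math.log10(f))

def ceil_power_of_10(f):
    # Smallest power of 10 that is >= f, e.g. 0.049 -> 0.1, 1500.0 -> 10000.0.
    return 10.0 ** math.ceil(math.log10(f))
```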
40b2ce5ea7 Cleanup: Remove unnecessary logic in panel code
Also use short for panel flag arguments to functions since it matches
the type in DNA, and remove a comment that isn't helpful.
2020-11-12 23:32:50 -05:00
f3ab698951 Cleanup: Simplify panel activate state function
This commit moves some of the logic around so that the logic in
panel_activate_state is clearly separated by the state being activated.
There are fewer nested and redundant checks, and it's easier to see
the progression of interaction with the panel handler.
2020-11-12 23:28:53 -05:00
4eb57d00bb Merge branch 'blender-v2.91-release' 2020-11-13 11:37:47 +11:00
2e08500d04 Fix memory leak writing PNG when opening the file fails 2020-11-13 11:36:29 +11:00
454b7876ff Cleanup: remove unnecessary ImFileType.ftype callback
This callback made some sense before moving the file-type information
from a bit-flag to an enum: e142ae77ca

Since then, we can compare the type value directly.

Also replace loops over file types with IMB_file_type_from_{ibuf/ftype}.
2020-11-13 11:28:24 +11:00
ac299bb453 Cleanup: imbuf file format callback declaration
Use named members as this wasn't very readable given the number
of unnamed NULL members.
2020-11-13 10:39:48 +11:00
9a73417337 Fix T82349: file extension not added to unpacked images
Ensure the appropriate extension is set when unpacking generated images
that don't have a filepath set.

Ref D9500
2020-11-13 10:11:00 +11:00
cd49afc596 Fix T82596: Fly/walk navigation crash
Add a NULL check for the View3D's camera, because these
modes can be entered even without an active camera.
2020-11-12 17:06:57 -05:00
55e2930c18 Outliner: Sync with property editor physics tab
This commit makes the property editor switch to the physics tab instead
of the modifier tab when selecting physics modifiers. Since the modifier
isn't visible then, it's confusing to change the expansion, so this commit
also disables the modifier expansion for these modifiers.

Differential Revision: https://developer.blender.org/D9544
2020-11-12 16:59:30 -05:00
78fcd5970c Merge branch 'master' into greasepencil-edit-curve 2020-11-12 22:43:01 +01:00
b2b0f35d4e GPencil: Fix curve point extrude selection 2020-11-12 22:42:13 +01:00
76e9d3638f GPencil: Clean up compiler warnings 2020-11-12 22:41:27 +01:00
b6988de22a Merge branch 'blender-v2.91-release' 2020-11-12 20:30:14 +01:00
d59fa12f2a Fix T82607: crash cancelling Cycles render during adaptive subdivision update
Now that the Blender sync mechanism deletes nodes from the scene, we need to
ensure scene update is stopped before we do this.

Also add some more early outs in scene geometry update to ensure we do not
continue working on incomplete geometry data, though that was not the cause of
this crash.
2020-11-12 20:14:12 +01:00
a6c1c0427c Cleanup: remove accidentally committed merge files 2020-11-12 19:52:04 +01:00
ddc6a45a54 Cleanup: compiler warning 2020-11-12 19:52:04 +01:00
5c01ecd2bf Fix T82516: Cycles crash updating animated volumes after NanoVDB
Two issues:
* Automatic deduplication of OpenVDB grid data was failing when Cycles had
  already cleared the OpenVDB grid, causing an empty grid. Instead rely on
  Blender returning the same OpenVDB grid pointer when deduplication is possible.
* The volume bounds mesh was not properly cleared when the OpenVDB grid was
  empty, causing a mismatch between mesh and voxel data.
2020-11-12 19:48:59 +01:00
c43283d10b fix: added missing float declarations 2020-11-12 19:27:12 +01:00
923b314a7a Test cases for vec_roll_to_mat3_normalized
The function vec_roll_to_mat3_normalized() basically has to handle 3 scenarios:

- When a bone is oriented along the negative Y axis
- When a bone is very close to the negative Y axis
- All other cases

The tests in the Differential make sure that all 3 situations are covered.

Reviewed By: sybren, mont29

Differential Revision: https://developer.blender.org/D9525
2020-11-12 18:16:35 +01:00
f17dfd575c Fix empty Cycles render devices panel showing in preferences on macOS
There is no GPU rendering support on macOS, so showing the panel only adds
confusion. Also hide the panels in builds without Cycles.
2020-11-12 17:39:19 +01:00
fe808ca8a0 Merge branch 'master' into greasepencil-edit-curve 2020-11-12 17:21:19 +01:00
7a31486571 GPencil: Fix comments and rename _pad2 to _pad 2020-11-12 17:12:37 +01:00
b9bd47c2e2 Merge branch 'blender-v2.91-release' into master 2020-11-12 16:24:18 +01:00
dad228a19c Fix asserts when two (or more) SplineIK constraints have the same root
Only a single DEG operation node `POSE_SPLINE_IK_SOLVER` should
be added in this case [see `build_splineik_pose`; the same is already done
for overlapping IK in `build_ik_pose`].

ref T82347.

Reviewers: sybren

Maniphest Tasks: T82347

Differential Revision: https://developer.blender.org/D9471
2020-11-12 16:20:48 +01:00
987732181f Merge branch 'blender-v2.91-release' 2020-11-12 14:04:02 +01:00
d0c1d93b7e Fluid: Removed clamp from force assignment
The clamp is too aggressive and results in forces being too weak.
2020-11-12 14:03:03 +01:00
a8f1bea590 Fix NanoVDB not being enabled/disabled correctly in CMake profiles
This caused warnings when e.g. building the lite profile, because NanoVDB was not disabled but
OpenVDB was. This is fixed by setting the "WITH_NANOVDB" flag too.
2020-11-12 12:49:12 +01:00
dbbfba9428 Fix T81813: Keyframe handles don't follow keyframes
Add a new property `co_ui` to Keyframes, the modification of which will
apply to the keyframe itself as well as its Bézier handles.

Dragging the "Keyframe" slider in the properties panel now maintains the
deltas between the keyframe and its handles, just like moving the key in
the graph editor would.

Reviewed by @sybren in T81813.
2020-11-12 12:02:49 +01:00
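A minimal Python sketch of the difference (assuming the active object has an action with at least one F-Curve; the offsets are illustrative only):

```python
import bpy

fcurve = bpy.context.active_object.animation_data.action.fcurves[0]
key = fcurve.keyframe_points[0]

# Writing to `co` moves only the key itself; the handles stay where they are.
key.co.y += 1.0

# Writing to the new `co_ui` moves the key and drags both Bezier handles
# along, preserving the handle deltas, like moving the key in the graph editor.
key.co_ui.y += 1.0
```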
5db114ae0f Merge branch 'blender-v2.91-release' into master 2020-11-12 11:51:25 +01:00
9067cd64a5 Fix T82466: Library Overrides: overrides disappear when appending.
`BKE_library_make_local` was not properly checking for tags and/or libs
in the liboverrides case.
2020-11-12 11:50:33 +01:00
0d04bcd566 Merge branch 'blender-v2.91-release' into master 2020-11-12 11:39:34 +01:00
12dd26a2bb Sculpt: fix face set extract clicking in empty space
Should not do anything in that case.

ref T82615

Maniphest Tasks: T82615

Differential Revision: https://developer.blender.org/D9532
2020-11-12 11:36:29 +01:00
b4b4532ce0 Sculpt: use ESC key in addition to RMB to cancel eyedropper
This is more in line with other eyedropper usages throughout Blender.

Affected operators:
- Sample Dyntopo detail
- Extract Face Set (as reported in T82615)

ref T82615

Maniphest Tasks: T82615

Differential Revision: https://developer.blender.org/D9531
2020-11-12 11:36:18 +01:00
d706aaa53d Merge branch 'blender-v2.91-release' into master 2020-11-12 11:26:24 +01:00
beb1460f8e Codesign: Report codesign errors from server to worker
Pass codesign errors (if any) from the codesign buildbot server to the
buildbot worker, so that the latter can abort the build process if
an error happens. This solves issues where a non-properly-notarized
DMG package gets uploaded to the buildbot website.
2020-11-12 11:26:06 +01:00
d7a2032846 Merge branch 'blender-v2.91-release' into master 2020-11-12 11:23:16 +01:00
eaf9ae643b Fix T82624: Skin modifier's root bone cannot be moved
When creating an armature from the skin modifier, the resulting bones would
always be flagged BONE_CONNECTED.
Those bones cannot be transformed (just rotated).

Now only flag bones that really have a parent as BONE_CONNECTED.

Maniphest Tasks: T82624

Differential Revision: https://developer.blender.org/D9534
2020-11-12 11:20:24 +01:00
bc090387ac Fix T82388: Sculpt mode: Unexpected undo behavior.
Issue exposed by rB4c7b1766a7f1.

The main idea is that the first non-memfile undo step should check into the
previous memfile and tag the ID it is editing as `future_changed`.

That way, when we go back and undo to the memfile, said IDs are properly
detected as changed and re-read from the memfile.

Otherwise, the undo system sees them as unchanged and just re-uses the
current data instead.

Note that currently only Sculpt mode seems affected (probably because it
is storing the mode switch itself as a Sculpt undo step instead of a
memfile one), but similar action might be needed in some other cases
too.

Maniphest Tasks: T82388

Differential Revision: https://developer.blender.org/D9510
2020-11-12 10:47:50 +01:00
fb4113defb Codesign: Report codesign errors from server to worker
Pass codesign errors (if any) from the codesign buildbot server to the
buildbot worker, so that the latter can abort the build process if
an error happens. This solves issues where a non-properly-notarized
DMG package gets uploaded to the buildbot website.
2020-11-12 10:12:56 +01:00
88bb29dea6 Fix T82617: artifacts in Cycles viewport when changing subdivision attributes
The old attributes were not cleared when synchronizing the geometries; this could also lead to crashes in other cases.

Ref T82608.
2020-11-12 09:17:38 +01:00
08452d9956 Merge branch 'blender-v2.91-release' 2020-11-12 09:14:36 +01:00
cd2dfacfa5 Merge branch 'blender-v2.91-release' 2020-11-12 09:14:05 +01:00
Jeroen Bakker
c08827e659 Fix T82093: Sampled Colors Mismatch When Painting (Partial)
When painting in the image editor on data images (Non-color, Raw), the
sampled color did not match the actual effect that the painting has on
the image. The root cause is that the sampling is color managed, but the
painting still uses a fixed color management pipeline with a lot of
assumptions. Due to recent changes the drawing of the image editor is
color managed, but the painting isn't, which is what made these changes
show up.

This patch is a workaround so that the sampled colors and the effect
the paint has on the texture match. This isn't the correct solution;
that would be to migrate all the painting tools to use proper color
management.

Reviewed By: Pablo Dobarro

Differential Revision: https://developer.blender.org/D9411
2020-11-12 09:13:06 +01:00
Jeroen Bakker
f93081a01b Fix T81673: Color picker picks up UI and Overlay
There are two implementations of the Sample Color operation. One is used
by texture paint and one by the image editor. The image editor
variant sampled from the ibuf directly, but the texture paint variant
was sampling from the screen front buffer. This can lead to incorrect
samples due to the color pipeline.

This patch uses the image editor variant when sampling a color for
2D texture painting.

Reviewed By: Pablo Dobarro

Differential Revision: https://developer.blender.org/D9408
2020-11-12 09:08:32 +01:00
89c8b074e7 Cleanup: split view3d_placement depth & orientation calculation
Split out functionality needed for preview plane drawing.
2020-11-12 16:16:49 +11:00
934c2c8ac5 Cleanup: clang-tidy, remove invalid comments 2020-11-12 15:18:08 +11:00
977b6ca305 Cleanup: Imperative tense in property description 2020-11-12 04:59:50 +01:00
f284a40385 ImBuf: pass the number of bytes read to 'is_a' callbacks
Previously the header was a fixed size and assumed to be zeroed.
Now read in bytes up to `HEADER_SIZE`, and pass the number of bytes
read to the callback, which must not read past those bytes.
2020-11-12 12:30:18 +11:00
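A rough Python analogue of the new callback contract (the real callbacks are C functions in the ImFileType table; `imb_is_a_png` and its use of the PNG signature here are just for illustration):

```python
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def imb_is_a_png(header: bytes, size: int) -> bool:
    # Only inspect the bytes that were actually read from the file;
    # the callback must never look past `size`.
    return size >= len(PNG_MAGIC) and header[:len(PNG_MAGIC)] == PNG_MAGIC
```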
fa81a42539 Cleanup: RNA ID enum utility function
- Avoid calling `GS(id->name)` on each iteration.
- Remove unused arguments.
- Pass `const ID *` to the filter callback.
2020-11-12 11:46:30 +11:00
e00bb5a4b7 Cleanup: spelling 2020-11-12 11:35:31 +11:00
9e1e9516a0 Cleanup: warnings 2020-11-12 11:23:21 +11:00
Aaron Carlisle
2ef2b3e0fd Cleanup: Remove SSE math optimization for i386 macOS builds
We haven't supported 32-bit macOS builds for a while, so this should be safe to remove.

Reviewed By: #platform_macos, brecht

Differential Revision: https://developer.blender.org/D9489
2020-11-11 17:08:48 -05:00
1043ec7991 Merge branch 'blender-v2.91-release' 2020-11-11 16:37:22 -05:00
b99faa0f56 Fix T80475, bad bevel: side vertex in bad plane in some cases.
A better normal was needed for the plane to offset into when there are
non-in-plane edges between two beveled edges. It was using the vertex
normal, which is just wrong.

Differential Revision: https://developer.blender.org/D9508
2020-11-11 16:24:01 -05:00
88e6341ce8 UI: Tooltips: don't add period to labels
These are generally only one or two word phrases and are not sentences.
This change slightly improves readability.

Note, the check for when to display labels:

```
Tip Label (only for buttons not already showing the label).
```

Could be improved here because there are a lot of false positives.
2020-11-11 14:58:50 -05:00
40aa69e2eb Cleanup: Split header for Outliner tree building into C and C++ headers
See https://developer.blender.org/D9499.

It's odd to include a C++ header (".hh") in C code; we should avoid that. All
of the Outliner code should be moved to C++, I don't expect this C header to
stay for long.
2020-11-11 19:09:15 +01:00
c2b3a68f24 Cleanup: Rename Outliner "tree-view" types to "tree-display" & update comments
See https://developer.blender.org/D9499.

"View" leads to weird names like `TreeViewViewLayer` and after all they are
specific to what we call a "display mode", so "display" is more appropriate.

Also add, update and correct comments.
2020-11-11 19:09:11 +01:00
01318b3112 Cleanup: Follow C++ code style for new Outliner building code
See https://developer.blender.org/D9499.

* Use C++17 nested namespaces.
* Use `_` suffix rather than prefix for private member variables.

Also: Simplify code visually in `tree_view.cc` with `using namespace`.
2020-11-11 19:09:06 +01:00
43b4570dcf Cleanup: General cleanup of Outliner Blender File display mode building
See https://developer.blender.org/D9499.

* Turn functions into member functions (makes the API for a type more obvious &
  local, allows implicitly sharing data through member variables, enables
  order-independent definition of functions, allows more natural language for
  function names because of the obvious context).
* Prefer references over pointers for passing by reference (makes clear that
  NULL is not a valid value and that the current scope is not the owner).
* Reduce indentation levels, use `continue` in loops to ensure preconditions
  are met.
* Add asserts for sanity checks.
2020-11-11 19:09:01 +01:00
44d8fafd7f UI Code Quality: Convert Outliner Blender File mode to new tree building design
See https://developer.blender.org/D9499.

Also:
* Add `space_outliner/tree/common.cc` for functions shared between display
  modes.
* I had to add a cast to `ListBaseWrapper` to make it work with ID lists.
* Cleanup: Remove internal `Tree` alias for `ListBase`. That was more confusing
  than helpful.
2020-11-11 19:08:56 +01:00
ad0c387fdf Cleanup: Put Outliner C++ namespace into blender::ed namespace, add comments
See https://developer.blender.org/D9499.

Also remove unnecessary forward declaration.
2020-11-11 19:08:49 +01:00
5fb67573b5 Fix possible null-pointer dereference in new Outliner tree building code 2020-11-11 19:08:43 +01:00
dc9a52a303 Cleanup: Remove redundant parameter from new Outliner tree building code
See https://developer.blender.org/D9499.
2020-11-11 19:08:36 +01:00
cad2fd99e7 Cleanup: Comments and style improvements for new Outliner C++ code
See https://developer.blender.org/D9499.

* Add comments to explain the design ideas better.
* Follow code style guide for class layout.
* Avoid uninitialized value after construction (general good practice).
2020-11-11 19:08:29 +01:00
6b18e13c5b UI Code Quality: Use C++ data-structures for Outliner object hierarchy building
See https://developer.blender.org/D9499.

* Use `blender::Map` over `GHash`
* Use `blender::Vector` over allocated `ListBase *`

Benefits:
* Significantly reduces the number of heap allocations in large trees (e.g.
  from O(n) to O(log(n)), where n is the number of objects).
* Higher type safety (no `void *`, virtually no casts).
* More optimized (e.g. small buffer optimization).
* More practicable, const-correct APIs with well-defined exception behavior.

Code generally becomes more readable (less lines of code, less boilerplate,
more logic-focused APIs because of greater language flexibility).
2020-11-11 19:08:13 +01:00
c9cc03b688 UI Code Quality: General refactor of Outliner View Layer display mode creation
See https://developer.blender.org/D9499.

* Turn functions into member functions (makes the API for a type more obvious &
  local, allows implicitly sharing data through member variables, enables
  order-independent definition of functions, allows more natural language for
  function names because of the obvious context).
* Move important variables to classes rather than passing around all the time
  (shorter, more task-focused code, localizes important data names).
* Add helper class for adding object children sub-trees (smaller, more focused
  units are easier to reason about, have higher coherence, better testability,
  can manage own resources easily with RAII).
* Use C++ iterators over C macros (arguably more readable; fewer macros are
  generally preferred).
* Add doxygen groups (visually emphasizes the coherence of code sections,
  provide place for higher level comments on sections).
* Prefer references over pointers for passing by reference (makes clear that
  NULL is not a valid value and that the current scope is not the owner).
2020-11-11 19:07:55 +01:00
249e4df110 UI Code Quality: Start refactoring Outliner tree building (using C++)
This introduces a new C++ abstraction "tree-display" (in this commit named
tree-view, renamed in a followup) to help constructing and managing the tree
for the different display types (View Layer, Scene, Blender file, etc.).

See https://developer.blender.org/D9499 for more context. Other developers
approved this rather significantly different design approach there.

----

Motivation

General problems with current design:
* The Outliner tree building code is messy and hard to follow.
* Hard-coded display mode checks are scattered over many places.
* Data is passed around in rather unsafe ways (e.g. lots of `void *`).
* There are no individually testable units.
* Data-structure use is inefficient.

The current Outliner code needs quite some untangling; the tree building seems
like a good place to start. This and the followup commits tackle that.

----

Design Idea

The idea is to have an abstract base class (`AbstractTreeDisplay`), and then
sub-classes with the implementation for each display type (e.g.
`TreeDisplayViewLayer`, `TreeDisplayDataAPI`, etc.). The tree-display is kept
alive until tree-rebuild as runtime data of the space, so that further queries
based on the display type can be executed (e.g. "does the display support
selection syncing?", "does it support restriction toggle columns?", etc.).

New files are in a new `space_outliner/tree` sub-directory.

With the new design, display modes become proper units, making them more
maintainable, safer and testable. It should also be easier now to add new
display modes.
2020-11-11 18:51:57 +01:00
da40d91d46 GPencil: Fix compiler errors due to merge conflicts 2020-11-09 19:56:21 +01:00
50deb5ceaf Merge branch 'master' into greasepencil-edit-curve 2020-11-09 16:25:56 +01:00
c051fd63ff Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/writefile.c
	source/blender/editors/gpencil/gpencil_edit.c
2020-11-07 11:13:53 +01:00
96a0f736a8 Merge branch 'master' into greasepencil-edit-curve 2020-11-05 15:47:48 +01:00
b00154d2f9 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_edit.c
2020-11-04 16:15:17 +01:00
a8ec05c587 Merge branch 'master' into greasepencil-edit-curve 2020-10-31 09:25:54 +01:00
eb29a98126 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_edit.c
2020-10-30 15:48:04 +01:00
2df6272a76 Merge branch 'master' into greasepencil-edit-curve 2020-10-29 19:29:56 +01:00
b650f5d52b Merge branch 'master' into greasepencil-edit-curve 2020-10-28 15:44:50 +01:00
f17c9d9cba Merge branch 'master' into greasepencil-edit-curve 2020-10-27 17:40:33 +01:00
cc0aa15269 Merge branch 'master' into greasepencil-edit-curve 2020-10-23 19:46:35 +02:00
514cb2b549 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_edit.c
2020-10-22 20:04:42 +02:00
628ac1472f Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/gpencil_modifiers/intern/MOD_gpencilsubdiv.c
2020-10-21 19:40:40 +02:00
509c11f2fb Merge branch 'master' into greasepencil-edit-curve 2020-10-21 15:32:50 +02:00
42c6ebfae8 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-10-19 15:53:08 +02:00
0ed9f4144b Merge branch 'master' into greasepencil-edit-curve 2020-10-17 12:31:29 +02:00
ebf150dca8 Merge branch 'master' into greasepencil-edit-curve 2020-10-15 18:43:42 +02:00
22afae2475 Merge branch 'master' into greasepencil-edit-curve 2020-10-14 15:30:40 +02:00
bae4e21304 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_edit.c
2020-10-12 15:43:01 +02:00
956bb4bf28 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-10-08 10:32:02 +02:00
50a86fdcf0 Merge branch 'master' into greasepencil-edit-curve 2020-10-07 17:10:33 +02:00
28d51f775f Merge branch 'master' into greasepencil-edit-curve 2020-10-05 15:34:12 +02:00
218fa0a23c Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-10-03 11:00:19 +02:00
6387426fff Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-10-01 16:23:53 +02:00
493158b5aa GPencil: Fix bug where curve points would disappear
When adaptive resolution was active, the calculated resolution for a
segment was not being clamped to a minimum of 1, leading to segments
with zero points.
2020-09-28 18:52:06 +02:00
e76362b058 GPencil: Fix missing screen update when use Fixed Simplify 2020-09-28 17:57:48 +02:00
dc255302a4 GPencil: Fix parameter list after merge 2020-09-28 17:33:07 +02:00
b9642bc221 Merge branch 'master' into greasepencil-edit-curve 2020-09-28 17:06:19 +02:00
fdf9ea45af Merge branch 'master' into greasepencil-edit-curve 2020-09-28 10:57:23 +02:00
ff756d9523 GPencil: Remove unused function 2020-09-28 10:56:48 +02:00
eaed745804 Merge branch 'master' into greasepencil-edit-curve 2020-09-25 15:51:39 +02:00
5ce315f2b8 Merge branch 'master' into greasepencil-edit-curve 2020-09-24 15:56:43 +02:00
ea675cb350 Merge branch 'master' into greasepencil-edit-curve 2020-09-24 15:27:26 +02:00
4d44bd9ad2 Merge branch 'master' into greasepencil-edit-curve 2020-09-23 20:10:10 +02:00
da7a753430 Merge branch 'master' into greasepencil-edit-curve 2020-09-22 20:12:44 +02:00
da64eef55a Merge branch 'master' into greasepencil-edit-curve 2020-09-21 20:23:29 +02:00
b8cef3b2dc Merge branch 'master' into greasepencil-edit-curve 2020-09-21 15:48:04 +02:00
b1acd5e9b1 Merge branch 'master' into greasepencil-edit-curve 2020-09-21 12:09:14 +02:00
1e3e873526 GPencil: Check if layer is locked for Handles 2020-09-20 18:34:22 +02:00
261ddd900e GPencil: Hide Curve Handles when layer is locked 2020-09-20 16:58:47 +02:00
aafb5cee67 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-09-20 15:32:02 +02:00
cca904081e Merge branch 'master' into greasepencil-edit-curve 2020-09-15 15:27:14 +02:00
66aa884b89 Merge branch 'master' into greasepencil-edit-curve 2020-09-14 15:28:04 +02:00
9d186e78c8 Merge branch 'master' into greasepencil-edit-curve 2020-09-12 15:49:43 +02:00
269ceb666b Merge branch 'master' into greasepencil-edit-curve 2020-09-12 12:14:15 +02:00
91066cd0c3 Merge branch 'master' into greasepencil-edit-curve 2020-09-09 15:44:27 +02:00
2db8d319bc GPencil: Fix compiler errors and warnings
The error was related to the draw manager refactor.
2020-09-07 15:28:27 +02:00
8deb0d84eb Merge branch 'master' into greasepencil-edit-curve 2020-09-07 15:23:02 +02:00
b25a87ea49 Merge branch 'master' into greasepencil-edit-curve 2020-09-07 10:13:41 +02:00
61289c3629 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/BKE_gpencil_geom.h
2020-09-05 15:13:10 +02:00
56a8977e92 Merge branch 'master' into greasepencil-edit-curve 2020-09-03 16:07:55 +02:00
c50ead0885 Merge branch 'master' into greasepencil-edit-curve 2020-09-03 08:22:05 +02:00
7c91f3f3a9 Merge branch 'master' into greasepencil-edit-curve 2020-09-01 15:58:58 +02:00
61d8c41378 Merge branch 'master' into greasepencil-edit-curve 2020-08-31 16:08:44 +02:00
0e8a04e1a5 Merge branch 'master' into greasepencil-edit-curve 2020-08-28 18:31:32 +02:00
18f637230d Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/writefile.c
2020-08-28 16:52:15 +02:00
767b27aabf Merge branch 'master' into greasepencil-edit-curve 2020-08-26 10:10:51 +02:00
22152804cc Merge branch 'master' into greasepencil-edit-curve 2020-08-25 17:52:02 +02:00
f2174e082a GPencil: Fix circle select curve conversion 2020-08-25 17:02:58 +02:00
9c6d0ba923 GPencil: Convert to curve on selection
Conversion to curve was only working for click selection.
This commit adds conversion to curve for box, circle and lasso selection.
2020-08-25 14:47:03 +02:00
d21280432f Merge branch 'master' into greasepencil-edit-curve 2020-08-25 10:36:57 +02:00
1775565a06 GPencil: Fix angle initialization 2020-08-24 19:20:29 +02:00
57ef6dca0b GPencil: Cleanup compiler warnings 2020-08-24 19:19:52 +02:00
9a82fcb51d Merge branch 'soc-2020-greasepencil-curve' of git.blender.org:blender into soc-2020-greasepencil-curve 2020-08-24 16:43:29 +02:00
3b23ce9992 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-24 16:43:21 +02:00
23f725e78c GPencil: Rename parameters 2020-08-24 16:42:41 +02:00
50b0bf6215 GPencil: Fix crash when deleting a stroke
The memory of the stroke is not available after calling `BKE_gpencil_free_stroke`, so the link cannot be freed afterwards.

Instead, before calling the free function, remove the link from the listbase.
2020-08-24 16:11:17 +02:00
ddeb4e8423 GPencil: Fix crash when deleting a stroke
The memory of the stroke is not available after calling `BKE_gpencil_free_stroke`, so the link cannot be freed afterwards.

Instead, before calling the free function, remove the link from the listbase.
2020-08-24 15:53:37 +02:00
274c8ae9e2 GPencil: Handle strokes with one or two points
Generating the curve for a stroke with one or two points could
crash because it was not handled properly.
2020-08-24 15:23:36 +02:00
ec36b04280 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-24 12:34:46 +02:00
79b309bc65 GPencil: Move Curve versioning to the end of the file
Some files were not patched. When moving to master we will decide if we must do a version bump.
2020-08-24 12:21:49 +02:00
c332956868 Merge branch 'master' into greasepencil-edit-curve 2020-08-23 16:24:02 +02:00
a936573aae GPencil: Move Curve versioning to the end of the file
Some files were not patched. When moving to master we will decide if we must do a version bump.
2020-08-22 16:24:53 +02:00
d88e1b7cff Merge branch 'master' into greasepencil-edit-curve 2020-08-22 13:10:48 +02:00
062595b6f9 GPencil: Report operators that are not implemented 2020-08-21 15:37:26 +02:00
4474b7c939 GPencil: Remove unnecessary curve edit mode test 2020-08-21 14:26:48 +02:00
e0985ffcc1 GPencil: Draw transform gizmo curve edit 2020-08-21 14:25:46 +02:00
3633a6197e GPencil: Fix transform memory issue 2020-08-21 14:25:10 +02:00
1e5f65c91b GPencil: Fix handle lines display 2020-08-21 11:56:09 +02:00
9a89bb4d59 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-21 11:04:13 +02:00
2758e560a0 GPencil: Recalc transform only in curve edit mode
When changing the opacity or thickness, only recalculate the stroke if in
curve edit mode.
2020-08-21 11:03:40 +02:00
2cace77aa1 GPencil: Make parameters const 2020-08-21 10:46:35 +02:00
4844379f3f Merge branch 'soc-2020-greasepencil-curve' of git.blender.org:blender into soc-2020-greasepencil-curve 2020-08-21 10:36:42 +02:00
96a690c7ac Merge branch 'master' into greasepencil-edit-curve 2020-08-21 09:48:45 +02:00
30014b0616 GSoC Editing Grease Pencil Strokes Using Curves
This patch includes all the changes made in the soc-2020-greasepencil-curve branch.

Major changes:

* Add a new "sub-mode" to grease pencil edit mode called "curve edit mode".
* Grease pencil stroke data structure has a pointer to a grease pencil curve. It will be saved with the stroke to the blend file.
* Most edit mode operators have been changed to work in curve edit mode.
* Transformation code has been adjusted to work for grease pencil curves.
* New source file added (gpencil_edit_curve.c) for operators exclusive to curve edit mode.

Differential Revision: https://developer.blender.org/D8660
2020-08-20 16:53:55 +02:00
632b978af9 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-20 10:11:53 +02:00
2be3e78a7b GPencil: Skip unused code with macro 2020-08-20 10:11:15 +02:00
3e4fb5b5f1 GPencil: Add corner angle parameter 2020-08-20 10:10:09 +02:00
585089cdb6 Merge branch 'master' into greasepencil-edit-curve 2020-08-19 19:53:23 +02:00
e3079476f7 Merge branch 'master' into greasepencil-edit-curve 2020-08-19 07:51:55 +02:00
77efb597ce Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_edit.c
2020-08-18 16:13:08 +02:00
de7470645a Merge branch 'master' into greasepencil-edit-curve 2020-08-18 11:03:29 +02:00
3f0e2dba68 Merge branch 'master' into greasepencil-edit-curve 2020-08-17 11:30:00 +02:00
13bbdc10d8 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-15 13:12:48 +02:00
4a00c69416 GPencil: Fix add needed gpd reference 2020-08-15 13:12:09 +02:00
8bd1469d86 GPencil: Add corner angle parameter
This parameter allows the user to control at what angle corners are
detected and considered by the fitting algorithm.
2020-08-15 13:10:37 +02:00
8b4f318f92 GPencil: Fix error and warnings after merge 2020-08-15 11:55:59 +02:00
f24a070d7e Merge branch 'master' into greasepencil-edit-curve 2020-08-15 11:14:53 +02:00
57d9453d8b Merge branch 'master' into greasepencil-edit-curve 2020-08-14 17:57:45 +02:00
b221d8d0a3 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-14 15:07:07 +02:00
735c717a63 Merge branch 'master' into greasepencil-edit-curve 2020-08-13 16:57:42 +02:00
a0152042b8 GPencil: UI: Move adaptive resolution
Since the adaptive resolution modifies the behaviour of the
curve resolution parameter, it makes more sense to put the checkbox
underneath the curve resolution slider.
2020-08-13 14:27:34 +02:00
cba7391d4a GPencil: Make format 2020-08-13 11:17:25 +02:00
b109537df1 Merge branch 'soc-2020-greasepencil-curve' into greasepencil-edit-curve 2020-08-13 10:21:29 +02:00
2f68c9fc68 Merge branch 'master' into greasepencil-edit-curve 2020-08-13 10:01:23 +02:00
4b93cdc98a Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/intern/gpencil_curve.c
2020-08-12 22:57:46 +02:00
c6650146eb GPencil: Extrude middle curve points
Previously, extruding one of the middle points of a curve would only
move the point. Now it creates a new curve from the point selected.
2020-08-12 16:45:03 +02:00
0e2f5614ef Merge branch 'master' into greasepencil-edit-curve 2020-08-11 19:15:22 +02:00
c934bb5f40 GPencil: Refactor: Make return more explicit 2020-08-11 17:29:37 +02:00
98ef4e0e8e Merge branch 'master' into greasepencil-edit-curve 2020-08-11 15:34:19 +02:00
a8cc21f09e GPencil: Recalculate curve when stroke data changed 2020-08-11 13:35:31 +02:00
f3b140cf0a GPencil: Fix lag editing strokes
During the changes for curves, the hash insert-key line was removed by mistake.
2020-08-11 11:16:08 +02:00
4d53f12d70 Merge branch 'soc-2020-greasepencil-curve' into greasepencil-edit-curve 2020-08-11 10:17:00 +02:00
d19a342966 Merge branch 'master' into greasepencil-edit-curve 2020-08-11 08:19:50 +02:00
f905f4d464 Gpencil: clang format 2020-08-10 22:47:38 +02:00
e1e1b303fa GPencil: Change handle type while transform
For vector and automatic handles we need to change their type to free
and aligned respectively while transforming.
2020-08-10 22:47:22 +02:00
d8b4d81da0 Merge branch 'master' into greasepencil-edit-curve 2020-08-10 15:35:52 +02:00
3190be57eb GPencil: Apply GSoC changes 2020-08-08 18:52:49 +02:00
e99ffb8dcc Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-08 16:43:25 +02:00
0ae02b130a GPencil: Split transform code
This commit splits the grease pencil transform code, so that editcurves and strokes get handled separately.
This fixes a bug where, in normal edit mode, proportional editing would be very slow.
The center point of the transform is also more predictable and behaves similarly to Bézier triples in curve objects.
2020-08-08 16:42:02 +02:00
843eccb200 Merge branch 'master' into greasepencil-edit-curve 2020-08-08 12:49:41 +02:00
acd92bfd28 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-08 12:04:02 +02:00
c57ce2eb0b Merge branch 'master' into greasepencil-edit-curve 2020-08-05 18:30:29 +02:00
eee9eee0f5 Merge branch 'master' into greasepencil-edit-curve 2020-08-05 11:30:00 +02:00
6197d4dafa Merge branch 'master' into greasepencil-edit-curve 2020-08-04 16:44:16 +02:00
f0ed70eb12 Merge branch 'master' into greasepencil-edit-curve 2020-08-03 12:35:18 +02:00
d6b842205f Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-08-03 00:26:58 +02:00
944ba1b38d GPencil: Add snap to grid for curve points 2020-08-03 00:25:03 +02:00
c825d70811 GPencil: Implement basic extrude support 2020-08-03 00:24:54 +02:00
cd55053e65 Merge branch 'master' into greasepencil-edit-curve 2020-08-01 12:13:08 +02:00
3cae7df33f Merge branch 'master' into greasepencil-edit-curve 2020-08-01 11:20:57 +02:00
39fd500965 Merge branch 'master' into greasepencil-edit-curve 2020-07-31 12:44:31 +02:00
185aa0b2b1 Merge branch 'master' into greasepencil-edit-curve 2020-07-31 12:05:37 +02:00
438378b59d Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-30 14:53:42 +02:00
331e1639c4 GPencil: Fix curve point deletion in cyclic curve 2020-07-30 14:52:29 +02:00
045dc2c434 Merge branch 'master' into greasepencil-edit-curve 2020-07-29 10:38:38 +02:00
f2e2913a58 GPencil: Handle single point curve 2020-07-28 11:20:57 +02:00
c73809fbd1 Merge branch 'master' into greasepencil-edit-curve 2020-07-28 09:14:47 +02:00
68bebaf28f Merge branch 'master' into greasepencil-edit-curve 2020-07-27 18:34:04 +02:00
e64504ec04 Merge branch 'master' into greasepencil-edit-curve 2020-07-27 10:49:51 +02:00
0e478f2acc Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-27 00:23:56 +02:00
2b9ae89ad8 GPencil: WIP delete curve point 2020-07-25 17:15:56 +02:00
b9267f600e Merge branch 'master' into greasepencil-edit-curve 2020-07-25 16:24:56 +02:00
8045d8b861 GPencil: WIP delete curve points 2020-07-24 19:23:42 +02:00
cb1be5fed6 Merge branch 'master' into greasepencil-edit-curve 2020-07-24 16:24:58 +02:00
075276ee80 GPencil: Implement curve points dissolve 2020-07-23 19:24:24 +02:00
6f0029c87d GPencil: Fix compiler warnings 2020-07-23 17:56:17 +02:00
5d96ed364f GPencil: Apply GSoC changes 2020-07-23 17:54:11 +02:00
b21df4d473 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-23 13:59:20 +02:00
48e52c5a5c GPencil: use smooth interpolation
For the pressure, strength and color attributes, use smooth interpolation
to generate the stroke points.
2020-07-23 13:58:49 +02:00
2d9e869b18 GPencil: Fix coordinate influence fitting problem
The coordinates had too small an influence on the curve fitting.
All the values are now normalized and a factor is used to make the
shape of the stroke more important.
2020-07-23 13:41:23 +02:00
c18eb01eb6 GPencil: WIP: multi-dimensional curve fitting 2020-07-23 13:41:13 +02:00
416d28323b Merge branch 'master' into greasepencil-edit-curve 2020-07-23 13:16:22 +02:00
d0c9a62904 GPencil: Update defaults for curve resolution 2020-07-22 17:33:29 +02:00
31ffbfd78f Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-22 16:11:18 +02:00
f16a50ba94 GPencil: Implement adaptive editcurve resolution
With adaptive curve resolution, the arc length of a curve segment is
used to calculate the number of points between two control points.
This means that in general the stroke points are more spaced out.
2020-07-22 16:10:25 +02:00
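A rough sketch of the idea only (the exact formula and constants in the branch may differ):

```python
def adaptive_segment_resolution(arc_length, resolution):
    # Longer curve segments get more stroke points; clamp to a minimum of 1
    # so a segment never ends up with zero points (see 493158b5aa above).
    return max(1, round(arc_length * resolution))
```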
02786fc761 Merge branch 'master' into greasepencil-edit-curve 2020-07-22 11:19:18 +02:00
ca6b730da1 Merge branch 'master' into greasepencil-edit-curve 2020-07-21 19:15:27 +02:00
fbbd03ae8a GPencil: remove unnecessary variables 2020-07-21 16:02:07 +02:00
ae4f40ef6b GPencil: Fix compiler warnings 2020-07-21 16:00:20 +02:00
e081c624f9 GPencil: Fix errors after merge 2020-07-21 15:58:37 +02:00
46e0b0404a Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-21 15:58:10 +02:00
f7a1512bdf GPencil: deselect curve when deselecting all 2020-07-21 15:57:42 +02:00
35a6172ddc Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/BKE_gpencil_geom.h
	source/blender/blenkernel/intern/gpencil_geom.c
2020-07-21 15:54:19 +02:00
d28575213d Merge branch 'master' into greasepencil-edit-curve 2020-07-20 20:21:56 +02:00
56e88ada83 GPencil: Fix select fill
The function was not working correctly, because it would select
fills where one of the triangles would have a bounding box
large enough that it would intersect the selection.
2020-07-20 15:55:18 +02:00
5330fe4b70 GPencil: Fix select more
The first control point was not being selected due to an
off-by-one error.
2020-07-20 15:47:40 +02:00
fd0fffaede Merge branch 'master' into greasepencil-edit-curve 2020-07-20 11:11:40 +02:00
bc90636f61 GPencil: Disable box/lasso conversion, fill select
These features are currently not working correctly.
Disable them for now.
2020-07-19 13:44:45 +02:00
a98c9e3de9 GPencil: Fix curve points not deselecting 2020-07-19 13:40:00 +02:00
d7ae232dba GPencil: deselect stroke after converting from curve 2020-07-18 22:06:16 +02:00
88b98dfe32 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-18 13:06:25 +02:00
8eedcb6f9d GPencil: WIP: convert to curve box/lasso select 2020-07-18 13:05:06 +02:00
84f744719e Merge branch 'master' into greasepencil-edit-curve 2020-07-18 10:34:41 +02:00
237f9bc106 Merge branch 'master' into greasepencil-edit-curve 2020-07-17 16:46:06 +02:00
76dc552989 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-17 14:17:56 +02:00
741cd18398 GPencil: Draw edit lines under curve handles 2020-07-17 14:17:30 +02:00
5bac474903 GPencil: Convert to curve with click select 2020-07-17 14:16:28 +02:00
9d69d642ff GPencil: Recalculate curve when entering editmode 2020-07-17 13:36:14 +02:00
8b35bc7743 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-16 19:42:39 +02:00
5f87af907d Merge branch 'master' into greasepencil-edit-curve 2020-07-16 17:45:15 +02:00
030e86768b GPencil: Use GPencil Theme colors for Bezier control points
To flag the point, the FREESTYLE flag is reused, because the limit is 8 flags and all of them are already in use. As these handles are never displayed at the same time as Freestyle, it is safe to reuse it.
2020-07-16 17:24:20 +02:00
0003809105 GPencil: Display edit lines in Curve edit mode 2020-07-16 16:54:32 +02:00
403d20581f GPencil: Fix errors after merge
The parameter list has changed.
2020-07-16 09:17:40 +02:00
aa72aebabe Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/intern/gpencil_geom.c
2020-07-16 08:16:01 +02:00
2e23224829 GPencil: Apply all GSoC changes 2020-07-15 12:18:30 +02:00
610a5452c1 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-15 12:11:12 +02:00
477fbf4e44 Gpencil: WIP: Update "enter curve edit mode" op 2020-07-15 12:10:33 +02:00
b474c82135 Merge branch 'master' into greasepencil-edit-curve 2020-07-15 12:08:06 +02:00
bd71007ebd GPencil: Implement select more/less curve points 2020-07-15 11:55:18 +02:00
95d0192308 GPencil: Implement select first/last curve point 2020-07-15 11:07:47 +02:00
fcd22007b5 GPencil: Implement select grouped for curves 2020-07-15 10:40:25 +02:00
d0d9490ac0 GPencil: Implement select alternate curve points 2020-07-15 10:14:19 +02:00
7e6b8f693c GPencil: Fix compiler warnings 2020-07-15 10:08:45 +02:00
eb22e7a650 GPencil: Select fill curves
The previous algorithm used the center of the selection to check if the
fill was selected. This uses the triangle data of the fill so it will
be a bit slower, but results in better select behaviour.
2020-07-14 20:59:28 +02:00
d589d50443 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-14 16:53:08 +02:00
5122c75fd6 GPencil: Implement box and lasso curve select 2020-07-14 16:52:35 +02:00
3e5910d273 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_utils.c
	source/blender/gpencil_modifiers/intern/MOD_gpencilbuild.c
2020-07-14 10:43:13 +02:00
93acaccecb Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-13 11:22:33 +02:00
5b49d8b8d0 Merge branch 'master' into greasepencil-edit-curve 2020-07-13 10:39:11 +02:00
91b9f99f04 Clang formatting 2020-07-11 20:43:29 +02:00
529313da75 GPencil: Fix compiler error after merge 2020-07-11 20:38:34 +02:00
9ff1fd96e8 GPencil: Apply GSoC changes
Also run `make format` to fix any clang-format issues.
2020-07-11 20:34:19 +02:00
8b1182d6cf Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-11 20:30:58 +02:00
04f9959adf Merge branch 'master' into greasepencil-edit-curve 2020-07-11 20:29:57 +02:00
6b12994ccf GPencil: Adapt box and lasso select for editcurve 2020-07-11 18:10:58 +02:00
b822088014 GPencil: Fix curve handles not updating
When the handle types of the first and last handles of a cyclic curve
were changed, the handles did not update correctly.
2020-07-11 17:08:24 +02:00
9fa9ee6136 GPencil: Implement editcurve subdivide
Implement an algorithm to subdivide an editcurve.
2020-07-11 17:00:15 +02:00
2340540c44 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-11 13:12:14 +02:00
6fa8efd815 GPencil: Implement curve point circle select 2020-07-11 13:11:34 +02:00
e88c28d1a6 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-07-10 17:10:53 +02:00
dce8986223 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-10 16:37:37 +02:00
6394287b1c GPencil: Adapt edit mode operators for curve edit
Change the rest of the edit mode operators (c62bc08f) so that they
check if curve edit mode is active (if needed).
2020-07-10 16:32:14 +02:00
596f10a0cf GPencil: Curve mode: Set stroke select flag
Set/unset the selection flag for a stroke, when its curve is selected.
This change is needed so that operators like "change end caps"
work as expected in curve edit mode.
2020-07-09 16:24:04 +02:00
c84773b3fb Merge branch 'master' into greasepencil-edit-curve 2020-07-09 15:48:28 +02:00
785e1ea62a Merge branch 'master' into greasepencil-edit-curve 2020-07-08 11:11:07 +02:00
1682e81257 GPencil: Apply all GSoC changes 2020-07-07 17:31:14 +02:00
b8087a2fa6 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-07 17:26:25 +02:00
c62bc08f45 GPencil: Adapt edit operators to curve edit mode
Currently all operators do nothing in curve edit mode, but they should
work normally in edit mode. This is the first step of implementing
the operators in curve edit mode.
2020-07-07 17:25:50 +02:00
987d86a127 Merge branch 'master' into greasepencil-edit-curve 2020-07-07 16:21:58 +02:00
967c5b7da5 GPencil: Adapt select operators to curve edit mode
All select operators now check if curve edit mode is active. These include GPENCIL_OT_select, GPENCIL_OT_select_all, GPENCIL_OT_select_circle, GPENCIL_OT_select_box, GPENCIL_OT_select_lasso, GPENCIL_OT_select_linked, GPENCIL_OT_select_grouped, GPENCIL_OT_select_more, GPENCIL_OT_select_less, GPENCIL_OT_select_first, GPENCIL_OT_select_last, GPENCIL_OT_select_alternate and GPENCIL_OT_select_vertex_color
2020-07-07 15:37:05 +02:00
a0fa831bfe GPencil: Remove error threshold as parameter
Use the error threshold in gpd instead of the operator property.
2020-07-07 10:52:57 +02:00
e6ed400b9a Merge branch 'master' into greasepencil-edit-curve 2020-07-06 20:27:26 +02:00
d1b1eaf9bb Merge branch 'soc-2020-greasepencil-curve' of git.blender.org:blender into soc-2020-greasepencil-curve 2020-07-06 17:20:11 +02:00
68a3c874ce GPencil: Set flag to fit curve to cyclic stroke 2020-07-06 17:19:59 +02:00
ab91491963 GPencil: Recalculating handles for cyclic curves 2020-07-06 17:07:34 +02:00
486a06d71b GPencil: Set handle type for individual handle
Before it was only possible to set the handle type for both handles
of a Bézier triple. Now the user can set the type individually.
2020-07-06 17:06:54 +02:00
fa662dceef GPencil: Support cyclic curves
Generate points between the first and last curve point when
converting a curve to a stroke.
2020-07-06 17:05:05 +02:00
96eca6c3d4 GPencil: Fix selection sync for cyclic strokes 2020-07-06 17:02:49 +02:00
b6f852ecec Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-05 17:33:02 +02:00
a960897959 Merge branch 'master' into greasepencil-edit-curve 2020-07-05 17:32:11 +02:00
2575fc9181 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-05 15:59:40 +02:00
55a3088775 Merge branch 'master' into greasepencil-edit-curve 2020-07-04 11:05:28 +02:00
9bf6743422 GPencil: Cleanup code - move functions 2020-07-04 11:03:14 +02:00
ff66e7afe8 Merge branch 'master' into greasepencil-edit-curve 2020-07-04 10:00:21 +02:00
591047baef GPencil: Apply GSoC changes 2020-07-04 10:00:08 +02:00
1fca1d9f74 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-03 20:57:13 +02:00
04fc48300b Gpencil: Fix compiler warning 2020-07-03 20:56:50 +02:00
2c79315b0f Merge branch 'master' into greasepencil-edit-curve 2020-07-03 20:09:00 +02:00
b645caafa6 GPencil: Implement different color by handle type
Before it was hardcoded because only one type was supported.
2020-07-03 20:07:56 +02:00
a18bf89232 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-03 17:21:23 +02:00
85a11fd2b3 GPencil: Review of curve <> stroke conversion
This commit introduces a new flag GP_STROKE_NEEDS_CURVE_UPDATE in the stroke
that replaces GP_CURVE_RECALC_GEOMETRY. The name was not consistent and
it didn't make much sense to have it in the editcurve. Another flag was
added: GP_CURVE_NEEDS_STROKE_UPDATE. This indicates that the curve data
is dirty and needs to be regenerated.
2020-07-03 17:20:12 +02:00
02215eaae4 GPencil: remove test operator
This operator has no use and can be safely removed.
2020-07-03 17:13:02 +02:00
f37b34b36c Merge branch 'master' into greasepencil-edit-curve 2020-07-03 15:44:55 +02:00
a36c7e75d7 Gpencil: Fix merge error 2020-07-03 14:06:33 +02:00
3064b8d717 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/intern/gpencil.c
	source/blender/blenkernel/intern/gpencil_geom.c
	source/blender/blenkernel/intern/gpencil_modifier.c
2020-07-03 13:56:02 +02:00
808ee09081 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/intern/gpencil_geom.c
2020-07-03 11:22:00 +02:00
47be2d3fd0 GPencil: Fix crash when moving multiple handles
Fix a crash when selecting multiple handles (without the control point)
and then moving them. This was caused by the function
BKE_gpencil_curve_sync_selection which did not sync the
selection correctly. GP_CURVE_SELECT would not be set in gpc->flag if an
individual handle was selected.
2020-07-02 23:02:26 +02:00
f76851d5c1 GPencil: Fix crash when moving a point in edit mode 2020-07-02 22:37:09 +02:00
99fa08e94f GPencil: Recalculate curve only in curve edit mode
Using bGPdata in BKE_gpencil_stroke_geometry_update we can check if we
are in curve edit mode and then test if we need to recalculate the curve.
2020-07-02 22:08:05 +02:00
a30af67a44 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-02 21:49:53 +02:00
998229d892 GPencil: Skip uneditable layer
Don't sync the selection of strokes on an uneditable layer.
2020-07-02 21:48:02 +02:00
d2933762f2 GPencil: sync selection after recalculating curve 2020-07-02 21:46:17 +02:00
b84cc14b51 GPencil: Refactor, add comments 2020-07-02 21:45:43 +02:00
2dc43b4aca GPencil: Remove trailing period in operator description
Operator descriptions must never end with a period.
2020-07-02 20:13:30 +02:00
90828164a4 Merge branch 'master' into greasepencil-edit-curve 2020-07-02 20:05:28 +02:00
f32ace6712 GPencil: Rename gp_ to gpencil_ 2020-07-02 20:00:13 +02:00
f708aabf90 GPencil: Pass gpd datablock to BKE_gpencil_stroke_geometry_update
This includes the revert of commit 6beb37b197.
2020-07-02 19:47:58 +02:00
99c50da0b7 Merge branch 'master' into greasepencil-edit-curve 2020-07-02 09:32:23 +02:00
3b4f98732d GPencil: Add shortcut to toggle curve edit mode
It's set to 'U' for testing. This can be changed later.
2020-07-01 16:02:17 +02:00
039d8fea23 GPencil: Apply GSoC changes 2020-07-01 13:22:28 +02:00
b40dfe6095 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-07-01 13:18:20 +02:00
52d2afae9d GPencil: Add custom keymap for curve edit mode
Added a keymap handler for curve edit mode. This allows us to set
keybindings when curve edit mode is active. Currently only one
keybinding was added: set handle type, which is set to 'V' like everywhere
else.
2020-07-01 12:44:17 +02:00
6a256637b0 GPencil: Refactor: fix comment and function name 2020-07-01 09:31:01 +02:00
e0c89befb0 Merge branch 'master' into greasepencil-edit-curve 2020-07-01 09:17:39 +02:00
3e636c23e8 GPencil: Support all handle types in transform
Basic support for all handle types when transforming in curve edit mode.
2020-07-01 00:45:48 +02:00
64bc6f3942 GPencil: Only allow rotations around control point
Limit the center of rotation to only include the control points
2020-07-01 00:44:34 +02:00
1048abd378 GPencil: Add BKE to recalculate handle positions
This function will update the handle position depending on the handle
type that the BezTriple handles have.
2020-07-01 00:41:55 +02:00
dac85d01d2 GPencil: add set handle type operator for editcurve
This operator sets the handle type of the selected editcurve handles
and updates their positions accordingly.
2020-06-30 19:34:09 +02:00
1792c51bdf GPencil: Cleanup - Apply clang format 2020-06-30 16:21:38 +02:00
493695efd1 GPencil: Apply last GSoC changes 2020-06-30 16:17:21 +02:00
4379942a88 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-30 16:08:06 +02:00
467b905dbf GPencil: edit curve: Rename 'gp_' to 'gpencil_' 2020-06-30 16:06:42 +02:00
68a9379127 Merge branch 'master' into greasepencil-edit-curve 2020-06-30 15:49:12 +02:00
b2949942e7 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-30 15:04:28 +02:00
7cf16db0a2 Merge branch 'master' into greasepencil-edit-curve 2020-06-29 20:20:01 +02:00
2fcdebcc4e Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_select.c
2020-06-29 15:53:45 +02:00
905c94df2d Merge branch 'master' into greasepencil-edit-curve 2020-06-28 16:56:28 +02:00
5a804e3d6f Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_select.c
	source/blender/editors/gpencil/gpencil_utils.c
2020-06-28 11:31:37 +02:00
bc540c57f4 GPencil: Fix selection 2020-06-27 14:31:01 +02:00
73bded4cc6 Merge branch 'soc-2020-greasepencil-curve' of git.blender.org:blender into soc-2020-greasepencil-curve 2020-06-27 13:28:24 +02:00
b5640df6c0 Merge branch 'master' into greasepencil-edit-curve 2020-06-26 12:57:10 +02:00
302499b4a6 Merge branch 'master' into greasepencil-edit-curve 2020-06-25 15:40:40 +02:00
308b203578 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-25 11:09:27 +02:00
1e2c87b951 Merge branch 'master' into greasepencil-edit-curve 2020-06-24 16:02:41 +02:00
8cf1479f02 GPencil: Update point index when stroke is updated 2020-06-23 21:35:28 +02:00
995ea2004a Merge branch 'master' into greasepencil-edit-curve 2020-06-23 19:26:25 +02:00
ff3b9d63cb Merge branch 'master' into greasepencil-edit-curve 2020-06-23 16:29:14 +02:00
708ee8d75e Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-23 16:01:31 +02:00
78a38f1ec9 GPencil: Selection syncing from/to curve edit mode 2020-06-23 16:00:51 +02:00
769468d125 Merge branch 'master' into greasepencil-edit-curve 2020-06-22 19:47:04 +02:00
1f9965f852 GPencil: WIP: sync selection 2020-06-22 16:10:36 +02:00
aae29a6565 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-22 11:20:31 +02:00
f137b897d0 GPencil: Fix compiler warnings 2020-06-22 11:04:16 +02:00
476afb643a GPencil: Fix errors after merge 2020-06-22 11:03:15 +02:00
44173c60fc Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/editors/gpencil/gpencil_select.c
2020-06-22 10:57:20 +02:00
a50528ac42 Merge branch 'master' into greasepencil-edit-curve 2020-06-21 15:56:43 +02:00
1b09363d0f Merge branch 'master' into greasepencil-edit-curve 2020-06-20 11:58:16 +02:00
882ffe6a2e GPencil: Hide curve points when the curve is unselected
The points are displayed only if the stroke is selected or if the stroke is unselected but the Handles display mode is ALL.
2020-06-20 11:57:50 +02:00
681aaa32af GPencil: Add Curve Select prop to RNA
This prop is similar to Stroke select.
2020-06-20 10:31:44 +02:00
4a26045a9c Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-19 23:07:48 +02:00
028f97ed32 Merge branch 'master' into greasepencil-edit-curve 2020-06-19 23:04:41 +02:00
69e4e9c0c4 GPencil: Apply GSoC changes 2020-06-19 22:57:05 +02:00
82ae1d6d7a Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-19 20:46:54 +02:00
74fdcd6673 GPencil: Fix pressure and strength not updating
Handles could be scaled, but the data was not written back to the
stroke. This fixes the issue by forcing a recalc after the data has
been written to the curve.
2020-06-19 20:46:23 +02:00
6bacfd3baa GPencil: Transformation of handles
Implemented free transformation of edit curve handles.
2020-06-19 20:44:53 +02:00
2d32dd0632 GPencil: Fix error threshold not showing in panel 2020-06-19 20:43:01 +02:00
74c46d11a0 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/versioning_290.c
2020-06-19 19:45:05 +02:00
864df7730e Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-19 17:08:41 +02:00
915f80ad48 GPencil: Add transform of curve control points
This ignores handles for now. Only transforms control points.
2020-06-19 17:07:56 +02:00
0ba1d68c30 GPencil: Apply GSoC changes 2020-06-19 10:34:14 +02:00
ff52d7f756 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-19 10:14:39 +02:00
3536a2aac4 Merge branch 'master' into greasepencil-edit-curve 2020-06-18 20:35:31 +02:00
fc0ac4c032 GPencil: Selection and toggle selection of handles 2020-06-17 23:43:21 +02:00
70421bbe34 GPencil: Add point selection for curve handles 2020-06-17 23:01:37 +02:00
cc31b03417 GPencil: deselect all on curve edit mode exit
This is a temporary change. Selection syncing is planned for later
2020-06-17 23:01:14 +02:00
944d2f1b81 GPencil: Add curve iteration macro 2020-06-17 22:59:49 +02:00
592a833d96 GPencil: Add sync selection BKE for edit curves 2020-06-17 22:55:10 +02:00
362618ec0b GPencil: Apply all GSoC changes 2020-06-17 10:20:24 +02:00
15a065d318 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-16 19:54:08 +02:00
8bc6f574f8 GPencil: Add prop error threshold to select all
The error threshold for the curve fitting can be changed when a
stroke that has no editcurve yet is selected.
2020-06-16 19:53:35 +02:00
11e4742da9 GPencil: Change default threshold error 2020-06-16 19:51:40 +02:00
dff7a25891 GPencil: Fix compiler error after merge 2020-06-16 19:31:51 +02:00
1aa7698662 Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenkernel/intern/gpencil_geom.c
2020-06-16 19:27:55 +02:00
3cbfe47a6d Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-16 09:47:40 +02:00
46477fc5d7 GPencil: Add operator to enter strokes into curve edit mode
This operator creates an editcurve for all selected strokes that
do not have an editcurve already.
2020-06-16 09:46:59 +02:00
e055c0e285 GPencil: Use error_threshold parameter in gpd 2020-06-16 09:45:32 +02:00
9b8b1c8713 GPencil: Fix compiler errors after merge 2020-06-15 12:32:48 +02:00
f078b55485 Merge branch 'master' into greasepencil-edit-curve 2020-06-15 11:50:29 +02:00
213ba56cf6 Merge branch 'master' into greasepencil-edit-curve 2020-06-15 10:31:25 +02:00
9570f4bbd4 GPencil: Apply all GSoC changes 2020-06-14 13:34:32 +02:00
dc428a05fa Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-14 13:28:40 +02:00
a424960c06 GPencil: Refactor BKE_gpencil_selected_strokes_editcurve_update 2020-06-14 13:24:51 +02:00
110ad192a7 GPencil: Fix selection of BezTriple 2020-06-14 13:24:27 +02:00
9b4eb0739b GPencil: Replace loops by FOREACH macro 2020-06-14 12:39:11 +02:00
01c52179e5 GPencil: update the stroke after they are fitted to a curve 2020-06-14 12:17:23 +02:00
4cdd9bc497 GPencil: update the stroke after they are fitted to a curve 2020-06-14 12:16:06 +02:00
3cae796db0 GPencil: New macro to invert handles selection 2020-06-13 16:28:26 +02:00
a4b7d5568a Merge branch 'master' into greasepencil-edit-curve 2020-06-13 15:39:04 +02:00
f395cad4a3 GPencil: Fix compiler errors 2020-06-13 15:20:33 +02:00
7a785786bb Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-12 21:13:28 +02:00
6f0f441b9f GPencil: Fix crash, pointer to BezTriple was set wrong 2020-06-12 21:13:00 +02:00
065cec17e7 GPencil: Apply GSoC changes 2020-06-12 20:42:19 +02:00
1c7019e088 GPencil: Increase hard limit for curve resolution 2020-06-12 20:40:43 +02:00
c50199a352 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-12 20:39:36 +02:00
fd62421c9c GPencil: Implement interpolation for point props
Implement interpolation for point pressure, strength and vertex
color. Currently this uses linear interpolation.
2020-06-12 20:37:50 +02:00
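Concretely, linear interpolation of a per-point attribute between two neighbouring curve points looks like the following generic sketch (not the committed code):

    def lerp(a, b, t):
        return a + (b - a) * t

    def interpolate_point_attributes(start, end, t):
        # Interpolate pressure, strength and vertex color at parameter t in [0, 1].
        return {
            "pressure": lerp(start["pressure"], end["pressure"], t),
            "strength": lerp(start["strength"], end["strength"], t),
            "vertex_color": tuple(lerp(a, b, t) for a, b in
                                  zip(start["vertex_color"], end["vertex_color"])),
        }

    # Halfway between a thick opaque red point and a thin faint blue one.
    p0 = {"pressure": 1.0, "strength": 1.0, "vertex_color": (1.0, 0.0, 0.0, 1.0)}
    p1 = {"pressure": 0.2, "strength": 0.5, "vertex_color": (0.0, 0.0, 1.0, 1.0)}
    print(interpolate_point_attributes(p0, p1, 0.5))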
279c4ba199 GPencil: Change update function for curve point props 2020-06-12 20:36:26 +02:00
0b56b60522 GPencil: Cleanup comment lines 2020-06-12 20:25:25 +02:00
6beb37b197 Revert "GPencil: Pass gpd datablock to BKE_gpencil_stroke_geometry_update"
This reverts commit 6a2a5b4589.

Also, some minor tweaks due to conflicts
2020-06-12 20:16:55 +02:00
fb9d9130ee Merge branch 'master' into greasepencil-edit-curve 2020-06-12 15:58:10 +02:00
def72c7dcc Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-12 13:52:33 +02:00
cf4c3a72ed GPencil: Add curve point data structure
The BezTriple in bGPDcurve was changed to a new data structure: bGPDcurve_point. The RNA properties were adapted to support the new data structure.
2020-06-12 13:37:36 +02:00
01b43c53ff GPencil: Moves Curve Threshold to bGPdata 2020-06-12 13:27:12 +02:00
788efae56b GPencil: Cleanup code 2020-06-12 12:20:39 +02:00
7c825b6665 GPencil: Make Curve Editing props editable when disabled
Some parameters may be needed before the mode is switched on, so it's now possible to update these parameters beforehand.
2020-06-12 11:25:00 +02:00
903bdac374 GPencil: Update last GSoC branch changes 2020-06-12 11:19:07 +02:00
307a68b91a Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-12 11:05:59 +02:00
de52a9c9f4 GPencil: Use resolution parameter
Use the curve resolution parameter in the stroke option panel
to control how many segments are generated from the edit curve.
2020-06-12 11:02:28 +02:00
b94abe6785 GPencil: Add Resolution prop to Curve Editing popover 2020-06-12 10:42:32 +02:00
58d06191fb Merge branch 'master' into greasepencil-edit-curve 2020-06-12 10:33:13 +02:00
1f98596023 GPencil: New Error Threshold parameter
This parameter is used to determine how precise the curve conversion is.
2020-06-12 10:32:51 +02:00
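The threshold can be read as the maximum distance the regenerated stroke may deviate from the original points; a simplified sketch of that acceptance check (illustrative only, not the fitting library's actual criterion):

    def max_deviation(original_points, curve_samples):
        # Largest distance from any original stroke point to its nearest curve sample.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return max(min(dist(p, s) for s in curve_samples) for p in original_points)

    # The fit is considered good enough once, e.g.:
    #     max_deviation(stroke_points, sampled_curve) <= error_threshold
    # so a smaller threshold produces more control points.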
3b34d7a42d GPencil: Fix compiler warning
This is a temporary hack.
2020-06-11 20:41:01 +02:00
7cc8832ff1 GPencil: More transform cleanup and add is_curve_edit flag check
This is in preparation for transform Bezier handles
2020-06-11 20:39:38 +02:00
3fabcd3458 Merge branch 'master' into greasepencil-edit-curve 2020-06-11 20:36:21 +02:00
55e270878a GPencil: Change UI text Curve Editing 2020-06-11 17:54:03 +02:00
743e6c8750 GPencil: Change UI text
"Resolution" to "Curve Resolution"
2020-06-11 17:25:19 +02:00
97e9b953a4 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-11 17:22:06 +02:00
c7bb14f175 GPencil: WIP use resolution parameter, pressure and strength 2020-06-11 17:18:38 +02:00
94e68f4caa GPencil: Add resolution parameter to bGPDcurve 2020-06-11 17:16:26 +02:00
0f2ee2664f Merge branch 'master' into greasepencil-edit-curve 2020-06-11 17:13:53 +02:00
c2f3deb8bd GPencil: Initialize curve resolution for new objects 2020-06-11 17:06:29 +02:00
fa02e9e84d GPencil: Add resolution to strokes panel 2020-06-11 17:06:14 +02:00
3e66f03b86 GPencil: Move geometry functions to right module
These functions were not moved in a previous refactor
2020-06-11 16:47:18 +02:00
6a2a5b4589 GPencil: Pass gpd datablock to BKE_gpencil_stroke_geometry_update
This is needed to get curve edit data parameters
2020-06-11 16:47:12 +02:00
f872c6a8fe GPencil: Add new edit_curve_resolution prop
This will be used to define the quality of the generated stroke.
2020-06-11 16:47:05 +02:00
e821b96f3a Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-11 12:23:02 +02:00
15669c1c02 GPencil: Convert from curve to stroke in update_geometry
Call the function BKE_gpencil_stroke_update_geometry_from_editcurve
in BKE_gpencil_stroke_geometry_update and force
an update when curve handles change position in RNA.
2020-06-11 12:22:06 +02:00
d87d5f7801 GPencil: Add function to convert curve to stroke
This BKE takes the current editcurve of a stroke, then samples
a number of points on the curve and writes these points
back to the stroke points.
2020-06-11 12:19:28 +02:00
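Sampling a stroke back out of the edit curve amounts to evaluating each cubic Bezier segment at a number of steps given by the resolution; a minimal, self-contained sketch (not the BKE function itself):

    def evaluate_bezier(p0, h0, h1, p1, t):
        # Cubic Bezier: control point, its right handle, the next point's left
        # handle, the next control point, evaluated at parameter t.
        s = 1.0 - t
        return tuple(s**3 * a + 3*s**2*t * b + 3*s*t**2 * c + t**3 * d
                     for a, b, c, d in zip(p0, h0, h1, p1))

    def sample_curve(points, resolution):
        # points: list of (left_handle, control, right_handle) coordinate triples.
        samples = []
        for (_, c0, r0), (l1, c1, _) in zip(points, points[1:]):
            samples.extend(evaluate_bezier(c0, r0, l1, c1, i / resolution)
                           for i in range(resolution))
        samples.append(points[-1][1])  # include the final control point exactly once
        return samples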
c70dcf9f85 Merge branch 'master' into greasepencil-edit-curve 2020-06-11 10:43:19 +02:00
8882abe767 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-10 19:42:13 +02:00
c6185e7693 GPencil: Disable modifiers when Curve Edit is enabled
Editing handles is incompatible with the modifiers due to the logic used to recreate the stroke when it is edited with handles.
2020-06-10 19:34:48 +02:00
16f5c480d7 Merge branch 'master' into greasepencil-edit-curve 2020-06-10 19:31:23 +02:00
64c0fe90b8 GPencil: Add select more/less for curve points 2020-06-10 17:31:18 +02:00
823d26ea3a GPencil: Use BezTriple selection marcos 2020-06-10 17:30:59 +02:00
073ed552ac Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-10 15:15:38 +02:00
7e91d4302d Merge branch 'master' into greasepencil-edit-curve 2020-06-10 11:46:20 +02:00
3579f1be2c Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-09 20:03:08 +02:00
ffe6a79c61 GPencil: Remove stroke Curve flag
Not needed
2020-06-09 20:00:06 +02:00
1b917609c0 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-09 19:58:11 +02:00
62828dee3c GPencil: todo error threshold should be parameter 2020-06-09 19:56:56 +02:00
9016ba041f GPencil: Implement 'select linked' for editcurve 2020-06-09 19:56:08 +02:00
9d078a02d2 Merge branch 'master' into greasepencil-edit-curve 2020-06-09 19:48:31 +02:00
88daf0fdf6 GPencil: Apply all GSOC changes 2020-06-09 19:35:16 +02:00
c0deb49bde Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-09 18:03:16 +02:00
b71ebc4c27 GPencil: Calculate curve when switching to edit curve mode 2020-06-09 18:02:31 +02:00
6a66ca5318 GPencil: Remove unneeded macros and functions
Remove a macro and a function that were no longer needed and use
the newer BKE function.
2020-06-09 18:01:57 +02:00
beb655cf7b GPencil: Add update functions for the editcurve
Add new functions that will refit the curve to the grease pencil stroke.
2020-06-09 18:00:18 +02:00
085a59ae6f GPencil: Remove curve flag and replace with RNA dynamic prop
We don't need this flag because we can check the struct pointer itself.
2020-06-09 16:15:47 +02:00
7df5a41d2b GPencil: Fix flag GP_STROKE_CURVE_MODE
The flag is now set to 2^11; it was 2^16 before. The type of the flag is short, so it was
never actually being set.
2020-06-09 15:40:38 +02:00
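The underlying issue is easy to reproduce outside Blender: a C short holds 16 bits, so bit 16 silently wraps away while bit 11 fits:

    import ctypes

    # 1 << 16 does not fit in a 16-bit short, so the stored flag value collapses to 0.
    assert ctypes.c_short(1 << 16).value == 0
    # 1 << 11 fits, so the relocated flag can actually be stored and tested.
    assert ctypes.c_short(1 << 11).value == 2048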
b96a1ac263 GPencil: Add 'select all' operator for curve
Change the 'select all' operator so that, if the stroke
is in curve edit mode, it selects the curve handles.
2020-06-09 14:31:52 +02:00
6cd634b94f GPencil: Use curve fitting write data for operator
Instead of writing test data to editcurve, use the
BKE_gpencil_stroke_curve_create function to write fitted
curve data to editcurve.
2020-06-09 14:28:18 +02:00
5830201294 GPencil: Add function to create editcurve from stroke
This BKE function uses curve_fit_cubic_to_points_fl to fit a curve
to a grease pencil stroke.
2020-06-09 14:23:33 +02:00
d63757e28b GPencil: Write operator creates two curve points
The dummy operator that writes data to the editcurve
of a bGPDstroke now writes two curve points. One for the first stroke
point and one for the last stroke point.
2020-06-08 18:59:46 +02:00
8bf4f78c8e GPencil: Remove duplicated toggle button code 2020-06-08 18:57:02 +02:00
f345d94efb Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-08 11:12:17 +02:00
eff629aaad Merge branch 'master' into greasepencil-edit-curve 2020-06-08 09:09:37 +02:00
53a9446b71 GPencil: Cleanup comment 2020-06-07 19:10:14 +02:00
9982d1bc09 GPencil: First test to create handles drawing
Basic structure
2020-06-07 19:03:17 +02:00
9ce9960f10 GPencil: Add RNA update when change handles data 2020-06-07 17:49:03 +02:00
9cac03cb21 GPencil: Add point index array RNA property
This property should be used for debugging purposes.
2020-06-07 17:45:31 +02:00
f99d2e8068 GPencil: Fix merge errors 2020-06-07 15:57:34 +02:00
96e1b26658 Merge branch 'master' into greasepencil-edit-curve 2020-06-07 13:26:47 +02:00
3696ecb0f6 GPencil: Add point index array RNA property
This property should be used for debugging purposes.
2020-06-06 16:50:11 +02:00
c60c721b5d GPencil: New prop to determine if a stroke is using Curve Edit mode
This flag will be used to determine whether the selection must be done on points to generate handles or whether the handles are already visible.
2020-06-06 12:26:47 +02:00
89724fb52d Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-06 12:10:57 +02:00
3cdf852425 GPencil: Fix Selection mode buttons in Bezier submode
Also changed the BEZIER flag bit because the previous one was already used.
2020-06-06 11:53:00 +02:00
5eab1b3f21 GPencil: Include last GSoC commits
Minor update due to a merge error in writefile.c
2020-06-06 11:18:41 +02:00
be1f9284c1 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-06 11:00:40 +02:00
410703aa1a GPencil: Add edit curve mode toggle button 2020-06-06 10:59:06 +02:00
8a686f9ced Merge branch 'master' into greasepencil-edit-curve 2020-06-06 10:24:20 +02:00
39560f5285 Merge branch 'master' into greasepencil-edit-curve 2020-06-06 08:23:45 +02:00
a37a93dc52 GPencil: Fix writing of bGPDcurve struct.
Previously, the bGPDcurve struct itself was not written,
only the data that is referenced inside the struct
2020-06-05 16:14:36 +02:00
f503190be4 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-05 15:44:11 +02:00
d0b1606c0d GPencil: Fix wrong write_struct code
This is going to be replaced by new code, but it's better to keep the right one.
2020-06-05 15:42:27 +02:00
6dc2ac9ecf GPencil: Fix merge errors 2020-06-05 15:24:15 +02:00
2c4a1fc84d Merge branch 'master' into greasepencil-edit-curve
Conflicts:
	source/blender/blenloader/intern/writefile.c
2020-06-05 15:14:10 +02:00
7c5d6d6074 GPencil: Fix read/load of bGPDcurve.
Remove unnecessary code when reading the bGPDcurve struct.
Fix the size of the array when writing the bGPDcurve struct.
2020-06-05 10:21:57 +02:00
435288d48e Merge branch 'master' into greasepencil-edit-curve 2020-06-04 18:48:57 +02:00
f17e84e1e1 Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-04 16:39:59 +02:00
7ea8d3ed39 Merge branch 'master' into greasepencil-edit-curve 2020-06-04 09:42:35 +02:00
af2d86dc2c GPencil: Add num_points property to test operator 2020-06-03 22:27:23 +02:00
b03a479efd GPencil: Add curve point RNA structure 2020-06-03 19:06:18 +02:00
b4ba2458ae Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-03 17:22:20 +02:00
d8cbcf4e64 GPencil: Add curve point collection RNA to gp stroke 2020-06-03 17:21:22 +02:00
17f122b913 GPencil: Test write operator only writes to selected strokes
Now the operator will only write to selected strokes and also free previously allocated data.
2020-06-03 17:20:36 +02:00
705c2e81db GPencil: Fix duplicate code 2020-06-03 17:19:10 +02:00
d436ffd68f GPencil: Fixed stroke duplicate kernel function 2020-06-03 16:17:47 +02:00
b099b9035f Merge branch 'master' into greasepencil-edit-curve 2020-06-03 15:59:58 +02:00
b1f2825ce1 GPencil: Add function to write sample data to edit curve 2020-06-03 10:04:02 +02:00
3096fa7f02 GPencil: Change bGPDcurve struct to hold array of BezTriple 2020-06-03 10:00:59 +02:00
67269b82b2 GPencil: Add BKE function to free editcurve 2020-06-03 09:57:20 +02:00
54d02e6acf Merge branch 'greasepencil-edit-curve' into soc-2020-greasepencil-curve 2020-06-02 10:12:22 +02:00
e485836b3a GPencil: created a new file for edit curve mode operators
This new file will hold all the operators for curve edit mode.
Initially only one dummy operator was moved from gpencil_edit
to this file.
2020-06-01 18:51:50 +02:00
5f6bc522b8 GPencil: Added a dummy operator to test curve data
The main purpose for this operator is to test the curve data structure
that was added to the stroke data structure.
2020-06-01 17:55:20 +02:00
be7e61b31f GPencil: New Curve Edit submode flag
Also created the macro GPENCIL_CURVE_EDIT_SESSIONS_ON to make it easier to check the submode.
2020-06-01 17:29:41 +02:00
56166b6ad4 GPencil: Initial code to Write and Read curve data 2020-06-01 17:12:35 +02:00
ca06bcdcbf GPencil: Basic structure for the Bezier Editing
This is the minimum data we need to create the Bezier Curve.
2020-06-01 16:07:10 +02:00
5cc3ae6f75 Merge branch 'master' into greasepencil-edit-curve 2020-06-01 15:24:45 +02:00
0892174b63 Merge branch 'master' into greasepencil-edit-curve 2020-05-27 18:41:30 +02:00
463 changed files with 11453 additions and 6557 deletions

View File

@@ -1480,10 +1480,12 @@ if(CMAKE_COMPILER_IS_GNUCC)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INT_IN_BOOL_CONTEXT -Wno-int-in-bool-context)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_FORMAT -Wno-format)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_SWITCH -Wno-switch)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CLASS_MEMACCESS -Wno-class-memaccess)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "7.0"))
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_IMPLICIT_FALLTHROUGH -Wno-implicit-fallthrough)
@@ -1538,6 +1540,7 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CXX11_NARROWING -Wno-c++11-narrowing)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_NON_VIRTUAL_DTOR -Wno-non-virtual-dtor)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_MACROS -Wno-unused-macros)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_REORDER -Wno-reorder)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)

View File

@@ -18,12 +18,72 @@
# <pep8 compliant>
import dataclasses
import json
import os
from pathlib import Path
from typing import Optional
import codesign.util as util
class ArchiveStateError(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)
@dataclasses.dataclass
class ArchiveState:
"""
Additional information (state) of the archive
Includes information like the expected file size of the archive file in the case
the archive file is expected to be successfully created.
If the archive can not be created, this state will contain an error message
indicating details of the error.
"""
# Size in bytes of the corresponding archive.
file_size: Optional[int] = None
# Non-empty value indicates that an error has happened.
error_message: str = ''
def has_error(self) -> bool:
"""
Check whether the archive is at error state
"""
return self.error_message
def serialize_to_string(self) -> str:
payload = dataclasses.asdict(self)
return json.dumps(payload, sort_keys=True, indent=4)
def serialize_to_file(self, filepath: Path) -> None:
string = self.serialize_to_string()
filepath.write_text(string)
@classmethod
def deserialize_from_string(cls, string: str) -> 'ArchiveState':
try:
object_as_dict = json.loads(string)
except json.decoder.JSONDecodeError:
raise ArchiveStateError('Error parsing JSON')
return cls(**object_as_dict)
@classmethod
def deserialize_from_file(cls, filepath: Path):
string = filepath.read_text()
return cls.deserialize_from_string(string)
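# Usage sketch for the ArchiveState round-trip above (not part of the diff;
# the values are made up). The state is written into the ready-indicator file
# by one side of the codesign pipeline and read back by the other.
_state = ArchiveState(file_size=1024)
_restored = ArchiveState.deserialize_from_string(_state.serialize_to_string())
assert _restored.file_size == 1024 and not _restored.has_error()
_failed = ArchiveState(error_message='codesign failed')
assert ArchiveState.deserialize_from_string(_failed.serialize_to_string()).has_error()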
class ArchiveWithIndicator:
"""
The idea of this class is to wrap around logic which takes care of keeping
@@ -79,6 +139,19 @@ class ArchiveWithIndicator:
if not self.ready_indicator_filepath.exists():
return False
try:
archive_state = ArchiveState.deserialize_from_file(
self.ready_indicator_filepath)
except ArchiveStateError as error:
print(f'Error deserializing archive state: {error.message}')
return False
if archive_state.has_error():
# If the error did happen during codesign procedure there will be no
# corresponding archive file.
# The caller code will deal with the error check further.
return True
# Sometimes on macOS indicator file appears prior to the actual archive
# despite the order of creation and os.sync() used in tag_ready().
# So consider archive not ready if there is an indicator without an
@@ -88,23 +161,11 @@ class ArchiveWithIndicator:
f'({self.archive_filepath}) to appear.')
return False
# Read archive size from indicator/
#
# Assume that file is either empty or is fully written. This is being checked
# by performing ValueError check since empty string will throw this exception
# when attempted to be converted to int.
expected_archive_size_str = self.ready_indicator_filepath.read_text()
try:
expected_archive_size = int(expected_archive_size_str)
except ValueError:
print(f'Invalid archive size "{expected_archive_size_str}"')
return False
# Wait for until archive is fully stored.
actual_archive_size = self.archive_filepath.stat().st_size
if actual_archive_size != expected_archive_size:
if actual_archive_size != archive_state.file_size:
print('Partial/invalid archive size (expected '
f'{expected_archive_size} got {actual_archive_size})')
f'{archive_state.file_size} got {actual_archive_size})')
return False
return True
@@ -129,7 +190,7 @@ class ArchiveWithIndicator:
print(f'Exception checking archive: {e}')
return False
def tag_ready(self) -> None:
def tag_ready(self, error_message='') -> None:
"""
Tag the archive as ready by creating the corresponding indication file.
@@ -138,13 +199,34 @@ class ArchiveWithIndicator:
If it is violated, an assert will fail.
"""
assert not self.is_ready()
# Try the best to make sure everything is synced to the file system,
# to avoid any possibility of stamp appearing on a network share prior to
# an actual file.
if util.get_current_platform() != util.Platform.WINDOWS:
os.sync()
archive_size = self.archive_filepath.stat().st_size
self.ready_indicator_filepath.write_text(str(archive_size))
archive_size = -1
if self.archive_filepath.exists():
archive_size = self.archive_filepath.stat().st_size
archive_info = ArchiveState(
file_size=archive_size, error_message=error_message)
self.ready_indicator_filepath.write_text(
archive_info.serialize_to_string())
def get_state(self) -> ArchiveState:
"""
Get state object for this archive
The state is read from the corresponding state file.
"""
try:
return ArchiveState.deserialize_from_file(self.ready_indicator_filepath)
except ArchiveStateError as error:
return ArchiveState(error_message=f'Error in information format: {error}')
def clean(self) -> None:
"""

View File

@@ -58,6 +58,7 @@ import codesign.util as util
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.archive_with_indicator import ArchiveWithIndicator
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
@@ -145,13 +146,13 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
def cleanup_environment_for_builder(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even mor etricky.
# talking to the same server it becomes even more tricky.
pass
def cleanup_environment_for_signing_server(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even mor etricky.
# talking to the same server it becomes even more tricky.
pass
def generate_request_id(self) -> str:
@@ -220,9 +221,15 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
"""
Wait until archive with signed files is available.
Will only return if the archive with signed files is available. If there
was an error during code sign procedure the SystemExit exception is
raised, with the message set to the error reported by the codesign
server.
Will only wait for the configured time. If that time is exceeded and there
is still no response from the signing server, the application will exit
with a non-zero exit code.
"""
signed_archive_info = self.signed_archive_info_for_request_id(
@@ -236,9 +243,17 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
time.sleep(1)
time_slept_in_seconds = time.monotonic() - time_start
if time_slept_in_seconds > timeout_in_seconds:
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit("Signing server didn't finish signing in "
f"{timeout_in_seconds} seconds, dying :(")
f'{timeout_in_seconds} seconds, dying :(')
archive_state = signed_archive_info.get_state()
if archive_state.has_error():
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit(
f'Error happenned during codesign procedure: {archive_state.error_message}')
def copy_signed_files_to_directory(
self, signed_dir: Path, destination_dir: Path) -> None:
@@ -396,7 +411,13 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
temp_dir)
logger_server.info('Signing all requested files...')
self.sign_all_files(files)
try:
self.sign_all_files(files)
except CodeSignException as error:
signed_archive_info.tag_ready(error_message=error.message)
unsigned_archive_info.clean()
logger_server.info('Signing is complete with errors.')
return
logger_server.info('Packing signed files...')
pack_files(files=files,

View File

@@ -0,0 +1,26 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
class CodeSignException(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)

View File

@@ -33,6 +33,7 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
@@ -45,6 +46,10 @@ EXTENSIONS_TO_BE_SIGNED = {'.dylib', '.so', '.dmg'}
NAME_PREFIXES_TO_BE_SIGNED = {'python'}
class NotarizationException(CodeSignException):
pass
def is_file_from_bundle(file: AbsoluteAndRelativeFileName) -> bool:
"""
Check whether file is coming from an .app bundle
@@ -186,7 +191,7 @@ class MacOSCodeSigner(BaseCodeSigner):
file.absolute_filepath]
self.run_command_or_mock(command, util.Platform.MACOS)
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> bool:
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
Run codesign tool on all eligible files in the given list.
@@ -225,8 +230,6 @@ class MacOSCodeSigner(BaseCodeSigner):
file_index + 1, num_signed_files,
signed_file.relative_filepath)
return True
def codesign_bundles(
self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
@@ -273,8 +276,6 @@ class MacOSCodeSigner(BaseCodeSigner):
files.extend(extra_files)
return True
############################################################################
# Notarization.
@@ -334,7 +335,40 @@ class MacOSCodeSigner(BaseCodeSigner):
logger_server.error('xcrun command did not report RequestUUID')
return None
def notarize_wait_result(self, request_uuid: str) -> bool:
def notarize_review_status(self, xcrun_output: str) -> bool:
"""
Review status returned by xcrun's notarization info
Returns true if the notarization process has finished.
If there are errors during notarization, a NotarizationException()
exception is thrown with the status message from the notarial office.
"""
# Parse status and message
status = xcrun_field_value_from_output('Status', xcrun_output)
status_message = xcrun_field_value_from_output(
'Status Message', xcrun_output)
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
if status == 'invalid':
logger_server.error(xcrun_output)
logger_server.error(
'Package notarization has failed: %s', status_message)
raise NotarizationException(status_message)
if status == 'in progress':
return False
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
return False
def notarize_wait_result(self, request_uuid: str) -> None:
"""
Wait until the notarial office has a reply
"""
@@ -351,29 +385,11 @@ class MacOSCodeSigner(BaseCodeSigner):
timeout_in_seconds = self.config.MACOS_NOTARIZE_TIMEOUT_IN_SECONDS
while True:
output = self.check_output_or_mock(
xcrun_output = self.check_output_or_mock(
command, util.Platform.MACOS, allow_nonzero_exit_code=True)
# Parse status and message
status = xcrun_field_value_from_output('Status', output)
status_message = xcrun_field_value_from_output(
'Status Message', output)
# Review status.
if status:
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
elif status == 'invalid':
logger_server.error(output)
logger_server.error(
'Package notarization has failed: %s', status_message)
return False
elif status == 'in progress':
pass
else:
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
if self.notarize_review_status(xcrun_output):
break
logger_server.info('Keep waiting for notarization office.')
time.sleep(30)
@@ -394,8 +410,6 @@ class MacOSCodeSigner(BaseCodeSigner):
command = ['xcrun', 'stapler', 'staple', '-v', file.absolute_filepath]
self.check_output_or_mock(command, util.Platform.MACOS)
return True
def notarize_dmg(self, file: AbsoluteAndRelativeFileName) -> bool:
"""
Run entire pipeline to get DMG notarized.
@@ -414,10 +428,7 @@ class MacOSCodeSigner(BaseCodeSigner):
return False
# Staple.
if not self.notarize_staple(file):
return False
return True
self.notarize_staple(file)
def notarize_all_dmg(
self, files: List[AbsoluteAndRelativeFileName]) -> bool:
@@ -432,10 +443,7 @@ class MacOSCodeSigner(BaseCodeSigner):
if not self.check_file_is_to_be_signed(file):
continue
if not self.notarize_dmg(file):
return False
return True
self.notarize_dmg(file)
############################################################################
# Entry point.
@@ -443,11 +451,6 @@ class MacOSCodeSigner(BaseCodeSigner):
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# TODO(sergey): Handle errors somehow.
if not self.codesign_all_files(files):
return
if not self.codesign_bundles(files):
return
if not self.notarize_all_dmg(files):
return
self.codesign_all_files(files)
self.codesign_bundles(files)
self.notarize_all_dmg(files)
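# Simplified restatement of the notarization status handling above, for
# reference only (the real code parses xcrun output through
# xcrun_field_value_from_output and raises NotarizationException):
def _review_status_sketch(status: str) -> bool:
    if status == 'success':
        return True        # finished, continue to stapling
    if status == 'invalid':
        raise RuntimeError('notarization failed')  # stands in for NotarizationException
    return False           # 'in progress' or unknown: keep polling
# e.g. _review_status_sketch('in progress') -> False, so the loop sleeps 30s and retries.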

View File

@@ -29,6 +29,7 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
@@ -40,6 +41,9 @@ BLACKLIST_FILE_PREFIXES = (
'api-ms-', 'concrt', 'msvcp', 'ucrtbase', 'vcomp', 'vcruntime')
class SigntoolException(CodeSignException):
pass
class WindowsCodeSigner(BaseCodeSigner):
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
@@ -50,12 +54,41 @@ class WindowsCodeSigner(BaseCodeSigner):
return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED
def get_sign_command_prefix(self) -> List[str]:
return [
'signtool', 'sign', '/v',
'/f', self.config.WIN_CERTIFICATE_FILEPATH,
'/tr', self.config.WIN_TIMESTAMP_AUTHORITY_URL]
def run_codesign_tool(self, filepath: Path) -> None:
command = self.get_sign_command_prefix() + [filepath]
codesign_output = self.check_output_or_mock(command, util.Platform.WINDOWS)
logger_server.info(f'signtool output:\n{codesign_output}')
got_number_of_success = False
for line in codesign_output.split('\n'):
line_clean = line.strip()
line_clean_lower = line_clean.lower()
if line_clean_lower.startswith('number of warnings') or \
line_clean_lower.startswith('number of errors'):
number = int(line_clean_lower.split(':')[1])
if number != 0:
raise SigntoolException('Non-clean success of signtool')
if line_clean_lower.startswith('number of files successfully signed'):
got_number_of_success = True
number = int(line_clean_lower.split(':')[1])
if number != 1:
raise SigntoolException('Signtool did not consider codesign a success')
if not got_number_of_success:
raise SigntoolException('Signtool did not report number of files signed')
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# NOTE: Sign files one by one to avoid possible command line length
# overflow (which could happen if we ever decide to sign every binary
@@ -73,12 +106,7 @@ class WindowsCodeSigner(BaseCodeSigner):
file_index + 1, num_files, file.relative_filepath)
continue
command = self.get_sign_command_prefix()
command.append(file.absolute_filepath)
logger_server.info(
'Running signtool command for file [%d/%d] %s...',
file_index + 1, num_files, file.relative_filepath)
# TODO(sergey): Check the status somehow. With a missing certificate
# the command still exists with a zero code.
self.run_command_or_mock(command, util.Platform.WINDOWS)
# TODO(sergey): Report number of signed and ignored files.
self.run_codesign_tool(file.absolute_filepath)
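# Illustration only: the kind of signtool summary lines run_codesign_tool()
# checks for (the exact wording of real signtool output may differ slightly).
_EXAMPLE_SIGNTOOL_OUTPUT = '\n'.join([
    'Number of files successfully Signed: 1',
    'Number of warnings: 0',
    'Number of errors: 0',
])
# With this output the checks above pass: zero warnings and errors and exactly
# one successfully signed file, so no SigntoolException is raised.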

View File

@@ -44,6 +44,7 @@ set(WITH_OPENMP ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)

View File

@@ -51,6 +51,7 @@ set(WITH_OPENIMAGEIO OFF CACHE BOOL "" FORCE)
set(WITH_OPENMP OFF CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW OFF CACHE BOOL "" FORCE)
set(WITH_SDL OFF CACHE BOOL "" FORCE)
set(WITH_TBB OFF CACHE BOOL "" FORCE)

View File

@@ -45,6 +45,7 @@ set(WITH_OPENMP ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)

View File

@@ -28,6 +28,7 @@ set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
# Depends on Python install, do this to quiet warning.

extern/ceres/ChangeLog (vendored, 1137 changed lines): file diff suppressed because it is too large.

View File

@@ -1,4 +1,4 @@
Project: Ceres Solver
URL: http://ceres-solver.org/
Upstream version 1.11 (aef9c9563b08d5f39eee1576af133a84749d1b48)
Upstream version 2.0.0
Local modifications: None

View File

@@ -8,8 +8,8 @@ else
fi
repo="https://ceres-solver.googlesource.com/ceres-solver"
branch="master"
tag=""
#branch="master"
tag="2.0.0"
tmp=`mktemp -d`
checkout="$tmp/ceres"

View File

@@ -153,28 +153,44 @@ template <typename CostFunctor,
int... Ns> // Number of parameters in each parameter block.
class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
public:
// Takes ownership of functor. Uses the template-provided value for the
// number of residuals ("kNumResiduals").
explicit AutoDiffCostFunction(CostFunctor* functor) : functor_(functor) {
// Takes ownership of functor by default. Uses the template-provided
// value for the number of residuals ("kNumResiduals").
explicit AutoDiffCostFunction(CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
static_assert(kNumResiduals != DYNAMIC,
"Can't run the fixed-size constructor if the number of "
"residuals is set to ceres::DYNAMIC.");
}
// Takes ownership of functor. Ignores the template-provided
// Takes ownership of functor by default. Ignores the template-provided
// kNumResiduals in favor of the "num_residuals" argument provided.
//
// This allows for having autodiff cost functions which return varying
// numbers of residuals at runtime.
AutoDiffCostFunction(CostFunctor* functor, int num_residuals)
: functor_(functor) {
AutoDiffCostFunction(CostFunctor* functor,
int num_residuals,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
static_assert(kNumResiduals == DYNAMIC,
"Can't run the dynamic-size constructor if the number of "
"residuals is not ceres::DYNAMIC.");
SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
}
virtual ~AutoDiffCostFunction() {}
explicit AutoDiffCostFunction(AutoDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~AutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership rather than
// deleting only if ownership is taken.
// This is to stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
// Implementation details follow; clients of the autodiff cost function should
// not have to examine below here.
@@ -201,6 +217,7 @@ class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
private:
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
} // namespace ceres

View File

@@ -38,8 +38,10 @@
#ifndef CERES_PUBLIC_C_API_H_
#define CERES_PUBLIC_C_API_H_
// clang-format off
#include "ceres/internal/port.h"
#include "ceres/internal/disable_warnings.h"
// clang-format on
#ifdef __cplusplus
extern "C" {

View File

@@ -144,8 +144,7 @@ class CostFunctionToFunctor {
// Extract parameter block pointers from params.
using Indices =
std::make_integer_sequence<int,
ParameterDims::kNumParameterBlocks>;
std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>;
std::array<const T*, ParameterDims::kNumParameterBlocks> parameter_blocks =
GetParameterPointers<T>(params, Indices());

View File

@@ -51,7 +51,7 @@ class CovarianceImpl;
// =======
// It is very easy to use this class incorrectly without understanding
// the underlying mathematics. Please read and understand the
// documentation completely before attempting to use this class.
// documentation completely before attempting to use it.
//
//
// This class allows the user to evaluate the covariance for a
@@ -73,7 +73,7 @@ class CovarianceImpl;
// the maximum likelihood estimate of x given observations y is the
// solution to the non-linear least squares problem:
//
// x* = arg min_x |f(x)|^2
// x* = arg min_x |f(x) - y|^2
//
// And the covariance of x* is given by
//
@@ -220,11 +220,11 @@ class CERES_EXPORT Covariance {
// 1. DENSE_SVD uses Eigen's JacobiSVD to perform the
// computations. It computes the singular value decomposition
//
// U * S * V' = J
// U * D * V' = J
//
// and then uses it to compute the pseudo inverse of J'J as
//
// pseudoinverse[J'J]^ = V * pseudoinverse[S] * V'
// pseudoinverse[J'J] = V * pseudoinverse[D^2] * V'
//
// It is an accurate but slow method and should only be used
// for small to moderate sized problems. It can handle
@@ -235,7 +235,7 @@ class CERES_EXPORT Covariance {
//
// Q * R = J
//
// [J'J]^-1 = [R*R']^-1
// [J'J]^-1 = [R'*R]^-1
//
// SPARSE_QR is not capable of computing the covariance if the
// Jacobian is rank deficient. Depending on the value of

View File

@@ -40,6 +40,7 @@
#include "ceres/dynamic_cost_function.h"
#include "ceres/internal/fixed_array.h"
#include "ceres/jet.h"
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
@@ -78,10 +79,24 @@ namespace ceres {
template <typename CostFunctor, int Stride = 4>
class DynamicAutoDiffCostFunction : public DynamicCostFunction {
public:
explicit DynamicAutoDiffCostFunction(CostFunctor* functor)
: functor_(functor) {}
// Takes ownership by default.
DynamicAutoDiffCostFunction(CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {}
virtual ~DynamicAutoDiffCostFunction() {}
explicit DynamicAutoDiffCostFunction(DynamicAutoDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicAutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership
// rather than deleting only if ownership is taken. This is to
// stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
bool Evaluate(double const* const* parameters,
double* residuals,
@@ -151,6 +166,9 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
}
}
if (num_active_parameters == 0) {
return (*functor_)(parameters, residuals);
}
// When `num_active_parameters % Stride != 0` then it can be the case
// that `active_parameter_count < Stride` while parameter_cursor is less
// than the total number of parameters and with no remaining non-constant
@@ -248,6 +266,7 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
private:
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
} // namespace ceres

View File

@@ -44,6 +44,7 @@
#include "ceres/internal/numeric_diff.h"
#include "ceres/internal/parameter_dims.h"
#include "ceres/numeric_diff_options.h"
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
@@ -84,6 +85,10 @@ class DynamicNumericDiffCostFunction : public DynamicCostFunction {
const NumericDiffOptions& options = NumericDiffOptions())
: functor_(functor), ownership_(ownership), options_(options) {}
explicit DynamicNumericDiffCostFunction(
DynamicNumericDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicNumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();

View File

@@ -62,7 +62,8 @@ class CERES_EXPORT GradientProblemSolver {
// Minimizer options ----------------------------------------
LineSearchDirectionType line_search_direction_type = LBFGS;
LineSearchType line_search_type = WOLFE;
NonlinearConjugateGradientType nonlinear_conjugate_gradient_type = FLETCHER_REEVES;
NonlinearConjugateGradientType nonlinear_conjugate_gradient_type =
FLETCHER_REEVES;
// The LBFGS hessian approximation is a low rank approximation to
// the inverse of the Hessian matrix. The rank of the

View File

@@ -198,7 +198,7 @@ struct Make1stOrderPerturbation {
template <int N, int Offset, typename T, typename JetT>
struct Make1stOrderPerturbation<N, N, Offset, T, JetT> {
public:
static void Apply(const T* /*src*/, JetT* /*dst*/) {}
static void Apply(const T* src, JetT* dst) {}
};
// Calls Make1stOrderPerturbation for every parameter block.
@@ -229,7 +229,9 @@ struct Make1stOrderPerturbations<std::integer_sequence<int, N, Ns...>,
// End of 'recursion'. Nothing more to do.
template <int ParameterIdx, int Total>
struct Make1stOrderPerturbations<std::integer_sequence<int>, ParameterIdx, Total> {
struct Make1stOrderPerturbations<std::integer_sequence<int>,
ParameterIdx,
Total> {
template <typename T, typename JetT>
static void Apply(T const* const* /* NOT USED */, JetT* /* NOT USED */) {}
};

View File

@@ -34,11 +34,11 @@
#define CERES_WARNINGS_DISABLED
#ifdef _MSC_VER
#pragma warning( push )
#pragma warning(push)
// Disable the warning C4251 which is triggered by stl classes in
// Ceres' public interface. To quote MSDN: "C4251 can be ignored "
// "if you are deriving from a type in the Standard C++ Library"
#pragma warning( disable : 4251 )
#pragma warning(disable : 4251)
#endif
#endif // CERES_WARNINGS_DISABLED

View File

@@ -36,31 +36,26 @@
namespace ceres {
typedef Eigen::Matrix<double, Eigen::Dynamic, 1> Vector;
typedef Eigen::Matrix<double,
Eigen::Dynamic,
Eigen::Dynamic,
Eigen::RowMajor> Matrix;
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
Matrix;
typedef Eigen::Map<Vector> VectorRef;
typedef Eigen::Map<Matrix> MatrixRef;
typedef Eigen::Map<const Vector> ConstVectorRef;
typedef Eigen::Map<const Matrix> ConstMatrixRef;
// Column major matrices for DenseSparseMatrix/DenseQRSolver
typedef Eigen::Matrix<double,
Eigen::Dynamic,
Eigen::Dynamic,
Eigen::ColMajor> ColMajorMatrix;
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor>
ColMajorMatrix;
typedef Eigen::Map<ColMajorMatrix, 0,
Eigen::Stride<Eigen::Dynamic, 1>> ColMajorMatrixRef;
typedef Eigen::Map<ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
ColMajorMatrixRef;
typedef Eigen::Map<const ColMajorMatrix,
0,
Eigen::Stride<Eigen::Dynamic, 1>> ConstColMajorMatrixRef;
typedef Eigen::Map<const ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
ConstColMajorMatrixRef;
// C++ does not support templated typdefs, thus the need for this
// struct so that we can support statically sized Matrix and Maps.
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
struct EigenTypes {
typedef Eigen::Matrix<double,
num_rows,

View File

@@ -30,6 +30,7 @@
#ifndef CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#define CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#include <Eigen/Core> // For Eigen::aligned_allocator
#include <algorithm>
#include <array>
#include <cstddef>
@@ -37,8 +38,6 @@
#include <tuple>
#include <type_traits>
#include <Eigen/Core> // For Eigen::aligned_allocator
#include "ceres/internal/memory.h"
#include "glog/logging.h"

View File

@@ -62,7 +62,8 @@ struct SumImpl;
// Strip of and sum the first number.
template <typename T, T N, T... Ns>
struct SumImpl<std::integer_sequence<T, N, Ns...>> {
static constexpr T Value = N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
static constexpr T Value =
N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
};
// Strip of and sum the first two numbers.
@@ -129,10 +130,14 @@ template <typename T, T Sum, typename SeqIn, typename SeqOut>
struct ExclusiveScanImpl;
template <typename T, T Sum, T N, T... Ns, T... Rs>
struct ExclusiveScanImpl<T, Sum, std::integer_sequence<T, N, Ns...>,
struct ExclusiveScanImpl<T,
Sum,
std::integer_sequence<T, N, Ns...>,
std::integer_sequence<T, Rs...>> {
using Type =
typename ExclusiveScanImpl<T, Sum + N, std::integer_sequence<T, Ns...>,
typename ExclusiveScanImpl<T,
Sum + N,
std::integer_sequence<T, Ns...>,
std::integer_sequence<T, Rs..., Sum>>::Type;
};

View File

@@ -47,15 +47,17 @@
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
namespace internal {
// This is split from the main class because C++ doesn't allow partial template
// specializations for member functions. The alternative is to repeat the main
// class for differing numbers of parameters, which is also unfortunate.
template <typename CostFunctor, NumericDiffMethodType kMethod,
int kNumResiduals, typename ParameterDims, int kParameterBlock,
template <typename CostFunctor,
NumericDiffMethodType kMethod,
int kNumResiduals,
typename ParameterDims,
int kParameterBlock,
int kParameterBlockSize>
struct NumericDiff {
// Mutates parameters but must restore them before return.
@@ -66,23 +68,23 @@ struct NumericDiff {
int num_residuals,
int parameter_block_index,
int parameter_block_size,
double **parameters,
double *jacobian) {
double** parameters,
double* jacobian) {
using Eigen::ColMajor;
using Eigen::Map;
using Eigen::Matrix;
using Eigen::RowMajor;
using Eigen::ColMajor;
DCHECK(jacobian);
const int num_residuals_internal =
(kNumResiduals != ceres::DYNAMIC ? kNumResiduals : num_residuals);
const int parameter_block_index_internal =
(kParameterBlock != ceres::DYNAMIC ? kParameterBlock :
parameter_block_index);
(kParameterBlock != ceres::DYNAMIC ? kParameterBlock
: parameter_block_index);
const int parameter_block_size_internal =
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize :
parameter_block_size);
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize
: parameter_block_size);
typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
@@ -97,17 +99,17 @@ struct NumericDiff {
(kParameterBlockSize == 1) ? ColMajor : RowMajor>
JacobianMatrix;
Map<JacobianMatrix> parameter_jacobian(jacobian,
num_residuals_internal,
parameter_block_size_internal);
Map<JacobianMatrix> parameter_jacobian(
jacobian, num_residuals_internal, parameter_block_size_internal);
Map<ParameterVector> x_plus_delta(
parameters[parameter_block_index_internal],
parameter_block_size_internal);
ParameterVector x(x_plus_delta);
ParameterVector step_size = x.array().abs() *
((kMethod == RIDDERS) ? options.ridders_relative_initial_step_size :
options.relative_step_size);
ParameterVector step_size =
x.array().abs() * ((kMethod == RIDDERS)
? options.ridders_relative_initial_step_size
: options.relative_step_size);
// It is not a good idea to make the step size arbitrarily
// small. This will lead to problems with round off and numerical
@@ -118,8 +120,8 @@ struct NumericDiff {
// For Ridders' method, the initial step size is required to be large,
// thus ridders_relative_initial_step_size is used.
if (kMethod == RIDDERS) {
min_step_size = std::max(min_step_size,
options.ridders_relative_initial_step_size);
min_step_size =
std::max(min_step_size, options.ridders_relative_initial_step_size);
}
// For each parameter in the parameter block, use finite differences to
@@ -133,7 +135,9 @@ struct NumericDiff {
const double delta = std::max(min_step_size, step_size(j));
if (kMethod == RIDDERS) {
if (!EvaluateRiddersJacobianColumn(functor, j, delta,
if (!EvaluateRiddersJacobianColumn(functor,
j,
delta,
options,
num_residuals_internal,
parameter_block_size_internal,
@@ -146,7 +150,9 @@ struct NumericDiff {
return false;
}
} else {
if (!EvaluateJacobianColumn(functor, j, delta,
if (!EvaluateJacobianColumn(functor,
j,
delta,
num_residuals_internal,
parameter_block_size_internal,
x.data(),
@@ -182,8 +188,7 @@ struct NumericDiff {
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -191,9 +196,8 @@ struct NumericDiff {
// Mutate 1 element at a time and then restore.
x_plus_delta(parameter_index) = x(parameter_index) + delta;
if (!VariadicEvaluate<ParameterDims>(*functor,
parameters,
residuals.data())) {
if (!VariadicEvaluate<ParameterDims>(
*functor, parameters, residuals.data())) {
return false;
}
@@ -206,9 +210,8 @@ struct NumericDiff {
// Compute the function on the other side of x(parameter_index).
x_plus_delta(parameter_index) = x(parameter_index) - delta;
if (!VariadicEvaluate<ParameterDims>(*functor,
parameters,
temp_residuals.data())) {
if (!VariadicEvaluate<ParameterDims>(
*functor, parameters, temp_residuals.data())) {
return false;
}
@@ -217,8 +220,7 @@ struct NumericDiff {
} else {
// Forward difference only; reuse existing residuals evaluation.
residuals -=
Map<const ResidualVector>(residuals_at_eval_point,
num_residuals);
Map<const ResidualVector>(residuals_at_eval_point, num_residuals);
}
// Restore x_plus_delta.
@@ -254,17 +256,17 @@ struct NumericDiff {
double* x_plus_delta_ptr,
double* temp_residuals_ptr,
double* residuals_ptr) {
using Eigen::aligned_allocator;
using Eigen::Map;
using Eigen::Matrix;
using Eigen::aligned_allocator;
typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kNumResiduals, Eigen::Dynamic> ResidualCandidateMatrix;
typedef Matrix<double, kNumResiduals, Eigen::Dynamic>
ResidualCandidateMatrix;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -275,18 +277,16 @@ struct NumericDiff {
// As the derivative is estimated, the step size decreases.
// By default, the step sizes are chosen so that the middle column
// of the Romberg tableau uses the input delta.
double current_step_size = delta *
pow(options.ridders_step_shrink_factor,
options.max_num_ridders_extrapolations / 2);
double current_step_size =
delta * pow(options.ridders_step_shrink_factor,
options.max_num_ridders_extrapolations / 2);
// Double-buffering temporary differential candidate vectors
// from previous step size.
ResidualCandidateMatrix stepsize_candidates_a(
num_residuals,
options.max_num_ridders_extrapolations);
num_residuals, options.max_num_ridders_extrapolations);
ResidualCandidateMatrix stepsize_candidates_b(
num_residuals,
options.max_num_ridders_extrapolations);
num_residuals, options.max_num_ridders_extrapolations);
ResidualCandidateMatrix* current_candidates = &stepsize_candidates_a;
ResidualCandidateMatrix* previous_candidates = &stepsize_candidates_b;
@@ -304,7 +304,9 @@ struct NumericDiff {
// 3. Extrapolation becomes numerically unstable.
for (int i = 0; i < options.max_num_ridders_extrapolations; ++i) {
// Compute the numerical derivative at this step size.
if (!EvaluateJacobianColumn(functor, parameter_index, current_step_size,
if (!EvaluateJacobianColumn(functor,
parameter_index,
current_step_size,
num_residuals,
parameter_block_size,
x.data(),
@@ -327,23 +329,24 @@ struct NumericDiff {
// Extrapolation factor for Richardson acceleration method (see below).
double richardson_factor = options.ridders_step_shrink_factor *
options.ridders_step_shrink_factor;
options.ridders_step_shrink_factor;
for (int k = 1; k <= i; ++k) {
// Extrapolate the various orders of finite differences using
// the Richardson acceleration method.
current_candidates->col(k) =
(richardson_factor * current_candidates->col(k - 1) -
previous_candidates->col(k - 1)) / (richardson_factor - 1.0);
previous_candidates->col(k - 1)) /
(richardson_factor - 1.0);
richardson_factor *= options.ridders_step_shrink_factor *
options.ridders_step_shrink_factor;
options.ridders_step_shrink_factor;
// Compute the difference between the previous value and the current.
double candidate_error = std::max(
(current_candidates->col(k) -
current_candidates->col(k - 1)).norm(),
(current_candidates->col(k) -
previous_candidates->col(k - 1)).norm());
(current_candidates->col(k) - current_candidates->col(k - 1))
.norm(),
(current_candidates->col(k) - previous_candidates->col(k - 1))
.norm());
// If the error has decreased, update results.
if (candidate_error <= norm_error) {
@@ -365,8 +368,9 @@ struct NumericDiff {
// Check to see if the current gradient estimate is numerically unstable.
// If so, bail out and return the last stable result.
if (i > 0) {
double tableau_error = (current_candidates->col(i) -
previous_candidates->col(i - 1)).norm();
double tableau_error =
(current_candidates->col(i) - previous_candidates->col(i - 1))
.norm();
// Compare current error to the chosen candidate's error.
if (tableau_error >= 2 * norm_error) {
@@ -482,14 +486,18 @@ struct EvaluateJacobianForParameterBlocks<ParameterDims,
// End of 'recursion'. Nothing more to do.
template <typename ParameterDims, int ParameterIdx>
struct EvaluateJacobianForParameterBlocks<ParameterDims, std::integer_sequence<int>,
struct EvaluateJacobianForParameterBlocks<ParameterDims,
std::integer_sequence<int>,
ParameterIdx> {
template <NumericDiffMethodType method, int kNumResiduals,
template <NumericDiffMethodType method,
int kNumResiduals,
typename CostFunctor>
static bool Apply(const CostFunctor* /* NOT USED*/,
const double* /* NOT USED*/,
const NumericDiffOptions& /* NOT USED*/, int /* NOT USED*/,
double** /* NOT USED*/, double** /* NOT USED*/) {
const NumericDiffOptions& /* NOT USED*/,
int /* NOT USED*/,
double** /* NOT USED*/,
double** /* NOT USED*/) {
return true;
}
};
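For context on the Ridders hunks above (the changes themselves are only reformatting): the method evaluates central differences at geometrically shrinking step sizes and combines them in a Richardson-extrapolation tableau, keeping the entry with the smallest disagreement. A stand-alone sketch for a scalar function, not Ceres code and with made-up names and defaults, could look like this:

#include <algorithm>
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

double RiddersDerivative(const std::function<double(double)>& f,
                         double x,
                         double initial_step,
                         double shrink = 2.0,
                         int max_extrapolations = 10) {
  std::vector<std::vector<double>> tableau(
      max_extrapolations, std::vector<double>(max_extrapolations, 0.0));
  double best = 0.0;
  double best_error = std::numeric_limits<double>::max();
  double h = initial_step;
  for (int i = 0; i < max_extrapolations; ++i) {
    // Central difference at the current step size.
    tableau[i][0] = (f(x + h) - f(x - h)) / (2.0 * h);
    if (i == 0) {
      best = tableau[0][0];
    }
    double factor = shrink * shrink;
    for (int k = 1; k <= i; ++k) {
      // Richardson extrapolation across the previous step sizes.
      tableau[i][k] =
          (factor * tableau[i][k - 1] - tableau[i - 1][k - 1]) /
          (factor - 1.0);
      factor *= shrink * shrink;
      const double error =
          std::max(std::abs(tableau[i][k] - tableau[i][k - 1]),
                   std::abs(tableau[i][k] - tableau[i - 1][k - 1]));
      if (error <= best_error) {
        best_error = error;
        best = tableau[i][k];
      }
    }
    // Bail out once the tableau turns numerically unstable; this mirrors the
    // "tableau_error >= 2 * norm_error" check in the hunk above.
    if (i > 0 &&
        std::abs(tableau[i][i] - tableau[i - 1][i - 1]) >= 2.0 * best_error) {
      break;
    }
    h /= shrink;
  }
  return best;
}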

View File

@@ -35,17 +35,17 @@
#include "ceres/internal/config.h"
#if defined(CERES_USE_OPENMP)
# if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
# error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
# endif
#if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
#error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif
#elif defined(CERES_USE_CXX_THREADS)
# if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
# error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
# endif
#if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
#error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif
#elif defined(CERES_NO_THREADS)
# if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
# error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
# endif
#if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
#error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
#endif
#else
# error One of CERES_USE_OPENMP, CERES_USE_CXX_THREADS or CERES_NO_THREADS must be defined.
#endif
@@ -54,37 +54,57 @@
// compiled without any sparse back-end. Verify that it has not subsequently
// been inconsistently redefined.
#if defined(CERES_NO_SPARSE)
# if !defined(CERES_NO_SUITESPARSE)
# error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
# endif
# if !defined(CERES_NO_CXSPARSE)
# error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
# endif
# if !defined(CERES_NO_ACCELERATE_SPARSE)
# error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
# endif
# if defined(CERES_USE_EIGEN_SPARSE)
# error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
# endif
#if !defined(CERES_NO_SUITESPARSE)
#error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
#endif
#if !defined(CERES_NO_CXSPARSE)
#error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
#endif
#if !defined(CERES_NO_ACCELERATE_SPARSE)
#error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
#endif
#if defined(CERES_USE_EIGEN_SPARSE)
#error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
#endif
#endif
// A macro to signal which functions and classes are exported when
// building a DLL with MSVC.
//
// Note that the ordering here is important, CERES_BUILDING_SHARED_LIBRARY
// is only defined locally when Ceres is compiled, it is never exported to
// users. However, in order that we do not have to configure config.h
// separately for building vs installing, if we are using MSVC and building
// a shared library, then both CERES_BUILDING_SHARED_LIBRARY and
// CERES_USING_SHARED_LIBRARY will be defined when Ceres is compiled.
// Hence it is important that the check for CERES_BUILDING_SHARED_LIBRARY
// happens first.
#if defined(_MSC_VER) && defined(CERES_BUILDING_SHARED_LIBRARY)
# define CERES_EXPORT __declspec(dllexport)
#elif defined(_MSC_VER) && defined(CERES_USING_SHARED_LIBRARY)
# define CERES_EXPORT __declspec(dllimport)
// building a shared library.
#if defined(_MSC_VER)
#define CERES_API_SHARED_IMPORT __declspec(dllimport)
#define CERES_API_SHARED_EXPORT __declspec(dllexport)
#elif defined(__GNUC__)
#define CERES_API_SHARED_IMPORT __attribute__((visibility("default")))
#define CERES_API_SHARED_EXPORT __attribute__((visibility("default")))
#else
# define CERES_EXPORT
#define CERES_API_SHARED_IMPORT
#define CERES_API_SHARED_EXPORT
#endif
// CERES_BUILDING_SHARED_LIBRARY is only defined locally when Ceres itself is
// compiled as a shared library, it is never exported to users. In order that
// we do not have to configure config.h separately when building Ceres as either
// a static or dynamic library, we define both CERES_USING_SHARED_LIBRARY and
// CERES_BUILDING_SHARED_LIBRARY when building as a shared library.
#if defined(CERES_USING_SHARED_LIBRARY)
#if defined(CERES_BUILDING_SHARED_LIBRARY)
// Compiling Ceres itself as a shared library.
#define CERES_EXPORT CERES_API_SHARED_EXPORT
#else
// Using Ceres as a shared library.
#define CERES_EXPORT CERES_API_SHARED_IMPORT
#endif
#else
// Ceres was compiled as a static library, export everything.
#define CERES_EXPORT
#endif
// Unit tests reach in and test internal functionality so we need a way to make
// those symbols visible
#ifdef CERES_EXPORT_INTERNAL_SYMBOLS
#define CERES_EXPORT_INTERNAL CERES_EXPORT
#else
#define CERES_EXPORT_INTERNAL
#endif
#endif // CERES_PUBLIC_INTERNAL_PORT_H_
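The net effect of the reworked export macros can be shown with a hypothetical declaration (MySolverUtility is made up; of the three configuration macros, only CERES_USING_SHARED_LIBRARY is ever visible to users):

// Internal header, included here only for illustration.
#include "ceres/internal/port.h"

// Expands to dllexport / default visibility while Ceres itself is being
// built as a shared library, to dllimport when a user links against that
// shared library, and to nothing for a static build.
class CERES_EXPORT MySolverUtility {
 public:
  void Run();
};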

View File

@@ -32,7 +32,7 @@
#undef CERES_WARNINGS_DISABLED
#ifdef _MSC_VER
#pragma warning( pop )
#pragma warning(pop)
#endif
#endif // CERES_WARNINGS_DISABLED

View File

@@ -46,8 +46,10 @@ namespace internal {
// For fixed size cost functors
template <typename Functor, typename T, int... Indices>
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, std::false_type /*is_dynamic*/,
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
std::false_type /*is_dynamic*/,
std::integer_sequence<int, Indices...>) {
static_assert(sizeof...(Indices),
"Invalid number of parameter blocks. At least one parameter "
@@ -57,26 +59,31 @@ inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
// For dynamic sized cost functors
template <typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, std::true_type /*is_dynamic*/,
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
std::true_type /*is_dynamic*/,
std::integer_sequence<int>) {
return functor(input, output);
}
// For ceres cost functors (not ceres::CostFunction)
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, const void* /* NOT USED */) {
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
const void* /* NOT USED */) {
using ParameterBlockIndices =
std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>;
using IsDynamic = std::integral_constant<bool, ParameterDims::kIsDynamic>;
return VariadicEvaluateImpl(functor, input, output, IsDynamic(),
ParameterBlockIndices());
return VariadicEvaluateImpl(
functor, input, output, IsDynamic(), ParameterBlockIndices());
}
// For ceres::CostFunction
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
const CostFunction* /* NOT USED */) {
return functor.Evaluate(input, output, nullptr);
@@ -95,7 +102,8 @@ inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
// blocks. The functor must have the following signature
// 'bool()(const T* i_1, const T* i_2, ... const T* i_n, T* output)'.
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluate(const Functor& functor, T const* const* input,
inline bool VariadicEvaluate(const Functor& functor,
T const* const* input,
T* output) {
return VariadicEvaluateImpl<ParameterDims>(functor, input, output, &functor);
}
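For reference, a cost functor with the shape VariadicEvaluate() expects, here with two parameter blocks; the sizes and the residual expression are made up for illustration:

struct ExampleFunctor {
  template <typename T>
  bool operator()(const T* a, const T* b, T* residual) const {
    // One residual computed from a 3-vector block and a 2-vector block.
    residual[0] = a[0] + a[1] + a[2] - b[0] * b[1];
    return true;
  }
};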

View File

@@ -73,7 +73,7 @@ struct CERES_EXPORT IterationSummary {
bool step_is_successful = false;
// Value of the objective function.
double cost = 0.90;
double cost = 0.0;
// Change in the value of the objective function in this
// iteration. This can be positive or negative.

View File

@@ -388,6 +388,8 @@ using std::cbrt;
using std::ceil;
using std::cos;
using std::cosh;
using std::erf;
using std::erfc;
using std::exp;
using std::exp2;
using std::floor;
@@ -573,6 +575,21 @@ inline Jet<T, N> fmin(const Jet<T, N>& x, const Jet<T, N>& y) {
return y < x ? y : x;
}
// erf is defined as an integral that cannot be expressed analytically;
// however, the derivative is trivial to compute
// erf(x + h) = erf(x) + h * 2*exp(-x^2)/sqrt(pi)
template <typename T, int N>
inline Jet<T, N> erf(const Jet<T, N>& x) {
return Jet<T, N>(erf(x.a), x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
// erfc(x) = 1-erf(x)
// erfc(x + h) = erfc(x) + h * (-2*exp(-x^2)/sqrt(pi))
template <typename T, int N>
inline Jet<T, N> erfc(const Jet<T, N>& x) {
return Jet<T, N>(erfc(x.a), -x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
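A quick way to see the new overloads in action, assuming a Ceres build that already contains this patch: the dual part of the Jet picks up the analytic derivative automatically.

#include <cmath>
#include <cstdio>

#include "ceres/jet.h"

int main() {
  using ceres::Jet;
  Jet<double, 1> x(0.5, 0);  // value 0.5, derivative tracked in slot 0
  Jet<double, 1> y = erf(x);
  // y.a == erf(0.5), y.v[0] == 2/sqrt(pi) * exp(-0.25)
  std::printf("erf(0.5)  = %.12f, d/dx = %.12f\n", y.a, y.v[0]);
  Jet<double, 1> z = erfc(x);
  // erfc'(x) = -erf'(x)
  std::printf("erfc(0.5) = %.12f, d/dx = %.12f\n", z.a, z.v[0]);
  return 0;
}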
// Bessel functions of the first kind with integer order equal to 0, 1, n.
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of

View File

@@ -90,8 +90,8 @@ namespace ceres {
//
// An example that occurs commonly in Structure from Motion problems
// is when camera rotations are parameterized using Quaternion. There,
// it is useful only make updates orthogonal to that 4-vector defining
// the quaternion. One way to do this is to let delta be a 3
// it is useful to only make updates orthogonal to that 4-vector
// defining the quaternion. One way to do this is to let delta be a 3
// dimensional vector and define Plus to be
//
// Plus(x, delta) = [cos(|delta|), sin(|delta|) delta / |delta|] * x
@@ -99,7 +99,7 @@ namespace ceres {
// The multiplication between the two 4-vectors on the RHS is the
// standard quaternion product.
//
// Given g and a point x, optimizing f can now be restated as
// Given f and a point x, optimizing f can now be restated as
//
// min f(Plus(x, delta))
// delta
@@ -306,6 +306,7 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
public:
ProductParameterization(const ProductParameterization&) = delete;
ProductParameterization& operator=(const ProductParameterization&) = delete;
virtual ~ProductParameterization() {}
//
// NOTE: The constructor takes ownership of the input local
// parameterizations.
@@ -341,7 +342,8 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const override;
bool ComputeJacobian(const double* x, double* jacobian) const override;
bool ComputeJacobian(const double* x,
double* jacobian) const override;
int GlobalSize() const override { return global_size_; }
int LocalSize() const override { return local_size_; }
@@ -354,8 +356,8 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
} // namespace ceres
// clang-format off
#include "ceres/internal/reenable_warnings.h"
#include "ceres/internal/line_parameterization.h"
#endif // CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_
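The quaternion Plus() described in the first hunk of this file can be written out explicitly; the toy function below is purely illustrative (Ceres ships this behaviour as QuaternionParameterization), using the w, x, y, z convention:

#include <cmath>

void QuaternionPlus(const double x[4], const double delta[3], double out[4]) {
  const double norm = std::sqrt(delta[0] * delta[0] + delta[1] * delta[1] +
                                delta[2] * delta[2]);
  double q[4] = {1.0, 0.0, 0.0, 0.0};  // Identity update when |delta| == 0.
  if (norm > 0.0) {
    const double s = std::sin(norm) / norm;
    q[0] = std::cos(norm);
    q[1] = s * delta[0];
    q[2] = s * delta[1];
    q[3] = s * delta[2];
  }
  // Standard quaternion (Hamilton) product q * x.
  out[0] = q[0] * x[0] - q[1] * x[1] - q[2] * x[2] - q[3] * x[3];
  out[1] = q[0] * x[1] + q[1] * x[0] + q[2] * x[3] - q[3] * x[2];
  out[2] = q[0] * x[2] - q[1] * x[3] + q[2] * x[0] + q[3] * x[1];
  out[3] = q[0] * x[3] + q[1] * x[2] - q[2] * x[1] + q[3] * x[0];
}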

View File

@@ -192,7 +192,10 @@ class NumericDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
}
}
~NumericDiffCostFunction() {
explicit NumericDiffCostFunction(NumericDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~NumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
}

View File

@@ -453,13 +453,15 @@ class CERES_EXPORT Problem {
// problem.AddResidualBlock(new MyCostFunction, nullptr, &x);
//
// double cost = 0.0;
// problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
// nullptr, nullptr, nullptr);
//
// The cost is evaluated at x = 1. If you wish to evaluate the
// problem at x = 2, then
//
// x = 2;
// problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
// nullptr, nullptr, nullptr);
//
// is the way to do so.
//
@@ -475,7 +477,7 @@ class CERES_EXPORT Problem {
// at the end of an iteration during a solve.
//
// Note 4: If an EvaluationCallback is associated with the problem,
// then its PrepareForEvaluation method will be called everytime
// then its PrepareForEvaluation method will be called every time
// this method is called with new_point = true.
bool Evaluate(const EvaluateOptions& options,
double* cost,
@@ -509,23 +511,41 @@ class CERES_EXPORT Problem {
// apply_loss_function as the name implies allows the user to switch
// the application of the loss function on and off.
//
// WARNING: If an EvaluationCallback is associated with the problem
// then it is the user's responsibility to call it before calling
// this method.
//
// This is because, if the user calls this method multiple times, we
// cannot tell if the underlying parameter blocks have changed
// between calls or not. So if EvaluateResidualBlock was responsible
// for calling the EvaluationCallback, it will have to do it
// everytime it is called. Which makes the common case where the
// parameter blocks do not change, inefficient. So we leave it to
// the user to call the EvaluationCallback as needed.
// If an EvaluationCallback is associated with the problem, then its
// PrepareForEvaluation method will be called every time this method
// is called with new_point = true. This conservatively assumes that
// the user may have changed the parameter values since the previous
// call to evaluate / solve. For improved efficiency, and only if
// you know that the parameter values have not changed between
// calls, see EvaluateResidualBlockAssumingParametersUnchanged().
bool EvaluateResidualBlock(ResidualBlockId residual_block_id,
bool apply_loss_function,
double* cost,
double* residuals,
double** jacobians) const;
// Same as EvaluateResidualBlock except that if an
// EvaluationCallback is associated with the problem, then its
// PrepareForEvaluation method will be called every time this method
// is called with new_point = false.
//
// This means, if an EvaluationCallback is associated with the
// problem then it is the user's responsibility to call
// PrepareForEvaluation before calling this method if necessary,
// i.e. iff the parameter values have been changed since the last
// call to evaluate / solve.
//
// This is because, as the name implies, we assume that the
// parameter blocks did not change since the last time
// PrepareForEvaluation was called (via Solve, Evaluate or
// EvaluateResidualBlock).
bool EvaluateResidualBlockAssumingParametersUnchanged(
ResidualBlockId residual_block_id,
bool apply_loss_function,
double* cost,
double* residuals,
double** jacobians) const;
private:
friend class Solver;
friend class Covariance;
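A sketch of how the two entry points documented above differ in practice; it assumes an already-configured problem and a residual block id rb whose cost function has a single residual:

#include "ceres/problem.h"

void EvaluateOneBlock(const ceres::Problem& problem,
                      ceres::ResidualBlockId rb) {
  double cost = 0.0;
  double residual = 0.0;

  // Default path: any EvaluationCallback sees new_point = true, so changed
  // parameter values are always picked up.
  problem.EvaluateResidualBlock(rb,
                                /*apply_loss_function=*/true,
                                &cost,
                                &residual,
                                /*jacobians=*/nullptr);

  // Cheaper path, valid only if the parameters have not changed since the
  // last Solve / Evaluate: the callback sees new_point = false.
  problem.EvaluateResidualBlockAssumingParametersUnchanged(
      rb, /*apply_loss_function=*/true, &cost, &residual, nullptr);
}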

View File

@@ -320,8 +320,8 @@ inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis) {
}
template <typename T>
void RotationMatrixToQuaternion(const T* R, T* angle_axis) {
RotationMatrixToQuaternion(ColumnMajorAdapter3x3(R), angle_axis);
void RotationMatrixToQuaternion(const T* R, T* quaternion) {
RotationMatrixToQuaternion(ColumnMajorAdapter3x3(R), quaternion);
}
// This algorithm comes from "Quaternion Calculus and Fast Animation",

View File

@@ -360,7 +360,8 @@ class CERES_EXPORT Solver {
//
// If Solver::Options::preconditioner_type == SUBSET, then
// residual_blocks_for_subset_preconditioner must be non-empty.
std::unordered_set<ResidualBlockId> residual_blocks_for_subset_preconditioner;
std::unordered_set<ResidualBlockId>
residual_blocks_for_subset_preconditioner;
// Ceres supports using multiple dense linear algebra libraries
// for dense matrix factorizations. Currently EIGEN and LAPACK are
@@ -838,7 +839,7 @@ class CERES_EXPORT Solver {
int num_linear_solves = -1;
// Time (in seconds) spent evaluating the residual vector.
double residual_evaluation_time_in_seconds = 1.0;
double residual_evaluation_time_in_seconds = -1.0;
// Number of residual only evaluations.
int num_residual_evaluations = -1;

View File

@@ -50,7 +50,7 @@ namespace ceres {
// delete on it upon completion.
enum Ownership {
DO_NOT_TAKE_OWNERSHIP,
TAKE_OWNERSHIP
TAKE_OWNERSHIP,
};
// TODO(keir): Considerably expand the explanations of each solver type.
@@ -185,19 +185,19 @@ enum SparseLinearAlgebraLibraryType {
enum DenseLinearAlgebraLibraryType {
EIGEN,
LAPACK
LAPACK,
};
// Logging options
// The options get progressively noisier.
enum LoggingType {
SILENT,
PER_MINIMIZER_ITERATION
PER_MINIMIZER_ITERATION,
};
enum MinimizerType {
LINE_SEARCH,
TRUST_REGION
TRUST_REGION,
};
enum LineSearchDirectionType {
@@ -412,7 +412,7 @@ enum DumpFormatType {
// specified for the number of residuals. If specified, then the
// number of residuals for that cost function can vary at runtime.
enum DimensionType {
DYNAMIC = -1
DYNAMIC = -1,
};
// The differentiation method used to compute numerical derivatives in
@@ -433,7 +433,7 @@ enum NumericDiffMethodType {
enum LineSearchInterpolationType {
BISECTION,
QUADRATIC,
CUBIC
CUBIC,
};
enum CovarianceAlgorithmType {
@@ -448,8 +448,7 @@ enum CovarianceAlgorithmType {
// did not write to that memory location.
const double kImpossibleValue = 1e302;
CERES_EXPORT const char* LinearSolverTypeToString(
LinearSolverType type);
CERES_EXPORT const char* LinearSolverTypeToString(LinearSolverType type);
CERES_EXPORT bool StringToLinearSolverType(std::string value,
LinearSolverType* type);
@@ -459,25 +458,23 @@ CERES_EXPORT bool StringToPreconditionerType(std::string value,
CERES_EXPORT const char* VisibilityClusteringTypeToString(
VisibilityClusteringType type);
CERES_EXPORT bool StringToVisibilityClusteringType(std::string value,
VisibilityClusteringType* type);
CERES_EXPORT bool StringToVisibilityClusteringType(
std::string value, VisibilityClusteringType* type);
CERES_EXPORT const char* SparseLinearAlgebraLibraryTypeToString(
SparseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToSparseLinearAlgebraLibraryType(
std::string value,
SparseLinearAlgebraLibraryType* type);
std::string value, SparseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* DenseLinearAlgebraLibraryTypeToString(
DenseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToDenseLinearAlgebraLibraryType(
std::string value,
DenseLinearAlgebraLibraryType* type);
std::string value, DenseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* TrustRegionStrategyTypeToString(
TrustRegionStrategyType type);
CERES_EXPORT bool StringToTrustRegionStrategyType(std::string value,
TrustRegionStrategyType* type);
CERES_EXPORT bool StringToTrustRegionStrategyType(
std::string value, TrustRegionStrategyType* type);
CERES_EXPORT const char* DoglegTypeToString(DoglegType type);
CERES_EXPORT bool StringToDoglegType(std::string value, DoglegType* type);
@@ -487,41 +484,39 @@ CERES_EXPORT bool StringToMinimizerType(std::string value, MinimizerType* type);
CERES_EXPORT const char* LineSearchDirectionTypeToString(
LineSearchDirectionType type);
CERES_EXPORT bool StringToLineSearchDirectionType(std::string value,
LineSearchDirectionType* type);
CERES_EXPORT bool StringToLineSearchDirectionType(
std::string value, LineSearchDirectionType* type);
CERES_EXPORT const char* LineSearchTypeToString(LineSearchType type);
CERES_EXPORT bool StringToLineSearchType(std::string value, LineSearchType* type);
CERES_EXPORT bool StringToLineSearchType(std::string value,
LineSearchType* type);
CERES_EXPORT const char* NonlinearConjugateGradientTypeToString(
NonlinearConjugateGradientType type);
CERES_EXPORT bool StringToNonlinearConjugateGradientType(
std::string value,
NonlinearConjugateGradientType* type);
std::string value, NonlinearConjugateGradientType* type);
CERES_EXPORT const char* LineSearchInterpolationTypeToString(
LineSearchInterpolationType type);
CERES_EXPORT bool StringToLineSearchInterpolationType(
std::string value,
LineSearchInterpolationType* type);
std::string value, LineSearchInterpolationType* type);
CERES_EXPORT const char* CovarianceAlgorithmTypeToString(
CovarianceAlgorithmType type);
CERES_EXPORT bool StringToCovarianceAlgorithmType(
std::string value,
CovarianceAlgorithmType* type);
std::string value, CovarianceAlgorithmType* type);
CERES_EXPORT const char* NumericDiffMethodTypeToString(
NumericDiffMethodType type);
CERES_EXPORT bool StringToNumericDiffMethodType(
std::string value,
NumericDiffMethodType* type);
CERES_EXPORT bool StringToNumericDiffMethodType(std::string value,
NumericDiffMethodType* type);
CERES_EXPORT const char* LoggingTypeToString(LoggingType type);
CERES_EXPORT bool StringtoLoggingType(std::string value, LoggingType* type);
CERES_EXPORT const char* DumpFormatTypeToString(DumpFormatType type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value, DumpFormatType* type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value,
DumpFormatType* type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value, LoggingType* type);
CERES_EXPORT const char* TerminationTypeToString(TerminationType type);

View File

@@ -41,8 +41,9 @@
#define CERES_TO_STRING(x) CERES_TO_STRING_HELPER(x)
// The Ceres version as a string; for example "1.9.0".
#define CERES_VERSION_STRING CERES_TO_STRING(CERES_VERSION_MAJOR) "." \
CERES_TO_STRING(CERES_VERSION_MINOR) "." \
CERES_TO_STRING(CERES_VERSION_REVISION)
#define CERES_VERSION_STRING \
CERES_TO_STRING(CERES_VERSION_MAJOR) \
"." CERES_TO_STRING(CERES_VERSION_MINOR) "." CERES_TO_STRING( \
CERES_VERSION_REVISION)
#endif // CERES_PUBLIC_VERSION_H_
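Since this update tracks upstream Ceres 2.0.0, the reformatted macro can be sanity-checked with a purely illustrative assertion; token pasting plus stringification still yields a single literal:

#include "ceres/version.h"

static_assert(sizeof(CERES_VERSION_STRING) == sizeof("2.0.0"),
              "CERES_VERSION_STRING should expand to \"2.0.0\"");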

View File

@@ -33,18 +33,19 @@
#ifndef CERES_NO_ACCELERATE_SPARSE
#include "ceres/accelerate_sparse.h"
#include <algorithm>
#include <string>
#include <vector>
#include "ceres/accelerate_sparse.h"
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#define CASESTR(x) case x: return #x
#define CASESTR(x) \
case x: \
return #x
namespace ceres {
namespace internal {
@@ -68,7 +69,7 @@ const char* SparseStatusToString(SparseStatus_t status) {
// aligned to kAccelerateRequiredAlignment and returns a pointer to the
// aligned start.
void* ResizeForAccelerateAlignment(const size_t required_size,
std::vector<uint8_t> *workspace) {
std::vector<uint8_t>* workspace) {
// As per the Accelerate documentation, all workspace memory passed to the
// sparse solver functions must be 16-byte aligned.
constexpr int kAccelerateRequiredAlignment = 16;
@@ -80,29 +81,28 @@ void* ResizeForAccelerateAlignment(const size_t required_size,
size_t size_from_aligned_start = workspace->size();
void* aligned_solve_workspace_start =
reinterpret_cast<void*>(workspace->data());
aligned_solve_workspace_start =
std::align(kAccelerateRequiredAlignment,
required_size,
aligned_solve_workspace_start,
size_from_aligned_start);
aligned_solve_workspace_start = std::align(kAccelerateRequiredAlignment,
required_size,
aligned_solve_workspace_start,
size_from_aligned_start);
CHECK(aligned_solve_workspace_start != nullptr)
<< "required_size: " << required_size
<< ", workspace size: " << workspace->size();
return aligned_solve_workspace_start;
}
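The std::align pattern used here, shown in isolation (the helper name is made up): grow a byte buffer, then carve out a 16-byte aligned region inside it.

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

void* AlignedRegion(std::size_t required_size,
                    std::vector<uint8_t>* workspace) {
  constexpr std::size_t kAlignment = 16;
  workspace->resize(required_size + kAlignment);
  void* start = workspace->data();
  std::size_t space = workspace->size();
  // std::align advances 'start' to the next 16-byte boundary (shrinking
  // 'space' accordingly) and returns nullptr only if the region cannot fit.
  return std::align(kAlignment, required_size, start, space);
}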
template<typename Scalar>
template <typename Scalar>
void AccelerateSparse<Scalar>::Solve(NumericFactorization* numeric_factor,
DenseVector* rhs_and_solution) {
// From SparseSolve() documentation in Solve.h
const int required_size =
numeric_factor->solveWorkspaceRequiredStatic +
numeric_factor->solveWorkspaceRequiredPerRHS;
SparseSolve(*numeric_factor, *rhs_and_solution,
const int required_size = numeric_factor->solveWorkspaceRequiredStatic +
numeric_factor->solveWorkspaceRequiredPerRHS;
SparseSolve(*numeric_factor,
*rhs_and_solution,
ResizeForAccelerateAlignment(required_size, &solve_workspace_));
}
template<typename Scalar>
template <typename Scalar>
typename AccelerateSparse<Scalar>::ASSparseMatrix
AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
CompressedRowSparseMatrix* A) {
@@ -112,7 +112,7 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
//
// Accelerate's columnStarts is a long*, not an int*. These types might be
// different (e.g. ARM on iOS) so always make a copy.
column_starts_.resize(A->num_rows() +1); // +1 for final column length.
column_starts_.resize(A->num_rows() + 1); // +1 for final column length.
std::copy_n(A->rows(), column_starts_.size(), &column_starts_[0]);
ASSparseMatrix At;
@@ -136,29 +136,31 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
return At;
}
template<typename Scalar>
template <typename Scalar>
typename AccelerateSparse<Scalar>::SymbolicFactorization
AccelerateSparse<Scalar>::AnalyzeCholesky(ASSparseMatrix* A) {
return SparseFactor(SparseFactorizationCholesky, A->structure);
}
template<typename Scalar>
template <typename Scalar>
typename AccelerateSparse<Scalar>::NumericFactorization
AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
SymbolicFactorization* symbolic_factor) {
return SparseFactor(*symbolic_factor, *A);
}
template<typename Scalar>
template <typename Scalar>
void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
NumericFactorization* numeric_factor) {
// From SparseRefactor() documentation in Solve.h
const int required_size = std::is_same<Scalar, double>::value
? numeric_factor->symbolicFactorization.workspaceSize_Double
: numeric_factor->symbolicFactorization.workspaceSize_Float;
return SparseRefactor(*A, numeric_factor,
ResizeForAccelerateAlignment(required_size,
&factorization_workspace_));
const int required_size =
std::is_same<Scalar, double>::value
? numeric_factor->symbolicFactorization.workspaceSize_Double
: numeric_factor->symbolicFactorization.workspaceSize_Float;
return SparseRefactor(
*A,
numeric_factor,
ResizeForAccelerateAlignment(required_size, &factorization_workspace_));
}
// Instantiate only for the specific template types required/supported s/t the
@@ -166,34 +168,33 @@ void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
template class AccelerateSparse<double>;
template class AccelerateSparse<float>;
template<typename Scalar>
std::unique_ptr<SparseCholesky>
AppleAccelerateCholesky<Scalar>::Create(OrderingType ordering_type) {
template <typename Scalar>
std::unique_ptr<SparseCholesky> AppleAccelerateCholesky<Scalar>::Create(
OrderingType ordering_type) {
return std::unique_ptr<SparseCholesky>(
new AppleAccelerateCholesky<Scalar>(ordering_type));
}
template<typename Scalar>
template <typename Scalar>
AppleAccelerateCholesky<Scalar>::AppleAccelerateCholesky(
const OrderingType ordering_type)
: ordering_type_(ordering_type) {}
template<typename Scalar>
template <typename Scalar>
AppleAccelerateCholesky<Scalar>::~AppleAccelerateCholesky() {
FreeSymbolicFactorization();
FreeNumericFactorization();
}
template<typename Scalar>
template <typename Scalar>
CompressedRowSparseMatrix::StorageType
AppleAccelerateCholesky<Scalar>::StorageType() const {
return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
}
template<typename Scalar>
LinearSolverTerminationType
AppleAccelerateCholesky<Scalar>::Factorize(CompressedRowSparseMatrix* lhs,
std::string* message) {
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
CompressedRowSparseMatrix* lhs, std::string* message) {
CHECK_EQ(lhs->storage_type(), StorageType());
if (lhs == NULL) {
*message = "Failure: Input lhs is NULL.";
@@ -234,11 +235,9 @@ AppleAccelerateCholesky<Scalar>::Factorize(CompressedRowSparseMatrix* lhs,
return LINEAR_SOLVER_SUCCESS;
}
template<typename Scalar>
LinearSolverTerminationType
AppleAccelerateCholesky<Scalar>::Solve(const double* rhs,
double* solution,
std::string* message) {
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Solve(
const double* rhs, double* solution, std::string* message) {
CHECK_EQ(numeric_factor_->status, SparseStatusOK)
<< "Solve called without a call to Factorize first ("
<< SparseStatusToString(numeric_factor_->status) << ").";
@@ -262,7 +261,7 @@ AppleAccelerateCholesky<Scalar>::Solve(const double* rhs,
return LINEAR_SOLVER_SUCCESS;
}
template<typename Scalar>
template <typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
if (symbolic_factor_) {
SparseCleanup(*symbolic_factor_);
@@ -270,7 +269,7 @@ void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
}
}
template<typename Scalar>
template <typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
if (numeric_factor_) {
SparseCleanup(*numeric_factor_);
@@ -283,7 +282,7 @@ void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
template class AppleAccelerateCholesky<double>;
template class AppleAccelerateCholesky<float>;
}
}
} // namespace internal
} // namespace ceres
#endif // CERES_NO_ACCELERATE_SPARSE

View File

@@ -40,9 +40,9 @@
#include <string>
#include <vector>
#include "Accelerate.h"
#include "ceres/linear_solver.h"
#include "ceres/sparse_cholesky.h"
#include "Accelerate.h"
namespace ceres {
namespace internal {
@@ -50,11 +50,10 @@ namespace internal {
class CompressedRowSparseMatrix;
class TripletSparseMatrix;
template<typename Scalar>
struct SparseTypesTrait {
};
template <typename Scalar>
struct SparseTypesTrait {};
template<>
template <>
struct SparseTypesTrait<double> {
typedef DenseVector_Double DenseVector;
typedef SparseMatrix_Double SparseMatrix;
@@ -62,7 +61,7 @@ struct SparseTypesTrait<double> {
typedef SparseOpaqueFactorization_Double NumericFactorization;
};
template<>
template <>
struct SparseTypesTrait<float> {
typedef DenseVector_Float DenseVector;
typedef SparseMatrix_Float SparseMatrix;
@@ -70,14 +69,16 @@ struct SparseTypesTrait<float> {
typedef SparseOpaqueFactorization_Float NumericFactorization;
};
template<typename Scalar>
template <typename Scalar>
class AccelerateSparse {
public:
using DenseVector = typename SparseTypesTrait<Scalar>::DenseVector;
// Use ASSparseMatrix to avoid collision with ceres::internal::SparseMatrix.
using ASSparseMatrix = typename SparseTypesTrait<Scalar>::SparseMatrix;
using SymbolicFactorization = typename SparseTypesTrait<Scalar>::SymbolicFactorization;
using NumericFactorization = typename SparseTypesTrait<Scalar>::NumericFactorization;
using SymbolicFactorization =
typename SparseTypesTrait<Scalar>::SymbolicFactorization;
using NumericFactorization =
typename SparseTypesTrait<Scalar>::NumericFactorization;
// Solves a linear system given its symbolic (reference counted within
// NumericFactorization) and numeric factorization.
@@ -109,7 +110,7 @@ class AccelerateSparse {
// An implementation of SparseCholesky interface using Apple's Accelerate
// framework.
template<typename Scalar>
template <typename Scalar>
class AppleAccelerateCholesky : public SparseCholesky {
public:
// Factory
@@ -122,7 +123,7 @@ class AppleAccelerateCholesky : public SparseCholesky {
std::string* message) final;
LinearSolverTerminationType Solve(const double* rhs,
double* solution,
std::string* message) final ;
std::string* message) final;
private:
AppleAccelerateCholesky(const OrderingType ordering_type);
@@ -132,15 +133,15 @@ class AppleAccelerateCholesky : public SparseCholesky {
const OrderingType ordering_type_;
AccelerateSparse<Scalar> as_;
std::unique_ptr<typename AccelerateSparse<Scalar>::SymbolicFactorization>
symbolic_factor_;
symbolic_factor_;
std::unique_ptr<typename AccelerateSparse<Scalar>::NumericFactorization>
numeric_factor_;
numeric_factor_;
// Copy of rhs/solution if Scalar != double (necessitating a copy).
Eigen::Matrix<Scalar, Eigen::Dynamic, 1> scalar_rhs_and_solution_;
};
}
}
} // namespace internal
} // namespace ceres
#endif // CERES_NO_ACCELERATE_SPARSE

View File

@@ -35,6 +35,7 @@
#include <cstddef>
#include <string>
#include <vector>
#include "ceres/stringprintf.h"
#include "ceres/types.h"
namespace ceres {
@@ -45,7 +46,7 @@ using std::string;
bool IsArrayValid(const int size, const double* x) {
if (x != NULL) {
for (int i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return false;
}
}
@@ -59,7 +60,7 @@ int FindInvalidValue(const int size, const double* x) {
}
for (int i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return i;
}
}
@@ -92,14 +93,13 @@ void AppendArrayToString(const int size, const double* x, string* result) {
void MapValuesToContiguousRange(const int size, int* array) {
std::vector<int> unique_values(array, array + size);
std::sort(unique_values.begin(), unique_values.end());
unique_values.erase(std::unique(unique_values.begin(),
unique_values.end()),
unique_values.erase(std::unique(unique_values.begin(), unique_values.end()),
unique_values.end());
for (int i = 0; i < size; ++i) {
array[i] = std::lower_bound(unique_values.begin(),
unique_values.end(),
array[i]) - unique_values.begin();
array[i] =
std::lower_bound(unique_values.begin(), unique_values.end(), array[i]) -
unique_values.begin();
}
}
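A tiny illustration of what MapValuesToContiguousRange does to an array; the values below are chosen arbitrarily:

#include "ceres/array_utils.h"

void Demo() {
  int values[] = {10, 1, 20, 3, 1, 10, 3};
  // The distinct values {1, 3, 10, 20} get ranks 0..3, written back in place.
  ceres::internal::MapValuesToContiguousRange(7, values);
  // values is now {2, 0, 3, 1, 0, 2, 1}.
}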

View File

@@ -44,6 +44,7 @@
#define CERES_INTERNAL_ARRAY_UTILS_H_
#include <string>
#include "ceres/internal/port.h"
namespace ceres {
@@ -51,20 +52,22 @@ namespace internal {
// Fill the array x with an impossible value that the user code is
// never expected to compute.
void InvalidateArray(int size, double* x);
CERES_EXPORT_INTERNAL void InvalidateArray(int size, double* x);
// Check if all the entries of the array x are valid, i.e. all the
// values in the array should be finite and none of them should be
// equal to the "impossible" value used by InvalidateArray.
bool IsArrayValid(int size, const double* x);
CERES_EXPORT_INTERNAL bool IsArrayValid(int size, const double* x);
// If the array contains an invalid value, return the index for it,
// otherwise return size.
int FindInvalidValue(const int size, const double* x);
CERES_EXPORT_INTERNAL int FindInvalidValue(const int size, const double* x);
// Utility routine to print an array of doubles to a string. If the
// array pointer is NULL, it is treated as an array of zeros.
void AppendArrayToString(const int size, const double* x, std::string* result);
CERES_EXPORT_INTERNAL void AppendArrayToString(const int size,
const double* x,
std::string* result);
// This routine takes an array of integer values, sorts and uniques
// them and then maps each value in the array to its position in the
@@ -79,7 +82,7 @@ void AppendArrayToString(const int size, const double* x, std::string* result);
// gets mapped to
//
// [1 0 2 3 0 1 3]
void MapValuesToContiguousRange(int size, int* array);
CERES_EXPORT_INTERNAL void MapValuesToContiguousRange(int size, int* array);
} // namespace internal
} // namespace ceres

View File

@@ -29,6 +29,7 @@
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include "ceres/blas.h"
#include "ceres/internal/port.h"
#include "glog/logging.h"

View File

@@ -31,6 +31,7 @@
#include "ceres/block_evaluate_preparer.h"
#include <vector>
#include "ceres/block_sparse_matrix.h"
#include "ceres/casts.h"
#include "ceres/parameter_block.h"
@@ -53,10 +54,8 @@ void BlockEvaluatePreparer::Prepare(const ResidualBlock* residual_block,
double** jacobians) {
// If the overall jacobian is not available, use the scratch space.
if (jacobian == NULL) {
scratch_evaluate_preparer_.Prepare(residual_block,
residual_block_index,
jacobian,
jacobians);
scratch_evaluate_preparer_.Prepare(
residual_block, residual_block_index, jacobian, jacobians);
return;
}

View File

@@ -30,9 +30,9 @@
#include "ceres/block_jacobi_preconditioner.h"
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/casts.h"
#include "ceres/internal/eigen.h"
@@ -65,13 +65,11 @@ bool BlockJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
const int col_block_size = bs->cols[block_id].size;
int r, c, row_stride, col_stride;
CellInfo* cell_info = m_->GetCell(block_id, block_id,
&r, &c,
&row_stride, &col_stride);
CellInfo* cell_info =
m_->GetCell(block_id, block_id, &r, &c, &row_stride, &col_stride);
MatrixRef m(cell_info->values, row_stride, col_stride);
ConstMatrixRef b(values + cells[j].position,
row_block_size,
col_block_size);
ConstMatrixRef b(
values + cells[j].position, row_block_size, col_block_size);
m.block(r, c, col_block_size, col_block_size) += b.transpose() * b;
}
}
@@ -82,9 +80,7 @@ bool BlockJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
for (int i = 0; i < bs->cols.size(); ++i) {
const int block_size = bs->cols[i].size;
int r, c, row_stride, col_stride;
CellInfo* cell_info = m_->GetCell(i, i,
&r, &c,
&row_stride, &col_stride);
CellInfo* cell_info = m_->GetCell(i, i, &r, &c, &row_stride, &col_stride);
MatrixRef m(cell_info->values, row_stride, col_stride);
m.block(r, c, block_size, block_size).diagonal() +=
ConstVectorRef(D + position, block_size).array().square().matrix();

View File

@@ -32,7 +32,9 @@
#define CERES_INTERNAL_BLOCK_JACOBI_PRECONDITIONER_H_
#include <memory>
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/preconditioner.h"
namespace ceres {
@@ -51,7 +53,8 @@ struct CompressedRowBlockStructure;
// update the matrix by running Update(A, D). The values of the matrix A are
// inspected to construct the preconditioner. The vector D is applied as the
// D^TD diagonal term.
class BlockJacobiPreconditioner : public BlockSparseMatrixPreconditioner {
class CERES_EXPORT_INTERNAL BlockJacobiPreconditioner
: public BlockSparseMatrixPreconditioner {
public:
// A must remain valid while the BlockJacobiPreconditioner is.
explicit BlockJacobiPreconditioner(const BlockSparseMatrix& A);

View File

@@ -32,11 +32,11 @@
#include "ceres/block_evaluate_preparer.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {

View File

@@ -39,6 +39,7 @@
#define CERES_INTERNAL_BLOCK_JACOBIAN_WRITER_H_
#include <vector>
#include "ceres/evaluator.h"
#include "ceres/internal/port.h"
@@ -52,8 +53,7 @@ class SparseMatrix;
// TODO(sameeragarwal): This class needs documentation.
class BlockJacobianWriter {
public:
BlockJacobianWriter(const Evaluator::Options& options,
Program* program);
BlockJacobianWriter(const Evaluator::Options& options, Program* program);
// JacobianWriter interface.

View File

@@ -31,6 +31,7 @@
#include "ceres/block_random_access_dense_matrix.h"
#include <vector>
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
@@ -59,8 +60,7 @@ BlockRandomAccessDenseMatrix::BlockRandomAccessDenseMatrix(
// Assume that the user does not hold any locks on any cell blocks
// when they are calling SetZero.
BlockRandomAccessDenseMatrix::~BlockRandomAccessDenseMatrix() {
}
BlockRandomAccessDenseMatrix::~BlockRandomAccessDenseMatrix() {}
CellInfo* BlockRandomAccessDenseMatrix::GetCell(const int row_block_id,
const int col_block_id,

View File

@@ -31,11 +31,10 @@
#ifndef CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DENSE_MATRIX_H_
#define CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DENSE_MATRIX_H_
#include "ceres/block_random_access_matrix.h"
#include <memory>
#include <vector>
#include "ceres/block_random_access_matrix.h"
#include "ceres/internal/port.h"
namespace ceres {
@@ -51,7 +50,8 @@ namespace internal {
// pair.
//
// ReturnCell is a nop.
class BlockRandomAccessDenseMatrix : public BlockRandomAccessMatrix {
class CERES_EXPORT_INTERNAL BlockRandomAccessDenseMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is a vector of block sizes. The resulting matrix has
// blocks.size() * blocks.size() cells.

View File

@@ -63,9 +63,8 @@ BlockRandomAccessDiagonalMatrix::BlockRandomAccessDiagonalMatrix(
num_nonzeros += blocks_[i] * blocks_[i];
}
VLOG(1) << "Matrix Size [" << num_cols
<< "," << num_cols
<< "] " << num_nonzeros;
VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
<< num_nonzeros;
tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
tsm_->set_num_nonzeros(num_nonzeros);
@@ -116,8 +115,7 @@ CellInfo* BlockRandomAccessDiagonalMatrix::GetCell(int row_block_id,
// when they are calling SetZero.
void BlockRandomAccessDiagonalMatrix::SetZero() {
if (tsm_->num_nonzeros()) {
VectorRef(tsm_->mutable_values(),
tsm_->num_nonzeros()).setZero();
VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
}
}
@@ -126,11 +124,8 @@ void BlockRandomAccessDiagonalMatrix::Invert() {
for (int i = 0; i < blocks_.size(); ++i) {
const int block_size = blocks_[i];
MatrixRef block(values, block_size, block_size);
block =
block
.selfadjointView<Eigen::Upper>()
.llt()
.solve(Matrix::Identity(block_size, block_size));
block = block.selfadjointView<Eigen::Upper>().llt().solve(
Matrix::Identity(block_size, block_size));
values += block_size * block_size;
}
}
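What the loop in Invert() does for one diagonal block, written stand-alone with Eigen (the function name is made up): invert a symmetric positive-definite block via its Cholesky factorization.

#include "Eigen/Dense"

Eigen::MatrixXd InvertSPDBlock(const Eigen::MatrixXd& block) {
  return block.selfadjointView<Eigen::Upper>().llt().solve(
      Eigen::MatrixXd::Identity(block.rows(), block.cols()));
}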

View File

@@ -46,11 +46,13 @@ namespace internal {
// A thread safe block diagonal matrix implementation of
// BlockRandomAccessMatrix.
class BlockRandomAccessDiagonalMatrix : public BlockRandomAccessMatrix {
class CERES_EXPORT_INTERNAL BlockRandomAccessDiagonalMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes.
explicit BlockRandomAccessDiagonalMatrix(const std::vector<int>& blocks);
BlockRandomAccessDiagonalMatrix(const BlockRandomAccessDiagonalMatrix&) = delete;
BlockRandomAccessDiagonalMatrix(const BlockRandomAccessDiagonalMatrix&) =
delete;
void operator=(const BlockRandomAccessDiagonalMatrix&) = delete;
// The destructor is not thread safe. It assumes that no one is

View File

@@ -33,8 +33,7 @@
namespace ceres {
namespace internal {
BlockRandomAccessMatrix::~BlockRandomAccessMatrix() {
}
BlockRandomAccessMatrix::~BlockRandomAccessMatrix() {}
} // namespace internal
} // namespace ceres

View File

@@ -35,6 +35,8 @@
#include <mutex>
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -91,7 +93,7 @@ struct CellInfo {
std::mutex m;
};
class BlockRandomAccessMatrix {
class CERES_EXPORT_INTERNAL BlockRandomAccessMatrix {
public:
virtual ~BlockRandomAccessMatrix();

View File

@@ -50,10 +50,8 @@ using std::set;
using std::vector;
BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
const vector<int>& blocks,
const set<pair<int, int>>& block_pairs)
: kMaxRowBlocks(10 * 1000 * 1000),
blocks_(blocks) {
const vector<int>& blocks, const set<pair<int, int>>& block_pairs)
: kMaxRowBlocks(10 * 1000 * 1000), blocks_(blocks) {
CHECK_LT(blocks.size(), kMaxRowBlocks);
// Build the row/column layout vector and count the number of scalar
@@ -75,9 +73,8 @@ BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
num_nonzeros += row_block_size * col_block_size;
}
VLOG(1) << "Matrix Size [" << num_cols
<< "," << num_cols
<< "] " << num_nonzeros;
VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
<< num_nonzeros;
tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
tsm_->set_num_nonzeros(num_nonzeros);
@@ -105,11 +102,11 @@ BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
layout_[IntPairToLong(row_block_id, col_block_id)]->values - values;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c, ++pos) {
rows[pos] = block_positions_[row_block_id] + r;
cols[pos] = block_positions_[col_block_id] + c;
values[pos] = 1.0;
DCHECK_LT(rows[pos], tsm_->num_rows());
DCHECK_LT(cols[pos], tsm_->num_rows());
rows[pos] = block_positions_[row_block_id] + r;
cols[pos] = block_positions_[col_block_id] + c;
values[pos] = 1.0;
DCHECK_LT(rows[pos], tsm_->num_rows());
DCHECK_LT(cols[pos], tsm_->num_rows());
}
}
}
@@ -129,7 +126,7 @@ CellInfo* BlockRandomAccessSparseMatrix::GetCell(int row_block_id,
int* col,
int* row_stride,
int* col_stride) {
const LayoutType::iterator it =
const LayoutType::iterator it =
layout_.find(IntPairToLong(row_block_id, col_block_id));
if (it == layout_.end()) {
return NULL;
@@ -147,8 +144,7 @@ CellInfo* BlockRandomAccessSparseMatrix::GetCell(int row_block_id,
// when they are calling SetZero.
void BlockRandomAccessSparseMatrix::SetZero() {
if (tsm_->num_nonzeros()) {
VectorRef(tsm_->mutable_values(),
tsm_->num_nonzeros()).setZero();
VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
}
}
@@ -164,7 +160,9 @@ void BlockRandomAccessSparseMatrix::SymmetricRightMultiply(const double* x,
const int col_block_pos = block_positions_[col];
MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
cell_position_and_data.second, row_block_size, col_block_size,
cell_position_and_data.second,
row_block_size,
col_block_size,
x + col_block_pos,
y + row_block_pos);
@@ -174,7 +172,9 @@ void BlockRandomAccessSparseMatrix::SymmetricRightMultiply(const double* x,
// triangular multiply also.
if (row != col) {
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
cell_position_and_data.second, row_block_size, col_block_size,
cell_position_and_data.second,
row_block_size,
col_block_size,
x + row_block_pos,
y + col_block_pos);
}

View File

@@ -39,10 +39,10 @@
#include <vector>
#include "ceres/block_random_access_matrix.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/types.h"
#include "ceres/small_blas.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
namespace ceres {
namespace internal {
@@ -51,7 +51,8 @@ namespace internal {
// BlockRandomAccessMatrix. Internally a TripletSparseMatrix is used
// for doing the actual storage. This class augments this matrix with
// an unordered_map that allows random read/write access.
class BlockRandomAccessSparseMatrix : public BlockRandomAccessMatrix {
class CERES_EXPORT_INTERNAL BlockRandomAccessSparseMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes. block_pairs is a set of
// <row_block_id, col_block_id> pairs to identify the non-zero cells
@@ -110,7 +111,7 @@ class BlockRandomAccessSparseMatrix : public BlockRandomAccessMatrix {
// A mapping from <row_block_id, col_block_id> to the position in
// the values array of tsm_ where the block is stored.
typedef std::unordered_map<long int, CellInfo* > LayoutType;
typedef std::unordered_map<long int, CellInfo*> LayoutType;
LayoutType layout_;
// In order traversal of contents of the matrix. This allows us to

View File

@@ -30,9 +30,10 @@
#include "ceres/block_sparse_matrix.h"
#include <cstddef>
#include <algorithm>
#include <cstddef>
#include <vector>
#include "ceres/block_structure.h"
#include "ceres/internal/eigen.h"
#include "ceres/random.h"
@@ -77,8 +78,8 @@ BlockSparseMatrix::BlockSparseMatrix(
CHECK_GE(num_rows_, 0);
CHECK_GE(num_cols_, 0);
CHECK_GE(num_nonzeros_, 0);
VLOG(2) << "Allocating values array with "
<< num_nonzeros_ * sizeof(double) << " bytes."; // NOLINT
VLOG(2) << "Allocating values array with " << num_nonzeros_ * sizeof(double)
<< " bytes."; // NOLINT
values_.reset(new double[num_nonzeros_]);
max_num_nonzeros_ = num_nonzeros_;
CHECK(values_ != nullptr);
@@ -88,7 +89,7 @@ void BlockSparseMatrix::SetZero() {
std::fill(values_.get(), values_.get() + num_nonzeros_, 0.0);
}
void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
@@ -101,7 +102,9 @@ void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
values_.get() + cells[j].position, row_block_size, col_block_size,
values_.get() + cells[j].position,
row_block_size,
col_block_size,
x + col_block_pos,
y + row_block_pos);
}
@@ -121,7 +124,9 @@ void BlockSparseMatrix::LeftMultiply(const double* x, double* y) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
values_.get() + cells[j].position, row_block_size, col_block_size,
values_.get() + cells[j].position,
row_block_size,
col_block_size,
x + row_block_pos,
y + col_block_pos);
}
@@ -138,8 +143,8 @@ void BlockSparseMatrix::SquaredColumnNorm(double* x) const {
int col_block_id = cells[j].block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
const MatrixRef m(values_.get() + cells[j].position,
row_block_size, col_block_size);
const MatrixRef m(
values_.get() + cells[j].position, row_block_size, col_block_size);
VectorRef(x + col_block_pos, col_block_size) += m.colwise().squaredNorm();
}
}
@@ -155,8 +160,8 @@ void BlockSparseMatrix::ScaleColumns(const double* scale) {
int col_block_id = cells[j].block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixRef m(values_.get() + cells[j].position,
row_block_size, col_block_size);
MatrixRef m(
values_.get() + cells[j].position, row_block_size, col_block_size);
m *= ConstVectorRef(scale + col_block_pos, col_block_size).asDiagonal();
}
}
@@ -178,8 +183,8 @@ void BlockSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
int jac_pos = cells[j].position;
m.block(row_block_pos, col_block_pos, row_block_size, col_block_size)
+= MatrixRef(values_.get() + jac_pos, row_block_size, col_block_size);
m.block(row_block_pos, col_block_pos, row_block_size, col_block_size) +=
MatrixRef(values_.get() + jac_pos, row_block_size, col_block_size);
}
}
}
@@ -201,7 +206,7 @@ void BlockSparseMatrix::ToTripletSparseMatrix(
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
int jac_pos = cells[j].position;
for (int r = 0; r < row_block_size; ++r) {
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c, ++jac_pos) {
matrix->mutable_rows()[jac_pos] = row_block_pos + r;
matrix->mutable_cols()[jac_pos] = col_block_pos + c;
@@ -215,8 +220,7 @@ void BlockSparseMatrix::ToTripletSparseMatrix(
// Return a pointer to the block structure. We continue to hold
// ownership of the object though.
const CompressedRowBlockStructure* BlockSparseMatrix::block_structure()
const {
const CompressedRowBlockStructure* BlockSparseMatrix::block_structure() const {
return block_structure_.get();
}
@@ -233,7 +237,8 @@ void BlockSparseMatrix::ToTextFile(FILE* file) const {
int jac_pos = cells[j].position;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c) {
fprintf(file, "% 10d % 10d %17f\n",
fprintf(file,
"% 10d % 10d %17f\n",
row_block_pos + r,
col_block_pos + c,
values_[jac_pos++]);
@@ -369,7 +374,6 @@ BlockSparseMatrix* BlockSparseMatrix::CreateRandomMatrix(
int row_block_position = 0;
int value_position = 0;
for (int r = 0; r < options.num_row_blocks; ++r) {
const int delta_block_size =
Uniform(options.max_row_block_size - options.min_row_block_size);
const int row_block_size = options.min_row_block_size + delta_block_size;

View File

@@ -35,9 +35,11 @@
#define CERES_INTERNAL_BLOCK_SPARSE_MATRIX_H_
#include <memory>
#include "ceres/block_structure.h"
#include "ceres/sparse_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/sparse_matrix.h"
namespace ceres {
namespace internal {
@@ -52,7 +54,7 @@ class TripletSparseMatrix;
//
// internal/ceres/block_structure.h
//
class BlockSparseMatrix : public SparseMatrix {
class CERES_EXPORT_INTERNAL BlockSparseMatrix : public SparseMatrix {
public:
// Construct a block sparse matrix with a fully initialized
// CompressedRowBlockStructure objected. The matrix takes over
@@ -77,11 +79,13 @@ class BlockSparseMatrix : public SparseMatrix {
void ToDenseMatrix(Matrix* dense_matrix) const final;
void ToTextFile(FILE* file) const final;
// clang-format off
int num_rows() const final { return num_rows_; }
int num_cols() const final { return num_cols_; }
int num_nonzeros() const final { return num_nonzeros_; }
const double* values() const final { return values_.get(); }
double* mutable_values() final { return values_.get(); }
// clang-format on
void ToTripletSparseMatrix(TripletSparseMatrix* matrix) const;
const CompressedRowBlockStructure* block_structure() const;
@@ -94,8 +98,7 @@ class BlockSparseMatrix : public SparseMatrix {
void DeleteRowBlocks(int delta_row_blocks);
static BlockSparseMatrix* CreateDiagonalMatrix(
const double* diagonal,
const std::vector<Block>& column_blocks);
const double* diagonal, const std::vector<Block>& column_blocks);
struct RandomMatrixOptions {
int num_row_blocks = 0;

View File

@@ -35,7 +35,7 @@ namespace internal {
bool CellLessThan(const Cell& lhs, const Cell& rhs) {
if (lhs.block_id == rhs.block_id) {
return (lhs.position < rhs.position);
return (lhs.position < rhs.position);
}
return (lhs.block_id < rhs.block_id);
}

View File

@@ -40,6 +40,7 @@
#include <cstdint>
#include <vector>
#include "ceres/internal/port.h"
namespace ceres {

View File

@@ -34,9 +34,10 @@
#include "ceres/c_api.h"
#include <vector>
#include <iostream>
#include <string>
#include <vector>
#include "ceres/cost_function.h"
#include "ceres/loss_function.h"
#include "ceres/problem.h"
@@ -70,8 +71,7 @@ class CallbackCostFunction : public ceres::CostFunction {
int num_residuals,
int num_parameter_blocks,
int* parameter_block_sizes)
: cost_function_(cost_function),
user_data_(user_data) {
: cost_function_(cost_function), user_data_(user_data) {
set_num_residuals(num_residuals);
for (int i = 0; i < num_parameter_blocks; ++i) {
mutable_parameter_block_sizes()->push_back(parameter_block_sizes[i]);
@@ -81,12 +81,10 @@ class CallbackCostFunction : public ceres::CostFunction {
virtual ~CallbackCostFunction() {}
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
return (*cost_function_)(user_data_,
const_cast<double**>(parameters),
residuals,
jacobians);
double* residuals,
double** jacobians) const final {
return (*cost_function_)(
user_data_, const_cast<double**>(parameters), residuals, jacobians);
}
private:
@@ -100,7 +98,7 @@ class CallbackLossFunction : public ceres::LossFunction {
public:
explicit CallbackLossFunction(ceres_loss_function_t loss_function,
void* user_data)
: loss_function_(loss_function), user_data_(user_data) {}
: loss_function_(loss_function), user_data_(user_data) {}
void Evaluate(double sq_norm, double* rho) const final {
(*loss_function_)(user_data_, sq_norm, rho);
}
@@ -134,8 +132,8 @@ void ceres_free_stock_loss_function_data(void* loss_function_data) {
void ceres_stock_loss_function(void* user_data,
double squared_norm,
double out[3]) {
reinterpret_cast<ceres::LossFunction*>(user_data)
->Evaluate(squared_norm, out);
reinterpret_cast<ceres::LossFunction*>(user_data)->Evaluate(squared_norm,
out);
}
ceres_residual_block_id_t* ceres_problem_add_residual_block(
@@ -159,16 +157,15 @@ ceres_residual_block_id_t* ceres_problem_add_residual_block(
ceres::LossFunction* callback_loss_function = NULL;
if (loss_function != NULL) {
callback_loss_function = new CallbackLossFunction(loss_function,
loss_function_data);
callback_loss_function =
new CallbackLossFunction(loss_function, loss_function_data);
}
std::vector<double*> parameter_blocks(parameters,
parameters + num_parameter_blocks);
return reinterpret_cast<ceres_residual_block_id_t*>(
ceres_problem->AddResidualBlock(callback_cost_function,
callback_loss_function,
parameter_blocks));
ceres_problem->AddResidualBlock(
callback_cost_function, callback_loss_function, parameter_blocks));
}
void ceres_solve(ceres_problem_t* c_problem) {

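The wrapper above forwards to the user-supplied C callback as (*cost_function_)(user_data, parameters, residuals, jacobians). A hedged sketch of such a callback for the residual f(x) = 10 - x (the function name and problem are illustrative; the exact callback typedef lives in ceres/c_api.h):

// parameters[block][index] layout and row-major jacobians, as used by the
// CallbackCostFunction::Evaluate shown above.
static int MyResidual(void* user_data,
                      double** parameters,
                      double* residuals,
                      double** jacobians) {
  (void)user_data;
  const double x = parameters[0][0];
  residuals[0] = 10.0 - x;
  if (jacobians != NULL && jacobians[0] != NULL) {
    jacobians[0][0] = -1.0;  // d(residual)/dx
  }
  return 1;  // non-zero is treated as success by the wrapper's bool return
}
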
View File

@@ -28,8 +28,10 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include <iostream> // NO LINT
#include "ceres/callbacks.h"
#include <iostream> // NO LINT
#include "ceres/program.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
@@ -76,8 +78,7 @@ CallbackReturnType GradientProblemSolverStateUpdatingCallback::operator()(
LoggingCallback::LoggingCallback(const MinimizerType minimizer_type,
const bool log_to_stdout)
: minimizer_type(minimizer_type),
log_to_stdout_(log_to_stdout) {}
: minimizer_type(minimizer_type), log_to_stdout_(log_to_stdout) {}
LoggingCallback::~LoggingCallback() {}
@@ -99,11 +100,13 @@ CallbackReturnType LoggingCallback::operator()(
summary.iteration_time_in_seconds,
summary.cumulative_time_in_seconds);
} else if (minimizer_type == TRUST_REGION) {
// clang-format off
if (summary.iteration == 0) {
output = "iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time\n"; // NOLINT
}
const char* kReportRowFormat =
"% 4d % 8e % 3.2e % 3.2e % 3.2e % 3.2e % 3.2e % 4d % 3.2e % 3.2e"; // NOLINT
// clang-format on
output += StringPrintf(kReportRowFormat,
summary.iteration,
summary.cost,

View File

@@ -32,8 +32,9 @@
#define CERES_INTERNAL_CALLBACKS_H_
#include <string>
#include "ceres/iteration_callback.h"
#include "ceres/internal/port.h"
#include "ceres/iteration_callback.h"
namespace ceres {
namespace internal {
@@ -47,6 +48,7 @@ class StateUpdatingCallback : public IterationCallback {
StateUpdatingCallback(Program* program, double* parameters);
virtual ~StateUpdatingCallback();
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
Program* program_;
double* parameters_;
@@ -61,6 +63,7 @@ class GradientProblemSolverStateUpdatingCallback : public IterationCallback {
double* user_parameters);
virtual ~GradientProblemSolverStateUpdatingCallback();
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
int num_parameters_;
const double* internal_parameters_;

View File

@@ -31,8 +31,8 @@
#include "ceres/canonical_views_clustering.h"
#include <unordered_set>
#include <unordered_map>
#include <unordered_set>
#include "ceres/graph.h"
#include "ceres/map_util.h"
@@ -126,8 +126,7 @@ void CanonicalViewsClustering::ComputeClustering(
// Add canonical view if quality improves, or if minimum is not
// yet met, otherwise break.
if ((best_difference <= 0) &&
(centers->size() >= options_.min_views)) {
if ((best_difference <= 0) && (centers->size() >= options_.min_views)) {
break;
}
@@ -141,8 +140,7 @@ void CanonicalViewsClustering::ComputeClustering(
// Return the set of vertices of the graph which have valid vertex
// weights.
void CanonicalViewsClustering::FindValidViews(
IntSet* valid_views) const {
void CanonicalViewsClustering::FindValidViews(IntSet* valid_views) const {
const IntSet& views = graph_->vertices();
for (const auto& view : views) {
if (graph_->VertexWeight(view) != WeightedGraph<int>::InvalidWeight()) {
@@ -154,8 +152,7 @@ void CanonicalViewsClustering::FindValidViews(
// Computes the difference in the quality score if 'candidate' were
// added to the set of canonical views.
double CanonicalViewsClustering::ComputeClusteringQualityDifference(
const int candidate,
const vector<int>& centers) const {
const int candidate, const vector<int>& centers) const {
// View score.
double difference =
options_.view_score_weight * graph_->VertexWeight(candidate);
@@ -179,7 +176,7 @@ double CanonicalViewsClustering::ComputeClusteringQualityDifference(
// Orthogonality.
for (int i = 0; i < centers.size(); ++i) {
difference -= options_.similarity_penalty_weight *
graph_->EdgeWeight(centers[i], candidate);
graph_->EdgeWeight(centers[i], candidate);
}
return difference;
@@ -192,8 +189,7 @@ void CanonicalViewsClustering::UpdateCanonicalViewAssignments(
for (const auto& neighbor : neighbors) {
const double old_similarity =
FindWithDefault(view_to_canonical_view_similarity_, neighbor, 0.0);
const double new_similarity =
graph_->EdgeWeight(neighbor, canonical_view);
const double new_similarity = graph_->EdgeWeight(neighbor, canonical_view);
if (new_similarity > old_similarity) {
view_to_canonical_view_[neighbor] = canonical_view;
view_to_canonical_view_similarity_[neighbor] = new_similarity;
@@ -203,8 +199,7 @@ void CanonicalViewsClustering::UpdateCanonicalViewAssignments(
// Assign a cluster id to each view.
void CanonicalViewsClustering::ComputeClusterMembership(
const vector<int>& centers,
IntMap* membership) const {
const vector<int>& centers, IntMap* membership) const {
CHECK(membership != nullptr);
membership->clear();

View File

@@ -45,6 +45,7 @@
#include <vector>
#include "ceres/graph.h"
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -94,13 +95,13 @@ struct CanonicalViewsClusteringOptions;
// It is possible depending on the configuration of the clustering
// algorithm that some of the vertices may not be assigned to any
// cluster. In this case they are assigned to a cluster with id = -1;
void ComputeCanonicalViewsClustering(
CERES_EXPORT_INTERNAL void ComputeCanonicalViewsClustering(
const CanonicalViewsClusteringOptions& options,
const WeightedGraph<int>& graph,
std::vector<int>* centers,
std::unordered_map<int, int>* membership);
struct CanonicalViewsClusteringOptions {
struct CERES_EXPORT_INTERNAL CanonicalViewsClusteringOptions {
// The minimum number of canonical views to compute.
int min_views = 3;

View File

@@ -56,15 +56,15 @@ struct identity_ {
//
// base::identity_ is used to make a non-deduced context, which
// forces all callers to explicitly specify the template argument.
template<typename To>
template <typename To>
inline To implicit_cast(typename identity_<To>::type to) {
return to;
}
// This version of implicit_cast is used when two template arguments
// are specified. It's obsolete and should not be used.
template<typename To, typename From>
inline To implicit_cast(typename identity_<From>::type const &f) {
template <typename To, typename From>
inline To implicit_cast(typename identity_<From>::type const& f) {
return f;
}
@@ -86,8 +86,8 @@ inline To implicit_cast(typename identity_<From>::type const &f) {
// if (dynamic_cast<Subclass2>(foo)) HandleASubclass2Object(foo);
// You should design the code some other way not to need this.
template<typename To, typename From> // use like this: down_cast<T*>(foo);
inline To down_cast(From* f) { // so we only accept pointers
template <typename To, typename From> // use like this: down_cast<T*>(foo);
inline To down_cast(From* f) { // so we only accept pointers
// Ensures that To is a sub-type of From *. This test is here only
// for compile-time type checking, and has no overhead in an
// optimized build at run-time, as it will be optimized away

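To make the comments above concrete, a short usage sketch of the two casts (Base and Derived are made-up types for illustration):

struct Base { virtual ~Base() {} };
struct Derived : public Base {};

void CastExamples(Derived* d) {
  // Up-cast: the template argument must be written out because identity_
  // creates a non-deduced context.
  Base* b = implicit_cast<Base*>(d);
  // Down-cast: pointers only, as noted above; the sub-type check is purely
  // compile-time and optimized away at run time.
  Derived* d2 = down_cast<Derived*>(b);
  (void)d2;
}
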
View File

@@ -33,8 +33,9 @@
#include <algorithm>
#include <memory>
#include "ceres/linear_operator.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_operator.h"
namespace ceres {
namespace internal {
@@ -79,9 +80,8 @@ class SparseMatrix;
// Note: This class is not thread safe, since it uses some temporary storage.
class CgnrLinearOperator : public LinearOperator {
public:
CgnrLinearOperator(const LinearOperator& A, const double *D)
: A_(A), D_(D), z_(new double[A.num_rows()]) {
}
CgnrLinearOperator(const LinearOperator& A, const double* D)
: A_(A), D_(D), z_(new double[A.num_rows()]) {}
virtual ~CgnrLinearOperator() {}
void RightMultiply(const double* x, double* y) const final {
@@ -96,8 +96,8 @@ class CgnrLinearOperator : public LinearOperator {
// y = y + DtDx
if (D_ != NULL) {
int n = A_.num_cols();
VectorRef(y, n).array() += ConstVectorRef(D_, n).array().square() *
ConstVectorRef(x, n).array();
VectorRef(y, n).array() +=
ConstVectorRef(D_, n).array().square() * ConstVectorRef(x, n).array();
}
}

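The guarded block above adds the regularizer term y += D'D x elementwise, since D is diagonal and stored as a plain array. A standalone Eigen sketch of the same operation (function and variable names are illustrative):

#include "Eigen/Dense"

// d holds the diagonal of D, so D'D x reduces to the elementwise product
// d.square() * x.
void AddDtDx(const Eigen::VectorXd& d,
             const Eigen::VectorXd& x,
             Eigen::VectorXd* y) {
  y->array() += d.array().square() * x.array();
}
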
View File

@@ -32,6 +32,7 @@
#define CERES_INTERNAL_CGNR_SOLVER_H_
#include <memory>
#include "ceres/linear_solver.h"
namespace ceres {
@@ -55,11 +56,10 @@ class CgnrSolver : public BlockSparseMatrixSolver {
void operator=(const CgnrSolver&) = delete;
virtual ~CgnrSolver();
Summary SolveImpl(
BlockSparseMatrix* A,
const double* b,
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) final;
Summary SolveImpl(BlockSparseMatrix* A,
const double* b,
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) final;
private:
const LinearSolver::Options options_;

View File

@@ -30,8 +30,9 @@
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include <vector>
#include <algorithm>
#include <vector>
#include "ceres/internal/port.h"
#include "glog/logging.h"
@@ -40,13 +41,12 @@ namespace internal {
using std::vector;
void CompressedColumnScalarMatrixToBlockMatrix(
const int* scalar_rows,
const int* scalar_cols,
const vector<int>& row_blocks,
const vector<int>& col_blocks,
vector<int>* block_rows,
vector<int>* block_cols) {
void CompressedColumnScalarMatrixToBlockMatrix(const int* scalar_rows,
const int* scalar_cols,
const vector<int>& row_blocks,
const vector<int>& col_blocks,
vector<int>* block_rows,
vector<int>* block_cols) {
CHECK(block_rows != nullptr);
CHECK(block_cols != nullptr);
block_rows->clear();
@@ -71,10 +71,8 @@ void CompressedColumnScalarMatrixToBlockMatrix(
for (int col_block = 0; col_block < num_col_blocks; ++col_block) {
int column_size = 0;
for (int idx = scalar_cols[c]; idx < scalar_cols[c + 1]; ++idx) {
vector<int>::const_iterator it =
std::lower_bound(row_block_starts.begin(),
row_block_starts.end(),
scalar_rows[idx]);
vector<int>::const_iterator it = std::lower_bound(
row_block_starts.begin(), row_block_starts.end(), scalar_rows[idx]);
// Since we are using lower_bound, it will return the row id
// where the row block starts. For everything but the first row
// of the block, where these values will be the same, we can
@@ -104,7 +102,7 @@ void BlockOrderingToScalarOrdering(const vector<int>& blocks,
// block_starts = [0, block1, block1 + block2 ..]
vector<int> block_starts(num_blocks);
for (int i = 0, cursor = 0; i < num_blocks ; ++i) {
for (int i = 0, cursor = 0; i < num_blocks; ++i) {
block_starts[i] = cursor;
cursor += blocks[i];
}

View File

@@ -32,6 +32,7 @@
#define CERES_INTERNAL_COMPRESSED_COL_SPARSE_MATRIX_UTILS_H_
#include <vector>
#include "ceres/internal/port.h"
namespace ceres {
@@ -47,7 +48,7 @@ namespace internal {
// and column block j, then it is expected that A contains at least
// one non-zero entry corresponding to the top left entry of c_ij,
// as that entry is used to detect the presence of a non-zero c_ij.
void CompressedColumnScalarMatrixToBlockMatrix(
CERES_EXPORT_INTERNAL void CompressedColumnScalarMatrixToBlockMatrix(
const int* scalar_rows,
const int* scalar_cols,
const std::vector<int>& row_blocks,
@@ -58,7 +59,7 @@ void CompressedColumnScalarMatrixToBlockMatrix(
// Given a set of blocks and a permutation of these blocks, compute
// the corresponding "scalar" ordering, where the scalar ordering is of
// size sum(blocks).
void BlockOrderingToScalarOrdering(
CERES_EXPORT_INTERNAL void BlockOrderingToScalarOrdering(
const std::vector<int>& blocks,
const std::vector<int>& block_ordering,
std::vector<int>* scalar_ordering);
@@ -101,7 +102,7 @@ void SolveUpperTriangularTransposeInPlace(IntegerType num_cols,
const double v = values[idx];
rhs_and_solution[c] -= v * rhs_and_solution[r];
}
rhs_and_solution[c] = rhs_and_solution[c] / values[cols[c + 1] - 1];
rhs_and_solution[c] = rhs_and_solution[c] / values[cols[c + 1] - 1];
}
}
@@ -132,7 +133,7 @@ void SolveRTRWithSparseRHS(IntegerType num_cols,
const double v = values[idx];
solution[c] -= v * solution[r];
}
solution[c] = solution[c] / values[cols[c + 1] - 1];
solution[c] = solution[c] / values[cols[c + 1] - 1];
}
SolveUpperTriangularInPlace(num_cols, rows, cols, values, solution);

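A small worked example of the block-to-scalar ordering described above, using the declaration from this hunk (the values are chosen purely for illustration): with block sizes {2, 3}, block 0 covers scalar indices 0-1 and block 1 covers 2-4, so the block permutation {1, 0} expands to the scalar ordering {2, 3, 4, 0, 1}.

#include <vector>
#include "ceres/compressed_col_sparse_matrix_utils.h"

void BlockOrderingExample() {
  std::vector<int> blocks = {2, 3};          // block sizes
  std::vector<int> block_ordering = {1, 0};  // list block 1 before block 0
  std::vector<int> scalar_ordering;
  ceres::internal::BlockOrderingToScalarOrdering(
      blocks, block_ordering, &scalar_ordering);
  // scalar_ordering is expected to be {2, 3, 4, 0, 1}.
}
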
View File

@@ -44,23 +44,21 @@
namespace ceres {
namespace internal {
using std::adjacent_find;
using std::make_pair;
using std::pair;
using std::vector;
using std::adjacent_find;
void CompressedRowJacobianWriter::PopulateJacobianRowAndColumnBlockVectors(
const Program* program, CompressedRowSparseMatrix* jacobian) {
const vector<ParameterBlock*>& parameter_blocks =
program->parameter_blocks();
const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
vector<int>& col_blocks = *(jacobian->mutable_col_blocks());
col_blocks.resize(parameter_blocks.size());
for (int i = 0; i < parameter_blocks.size(); ++i) {
col_blocks[i] = parameter_blocks[i]->LocalSize();
}
const vector<ResidualBlock*>& residual_blocks =
program->residual_blocks();
const vector<ResidualBlock*>& residual_blocks = program->residual_blocks();
vector<int>& row_blocks = *(jacobian->mutable_row_blocks());
row_blocks.resize(residual_blocks.size());
for (int i = 0; i < residual_blocks.size(); ++i) {
@@ -69,11 +67,10 @@ void CompressedRowJacobianWriter::PopulateJacobianRowAndColumnBlockVectors(
}
void CompressedRowJacobianWriter::GetOrderedParameterBlocks(
const Program* program,
int residual_id,
vector<pair<int, int>>* evaluated_jacobian_blocks) {
const ResidualBlock* residual_block =
program->residual_blocks()[residual_id];
const Program* program,
int residual_id,
vector<pair<int, int>>* evaluated_jacobian_blocks) {
const ResidualBlock* residual_block = program->residual_blocks()[residual_id];
const int num_parameter_blocks = residual_block->NumParameterBlocks();
for (int j = 0; j < num_parameter_blocks; ++j) {
@@ -88,8 +85,7 @@ void CompressedRowJacobianWriter::GetOrderedParameterBlocks(
}
SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
const vector<ResidualBlock*>& residual_blocks =
program_->residual_blocks();
const vector<ResidualBlock*>& residual_blocks = program_->residual_blocks();
int total_num_residuals = program_->NumResiduals();
int total_num_effective_parameters = program_->NumEffectiveParameters();
@@ -112,11 +108,10 @@ SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
// Allocate more space than needed to store the jacobian so that when the LM
// algorithm adds the diagonal, no reallocation is necessary. This reduces
// peak memory usage significantly.
CompressedRowSparseMatrix* jacobian =
new CompressedRowSparseMatrix(
total_num_residuals,
total_num_effective_parameters,
num_jacobian_nonzeros + total_num_effective_parameters);
CompressedRowSparseMatrix* jacobian = new CompressedRowSparseMatrix(
total_num_residuals,
total_num_effective_parameters,
num_jacobian_nonzeros + total_num_effective_parameters);
// At this stage, the CompressedRowSparseMatrix is in an invalid state. But this
// seems to be the only way to construct it without doing a memory copy.
@@ -148,8 +143,7 @@ SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
std::string parameter_block_description;
for (int j = 0; j < num_parameter_blocks; ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_block_description +=
parameter_block->ToString() + "\n";
parameter_block_description += parameter_block->ToString() + "\n";
}
LOG(FATAL) << "Ceres internal error: "
<< "Duplicate parameter blocks detected in a cost function. "
@@ -196,7 +190,7 @@ SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
void CompressedRowJacobianWriter::Write(int residual_id,
int residual_offset,
double **jacobians,
double** jacobians,
SparseMatrix* base_jacobian) {
CompressedRowSparseMatrix* jacobian =
down_cast<CompressedRowSparseMatrix*>(base_jacobian);

View File

@@ -50,8 +50,7 @@ class CompressedRowJacobianWriter {
public:
CompressedRowJacobianWriter(Evaluator::Options /* ignored */,
Program* program)
: program_(program) {
}
: program_(program) {}
// PopulateJacobianRowAndColumnBlockVectors sets col_blocks and
// row_blocks for a CompressedRowSparseMatrix, based on the
@@ -64,8 +63,7 @@ class CompressedRowJacobianWriter {
// (Jacobian writers do not fall under any type hierarchy; they only
// have to provide an interface as specified in program_evaluator.h).
static void PopulateJacobianRowAndColumnBlockVectors(
const Program* program,
CompressedRowSparseMatrix* jacobian);
const Program* program, CompressedRowSparseMatrix* jacobian);
// It is necessary to determine the order of the jacobian blocks
// before copying them into a CompressedRowSparseMatrix (or derived
@@ -99,7 +97,7 @@ class CompressedRowJacobianWriter {
void Write(int residual_id,
int residual_offset,
double **jacobians,
double** jacobians,
SparseMatrix* base_jacobian);
private:

View File

@@ -33,6 +33,7 @@
#include <algorithm>
#include <numeric>
#include <vector>
#include "ceres/crs_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/random.h"

View File

@@ -32,6 +32,7 @@
#define CERES_INTERNAL_COMPRESSED_ROW_SPARSE_MATRIX_H_
#include <vector>
#include "ceres/internal/port.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
@@ -45,7 +46,7 @@ namespace internal {
class TripletSparseMatrix;
class CompressedRowSparseMatrix : public SparseMatrix {
class CERES_EXPORT_INTERNAL CompressedRowSparseMatrix : public SparseMatrix {
public:
enum StorageType {
UNSYMMETRIC,

View File

@@ -152,7 +152,6 @@ class ConcurrentQueue {
bool wait_;
};
} // namespace internal
} // namespace ceres

View File

@@ -68,7 +68,8 @@ ConditionedCostFunction::ConditionedCostFunction(
ConditionedCostFunction::~ConditionedCostFunction() {
if (ownership_ == TAKE_OWNERSHIP) {
STLDeleteUniqueContainerPointers(conditioners_.begin(), conditioners_.end());
STLDeleteUniqueContainerPointers(conditioners_.begin(),
conditioners_.end());
} else {
wrapped_cost_function_.release();
}
@@ -77,8 +78,8 @@ ConditionedCostFunction::~ConditionedCostFunction() {
bool ConditionedCostFunction::Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const {
bool success = wrapped_cost_function_->Evaluate(parameters, residuals,
jacobians);
bool success =
wrapped_cost_function_->Evaluate(parameters, residuals, jacobians);
if (!success) {
return false;
}
@@ -102,9 +103,8 @@ bool ConditionedCostFunction::Evaluate(double const* const* parameters,
double unconditioned_residual = residuals[r];
double* parameter_pointer = &unconditioned_residual;
success = conditioners_[r]->Evaluate(&parameter_pointer,
&residuals[r],
conditioner_derivative_pointer2);
success = conditioners_[r]->Evaluate(
&parameter_pointer, &residuals[r], conditioner_derivative_pointer2);
if (!success) {
return false;
}
@@ -117,7 +117,8 @@ bool ConditionedCostFunction::Evaluate(double const* const* parameters,
int parameter_block_size =
wrapped_cost_function_->parameter_block_sizes()[i];
VectorRef jacobian_row(jacobians[i] + r * parameter_block_size,
parameter_block_size, 1);
parameter_block_size,
1);
jacobian_row *= conditioner_derivative;
}
}

View File

@@ -41,6 +41,7 @@
#include <cmath>
#include <cstddef>
#include "ceres/internal/eigen.h"
#include "ceres/linear_operator.h"
#include "ceres/stringprintf.h"
@@ -51,16 +52,13 @@ namespace ceres {
namespace internal {
namespace {
bool IsZeroOrInfinity(double x) {
return ((x == 0.0) || std::isinf(x));
}
bool IsZeroOrInfinity(double x) { return ((x == 0.0) || std::isinf(x)); }
} // namespace
ConjugateGradientsSolver::ConjugateGradientsSolver(
const LinearSolver::Options& options)
: options_(options) {
}
: options_(options) {}
LinearSolver::Summary ConjugateGradientsSolver::Solve(
LinearOperator* A,
@@ -137,7 +135,10 @@ LinearSolver::Summary ConjugateGradientsSolver::Solve(
summary.termination_type = LINEAR_SOLVER_FAILURE;
summary.message = StringPrintf(
"Numerical failure. beta = rho_n / rho_{n-1} = %e, "
"rho_n = %e, rho_{n-1} = %e", beta, rho, last_rho);
"rho_n = %e, rho_{n-1} = %e",
beta,
rho,
last_rho);
break;
}
p = z + beta * p;
@@ -152,16 +153,20 @@ LinearSolver::Summary ConjugateGradientsSolver::Solve(
summary.message = StringPrintf(
"Matrix is indefinite, no more progress can be made. "
"p'q = %e. |p| = %e, |q| = %e",
pq, p.norm(), q.norm());
pq,
p.norm(),
q.norm());
break;
}
const double alpha = rho / pq;
if (std::isinf(alpha)) {
summary.termination_type = LINEAR_SOLVER_FAILURE;
summary.message =
StringPrintf("Numerical failure. alpha = rho / pq = %e, "
"rho = %e, pq = %e.", alpha, rho, pq);
summary.message = StringPrintf(
"Numerical failure. alpha = rho / pq = %e, rho = %e, pq = %e.",
alpha,
rho,
pq);
break;
}
@@ -223,7 +228,7 @@ LinearSolver::Summary ConjugateGradientsSolver::Solve(
Q0 = Q1;
// Residual based termination.
norm_r = r. norm();
norm_r = r.norm();
if (norm_r <= tol_r &&
summary.num_iterations >= options_.min_num_iterations) {
summary.termination_type = LINEAR_SOLVER_SUCCESS;

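For readers decoding the failure messages above: rho, beta = rho_n / rho_{n-1}, alpha = rho / p'q and the residual norm are the usual conjugate-gradient quantities. A loose, self-contained Eigen sketch of one unpreconditioned iteration, purely for orientation (this is not the solver's implementation):

#include "Eigen/Dense"

// One CG update (valid after the first iteration), naming the same quantities
// that appear in the diagnostics above.
void CgStep(const Eigen::MatrixXd& A,
            Eigen::VectorXd* x,
            Eigen::VectorXd* r,
            Eigen::VectorXd* p,
            double* rho_prev) {
  const double rho = r->squaredNorm();   // rho_n = r'r
  const double beta = rho / *rho_prev;   // beta = rho_n / rho_{n-1}
  *p = *r + beta * *p;                   // new search direction
  const Eigen::VectorXd q = A * *p;
  const double pq = p->dot(q);           // must stay positive for an SPD system
  const double alpha = rho / pq;         // step length
  *x += alpha * *p;
  *r -= alpha * q;
  *rho_prev = rho;
}
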
View File

@@ -34,6 +34,7 @@
#ifndef CERES_INTERNAL_CONJUGATE_GRADIENTS_SOLVER_H_
#define CERES_INTERNAL_CONJUGATE_GRADIENTS_SOLVER_H_
#include "ceres/internal/port.h"
#include "ceres/linear_solver.h"
namespace ceres {
@@ -54,7 +55,7 @@ class LinearOperator;
// For more details see the documentation for
// LinearSolver::PerSolveOptions::r_tolerance and
// LinearSolver::PerSolveOptions::q_tolerance in linear_solver.h.
class ConjugateGradientsSolver : public LinearSolver {
class CERES_EXPORT_INTERNAL ConjugateGradientsSolver : public LinearSolver {
public:
explicit ConjugateGradientsSolver(const LinearSolver::Options& options);
Summary Solve(LinearOperator* A,

View File

@@ -34,8 +34,6 @@
namespace ceres {
Context* Context::Create() {
return new internal::ContextImpl();
}
Context* Context::Create() { return new internal::ContextImpl(); }
} // namespace ceres

View File

@@ -37,7 +37,6 @@ void ContextImpl::EnsureMinimumThreads(int num_threads) {
#ifdef CERES_USE_CXX_THREADS
thread_pool.Resize(num_threads);
#endif // CERES_USE_CXX_THREADS
}
} // namespace internal
} // namespace ceres

View File

@@ -32,7 +32,9 @@
#define CERES_INTERNAL_CONTEXT_IMPL_H_
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
#include "ceres/internal/port.h"
// clang-format on
#include "ceres/context.h"
@@ -43,7 +45,7 @@
namespace ceres {
namespace internal {
class ContextImpl : public Context {
class CERES_EXPORT_INTERNAL ContextImpl : public Context {
public:
ContextImpl() {}
ContextImpl(const ContextImpl&) = delete;

View File

@@ -64,8 +64,7 @@ CoordinateDescentMinimizer::CoordinateDescentMinimizer(ContextImpl* context)
CHECK(context_ != nullptr);
}
CoordinateDescentMinimizer::~CoordinateDescentMinimizer() {
}
CoordinateDescentMinimizer::~CoordinateDescentMinimizer() {}
bool CoordinateDescentMinimizer::Init(
const Program& program,
@@ -82,13 +81,13 @@ bool CoordinateDescentMinimizer::Init(
map<int, set<double*>> group_to_elements = ordering.group_to_elements();
for (const auto& g_t_e : group_to_elements) {
const auto& elements = g_t_e.second;
for (double* parameter_block: elements) {
for (double* parameter_block : elements) {
parameter_blocks_.push_back(parameter_map.find(parameter_block)->second);
parameter_block_index[parameter_blocks_.back()] =
parameter_blocks_.size() - 1;
}
independent_set_offsets_.push_back(
independent_set_offsets_.back() + elements.size());
independent_set_offsets_.push_back(independent_set_offsets_.back() +
elements.size());
}
// The ordering does not have to contain all parameter blocks, so
@@ -126,10 +125,9 @@ bool CoordinateDescentMinimizer::Init(
return true;
}
void CoordinateDescentMinimizer::Minimize(
const Minimizer::Options& options,
double* parameters,
Solver::Summary* summary) {
void CoordinateDescentMinimizer::Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* summary) {
// Set the state and mark all parameter blocks constant.
for (int i = 0; i < parameter_blocks_.size(); ++i) {
ParameterBlock* parameter_block = parameter_blocks_[i];
@@ -202,7 +200,7 @@ void CoordinateDescentMinimizer::Minimize(
});
}
for (int i = 0; i < parameter_blocks_.size(); ++i) {
for (int i = 0; i < parameter_blocks_.size(); ++i) {
parameter_blocks_[i]->SetVarying();
}
@@ -251,10 +249,10 @@ bool CoordinateDescentMinimizer::IsOrderingValid(
// Verify that each group is an independent set
for (const auto& g_t_e : group_to_elements) {
if (!program.IsParameterBlockSetIndependent(g_t_e.second)) {
*message =
StringPrintf("The user-provided "
"parameter_blocks_for_inner_iterations does not "
"form an independent set. Group Id: %d", g_t_e.first);
*message = StringPrintf(
"The user-provided parameter_blocks_for_inner_iterations does not "
"form an independent set. Group Id: %d",
g_t_e.first);
return false;
}
}

View File

@@ -30,8 +30,9 @@
#include "ceres/corrector.h"
#include <cstddef>
#include <cmath>
#include <cstddef>
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
@@ -147,9 +148,9 @@ void Corrector::CorrectJacobian(const int num_rows,
}
for (int r = 0; r < num_rows; ++r) {
jacobian[r * num_cols + c] = sqrt_rho1_ *
(jacobian[r * num_cols + c] -
alpha_sq_norm_ * residuals[r] * r_transpose_j);
jacobian[r * num_cols + c] =
sqrt_rho1_ * (jacobian[r * num_cols + c] -
alpha_sq_norm_ * residuals[r] * r_transpose_j);
}
}
}

View File

@@ -35,6 +35,8 @@
#ifndef CERES_INTERNAL_CORRECTOR_H_
#define CERES_INTERNAL_CORRECTOR_H_
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -46,7 +48,7 @@ namespace internal {
// gauss newton approximation and then take its square root to get the
// corresponding corrections to the residual and jacobian. For the
// full expressions see Eq. 10 and 11 in BANS by Triggs et al.
class Corrector {
class CERES_EXPORT_INTERNAL Corrector {
public:
// The constructor takes the squared norm, the value, the first and
// second derivatives of the LossFunction. It precalculates some of

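The constructor inputs named in the truncated comment above are exactly what the public LossFunction interface reports. A minimal sketch (assuming the public header ceres/loss_function.h):

#include "ceres/loss_function.h"

// rho[0], rho[1], rho[2] receive the loss value and its first and second
// derivatives at s = squared residual norm, i.e. the Corrector's inputs.
void EvaluateLoss(const ceres::LossFunction& loss, double s, double rho[3]) {
  loss.Evaluate(s, rho);
}
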
View File

@@ -32,6 +32,7 @@
#include <utility>
#include <vector>
#include "ceres/covariance_impl.h"
#include "ceres/problem.h"
#include "ceres/problem_impl.h"
@@ -46,8 +47,7 @@ Covariance::Covariance(const Covariance::Options& options) {
impl_.reset(new internal::CovarianceImpl(options));
}
Covariance::~Covariance() {
}
Covariance::~Covariance() {}
bool Covariance::Compute(
const vector<pair<const double*, const double*>>& covariance_blocks,
@@ -55,9 +55,8 @@ bool Covariance::Compute(
return impl_->Compute(covariance_blocks, problem->impl_.get());
}
bool Covariance::Compute(
const vector<const double*>& parameter_blocks,
Problem* problem) {
bool Covariance::Compute(const vector<const double*>& parameter_blocks,
Problem* problem) {
return impl_->Compute(parameter_blocks, problem->impl_.get());
}
@@ -89,8 +88,8 @@ bool Covariance::GetCovarianceMatrix(
}
bool Covariance::GetCovarianceMatrixInTangentSpace(
const std::vector<const double *>& parameter_blocks,
double *covariance_matrix) const {
const std::vector<const double*>& parameter_blocks,
double* covariance_matrix) const {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
false, // tangent
covariance_matrix);

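Since this file implements the public covariance API, a hedged usage sketch may help orient the reader; it mirrors the example documented in the public covariance.h, and the 3x3 block size is an assumption for illustration:

#include <utility>
#include <vector>
#include "ceres/covariance.h"
#include "ceres/problem.h"

void EstimateCovariance(ceres::Problem* problem, double* x, double* y) {
  ceres::Covariance::Options options;
  ceres::Covariance covariance(options);

  std::vector<std::pair<const double*, const double*>> covariance_blocks;
  covariance_blocks.push_back(std::make_pair(x, x));
  covariance_blocks.push_back(std::make_pair(x, y));

  if (covariance.Compute(covariance_blocks, problem)) {
    double covariance_xx[3 * 3];  // assumes x is a 3-vector
    covariance.GetCovarianceBlock(x, x, covariance_xx);
  }
}
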
View File

@@ -39,10 +39,9 @@
#include <utility>
#include <vector>
#include "Eigen/SVD"
#include "Eigen/SparseCore"
#include "Eigen/SparseQR"
#include "Eigen/SVD"
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/covariance.h"
@@ -61,25 +60,17 @@
namespace ceres {
namespace internal {
using std::make_pair;
using std::map;
using std::pair;
using std::sort;
using std::swap;
using std::vector;
typedef vector<pair<const double*, const double*>> CovarianceBlocks;
using CovarianceBlocks = std::vector<std::pair<const double*, const double*>>;
CovarianceImpl::CovarianceImpl(const Covariance::Options& options)
: options_(options),
is_computed_(false),
is_valid_(false) {
: options_(options), is_computed_(false), is_valid_(false) {
#ifdef CERES_NO_THREADS
if (options_.num_threads > 1) {
LOG(WARNING)
<< "No threading support is compiled into this binary; "
<< "only options.num_threads = 1 is supported. Switching "
<< "to single threaded mode.";
LOG(WARNING) << "No threading support is compiled into this binary; "
<< "only options.num_threads = 1 is supported. Switching "
<< "to single threaded mode.";
options_.num_threads = 1;
}
#endif
@@ -88,16 +79,16 @@ CovarianceImpl::CovarianceImpl(const Covariance::Options& options)
evaluate_options_.apply_loss_function = options_.apply_loss_function;
}
CovarianceImpl::~CovarianceImpl() {
}
CovarianceImpl::~CovarianceImpl() {}
template <typename T> void CheckForDuplicates(vector<T> blocks) {
template <typename T>
void CheckForDuplicates(std::vector<T> blocks) {
sort(blocks.begin(), blocks.end());
typename vector<T>::iterator it =
typename std::vector<T>::iterator it =
std::adjacent_find(blocks.begin(), blocks.end());
if (it != blocks.end()) {
// In case there are duplicates, we search for their location.
map<T, vector<int>> blocks_map;
std::map<T, std::vector<int>> blocks_map;
for (int i = 0; i < blocks.size(); ++i) {
blocks_map[blocks[i]].push_back(i);
}
@@ -122,7 +113,8 @@ template <typename T> void CheckForDuplicates(vector<T> blocks) {
bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
ProblemImpl* problem) {
CheckForDuplicates<pair<const double*, const double*>>(covariance_blocks);
CheckForDuplicates<std::pair<const double*, const double*>>(
covariance_blocks);
problem_ = problem;
parameter_block_to_row_index_.clear();
covariance_matrix_.reset(NULL);
@@ -132,14 +124,14 @@ bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
return is_valid_;
}
bool CovarianceImpl::Compute(const vector<const double*>& parameter_blocks,
bool CovarianceImpl::Compute(const std::vector<const double*>& parameter_blocks,
ProblemImpl* problem) {
CheckForDuplicates<const double*>(parameter_blocks);
CovarianceBlocks covariance_blocks;
for (int i = 0; i < parameter_blocks.size(); ++i) {
for (int j = i; j < parameter_blocks.size(); ++j) {
covariance_blocks.push_back(make_pair(parameter_blocks[i],
parameter_blocks[j]));
covariance_blocks.push_back(
std::make_pair(parameter_blocks[i], parameter_blocks[j]));
}
}
@@ -162,13 +154,11 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
if (constant_parameter_blocks_.count(original_parameter_block1) > 0 ||
constant_parameter_blocks_.count(original_parameter_block2) > 0) {
const ProblemImpl::ParameterMap& parameter_map = problem_->parameter_map();
ParameterBlock* block1 =
FindOrDie(parameter_map,
const_cast<double*>(original_parameter_block1));
ParameterBlock* block1 = FindOrDie(
parameter_map, const_cast<double*>(original_parameter_block1));
ParameterBlock* block2 =
FindOrDie(parameter_map,
const_cast<double*>(original_parameter_block2));
ParameterBlock* block2 = FindOrDie(
parameter_map, const_cast<double*>(original_parameter_block2));
const int block1_size = block1->Size();
const int block2_size = block2->Size();
@@ -210,8 +200,7 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
if (offset == row_size) {
LOG(ERROR) << "Unable to find covariance block for "
<< original_parameter_block1 << " "
<< original_parameter_block2;
<< original_parameter_block1 << " " << original_parameter_block2;
return false;
}
@@ -227,9 +216,8 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
const int block2_size = block2->Size();
const int block2_local_size = block2->LocalSize();
ConstMatrixRef cov(covariance_matrix_->values() + rows[row_begin],
block1_size,
row_size);
ConstMatrixRef cov(
covariance_matrix_->values() + rows[row_begin], block1_size, row_size);
// Fast path when there are no local parameterizations or if the
// user does not want it lifted to the ambient space.
@@ -237,8 +225,8 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
!lift_covariance_to_ambient_space) {
if (transpose) {
MatrixRef(covariance_block, block2_local_size, block1_local_size) =
cov.block(0, offset, block1_local_size,
block2_local_size).transpose();
cov.block(0, offset, block1_local_size, block2_local_size)
.transpose();
} else {
MatrixRef(covariance_block, block1_local_size, block2_local_size) =
cov.block(0, offset, block1_local_size, block2_local_size);
@@ -298,7 +286,7 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
}
bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
const vector<const double*>& parameters,
const std::vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double* covariance_matrix) const {
CHECK(is_computed_)
@@ -310,8 +298,8 @@ bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
const ProblemImpl::ParameterMap& parameter_map = problem_->parameter_map();
// For OpenMP compatibility we need to define these vectors in advance
const int num_parameters = parameters.size();
vector<int> parameter_sizes;
vector<int> cum_parameter_size;
std::vector<int> parameter_sizes;
std::vector<int> cum_parameter_size;
parameter_sizes.reserve(num_parameters);
cum_parameter_size.resize(num_parameters + 1);
cum_parameter_size[0] = 0;
@@ -324,7 +312,8 @@ bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
parameter_sizes.push_back(block->LocalSize());
}
}
std::partial_sum(parameter_sizes.begin(), parameter_sizes.end(),
std::partial_sum(parameter_sizes.begin(),
parameter_sizes.end(),
cum_parameter_size.begin() + 1);
const int max_covariance_block_size =
*std::max_element(parameter_sizes.begin(), parameter_sizes.end());
@@ -343,65 +332,66 @@ bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
// i = 1:n, j = i:n.
int iteration_count = (num_parameters * (num_parameters + 1)) / 2;
problem_->context()->EnsureMinimumThreads(num_threads);
ParallelFor(
problem_->context(),
0,
iteration_count,
num_threads,
[&](int thread_id, int k) {
int i, j;
LinearIndexToUpperTriangularIndex(k, num_parameters, &i, &j);
ParallelFor(problem_->context(),
0,
iteration_count,
num_threads,
[&](int thread_id, int k) {
int i, j;
LinearIndexToUpperTriangularIndex(k, num_parameters, &i, &j);
int covariance_row_idx = cum_parameter_size[i];
int covariance_col_idx = cum_parameter_size[j];
int size_i = parameter_sizes[i];
int size_j = parameter_sizes[j];
double* covariance_block =
workspace.get() + thread_id * max_covariance_block_size *
max_covariance_block_size;
if (!GetCovarianceBlockInTangentOrAmbientSpace(
parameters[i], parameters[j],
lift_covariance_to_ambient_space, covariance_block)) {
success = false;
}
int covariance_row_idx = cum_parameter_size[i];
int covariance_col_idx = cum_parameter_size[j];
int size_i = parameter_sizes[i];
int size_j = parameter_sizes[j];
double* covariance_block =
workspace.get() + thread_id * max_covariance_block_size *
max_covariance_block_size;
if (!GetCovarianceBlockInTangentOrAmbientSpace(
parameters[i],
parameters[j],
lift_covariance_to_ambient_space,
covariance_block)) {
success = false;
}
covariance.block(covariance_row_idx, covariance_col_idx, size_i,
size_j) = MatrixRef(covariance_block, size_i, size_j);
covariance.block(
covariance_row_idx, covariance_col_idx, size_i, size_j) =
MatrixRef(covariance_block, size_i, size_j);
if (i != j) {
covariance.block(covariance_col_idx, covariance_row_idx,
size_j, size_i) =
MatrixRef(covariance_block, size_i, size_j).transpose();
}
});
if (i != j) {
covariance.block(
covariance_col_idx, covariance_row_idx, size_j, size_i) =
MatrixRef(covariance_block, size_i, size_j).transpose();
}
});
return success;
}
// Determine the sparsity pattern of the covariance matrix based on
// the block pairs requested by the user.
bool CovarianceImpl::ComputeCovarianceSparsity(
const CovarianceBlocks& original_covariance_blocks,
ProblemImpl* problem) {
const CovarianceBlocks& original_covariance_blocks, ProblemImpl* problem) {
EventLogger event_logger("CovarianceImpl::ComputeCovarianceSparsity");
// Determine an ordering for the parameter block, by sorting the
// parameter blocks by their pointers.
vector<double*> all_parameter_blocks;
std::vector<double*> all_parameter_blocks;
problem->GetParameterBlocks(&all_parameter_blocks);
const ProblemImpl::ParameterMap& parameter_map = problem->parameter_map();
std::unordered_set<ParameterBlock*> parameter_blocks_in_use;
vector<ResidualBlock*> residual_blocks;
std::vector<ResidualBlock*> residual_blocks;
problem->GetResidualBlocks(&residual_blocks);
for (int i = 0; i < residual_blocks.size(); ++i) {
ResidualBlock* residual_block = residual_blocks[i];
parameter_blocks_in_use.insert(residual_block->parameter_blocks(),
residual_block->parameter_blocks() +
residual_block->NumParameterBlocks());
residual_block->NumParameterBlocks());
}
constant_parameter_blocks_.clear();
vector<double*>& active_parameter_blocks =
std::vector<double*>& active_parameter_blocks =
evaluate_options_.parameter_blocks;
active_parameter_blocks.clear();
for (int i = 0; i < all_parameter_blocks.size(); ++i) {
@@ -434,8 +424,8 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
// triangular part of the matrix.
int num_nonzeros = 0;
CovarianceBlocks covariance_blocks;
for (int i = 0; i < original_covariance_blocks.size(); ++i) {
const pair<const double*, const double*>& block_pair =
for (int i = 0; i < original_covariance_blocks.size(); ++i) {
const std::pair<const double*, const double*>& block_pair =
original_covariance_blocks[i];
if (constant_parameter_blocks_.count(block_pair.first) > 0 ||
constant_parameter_blocks_.count(block_pair.second) > 0) {
@@ -450,8 +440,8 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
// Make sure we are constructing a block upper triangular matrix.
if (index1 > index2) {
covariance_blocks.push_back(make_pair(block_pair.second,
block_pair.first));
covariance_blocks.push_back(
std::make_pair(block_pair.second, block_pair.first));
} else {
covariance_blocks.push_back(block_pair);
}
@@ -466,7 +456,7 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
// Sort the block pairs. As a consequence we get the covariance
// blocks as they will occur in the CompressedRowSparseMatrix that
// will store the covariance.
sort(covariance_blocks.begin(), covariance_blocks.end());
std::sort(covariance_blocks.begin(), covariance_blocks.end());
// Fill the sparsity pattern of the covariance matrix.
covariance_matrix_.reset(
@@ -486,10 +476,10 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
// values of the parameter blocks. Thus iterating over the keys of
// parameter_block_to_row_index_ corresponds to iterating over the
// rows of the covariance matrix in order.
int i = 0; // index into covariance_blocks.
int i = 0; // index into covariance_blocks.
int cursor = 0; // index into the covariance matrix.
for (const auto& entry : parameter_block_to_row_index_) {
const double* row_block = entry.first;
const double* row_block = entry.first;
const int row_block_size = problem->ParameterBlockLocalSize(row_block);
int row_begin = entry.second;
@@ -498,7 +488,7 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
int num_col_blocks = 0;
int num_columns = 0;
for (int j = i; j < covariance_blocks.size(); ++j, ++num_col_blocks) {
const pair<const double*, const double*>& block_pair =
const std::pair<const double*, const double*>& block_pair =
covariance_blocks[j];
if (block_pair.first != row_block) {
break;
@@ -519,7 +509,7 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
}
}
i+= num_col_blocks;
i += num_col_blocks;
}
rows[num_rows] = cursor;
@@ -580,9 +570,9 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingSuiteSparseQR() {
const int num_cols = jacobian.num_cols;
const int num_nonzeros = jacobian.values.size();
vector<SuiteSparse_long> transpose_rows(num_cols + 1, 0);
vector<SuiteSparse_long> transpose_cols(num_nonzeros, 0);
vector<double> transpose_values(num_nonzeros, 0);
std::vector<SuiteSparse_long> transpose_rows(num_cols + 1, 0);
std::vector<SuiteSparse_long> transpose_cols(num_nonzeros, 0);
std::vector<double> transpose_values(num_nonzeros, 0);
for (int idx = 0; idx < num_nonzeros; ++idx) {
transpose_rows[jacobian.cols[idx] + 1] += 1;
@@ -602,7 +592,7 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingSuiteSparseQR() {
}
}
for (int i = transpose_rows.size() - 1; i > 0 ; --i) {
for (int i = transpose_rows.size() - 1; i > 0; --i) {
transpose_rows[i] = transpose_rows[i - 1];
}
transpose_rows[0] = 0;
@@ -642,14 +632,13 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingSuiteSparseQR() {
// more efficient, both in runtime as well as the quality of
// ordering computed. So, it may be worth doing that analysis
// separately.
const SuiteSparse_long rank =
SuiteSparseQR<double>(SPQR_ORDERING_BESTAMD,
SPQR_DEFAULT_TOL,
cholmod_jacobian.ncol,
&cholmod_jacobian,
&R,
&permutation,
&cc);
const SuiteSparse_long rank = SuiteSparseQR<double>(SPQR_ORDERING_BESTAMD,
SPQR_DEFAULT_TOL,
cholmod_jacobian.ncol,
&cholmod_jacobian,
&R,
&permutation,
&cc);
event_logger.AddEvent("Numeric Factorization");
if (R == nullptr) {
LOG(ERROR) << "Something is wrong. SuiteSparseQR returned R = nullptr.";
@@ -668,7 +657,7 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingSuiteSparseQR() {
return false;
}
vector<int> inverse_permutation(num_cols);
std::vector<int> inverse_permutation(num_cols);
if (permutation) {
for (SuiteSparse_long i = 0; i < num_cols; ++i) {
inverse_permutation[permutation[i]] = i;
@@ -697,19 +686,18 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingSuiteSparseQR() {
problem_->context()->EnsureMinimumThreads(num_threads);
ParallelFor(
problem_->context(),
0,
num_cols,
num_threads,
[&](int thread_id, int r) {
problem_->context(), 0, num_cols, num_threads, [&](int thread_id, int r) {
const int row_begin = rows[r];
const int row_end = rows[r + 1];
if (row_end != row_begin) {
double* solution = workspace.get() + thread_id * num_cols;
SolveRTRWithSparseRHS<SuiteSparse_long>(
num_cols, static_cast<SuiteSparse_long*>(R->i),
static_cast<SuiteSparse_long*>(R->p), static_cast<double*>(R->x),
inverse_permutation[r], solution);
num_cols,
static_cast<SuiteSparse_long*>(R->i),
static_cast<SuiteSparse_long*>(R->p),
static_cast<double*>(R->x),
inverse_permutation[r],
solution);
for (int idx = row_begin; idx < row_end; ++idx) {
const int c = cols[idx];
values[idx] = solution[inverse_permutation[c]];
@@ -801,10 +789,9 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingDenseSVD() {
1.0 / (singular_values[i] * singular_values[i]);
}
Matrix dense_covariance =
svd.matrixV() *
inverse_squared_singular_values.asDiagonal() *
svd.matrixV().transpose();
Matrix dense_covariance = svd.matrixV() *
inverse_squared_singular_values.asDiagonal() *
svd.matrixV().transpose();
event_logger.AddEvent("PseudoInverse");
const int num_rows = covariance_matrix_->num_rows();
@@ -839,13 +826,16 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingEigenSparseQR() {
// Convert the matrix to column major order as required by SparseQR.
EigenSparseMatrix sparse_jacobian =
Eigen::MappedSparseMatrix<double, Eigen::RowMajor>(
jacobian.num_rows, jacobian.num_cols,
jacobian.num_rows,
jacobian.num_cols,
static_cast<int>(jacobian.values.size()),
jacobian.rows.data(), jacobian.cols.data(), jacobian.values.data());
jacobian.rows.data(),
jacobian.cols.data(),
jacobian.values.data());
event_logger.AddEvent("ConvertToSparseMatrix");
Eigen::SparseQR<EigenSparseMatrix, Eigen::COLAMDOrdering<int>>
qr_solver(sparse_jacobian);
Eigen::SparseQR<EigenSparseMatrix, Eigen::COLAMDOrdering<int>> qr_solver(
sparse_jacobian);
event_logger.AddEvent("QRDecomposition");
if (qr_solver.info() != Eigen::Success) {
@@ -883,22 +873,17 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingEigenSparseQR() {
problem_->context()->EnsureMinimumThreads(num_threads);
ParallelFor(
problem_->context(),
0,
num_cols,
num_threads,
[&](int thread_id, int r) {
problem_->context(), 0, num_cols, num_threads, [&](int thread_id, int r) {
const int row_begin = rows[r];
const int row_end = rows[r + 1];
if (row_end != row_begin) {
double* solution = workspace.get() + thread_id * num_cols;
SolveRTRWithSparseRHS<int>(
num_cols,
qr_solver.matrixR().innerIndexPtr(),
qr_solver.matrixR().outerIndexPtr(),
&qr_solver.matrixR().data().value(0),
inverse_permutation.indices().coeff(r),
solution);
SolveRTRWithSparseRHS<int>(num_cols,
qr_solver.matrixR().innerIndexPtr(),
qr_solver.matrixR().outerIndexPtr(),
&qr_solver.matrixR().data().value(0),
inverse_permutation.indices().coeff(r),
solution);
// Assign the values of the computed covariance using the
// inverse permutation used in the QR factorization.

View File

@@ -36,7 +36,9 @@
#include <set>
#include <utility>
#include <vector>
#include "ceres/covariance.h"
#include "ceres/internal/port.h"
#include "ceres/problem_impl.h"
#include "ceres/suitesparse.h"
@@ -45,19 +47,17 @@ namespace internal {
class CompressedRowSparseMatrix;
class CovarianceImpl {
class CERES_EXPORT_INTERNAL CovarianceImpl {
public:
explicit CovarianceImpl(const Covariance::Options& options);
~CovarianceImpl();
bool Compute(
const std::vector<std::pair<const double*,
const double*>>& covariance_blocks,
ProblemImpl* problem);
bool Compute(const std::vector<std::pair<const double*, const double*>>&
covariance_blocks,
ProblemImpl* problem);
bool Compute(
const std::vector<const double*>& parameter_blocks,
ProblemImpl* problem);
bool Compute(const std::vector<const double*>& parameter_blocks,
ProblemImpl* problem);
bool GetCovarianceBlockInTangentOrAmbientSpace(
const double* parameter_block1,
@@ -68,11 +68,11 @@ class CovarianceImpl {
bool GetCovarianceMatrixInTangentOrAmbientSpace(
const std::vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double *covariance_matrix) const;
double* covariance_matrix) const;
bool ComputeCovarianceSparsity(
const std::vector<std::pair<const double*,
const double*>>& covariance_blocks,
const std::vector<std::pair<const double*, const double*>>&
covariance_blocks,
ProblemImpl* problem);
bool ComputeCovarianceValues();

View File

@@ -33,13 +33,12 @@
#ifndef CERES_NO_CXSPARSE
#include "ceres/cxsparse.h"
#include <string>
#include <vector>
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/cxsparse.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"

View File

@@ -166,7 +166,7 @@ class CXSparseCholesky : public SparseCholesky {
} // namespace internal
} // namespace ceres
#else // CERES_NO_CXSPARSE
#else
typedef void cs_dis;

View File

@@ -35,21 +35,19 @@
#include "ceres/casts.h"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
#include "ceres/scratch_evaluate_preparer.h"
#include "ceres/internal/eigen.h"
namespace ceres {
namespace internal {
class DenseJacobianWriter {
public:
DenseJacobianWriter(Evaluator::Options /* ignored */,
Program* program)
: program_(program) {
}
DenseJacobianWriter(Evaluator::Options /* ignored */, Program* program)
: program_(program) {}
// JacobianWriter interface.
@@ -61,14 +59,13 @@ class DenseJacobianWriter {
}
SparseMatrix* CreateJacobian() const {
return new DenseSparseMatrix(program_->NumResiduals(),
program_->NumEffectiveParameters(),
true);
return new DenseSparseMatrix(
program_->NumResiduals(), program_->NumEffectiveParameters(), true);
}
void Write(int residual_id,
int residual_offset,
double **jacobians,
double** jacobians,
SparseMatrix* jacobian) {
DenseSparseMatrix* dense_jacobian = down_cast<DenseSparseMatrix*>(jacobian);
const ResidualBlock* residual_block =
@@ -86,15 +83,14 @@ class DenseJacobianWriter {
}
const int parameter_block_size = parameter_block->LocalSize();
ConstMatrixRef parameter_jacobian(jacobians[j],
num_residuals,
parameter_block_size);
ConstMatrixRef parameter_jacobian(
jacobians[j], num_residuals, parameter_block_size);
dense_jacobian->mutable_matrix().block(
residual_offset,
parameter_block->delta_offset(),
num_residuals,
parameter_block_size) = parameter_jacobian;
dense_jacobian->mutable_matrix().block(residual_offset,
parameter_block->delta_offset(),
num_residuals,
parameter_block_size) =
parameter_jacobian;
}
}

View File

@@ -132,13 +132,8 @@ LinearSolver::Summary DenseNormalCholeskySolver::SolveUsingLAPACK(
//
// Note: This is a bit delicate, it assumes that the stride on this
// matrix is the same as the number of rows.
BLAS::SymmetricRankKUpdate(A->num_rows(),
num_cols,
A->values(),
true,
1.0,
0.0,
lhs.data());
BLAS::SymmetricRankKUpdate(
A->num_rows(), num_cols, A->values(), true, 1.0, 0.0, lhs.data());
if (per_solve_options.D != NULL) {
// Undo the modifications to the matrix A.
@@ -153,13 +148,10 @@ LinearSolver::Summary DenseNormalCholeskySolver::SolveUsingLAPACK(
LinearSolver::Summary summary;
summary.num_iterations = 1;
summary.termination_type =
LAPACK::SolveInPlaceUsingCholesky(num_cols,
lhs.data(),
x,
&summary.message);
summary.termination_type = LAPACK::SolveInPlaceUsingCholesky(
num_cols, lhs.data(), x, &summary.message);
event_logger.AddEvent("Solve");
return summary;
}
} // namespace internal
} // namespace ceres
} // namespace internal
} // namespace ceres

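The SymmetricRankKUpdate call reformatted above builds the normal-equation matrix lhs = A'A in place (BLAS fills only one triangular half). A dense Eigen equivalent, purely for orientation (not the solver's code path):

#include "Eigen/Dense"

// Dense reference for the rank-k update: lhs = 1.0 * A'A + 0.0 * lhs.
Eigen::MatrixXd NormalEquationsLhs(const Eigen::MatrixXd& A) {
  return A.transpose() * A;
}
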
View File

@@ -73,7 +73,7 @@ class DenseSparseMatrix;
// library. This solver always returns a solution, it is the user's
// responsibility to judge if the solution is good enough for their
// purposes.
class DenseNormalCholeskySolver: public DenseSparseMatrixSolver {
class DenseNormalCholeskySolver : public DenseSparseMatrixSolver {
public:
explicit DenseNormalCholeskySolver(const LinearSolver::Options& options);

View File

@@ -31,6 +31,7 @@
#include "ceres/dense_qr_solver.h"
#include <cstddef>
#include "Eigen/Dense"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/internal/eigen.h"
@@ -77,7 +78,7 @@ LinearSolver::Summary DenseQRSolver::SolveUsingLAPACK(
// TODO(sameeragarwal): Since we are copying anyways, the diagonal
// can be appended to the matrix instead of doing it on A.
lhs_ = A->matrix();
lhs_ = A->matrix();
if (per_solve_options.D != NULL) {
// Undo the modifications to the matrix A.
@@ -164,5 +165,5 @@ LinearSolver::Summary DenseQRSolver::SolveUsingEigen(
return summary;
}
} // namespace internal
} // namespace ceres
} // namespace internal
} // namespace ceres

View File

@@ -32,8 +32,9 @@
#ifndef CERES_INTERNAL_DENSE_QR_SOLVER_H_
#define CERES_INTERNAL_DENSE_QR_SOLVER_H_
#include "ceres/linear_solver.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/linear_solver.h"
namespace ceres {
namespace internal {
@@ -78,7 +79,7 @@ class DenseSparseMatrix;
// library. This solver always returns a solution, it is the user's
// responsibility to judge if the solution is good enough for their
// purposes.
class DenseQRSolver: public DenseSparseMatrixSolver {
class CERES_EXPORT_INTERNAL DenseQRSolver : public DenseSparseMatrixSolver {
public:
explicit DenseQRSolver(const LinearSolver::Options& options);

View File

@@ -31,17 +31,17 @@
#include "ceres/dense_sparse_matrix.h"
#include <algorithm>
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
namespace ceres {
namespace internal {
DenseSparseMatrix::DenseSparseMatrix(int num_rows, int num_cols)
: has_diagonal_appended_(false),
has_diagonal_reserved_(false) {
: has_diagonal_appended_(false), has_diagonal_reserved_(false) {
m_.resize(num_rows, num_cols);
m_.setZero();
}
@@ -49,11 +49,10 @@ DenseSparseMatrix::DenseSparseMatrix(int num_rows, int num_cols)
DenseSparseMatrix::DenseSparseMatrix(int num_rows,
int num_cols,
bool reserve_diagonal)
: has_diagonal_appended_(false),
has_diagonal_reserved_(reserve_diagonal) {
: has_diagonal_appended_(false), has_diagonal_reserved_(reserve_diagonal) {
if (reserve_diagonal) {
// Allocate enough space for the diagonal.
m_.resize(num_rows + num_cols, num_cols);
m_.resize(num_rows + num_cols, num_cols);
} else {
m_.resize(num_rows, num_cols);
}
@@ -64,9 +63,9 @@ DenseSparseMatrix::DenseSparseMatrix(const TripletSparseMatrix& m)
: m_(Eigen::MatrixXd::Zero(m.num_rows(), m.num_cols())),
has_diagonal_appended_(false),
has_diagonal_reserved_(false) {
const double *values = m.values();
const int *rows = m.rows();
const int *cols = m.cols();
const double* values = m.values();
const int* rows = m.rows();
const int* cols = m.cols();
int num_nonzeros = m.num_nonzeros();
for (int i = 0; i < num_nonzeros; ++i) {
@@ -75,14 +74,9 @@ DenseSparseMatrix::DenseSparseMatrix(const TripletSparseMatrix& m)
}
DenseSparseMatrix::DenseSparseMatrix(const ColMajorMatrix& m)
: m_(m),
has_diagonal_appended_(false),
has_diagonal_reserved_(false) {
}
: m_(m), has_diagonal_appended_(false), has_diagonal_reserved_(false) {}
void DenseSparseMatrix::SetZero() {
m_.setZero();
}
void DenseSparseMatrix::SetZero() { m_.setZero(); }
void DenseSparseMatrix::RightMultiply(const double* x, double* y) const {
VectorRef(y, num_rows()) += matrix() * ConstVectorRef(x, num_cols());
@@ -105,7 +99,7 @@ void DenseSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
*dense_matrix = m_.block(0, 0, num_rows(), num_cols());
}
void DenseSparseMatrix::AppendDiagonal(double *d) {
void DenseSparseMatrix::AppendDiagonal(double* d) {
CHECK(!has_diagonal_appended_);
if (!has_diagonal_reserved_) {
ColMajorMatrix tmp = m_;
@@ -133,9 +127,7 @@ int DenseSparseMatrix::num_rows() const {
return m_.rows();
}
int DenseSparseMatrix::num_cols() const {
return m_.cols();
}
int DenseSparseMatrix::num_cols() const { return m_.cols(); }
int DenseSparseMatrix::num_nonzeros() const {
if (has_diagonal_reserved_ && !has_diagonal_appended_) {
@@ -148,33 +140,30 @@ ConstColMajorMatrixRef DenseSparseMatrix::matrix() const {
return ConstColMajorMatrixRef(
m_.data(),
((has_diagonal_reserved_ && !has_diagonal_appended_)
? m_.rows() - m_.cols()
: m_.rows()),
? m_.rows() - m_.cols()
: m_.rows()),
m_.cols(),
Eigen::Stride<Eigen::Dynamic, 1>(m_.rows(), 1));
}
ColMajorMatrixRef DenseSparseMatrix::mutable_matrix() {
return ColMajorMatrixRef(
m_.data(),
((has_diagonal_reserved_ && !has_diagonal_appended_)
? m_.rows() - m_.cols()
: m_.rows()),
m_.cols(),
Eigen::Stride<Eigen::Dynamic, 1>(m_.rows(), 1));
return ColMajorMatrixRef(m_.data(),
((has_diagonal_reserved_ && !has_diagonal_appended_)
? m_.rows() - m_.cols()
: m_.rows()),
m_.cols(),
Eigen::Stride<Eigen::Dynamic, 1>(m_.rows(), 1));
}
void DenseSparseMatrix::ToTextFile(FILE* file) const {
CHECK(file != nullptr);
-const int active_rows =
-(has_diagonal_reserved_ && !has_diagonal_appended_)
-? (m_.rows() - m_.cols())
-: m_.rows();
+const int active_rows = (has_diagonal_reserved_ && !has_diagonal_appended_)
+? (m_.rows() - m_.cols())
+: m_.rows();
for (int r = 0; r < active_rows; ++r) {
for (int c = 0; c < m_.cols(); ++c) {
fprintf(file, "% 10d % 10d %17f\n", r, c, m_(r, c));
}
}
}
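
The accessors reformatted above all rely on one storage trick: when a diagonal is reserved, the dense block J is stored in a (num_rows + num_cols) x num_cols matrix, and matrix() exposes only the top m_.rows() - m_.cols() rows until AppendDiagonal() fills the reserved bottom block. Below is a minimal standalone sketch of that idea, using plain Eigen and hypothetical names, not the Ceres class itself.

#include <Eigen/Dense>
#include <iostream>

int main() {
  const int num_rows = 4;
  const int num_cols = 3;

  // Reserve extra rows for the diagonal, as the reserve_diagonal constructor does.
  Eigen::MatrixXd m = Eigen::MatrixXd::Zero(num_rows + num_cols, num_cols);
  m.topRows(num_rows).setRandom();  // stands in for the dense block J

  // Until the diagonal is appended, only the top block is "active",
  // i.e. m.rows() - m.cols() rows, matching the matrix() accessor above.
  int active_rows = m.rows() - m.cols();

  // AppendDiagonal(d): write diag(d) into the reserved bottom num_cols rows.
  Eigen::Vector3d d(1.0, 2.0, 3.0);
  for (int i = 0; i < num_cols; ++i) {
    m(num_rows + i, i) = d(i);
  }
  active_rows = m.rows();  // the full stacked matrix [J; D] is now exposed

  std::cout << "[J; D] =\n" << m.topRows(active_rows) << "\n";
  return 0;
}

Because the extra rows are reserved up front, appending and removing the diagonal in this scheme only toggles which rows are considered active; no reallocation is needed.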


@@ -34,6 +34,7 @@
#define CERES_INTERNAL_DENSE_SPARSE_MATRIX_H_
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
@@ -42,7 +43,7 @@ namespace internal {
class TripletSparseMatrix;
-class DenseSparseMatrix : public SparseMatrix {
+class CERES_EXPORT_INTERNAL DenseSparseMatrix : public SparseMatrix {
public:
// Build a matrix with the same content as the TripletSparseMatrix
// m. This assumes that m does not have any repeated entries.
@@ -92,7 +93,7 @@ class DenseSparseMatrix : public SparseMatrix {
// Calling RemoveDiagonal removes the block. It is a fatal error to append a
// diagonal to a matrix that already has an appended diagonal, and it is also
// a fatal error to remove a diagonal from a matrix that has none.
-void AppendDiagonal(double *d);
+void AppendDiagonal(double* d);
void RemoveDiagonal();
private:
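
The constructor documented above scatters the triplets of m into a zero-initialized dense matrix; the "no repeated entries" assumption is what makes accumulating with += equivalent to plain assignment. A small self-contained sketch of that scatter, with a hypothetical Triplet struct rather than the Ceres TripletSparseMatrix API:

#include <Eigen/Dense>
#include <iostream>
#include <vector>

struct Triplet { int row; int col; double value; };

int main() {
  const int num_rows = 3;
  const int num_cols = 3;
  const std::vector<Triplet> triplets = {{0, 0, 1.0}, {1, 2, -2.5}, {2, 1, 4.0}};

  Eigen::MatrixXd dense = Eigen::MatrixXd::Zero(num_rows, num_cols);
  for (const Triplet& t : triplets) {
    dense(t.row, t.col) += t.value;  // safe: entries are assumed not to repeat
  }
  std::cout << dense << "\n";
  return 0;
}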


@@ -29,6 +29,7 @@
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include "ceres/detect_structure.h"
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
@@ -61,8 +62,7 @@ void DetectStructure(const CompressedRowBlockStructure& bs,
} else if (*row_block_size != Eigen::Dynamic &&
*row_block_size != row.block.size) {
VLOG(2) << "Dynamic row block size because the block size changed from "
-<< *row_block_size << " to "
-<< row.block.size;
+<< *row_block_size << " to " << row.block.size;
*row_block_size = Eigen::Dynamic;
}
@@ -73,8 +73,7 @@ void DetectStructure(const CompressedRowBlockStructure& bs,
} else if (*e_block_size != Eigen::Dynamic &&
*e_block_size != bs.cols[e_block_id].size) {
VLOG(2) << "Dynamic e block size because the block size changed from "
-<< *e_block_size << " to "
-<< bs.cols[e_block_id].size;
+<< *e_block_size << " to " << bs.cols[e_block_id].size;
*e_block_size = Eigen::Dynamic;
}
@@ -100,9 +99,11 @@ void DetectStructure(const CompressedRowBlockStructure& bs,
}
}
// clang-format off
const bool is_everything_dynamic = (*row_block_size == Eigen::Dynamic &&
*e_block_size == Eigen::Dynamic &&
*f_block_size == Eigen::Dynamic);
// clang-format on
if (is_everything_dynamic) {
break;
}
@@ -110,10 +111,12 @@ void DetectStructure(const CompressedRowBlockStructure& bs,
CHECK_NE(*row_block_size, 0) << "No rows found";
CHECK_NE(*e_block_size, 0) << "No e type blocks found";
// clang-format off
VLOG(1) << "Schur complement static structure <"
<< *row_block_size << ","
<< *e_block_size << ","
<< *f_block_size << ">.";
// clang-format on
}
} // namespace internal
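
The detection rule reformatted above is the same for the row, e and f block sizes: start from an "unseen" size of zero, latch the first size encountered, and collapse to Eigen::Dynamic as soon as any block disagrees (which is why the CHECK_NE(..., 0) calls can assert that at least one block was seen). A compact sketch of that rule, using a hypothetical helper and -1 standing in for Eigen::Dynamic:

#include <iostream>
#include <vector>

constexpr int kDynamic = -1;  // stands in for Eigen::Dynamic

int DetectBlockSize(const std::vector<int>& block_sizes) {
  int detected = 0;  // 0 == no block seen yet
  for (int size : block_sizes) {
    if (detected == 0) {
      detected = size;  // first block fixes the candidate size
    } else if (detected != kDynamic && detected != size) {
      detected = kDynamic;  // sizes differ -> fall back to dynamic
    }
  }
  return detected;
}

int main() {
  std::cout << DetectBlockSize({3, 3, 3}) << "\n";  // 3  (static structure)
  std::cout << DetectBlockSize({3, 2, 3}) << "\n";  // -1 (dynamic)
  return 0;
}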


@@ -32,6 +32,7 @@
#define CERES_INTERNAL_DETECT_STRUCTURE_H_
#include "ceres/block_structure.h"
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -55,11 +56,11 @@ namespace internal {
// Note: The structure of rows without any e-blocks has no effect on
// the values returned by this function. It is entirely possible that
// the f_block_size and row_blocks_size is not constant in such rows.
-void DetectStructure(const CompressedRowBlockStructure& bs,
-const int num_eliminate_blocks,
-int* row_block_size,
-int* e_block_size,
-int* f_block_size);
+void CERES_EXPORT DetectStructure(const CompressedRowBlockStructure& bs,
+const int num_eliminate_blocks,
+int* row_block_size,
+int* e_block_size,
+int* f_block_size);
} // namespace internal
} // namespace ceres


@@ -49,7 +49,7 @@ namespace internal {
namespace {
const double kMaxMu = 1.0;
const double kMinMu = 1e-8;
-}
+} // namespace
DoglegStrategy::DoglegStrategy(const TrustRegionStrategy::Options& options)
: linear_solver_(options.linear_solver),
@@ -122,8 +122,8 @@ TrustRegionStrategy::Summary DoglegStrategy::ComputeStep(
//
jacobian->SquaredColumnNorm(diagonal_.data());
for (int i = 0; i < n; ++i) {
-diagonal_[i] = std::min(std::max(diagonal_[i], min_diagonal_),
-max_diagonal_);
+diagonal_[i] =
+std::min(std::max(diagonal_[i], min_diagonal_), max_diagonal_);
}
diagonal_ = diagonal_.array().sqrt();
@@ -171,9 +171,8 @@ TrustRegionStrategy::Summary DoglegStrategy::ComputeStep(
// The gradient, the Gauss-Newton step, the Cauchy point,
// and all calculations involving the Jacobian have to
// be adjusted accordingly.
-void DoglegStrategy::ComputeGradient(
-SparseMatrix* jacobian,
-const double* residuals) {
+void DoglegStrategy::ComputeGradient(SparseMatrix* jacobian,
+const double* residuals) {
gradient_.setZero();
jacobian->LeftMultiply(residuals, gradient_.data());
gradient_.array() /= diagonal_.array();
@@ -187,8 +186,7 @@ void DoglegStrategy::ComputeCauchyPoint(SparseMatrix* jacobian) {
Jg.setZero();
// The Jacobian is scaled implicitly by computing J * (D^-1 * (D^-1 * g))
// instead of (J * D^-1) * (D^-1 * g).
-Vector scaled_gradient =
-(gradient_.array() / diagonal_.array()).matrix();
+Vector scaled_gradient = (gradient_.array() / diagonal_.array()).matrix();
jacobian->RightMultiply(scaled_gradient.data(), Jg.data());
alpha_ = gradient_.squaredNorm() / Jg.squaredNorm();
}
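
The implicit scaling noted in ComputeCauchyPoint above avoids ever forming the scaled Jacobian J * D^-1: the vector is divided by the diagonal twice and then multiplied by the unmodified J, which yields the same product. A standalone Eigen sketch of that equivalence (hypothetical names, not the Ceres code path):

#include <Eigen/Dense>
#include <iostream>

int main() {
  const int num_rows = 6;
  const int num_cols = 4;
  Eigen::MatrixXd J = Eigen::MatrixXd::Random(num_rows, num_cols);
  Eigen::VectorXd g = Eigen::VectorXd::Random(num_cols);
  Eigen::VectorXd diagonal = Eigen::VectorXd::Constant(num_cols, 2.0);  // scaling D

  // Implicit scaling: J * (D^-1 * (D^-1 * g)) -- two cheap vector divisions
  // and a single matrix-vector product with the unmodified J.
  Eigen::VectorXd scaled_g = (g.array() / diagonal.array()).matrix();
  Eigen::VectorXd Jg = J * (scaled_g.array() / diagonal.array()).matrix();

  // Explicit (more expensive) version: form J * D^-1 and multiply.
  Eigen::MatrixXd J_scaled = J * (1.0 / diagonal.array()).matrix().asDiagonal();
  Eigen::VectorXd Jg_explicit = J_scaled * scaled_g;

  // Same step length as alpha_ = |g|^2 / |Jg|^2 in the code above.
  const double alpha = g.squaredNorm() / Jg.squaredNorm();
  std::cout << "max |Jg - Jg_explicit| = "
            << (Jg - Jg_explicit).cwiseAbs().maxCoeff()
            << ", alpha = " << alpha << "\n";
  return 0;
}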
@@ -217,7 +215,7 @@ void DoglegStrategy::ComputeTraditionalDoglegStep(double* dogleg) {
// Case 2. The Cauchy point and the Gauss-Newton steps lie outside
// the trust region. Rescale the Cauchy point to the trust region
// and return.
if (gradient_norm * alpha_ >= radius_) {
dogleg_step = -(radius_ / gradient_norm) * gradient_;
dogleg_step_norm_ = radius_;
dogleg_step.array() /= diagonal_.array();
@@ -242,14 +240,12 @@ void DoglegStrategy::ComputeTraditionalDoglegStep(double* dogleg) {
// = alpha * -gradient' gauss_newton_step - alpha^2 |gradient|^2
const double c = b_dot_a - a_squared_norm;
const double d = sqrt(c * c + b_minus_a_squared_norm *
(pow(radius_, 2.0) - a_squared_norm));
-double beta =
-(c <= 0)
-? (d - c) / b_minus_a_squared_norm
-: (radius_ * radius_ - a_squared_norm) / (d + c);
-dogleg_step = (-alpha_ * (1.0 - beta)) * gradient_
-+ beta * gauss_newton_step_;
+double beta = (c <= 0) ? (d - c) / b_minus_a_squared_norm
+: (radius_ * radius_ - a_squared_norm) / (d + c);
+dogleg_step =
+(-alpha_ * (1.0 - beta)) * gradient_ + beta * gauss_newton_step_;
dogleg_step_norm_ = dogleg_step.norm();
dogleg_step.array() /= diagonal_.array();
VLOG(3) << "Dogleg step size: " << dogleg_step_norm_
@@ -345,13 +341,13 @@ void DoglegStrategy::ComputeSubspaceDoglegStep(double* dogleg) {
// correctly determined.
const double kCosineThreshold = 0.99;
const Vector2d grad_minimum = subspace_B_ * minimum + subspace_g_;
-const double cosine_angle = -minimum.dot(grad_minimum) /
-(minimum.norm() * grad_minimum.norm());
+const double cosine_angle =
+-minimum.dot(grad_minimum) / (minimum.norm() * grad_minimum.norm());
if (cosine_angle < kCosineThreshold) {
LOG(WARNING) << "First order optimality seems to be violated "
<< "in the subspace method!\n"
<< "Cosine of angle between x and B x + g is "
<< cosine_angle << ".\n"
<< "Cosine of angle between x and B x + g is " << cosine_angle
<< ".\n"
<< "Taking a regular dogleg step instead.\n"
<< "Please consider filing a bug report if this "
<< "happens frequently or consistently.\n";
@@ -423,15 +419,17 @@ Vector DoglegStrategy::MakePolynomialForBoundaryConstrainedProblem() const {
const double trB = subspace_B_.trace();
const double r2 = radius_ * radius_;
Matrix2d B_adj;
// clang-format off
B_adj << subspace_B_(1, 1) , -subspace_B_(0, 1),
-subspace_B_(1, 0) , subspace_B_(0, 0);
// clang-format on
Vector polynomial(5);
polynomial(0) = r2;
polynomial(1) = 2.0 * r2 * trB;
polynomial(2) = r2 * (trB * trB + 2.0 * detB) - subspace_g_.squaredNorm();
-polynomial(3) = -2.0 * (subspace_g_.transpose() * B_adj * subspace_g_
-- r2 * detB * trB);
+polynomial(3) =
+-2.0 * (subspace_g_.transpose() * B_adj * subspace_g_ - r2 * detB * trB);
polynomial(4) = r2 * detB * detB - (B_adj * subspace_g_).squaredNorm();
return polynomial;
@@ -565,10 +563,8 @@ LinearSolver::Summary DoglegStrategy::ComputeGaussNewtonStep(
// of Jx = -r and later set x = -y to avoid having to modify
// either jacobian or residuals.
InvalidateArray(n, gauss_newton_step_.data());
-linear_solver_summary = linear_solver_->Solve(jacobian,
-residuals,
-solve_options,
-gauss_newton_step_.data());
+linear_solver_summary = linear_solver_->Solve(
+jacobian, residuals, solve_options, gauss_newton_step_.data());
if (per_solve_options.dump_format_type == CONSOLE ||
(per_solve_options.dump_format_type != CONSOLE &&
@@ -641,9 +637,7 @@ void DoglegStrategy::StepIsInvalid() {
reuse_ = false;
}
-double DoglegStrategy::Radius() const {
-return radius_;
-}
+double DoglegStrategy::Radius() const { return radius_; }
bool DoglegStrategy::ComputeSubspaceModel(SparseMatrix* jacobian) {
// Compute an orthogonal basis for the subspace using QR decomposition.
@@ -701,8 +695,8 @@ bool DoglegStrategy::ComputeSubspaceModel(SparseMatrix* jacobian) {
subspace_g_ = subspace_basis_.transpose() * gradient_;
-Eigen::Matrix<double, 2, Eigen::Dynamic, Eigen::RowMajor>
-Jb(2, jacobian->num_rows());
+Eigen::Matrix<double, 2, Eigen::Dynamic, Eigen::RowMajor> Jb(
+2, jacobian->num_rows());
Jb.setZero();
Vector tmp;
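
Taken together, the hunks above implement the classic dogleg choice: take the Gauss-Newton step if it fits in the trust region, fall back to the scaled steepest-descent (Cauchy) step if even that lies outside, and otherwise walk along the segment between the two until it meets the boundary. A self-contained sketch of those three cases follows; it uses plain Eigen, ignores the elementwise diagonal scaling the real code applies, and is not the Ceres implementation itself.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

Eigen::VectorXd DoglegStep(const Eigen::VectorXd& gradient,
                           const Eigen::VectorXd& gauss_newton_step,
                           double alpha,
                           double radius) {
  // Case 1: the Gauss-Newton step is already inside the trust region.
  if (gauss_newton_step.norm() <= radius) {
    return gauss_newton_step;
  }
  // Case 2: even the Cauchy point lies outside; rescale the gradient step.
  const double gradient_norm = gradient.norm();
  if (gradient_norm * alpha >= radius) {
    return -(radius / gradient_norm) * gradient;
  }
  // Case 3: interpolate between the Cauchy point a and the Gauss-Newton step b,
  // choosing beta so that |a + beta * (b - a)| == radius.
  const Eigen::VectorXd a = -alpha * gradient;
  const Eigen::VectorXd& b = gauss_newton_step;
  const double b_minus_a_squared_norm = (b - a).squaredNorm();
  const double a_squared_norm = a.squaredNorm();
  const double c = a.dot(b) - a_squared_norm;  // == b_dot_a - |a|^2 above
  const double d = std::sqrt(
      c * c + b_minus_a_squared_norm * (radius * radius - a_squared_norm));
  // Same two numerically stable forms of the positive quadratic root as above.
  const double beta = (c <= 0) ? (d - c) / b_minus_a_squared_norm
                               : (radius * radius - a_squared_norm) / (d + c);
  return a + beta * (b - a);
}

int main() {
  Eigen::VectorXd g(2), gn(2);
  g << 1.0, 1.0;
  gn << -3.0, -4.0;  // |gn| = 5, outside a radius-2 region
  const Eigen::VectorXd step = DoglegStep(g, gn, /*alpha=*/0.5, /*radius=*/2.0);
  std::cout << "dogleg step: " << step.transpose()
            << "  |step| = " << step.norm() << "\n";  // |step| == radius
  return 0;
}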


@@ -31,6 +31,7 @@
#ifndef CERES_INTERNAL_DOGLEG_STRATEGY_H_
#define CERES_INTERNAL_DOGLEG_STRATEGY_H_
#include "ceres/internal/port.h"
#include "ceres/linear_solver.h"
#include "ceres/trust_region_strategy.h"
@@ -52,16 +53,16 @@ namespace internal {
// DoglegStrategy follows the approach by Shultz, Schnabel, Byrd.
// This finds the exact optimum over the two-dimensional subspace
// spanned by the two Dogleg vectors.
-class DoglegStrategy : public TrustRegionStrategy {
+class CERES_EXPORT_INTERNAL DoglegStrategy : public TrustRegionStrategy {
public:
explicit DoglegStrategy(const TrustRegionStrategy::Options& options);
virtual ~DoglegStrategy() {}
// TrustRegionStrategy interface
Summary ComputeStep(const PerSolveOptions& per_solve_options,
SparseMatrix* jacobian,
const double* residuals,
double* step) final;
void StepAccepted(double step_quality) final;
void StepRejected(double step_quality) final;
void StepIsInvalid();
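
The "two-dimensional subspace spanned by the two Dogleg vectors" mentioned above is built in ComputeSubspaceModel by orthonormalizing the gradient and the Gauss-Newton step with a QR decomposition and projecting the model into that basis. A rough standalone sketch of the construction (plain Eigen, hypothetical names; the real ComputeSubspaceModel additionally builds the 2x2 model matrix from the Jacobian, as the Jb matrix at the end of the previous file suggests):

#include <Eigen/Dense>
#include <iostream>

int main() {
  const int n = 5;
  Eigen::VectorXd gradient = Eigen::VectorXd::Random(n);
  Eigen::VectorXd gauss_newton_step = Eigen::VectorXd::Random(n);

  // Stack the two spanning vectors as columns and take the thin Q factor.
  Eigen::MatrixXd basis_vectors(n, 2);
  basis_vectors.col(0) = gradient;
  basis_vectors.col(1) = gauss_newton_step;
  Eigen::HouseholderQR<Eigen::MatrixXd> qr(basis_vectors);
  Eigen::MatrixXd subspace_basis =
      qr.householderQ() * Eigen::MatrixXd::Identity(n, 2);  // n x 2, orthonormal

  // Project the gradient into the subspace, mirroring
  // subspace_g_ = subspace_basis_.transpose() * gradient_ above.
  Eigen::Vector2d subspace_g = subspace_basis.transpose() * gradient;
  std::cout << "basis^T * basis =\n"
            << subspace_basis.transpose() * subspace_basis << "\n"
            << "subspace gradient: " << subspace_g.transpose() << "\n";
  return 0;
}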

Some files were not shown because too many files have changed in this diff.