
Compare commits


15 Commits

SHA1 Message Date
ca7079a44e Merge branch 'master' into outliner-cpp-refactor 2020-11-11 17:35:26 +01:00
fdd9cb713e Merge branch 'master' into outliner-cpp-refactor 2020-11-09 13:56:13 +01:00
8c4f121492 Cleanup: Split header for Outliner tree building into C and C++ headers
It's odd to include a C++ header (".hh") in C code; we should avoid that. All
of the Outliner code should be moved to C++, so I don't expect this C header to
stay for long.
2020-11-09 13:54:30 +01:00
c796364f05 Cleanup: Rename Outliner "tree-view" types to "tree-display" & update comments
"View" leads to weird names like `TreeViewViewLayer` and after all they are
specific to what we call a "display mode", so "display" is more appropriate.

Also add, update and correct comments.
2020-11-09 13:37:28 +01:00
4a572f06e9 Cleanup: Follow C++ code style for new Outliner building code
* Use C++17 nested namespaces.
* Use `_` suffix rather than prefix for private member variables.

Also: Simplify code visually in `tree_view.cc` with `using namespace`.
2020-11-09 12:21:51 +01:00
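For illustration only: a minimal sketch of the two conventions named in the commit above (C++17 nested namespace declarations and a `_` suffix on private members). The class and member names are invented, not the actual Outliner code.

```cpp
namespace blender::ed::outliner { /* C++17 nested namespace declaration. */

/* Hypothetical example class. */
class TreeDisplayExample {
 public:
  void expand(int levels)
  {
    expanded_levels_ = levels;
  }

 private:
  /* Private data members get a `_` suffix rather than a prefix. */
  int expanded_levels_ = 0;
};

}  // namespace blender::ed::outliner
```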
fa91f90287 Cleanup: General cleanup of Outliner Blender File display mode building
* Turn functions into member functions (makes API for a type more obvious &
  local, allows implicitly sharing data through member variables, enables order
  independent definition of functions, allows more natural language for
  function names because of the obvious context).
* Prefer references over pointers for passing by reference (makes clear that
  NULL is not a valid value and that the current scope is not the owner).
* Reduce indentation levels, use `continue` in loops to ensure preconditions
  are met.
* Add asserts for sanity checks.
2020-11-09 00:37:41 +01:00
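A short hypothetical sketch of the patterns listed above (member functions, pass-by-reference, `continue` for preconditions, asserts); all names are invented for illustration and do not come from the actual Outliner code.

```cpp
#include <cassert>
#include <vector>

namespace blender::ed::outliner {

/* Hypothetical builder type; the real code builds Outliner tree elements. */
class ExampleTreeBuilder {
 public:
  /* A member function keeps the context obvious and can share state through
   * member variables instead of long parameter lists. The reference parameter
   * makes clear that null is not a valid value and ownership stays outside. */
  void add_items(const std::vector<int> &items)
  {
    for (const int item : items) {
      /* Enforce preconditions with `continue` instead of extra nesting. */
      if (item < 0) {
        continue;
      }
      assert(item >= 0); /* Sanity check. */
      item_count_++;
    }
  }

 private:
  int item_count_ = 0;
};

}  // namespace blender::ed::outliner
```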
2d60f64960 UI Code Quality: Convert Outliner Blender File mode to new tree building design
See https://developer.blender.org/D9499.

Also:
* Add `space_outliner/tree/common.cc` for functions shared between display
  modes.
* Had to add a cast to `ListBaseWrapper` to make it work with ID lists.
* Cleanup: Remove internal `Tree` alias for `ListBase`. That was more confusing
  than helpful.
2020-11-07 22:46:51 +01:00
fc6338d863 Cleanup: Put Outliner C++ namespace into blender::ed namespace, add comments
Also remove unnecessary forward declaration.
2020-11-07 14:23:06 +01:00
d962ca7387 Fix possible null-pointer dereference in new Outliner tree building code 2020-11-07 01:24:18 +01:00
e1a361d287 Cleanup: Remove redundant parameter from new Outliner tree building code 2020-11-07 01:21:47 +01:00
8b787089b3 Merge branch 'master' into outliner-cpp-refactor 2020-11-07 01:09:59 +01:00
126b6f8cc7 Cleanup: Comments & style improvements for new Outliner C++ code
* Add comments to explain the design ideas better.
* Follow code style guide for class layout.
* Avoid uninitialized value after construction (general good practice).
2020-11-07 01:04:17 +01:00
86fe1ec968 UI Code Quality: Use C++ data-structures for Outliner object hierarchy building
* Use `blender::Map` over `GHash`
* Use `blender::Vector` over allocated `ListBase *`

Benefits:
* Significantly reduces the number of heap allocations in large trees (e.g.
  from O(n) to O(log(n)), where n is the number of objects).
* Higher type safety (no `void *`, virtually no casts).
* More optimized (e.g. small buffer optimization).
* More practicable, const-correct APIs with well-defined exception behavior.

Code generally becomes more readable (fewer lines of code, less boilerplate,
more logic-focused APIs because of greater language flexibility).
2020-11-07 00:15:09 +01:00
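To make the contrast concrete, here is a small self-contained sketch of what typed containers buy over `GHash`/`ListBase` of `void *`; `std::unordered_map` and `std::vector` stand in for `blender::Map` and `blender::Vector`, and the object type is invented for illustration.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

/* Hypothetical object type standing in for Blender's Object/ID structs. */
struct ExampleObject {
  std::string name;
  ExampleObject *parent = nullptr;
};

/* Group objects by their parent. With typed containers there are no `void *`
 * values and no casts, and child lists grow without one heap allocation per
 * element; `blender::Vector` additionally adds a small inline buffer and
 * `blender::Map` offers a similarly typed interface. */
std::unordered_map<ExampleObject *, std::vector<ExampleObject *>> group_by_parent(
    std::vector<ExampleObject> &objects)
{
  std::unordered_map<ExampleObject *, std::vector<ExampleObject *>> children_by_parent;
  for (ExampleObject &object : objects) {
    children_by_parent[object.parent].push_back(&object);
  }
  return children_by_parent;
}
```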
6f87489536 UI Code Quality: General refactor of Outliner View Layer display mode creation
* Turn functions into member functions (makes API for a type more obvious &
  local, allows implicitly sharing data through member variables, enables order
  independent definition of functions, allows more natural language for
  function names because of the obvious context).
* Move important variables to classes rather than passing around all the time
  (shorter, more task-focused code, localizes important data names).
* Add helper class for adding object children sub-trees (smaller, more focused
  units are easier to reason about, have higher coherence, better testability,
  can manage own resources easily with RAII).
* Use C++ iterators over C-macros (arguably more readable; fewer macros are
  generally preferred).
* Add doxygen groups (visually emphasizes the coherence of code sections,
  provides a place for higher-level comments on sections).
* Prefer references over pointers for passing by reference (makes clear that
  NULL is not a valid value and that the current scope is not the owner).
2020-11-06 22:55:36 +01:00
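As a hedged illustration of the "C++ iterators over C-macros" point: a minimal wrapper, invented for this example, playing the same role as the `ListBaseWrapper` mentioned in a commit above.

```cpp
/* Minimal intrusive linked-list node, standing in for DNA's Link/ListBase. */
struct ExampleLink {
  ExampleLink *next = nullptr;
};

/* A tiny iterator wrapper so call sites can use range-based for loops
 * instead of a FOREACH-style macro. */
template<typename T> class ExampleListWrapper {
 public:
  explicit ExampleListWrapper(ExampleLink *first) : first_(first) {}

  class Iterator {
   public:
    explicit Iterator(ExampleLink *link) : link_(link) {}
    T *operator*() const { return static_cast<T *>(link_); }
    Iterator &operator++() { link_ = link_->next; return *this; }
    bool operator!=(const Iterator &other) const { return link_ != other.link_; }

   private:
    ExampleLink *link_;
  };

  Iterator begin() const { return Iterator(first_); }
  Iterator end() const { return Iterator(nullptr); }

 private:
  ExampleLink *first_;
};
```

A call site can then write `for (MyNode *node : ExampleListWrapper<MyNode>(first_link)) { ... }` instead of a macro loop, assuming `MyNode` derives from the link type.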
cf94571762 UI Code Quality: Start refactoring Outliner tree creation (using C++)
The Outliner tree creation was very messy and hard to follow, with hardcoded
display-type checks scattered across many places.
This introduces a new abstraction "tree-view" to help constructing and managing
the tree for the different display types (View Layer, Scene, Blender file,
etc.).

Idea is to have an abstract base class to define an interface
(`AbstractTreeView`), and then subclasses with the implementation of each
display type (e.g. `TreeViewViewLayer`, `TreeViewBlenderFile`, etc). The
tree-viewer is kept alive until tree-rebuild as runtime data of the space, so
that further queries based on the display type can be executed (e.g. "does the
view support selection syncing?", "does it support restriction toggle
columns?", etc.).

I may still change the names a bit, not sure yet if "tree-view" is the right
term for this helper.
2020-11-06 20:54:20 +01:00
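A rough sketch of the design described above. `AbstractTreeView` and `TreeViewViewLayer` are the names from the message; the member functions and types are simplified stand-ins, not the real interface.

```cpp
#include <memory>

struct TreeElement; /* Stand-in for the Outliner's tree element type. */

/* Interface implemented once per display mode; the member functions here are
 * invented for this sketch. */
class AbstractTreeView {
 public:
  virtual ~AbstractTreeView() = default;

  /* Build the tree for the display mode this view implements. */
  virtual TreeElement *build_tree() = 0;

  /* Display-mode dependent queries that can be answered as long as the view
   * is kept alive as runtime data of the space. */
  virtual bool supports_selection_syncing() const
  {
    return true;
  }
};

class TreeViewViewLayer : public AbstractTreeView {
 public:
  TreeElement *build_tree() override
  {
    /* Build the View Layer hierarchy here. */
    return nullptr;
  }
};

std::unique_ptr<AbstractTreeView> tree_view_create_for_display_mode(/* ... */)
{
  return std::make_unique<TreeViewViewLayer>();
}
```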
2301 changed files with 42759 additions and 78276 deletions

View File

@@ -29,19 +29,23 @@ Checks: >
-bugprone-sizeof-expression,
-bugprone-integer-division,
-bugprone-exception-escape,
-bugprone-redundant-branch-condition,
modernize-*,
-modernize-use-auto,
-modernize-use-trailing-return-type,
-modernize-deprecated-headers,
-modernize-avoid-c-arrays,
-modernize-use-equals-default,
-modernize-use-nodiscard,
-modernize-use-using,
-modernize-loop-convert,
-modernize-pass-by-value,
-modernize-use-default-member-init,
-modernize-raw-string-literal,
-modernize-avoid-bind,
-modernize-use-override,
-modernize-use-transparent-functors,
WarningsAsErrors: '*'

View File

@@ -91,79 +91,3 @@ c42a6b77b52560d257279de2cb624b4ef2c0d24c
# Cleanup: use doxy sections for imbuf
c207f7c22e1439e0b285fba5d2c072bdae23f981
# Cleanup: Clang-Tidy, modernize-use-bool-literals
af35ada2f3fa8da4d46b3a71de724d353d716820
# Cleanup: Use nullptr everywhere in fluid code
311031ecd03dbfbf43e1df672a395f24b2e7d4d3
# Cleanup: Clang-Tidy, modernize-redundant-void-arg
a331d5c99299c4514ca33c843b1c79b872f2728d
# Cleanup: Clang-Tidy modernize-use-nullptr
16732def37c5a66f3ea28dbe247b09cc6bca6677
# Cleanup: Clang-tidy, modernize-concat-nested-namespaces
4525049aa0cf818f6483dce589ac9791eb562338
# Cleanup: Clang-tidy else-after-return
ae342ed4511cf2e144dcd27ce2c635d3d536f9ad
# Cleanup: Clang-Tidy, readability-redundant-member-init
190170d4cc92ff34abe1744a10474ac4f1074086
# Cleanup: use 'filepath' instead of 'name' for ImBuf utilities
99f56b4c16323f96c0cbf54e392fb509fcac5bda
# Cleanup: clang-format
c4d8f6a4a8ddc29ed27311ed7578b3c8c31399d2
b5d310b569e07a937798a2d38539cfd290149f1c
8c846cccd6bdfd3e90a695fabbf05f53e5466a57
40d4a4cb1a6b4c3c2a486e8f2868f547530e0811
4eac03d821fa17546f562485f7d073813a5e5943
# Cleanup: use preprocessor version check for PyTypeObject declaration
cd9acfed4f7674b84be965d469a367aef96f8af3
# Cycles: fix compilation of OSL shaders following API change
b980cd163a9d5d77eeffc2e353333e739fa9e719
# Cleanup: clang-tidy suppress warnings for PyTypeObject.tp_print
efd71aad4f22ec0073d80b8dd296015d3f395aa8
# Cleanup: fix wrong merge, remove extra unique_ptr.
6507449e54a167c63a72229e4d0119dd2af68ae5
# Cleanup: fix some clang tidy issues
525a042c5c7513c41240b118acca002f6c60cc12
# Fix T82520: error building freestyle with Python3.8
e118426e4695a97d67e65d69677f3c4e2db50a56
# Cleanup: Clang-tidy, readability-else-after-return
7be47dadea5066ae095c644e0b4f1f10d75f5ab3
# Cleanup: Add `r_` to return parameter
45dca05b1cd2a5ead59144c93d790fdfe7c35ee6
# Cleanup: Typo in `print_default_info` function name.
41a73909dec716642f044e60b40a28335c9fdb10
# Cleanup: Reduce indentation
1cc3a0e2cf73a5ff4f9e0a7f5338eda77266b300
# Build-system: Force C linkage for all DNA type headers
ad4b7741dba45a2be210942c18af6b6e4438f129
# Cleanup: Move function to proper section
c126e27cdc8b28365a9d5f9fafc4d521d1eb83df
# Cleanup: remove break after return statements
bbdfeb751e16d939482d2e4b95c4d470f53f18a5
# Cleanup: clang-tidy
af013ff76feef7e8b8ba642279c62a5dc275d59f
# Cleanup: Make panel type flag names more clear
9d28353b525ecfbcca1501be72e4276dfb2bbc2a

View File

@@ -178,7 +178,6 @@ mark_as_advanced(BUILDINFO_OVERRIDE_TIME)
option(WITH_IK_ITASC "Enable ITASC IK solver (only disable for development & for incompatible C++ compilers)" ON)
option(WITH_IK_SOLVER "Enable Legacy IK solver (only disable for development)" ON)
option(WITH_FFTW3 "Enable FFTW3 support (Used for smoke, ocean sim, and audio effects)" ON)
option(WITH_PUGIXML "Enable PugiXML support (Used for OpenImageIO, Grease Pencil SVG export)" ON)
option(WITH_BULLET "Enable Bullet (Physics Engine)" ON)
option(WITH_SYSTEM_BULLET "Use the systems bullet library (currently unsupported due to missing features in upstream!)" )
mark_as_advanced(WITH_SYSTEM_BULLET)
@@ -347,21 +346,16 @@ if(UNIX AND NOT APPLE)
endif()
option(WITH_PYTHON_INSTALL "Copy system python into the blender install folder" ON)
if((WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE) OR WITH_MOD_FLUID)
option(WITH_PYTHON_NUMPY "Include NumPy in Blender (used by Audaspace and Mantaflow)" ON)
endif()
if(WIN32 OR APPLE)
# Windows and macOS have this bundled with Python libraries.
elseif(WITH_PYTHON_INSTALL OR WITH_PYTHON_NUMPY)
elseif(WITH_PYTHON_INSTALL OR (WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE))
set(PYTHON_NUMPY_PATH "" CACHE PATH "Path to python site-packages or dist-packages containing 'numpy' module")
mark_as_advanced(PYTHON_NUMPY_PATH)
set(PYTHON_NUMPY_INCLUDE_DIRS "" CACHE PATH "Path to the include directory of the NumPy module")
set(PYTHON_NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_PATH}/numpy/core/include CACHE PATH "Path to the include directory of the numpy module")
mark_as_advanced(PYTHON_NUMPY_INCLUDE_DIRS)
endif()
if(WITH_PYTHON_INSTALL)
option(WITH_PYTHON_INSTALL_NUMPY "Copy system NumPy into the blender install folder" ON)
option(WITH_PYTHON_INSTALL_NUMPY "Copy system numpy into the blender install folder" ON)
if(UNIX AND NOT APPLE)
option(WITH_PYTHON_INSTALL_REQUESTS "Copy system requests into the blender install folder" ON)
@@ -383,7 +377,6 @@ option(WITH_CYCLES_CUDA_BINARIES "Build Cycles CUDA binaries" OFF)
option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
set(CYCLES_TEST_DEVICES CPU CACHE STRING "Run regression tests on the specified device types (CPU CUDA OPTIX OPENCL)" )
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75 CACHE STRING "CUDA architectures to build binaries for")
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
unset(PLATFORM_DEFAULT)
@@ -432,8 +425,8 @@ mark_as_advanced(WITH_CXX_GUARDEDALLOC)
option(WITH_ASSERT_ABORT "Call abort() when raising an assertion through BLI_assert()" ON)
mark_as_advanced(WITH_ASSERT_ABORT)
if((UNIX AND NOT APPLE) OR (CMAKE_GENERATOR MATCHES "^Visual Studio.+"))
option(WITH_CLANG_TIDY "Use Clang Tidy to analyze the source code (only enable for development on Linux using Clang, or Windows using the Visual Studio IDE)" OFF)
if(UNIX AND NOT APPLE)
option(WITH_CLANG_TIDY "Use Clang Tidy to analyze the source code (only enable for development on Linux using Clang)" OFF)
mark_as_advanced(WITH_CLANG_TIDY)
endif()
@@ -610,11 +603,6 @@ if(WIN32)
endif()
if(UNIX)
# See WITH_WINDOWS_SCCACHE for Windows.
option(WITH_COMPILER_CCACHE "Use ccache to improve rebuild times (Works with Ninja, Makefiles and Xcode)" OFF)
endif()
# The following only works with the Ninja generator in CMake >= 3.0.
if("${CMAKE_GENERATOR}" MATCHES "Ninja")
option(WITH_NINJA_POOL_JOBS
@@ -709,8 +697,6 @@ set_and_warn_dependency(WITH_BOOST WITH_OPENCOLORIO OFF)
set_and_warn_dependency(WITH_BOOST WITH_QUADRIFLOW OFF)
set_and_warn_dependency(WITH_BOOST WITH_USD OFF)
set_and_warn_dependency(WITH_BOOST WITH_ALEMBIC OFF)
set_and_warn_dependency(WITH_PUGIXML WITH_CYCLES_OSL OFF)
set_and_warn_dependency(WITH_PUGIXML WITH_OPENIMAGEIO OFF)
if(WITH_BOOST AND NOT (WITH_CYCLES OR WITH_OPENIMAGEIO OR WITH_INTERNATIONAL OR
WITH_OPENVDB OR WITH_OPENCOLORIO OR WITH_USD OR WITH_ALEMBIC))
@@ -889,11 +875,9 @@ if(NOT CMAKE_BUILD_TYPE MATCHES "Release")
if(APPLE AND COMPILER_ASAN_LIBRARY)
string(REPLACE " " ";" _list_COMPILER_ASAN_CFLAGS ${COMPILER_ASAN_CFLAGS})
set(_is_CONFIG_DEBUG "$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>")
add_compile_options("$<${_is_CONFIG_DEBUG}:${_list_COMPILER_ASAN_CFLAGS}>")
add_link_options("$<${_is_CONFIG_DEBUG}:-fno-omit-frame-pointer;-fsanitize=address>")
add_compile_options("$<$<NOT:$<CONFIG:Release>>:${_list_COMPILER_ASAN_CFLAGS}>")
add_link_options("$<$<NOT:$<CONFIG:Release>>:-fno-omit-frame-pointer;-fsanitize=address>")
unset(_list_COMPILER_ASAN_CFLAGS)
unset(_is_CONFIG_DEBUG)
elseif(COMPILER_ASAN_LIBRARY)
set(PLATFORM_LINKLIBS "${PLATFORM_LINKLIBS};${COMPILER_ASAN_LIBRARY}")
set(PLATFORM_LINKFLAGS "${COMPILER_ASAN_LIBRARY} ${COMPILER_ASAN_LINKER_FLAGS}")
@@ -1496,12 +1480,10 @@ if(CMAKE_COMPILER_IS_GNUCC)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INT_IN_BOOL_CONTEXT -Wno-int-in-bool-context)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_FORMAT -Wno-format)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_SWITCH -Wno-switch)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CLASS_MEMACCESS -Wno-class-memaccess)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "7.0"))
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_IMPLICIT_FALLTHROUGH -Wno-implicit-fallthrough)
@@ -1556,7 +1538,6 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CXX11_NARROWING -Wno-c++11-narrowing)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_NON_VIRTUAL_DTOR -Wno-non-virtual-dtor)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_MACROS -Wno-unused-macros)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_REORDER -Wno-reorder)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)
@@ -1631,16 +1612,19 @@ if(WITH_PYTHON)
if(WIN32 OR APPLE)
# Windows and macOS have this bundled with Python libraries.
elseif((WITH_PYTHON_INSTALL AND WITH_PYTHON_INSTALL_NUMPY) OR WITH_PYTHON_NUMPY)
elseif((WITH_PYTHON_INSTALL AND WITH_PYTHON_INSTALL_NUMPY) OR (WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE))
if(("${PYTHON_NUMPY_PATH}" STREQUAL "") OR (${PYTHON_NUMPY_PATH} MATCHES NOTFOUND))
find_python_package(numpy "core/include")
find_python_package(numpy)
unset(PYTHON_NUMPY_INCLUDE_DIRS CACHE)
set(PYTHON_NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_PATH}/numpy/core/include CACHE PATH "Path to the include directory of the numpy module")
mark_as_advanced(PYTHON_NUMPY_INCLUDE_DIRS)
endif()
endif()
if(WIN32 OR APPLE)
# pass, we have this in lib/python/site-packages
elseif(WITH_PYTHON_INSTALL_REQUESTS)
find_python_package(requests "")
find_python_package(requests)
endif()
endif()

View File

@@ -41,7 +41,6 @@ Convenience Targets
* developer: Enable faster builds, error checking and tests, recommended for developers.
* config: Run cmake configuration tool to set build options.
* ninja: Use ninja build tool for faster builds.
* ccache: Use ccache for faster rebuilds.
Note: passing the argument 'BUILD_DIR=path' when calling make will override the default build dir.
Note: passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments.
@@ -242,10 +241,6 @@ ifneq "$(findstring developer, $(MAKECMDGOALS))" ""
CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/blender_developer.cmake" $(CMAKE_CONFIG_ARGS)
endif
ifneq "$(findstring ccache, $(MAKECMDGOALS))" ""
CMAKE_CONFIG_ARGS:=-DWITH_COMPILER_CCACHE=YES $(CMAKE_CONFIG_ARGS)
endif
# -----------------------------------------------------------------------------
# build tool
@@ -345,7 +340,6 @@ headless: all
bpy: all
developer: all
ninja: all
ccache: all
# -----------------------------------------------------------------------------
# Build dependencies

View File

@@ -94,7 +94,11 @@ include(cmake/usd.cmake)
include(cmake/potrace.cmake)
# Boost needs to be included after python.cmake due to the PYTHON_BINARY variable being needed.
include(cmake/boost.cmake)
include(cmake/pugixml.cmake)
if(UNIX)
# Rely on PugiXML compiled with OpenImageIO
else()
include(cmake/pugixml.cmake)
endif()
if((NOT APPLE) OR ("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "x86_64"))
include(cmake/ispc.cmake)
include(cmake/openimagedenoise.cmake)

View File

@@ -42,13 +42,7 @@ if(UNIX)
endforeach()
if(APPLE)
# Homebrew has different default locations for ARM and Intel macOS.
if("${CMAKE_HOST_SYSTEM_PROCESSOR}" STREQUAL "arm64")
set(HOMEBREW_LOCATION "/opt/homebrew")
else()
set(HOMEBREW_LOCATION "/usr/local")
endif()
if(NOT EXISTS "${HOMEBREW_LOCATION}/opt/bison/bin/bison")
if(NOT EXISTS "/usr/local/opt/bison/bin/bison")
string(APPEND _software_missing " bison")
endif()
endif()

View File

@@ -17,14 +17,13 @@
# ***** END GPL LICENSE BLOCK *****
set(CLANG_EXTRA_ARGS
-DLLVM_DIR="${LIBDIR}/llvm/lib/cmake/llvm/"
-DCLANG_PATH_TO_LLVM_SOURCE=${BUILD_DIR}/ll/src/ll
-DCLANG_PATH_TO_LLVM_BUILD=${LIBDIR}/llvm
-DLLVM_USE_CRT_RELEASE=MD
-DLLVM_USE_CRT_DEBUG=MDd
-DLLVM_CONFIG=${LIBDIR}/llvm/bin/llvm-config
)
set(BUILD_CLANG_TOOLS OFF)
if(WIN32)
set(CLANG_GENERATOR "Ninja")
else()
@@ -32,32 +31,11 @@ else()
endif()
if(APPLE)
set(BUILD_CLANG_TOOLS ON)
set(CLANG_EXTRA_ARGS ${CLANG_EXTRA_ARGS}
-DLIBXML2_LIBRARY=${LIBDIR}/xml2/lib/libxml2.a
)
endif()
if(BUILD_CLANG_TOOLS)
# ExternalProject_Add does not allow multiple tarballs to be
# downloaded. Work around this by having an empty build action
# for the extra tools, and referring the clang build to the location
# of the clang-tools-extra source.
ExternalProject_Add(external_clang_tools
URL ${CLANG_TOOLS_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH MD5=${CLANG_TOOLS_HASH}
INSTALL_DIR ${LIBDIR}/clang_tools
PREFIX ${BUILD_DIR}/clang_tools
CONFIGURE_COMMAND echo "."
BUILD_COMMAND echo "."
INSTALL_COMMAND echo "."
)
list(APPEND CLANG_EXTRA_ARGS
-DLLVM_EXTERNAL_CLANG_TOOLS_EXTRA_SOURCE_DIR=${BUILD_DIR}/clang_tools/src/external_clang_tools/
)
endif()
ExternalProject_Add(external_clang
URL ${CLANG_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
@@ -87,14 +65,6 @@ add_dependencies(
ll
)
if(BUILD_CLANG_TOOLS)
# `external_clang_tools` is for downloading the source, not compiling it.
add_dependencies(
external_clang
external_clang_tools
)
endif()
# We currently do not build libxml2 on Windows.
if(NOT WIN32)
add_dependencies(

View File

@@ -98,10 +98,6 @@ harvest(jpg/include jpeg/include "*.h")
harvest(jpg/lib jpeg/lib "libjpeg.a")
harvest(lame/lib ffmpeg/lib "*.a")
harvest(clang/bin llvm/bin "clang-format")
if(BUILD_CLANG_TOOLS)
harvest(clang/bin llvm/bin "clang-tidy")
harvest(clang/share/clang llvm/share "run-clang-tidy.py")
endif()
harvest(clang/include llvm/include "*")
harvest(llvm/include llvm/include "*")
harvest(llvm/bin llvm/bin "llvm-config")
@@ -160,8 +156,6 @@ harvest(osl/lib osl/lib "*.a")
harvest(osl/shaders osl/shaders "*.h")
harvest(png/include png/include "*.h")
harvest(png/lib png/lib "*.a")
harvest(pugixml/include pugixml/include "*.hpp")
harvest(pugixml/lib pugixml/lib "*.a")
harvest(python/bin python/bin "python${PYTHON_SHORT_VERSION}m")
harvest(python/include python/include "*h")
harvest(python/lib python/lib "*")

View File

@@ -25,13 +25,8 @@ if(WIN32)
elseif(APPLE)
# Use bison installed via Homebrew.
# The one which comes which Xcode toolset is too old.
if("${CMAKE_HOST_SYSTEM_PROCESSOR}" STREQUAL "arm64")
set(HOMEBREW_LOCATION "/opt/homebrew")
else()
set(HOMEBREW_LOCATION "/usr/local")
endif()
set(ISPC_EXTRA_ARGS_APPLE
-DBISON_EXECUTABLE=${HOMEBREW_LOCATION}/opt/bison/bin/bison
-DBISON_EXECUTABLE=/usr/local/opt/bison/bin/bison
)
elseif(UNIX)
set(ISPC_EXTRA_ARGS_UNIX

View File

@@ -112,9 +112,6 @@ set(OPENIMAGEIO_EXTRA_ARGS
-DOPENEXR_IEX_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}Iex${OPENEXR_VERSION_POSTFIX}${LIBEXT}
-DOPENEXR_ILMIMF_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}IlmImf${OPENEXR_VERSION_POSTFIX}${LIBEXT}
-DSTOP_ON_WARNING=OFF
-DUSE_EXTERNAL_PUGIXML=ON
-DPUGIXML_LIBRARY=${LIBDIR}/pugixml/lib/${LIBPREFIX}pugixml${LIBEXT}
-DPUGIXML_INCLUDE_DIR=${LIBDIR}/pugixml/include/
${WEBP_FLAGS}
${OIIO_SIMD_FLAGS}
)
@@ -137,7 +134,6 @@ add_dependencies(
external_jpeg
external_boost
external_tiff
external_pugixml
external_openjpeg${OPENJPEG_POSTFIX}
${WEBP_DEP}
)

View File

@@ -78,10 +78,14 @@ set(OSL_EXTRA_ARGS
-DINSTALL_DOCS=OFF
${OSL_SIMD_FLAGS}
-DPARTIO_LIBRARIES=
-DPUGIXML_HOME=${LIBDIR}/pugixml
)
if(APPLE)
if(WIN32)
set(OSL_EXTRA_ARGS
${OSL_EXTRA_ARGS}
-DPUGIXML_HOME=${LIBDIR}/pugixml
)
elseif(APPLE)
# Make symbol hiding consistent with OIIO which defaults to OFF,
# avoids linker warnings on macOS
set(OSL_EXTRA_ARGS
@@ -110,9 +114,17 @@ add_dependencies(
external_zlib
external_flexbison
external_openimageio
external_pugixml
)
if(UNIX)
# Rely on PugiXML compiled with OpenImageIO
else()
add_dependencies(
external_osl
external_pugixml
)
endif()
if(WIN32)
if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_osl after_install

View File

@@ -30,13 +30,13 @@ ExternalProject_Add(external_pugixml
if(WIN32)
if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_pugixml after_install
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/pugixml ${HARVEST_TARGET}/pugixml
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/osl/lib/pugixml.lib
DEPENDEES install
)
endif()
if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(external_pugixml after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/pugixml/lib/pugixml_d.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/osl/lib/pugixml_d.lib
DEPENDEES install
)
endif()

View File

@@ -120,9 +120,6 @@ set(LLVM_HASH 31eb9ce73dd2a0f8dcab8319fb03f8fc)
set(CLANG_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/clang-${LLVM_VERSION}.src.tar.xz)
set(CLANG_HASH 13468e4a44940efef1b75e8641752f90)
set(CLANG_TOOLS_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/clang-tools-extra-${LLVM_VERSION}.src.tar.xz)
set(CLANG_TOOLS_HASH c76293870b564c6a7968622b475b7646)
set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/openmp-${LLVM_VERSION}.src.tar.xz)
set(OPENMP_HASH 6eade16057edbdecb3c4eef9daa2bfcf)

View File

@@ -2086,7 +2086,7 @@ compile_OIIO() {
cmake_d="$cmake_d -D USE_OPENCV=OFF"
cmake_d="$cmake_d -D BUILD_TESTING=OFF"
cmake_d="$cmake_d -D OIIO_BUILD_TESTS=OFF"
cmake_d="$cmake_d -D OIIO_BUILD_TOOLS=ON"
cmake_d="$cmake_d -D OIIO_BUILD_TOOLS=OFF"
cmake_d="$cmake_d -D TXT2MAN="
#cmake_d="$cmake_d -D CMAKE_EXPORT_COMPILE_COMMANDS=ON"
#cmake_d="$cmake_d -D CMAKE_VERBOSE_MAKEFILE=ON"
@@ -4072,7 +4072,7 @@ install_DEB() {
else
check_package_version_ge_lt_DEB libopenimageio-dev $OIIO_VERSION_MIN $OIIO_VERSION_MAX
if [ $? -eq 0 -a "$_with_built_openexr" = false ]; then
install_packages_DEB libopenimageio-dev openimageio-tools
install_packages_DEB libopenimageio-dev
clean_OIIO
else
compile_OIIO
@@ -4714,13 +4714,13 @@ install_RPM() {
INFO "Forced OpenImageIO building, as requested..."
compile_OIIO
else
check_package_version_ge_lt_RPM OpenImageIO-devel $OIIO_VERSION_MIN $OIIO_VERSION_MAX
if [ $? -eq 0 -a $_with_built_openexr == false ]; then
install_packages_RPM OpenImageIO-devel OpenImageIO-utils
clean_OIIO
else
#check_package_version_ge_lt_RPM OpenImageIO-devel $OIIO_VERSION_MIN $OIIO_VERSION_MAX
#if [ $? -eq 0 -a $_with_built_openexr == false ]; then
# install_packages_RPM OpenImageIO-devel
# clean_OIIO
#else
compile_OIIO
fi
#fi
fi

View File

@@ -34,24 +34,3 @@ diff -Naur orig/src/include/OpenImageIO/platform.h external_openimageio/src/incl
# include <windows.h>
#endif
diff -Naur orig/src/libutil/ustring.cpp external_openimageio/src/libutil/ustring.cpp
--- orig/src/libutil/ustring.cpp 2020-05-11 05:43:52.000000000 +0200
+++ external_openimageio/src/libutil/ustring.cpp 2020-11-26 12:06:08.000000000 +0100
@@ -337,6 +337,8 @@
// the std::string to make it point to our chars! In such a case, the
// destructor will be careful not to allow a deallocation.
+ // Disable internal std::string for Apple silicon based Macs
+#if !(defined(__APPLE__) && defined(__arm64__))
#if defined(__GNUC__) && !defined(_LIBCPP_VERSION) \
&& defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
// NEW gcc ABI
@@ -382,7 +384,7 @@
return;
}
#endif
-
+#endif
// Remaining cases - just assign the internal string. This may result
// in double allocation for the chars. If you care about that, do
// something special for your platform, much like we did for gcc and

View File

@@ -18,72 +18,12 @@
# <pep8 compliant>
import dataclasses
import json
import os
from pathlib import Path
from typing import Optional
import codesign.util as util
class ArchiveStateError(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)
@dataclasses.dataclass
class ArchiveState:
"""
Additional information (state) of the archive
Includes information like expected file size of the archive file in the case
the archive file is expected to be successfully created.
If the archive can not be created, this state will contain error message
indicating details of error.
"""
# Size in bytes of the corresponding archive.
file_size: Optional[int] = None
# Non-empty value indicates that error has happenned.
error_message: str = ''
def has_error(self) -> bool:
"""
Check whether the archive is at error state
"""
return self.error_message
def serialize_to_string(self) -> str:
payload = dataclasses.asdict(self)
return json.dumps(payload, sort_keys=True, indent=4)
def serialize_to_file(self, filepath: Path) -> None:
string = self.serialize_to_string()
filepath.write_text(string)
@classmethod
def deserialize_from_string(cls, string: str) -> 'ArchiveState':
try:
object_as_dict = json.loads(string)
except json.decoder.JSONDecodeError:
raise ArchiveStateError('Error parsing JSON')
return cls(**object_as_dict)
@classmethod
def deserialize_from_file(cls, filepath: Path):
string = filepath.read_text()
return cls.deserialize_from_string(string)
class ArchiveWithIndicator:
"""
The idea of this class is to wrap around logic which takes care of keeping
@@ -139,19 +79,6 @@ class ArchiveWithIndicator:
if not self.ready_indicator_filepath.exists():
return False
try:
archive_state = ArchiveState.deserialize_from_file(
self.ready_indicator_filepath)
except ArchiveStateError as error:
print(f'Error deserializing archive state: {error.message}')
return False
if archive_state.has_error():
# If the error did happen during codesign procedure there will be no
# corresponding archive file.
# The caller code will deal with the error check further.
return True
# Sometimes on macOS indicator file appears prior to the actual archive
# despite the order of creation and os.sync() used in tag_ready().
# So consider archive not ready if there is an indicator without an
@@ -161,11 +88,23 @@ class ArchiveWithIndicator:
f'({self.archive_filepath}) to appear.')
return False
# Read archive size from indicator/
#
# Assume that file is either empty or is fully written. This is being checked
# by performing ValueError check since empty string will throw this exception
# when attempted to be converted to int.
expected_archive_size_str = self.ready_indicator_filepath.read_text()
try:
expected_archive_size = int(expected_archive_size_str)
except ValueError:
print(f'Invalid archive size "{expected_archive_size_str}"')
return False
# Wait for until archive is fully stored.
actual_archive_size = self.archive_filepath.stat().st_size
if actual_archive_size != archive_state.file_size:
if actual_archive_size != expected_archive_size:
print('Partial/invalid archive size (expected '
f'{archive_state.file_size} got {actual_archive_size})')
f'{expected_archive_size} got {actual_archive_size})')
return False
return True
@@ -190,7 +129,7 @@ class ArchiveWithIndicator:
print(f'Exception checking archive: {e}')
return False
def tag_ready(self, error_message='') -> None:
def tag_ready(self) -> None:
"""
Tag the archive as ready by creating the corresponding indication file.
@@ -199,34 +138,13 @@ class ArchiveWithIndicator:
If it is violated, an assert will fail.
"""
assert not self.is_ready()
# Try the best to make sure everything is synced to the file system,
# to avoid any possibility of stamp appearing on a network share prior to
# an actual file.
if util.get_current_platform() != util.Platform.WINDOWS:
os.sync()
archive_size = -1
if self.archive_filepath.exists():
archive_size = self.archive_filepath.stat().st_size
archive_info = ArchiveState(
file_size=archive_size, error_message=error_message)
self.ready_indicator_filepath.write_text(
archive_info.serialize_to_string())
def get_state(self) -> ArchiveState:
"""
Get state object for this archive
The state is read from the corresponding state file.
"""
try:
return ArchiveState.deserialize_from_file(self.ready_indicator_filepath)
except ArchiveStateError as error:
return ArchiveState(error_message=f'Error in information format: {error}')
archive_size = self.archive_filepath.stat().st_size
self.ready_indicator_filepath.write_text(str(archive_size))
def clean(self) -> None:
"""

View File

@@ -58,7 +58,6 @@ import codesign.util as util
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.archive_with_indicator import ArchiveWithIndicator
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
@@ -146,13 +145,13 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
def cleanup_environment_for_builder(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky.
# talking to the same server it becomes even mor etricky.
pass
def cleanup_environment_for_signing_server(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky.
# talking to the same server it becomes even mor etricky.
pass
def generate_request_id(self) -> str:
@@ -221,15 +220,9 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
"""
Wait until archive with signed files is available.
Will only return if the archive with signed files is available. If there
was an error during code sign procedure the SystemExit exception is
raised, with the message set to the error reported by the codesign
server.
Will only wait for the configured time. If that time exceeds and there
is still no responce from the signing server the application will exit
with a non-zero exit code.
"""
signed_archive_info = self.signed_archive_info_for_request_id(
@@ -243,17 +236,9 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
time.sleep(1)
time_slept_in_seconds = time.monotonic() - time_start
if time_slept_in_seconds > timeout_in_seconds:
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit("Signing server didn't finish signing in "
f'{timeout_in_seconds} seconds, dying :(')
archive_state = signed_archive_info.get_state()
if archive_state.has_error():
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit(
f'Error happenned during codesign procedure: {archive_state.error_message}')
f"{timeout_in_seconds} seconds, dying :(")
def copy_signed_files_to_directory(
self, signed_dir: Path, destination_dir: Path) -> None:
@@ -411,13 +396,7 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
temp_dir)
logger_server.info('Signing all requested files...')
try:
self.sign_all_files(files)
except CodeSignException as error:
signed_archive_info.tag_ready(error_message=error.message)
unsigned_archive_info.clean()
logger_server.info('Signing is complete with errors.')
return
self.sign_all_files(files)
logger_server.info('Packing signed files...')
pack_files(files=files,

View File

@@ -33,7 +33,6 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
@@ -46,10 +45,6 @@ EXTENSIONS_TO_BE_SIGNED = {'.dylib', '.so', '.dmg'}
NAME_PREFIXES_TO_BE_SIGNED = {'python'}
class NotarizationException(CodeSignException):
pass
def is_file_from_bundle(file: AbsoluteAndRelativeFileName) -> bool:
"""
Check whether file is coming from an .app bundle
@@ -191,7 +186,7 @@ class MacOSCodeSigner(BaseCodeSigner):
file.absolute_filepath]
self.run_command_or_mock(command, util.Platform.MACOS)
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> bool:
"""
Run codesign tool on all eligible files in the given list.
@@ -230,6 +225,8 @@ class MacOSCodeSigner(BaseCodeSigner):
file_index + 1, num_signed_files,
signed_file.relative_filepath)
return True
def codesign_bundles(
self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
@@ -276,6 +273,8 @@ class MacOSCodeSigner(BaseCodeSigner):
files.extend(extra_files)
return True
############################################################################
# Notarization.
@@ -335,40 +334,7 @@ class MacOSCodeSigner(BaseCodeSigner):
logger_server.error('xcrun command did not report RequestUUID')
return None
def notarize_review_status(self, xcrun_output: str) -> bool:
"""
Review status returned by xcrun's notarization info
Returns truth if the notarization process has finished.
If there are errors during notarization, a NotarizationException()
exception is thrown with status message from the notarial office.
"""
# Parse status and message
status = xcrun_field_value_from_output('Status', xcrun_output)
status_message = xcrun_field_value_from_output(
'Status Message', xcrun_output)
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
if status == 'invalid':
logger_server.error(xcrun_output)
logger_server.error(
'Package notarization has failed: %s', status_message)
raise NotarizationException(status_message)
if status == 'in progress':
return False
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
return False
def notarize_wait_result(self, request_uuid: str) -> None:
def notarize_wait_result(self, request_uuid: str) -> bool:
"""
Wait for until notarial office have a reply
"""
@@ -385,11 +351,29 @@ class MacOSCodeSigner(BaseCodeSigner):
timeout_in_seconds = self.config.MACOS_NOTARIZE_TIMEOUT_IN_SECONDS
while True:
xcrun_output = self.check_output_or_mock(
output = self.check_output_or_mock(
command, util.Platform.MACOS, allow_nonzero_exit_code=True)
# Parse status and message
status = xcrun_field_value_from_output('Status', output)
status_message = xcrun_field_value_from_output(
'Status Message', output)
if self.notarize_review_status(xcrun_output):
break
# Review status.
if status:
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
elif status == 'invalid':
logger_server.error(output)
logger_server.error(
'Package notarization has failed: %s', status_message)
return False
elif status == 'in progress':
pass
else:
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
logger_server.info('Keep waiting for notarization office.')
time.sleep(30)
@@ -410,6 +394,8 @@ class MacOSCodeSigner(BaseCodeSigner):
command = ['xcrun', 'stapler', 'staple', '-v', file.absolute_filepath]
self.check_output_or_mock(command, util.Platform.MACOS)
return True
def notarize_dmg(self, file: AbsoluteAndRelativeFileName) -> bool:
"""
Run entire pipeline to get DMG notarized.
@@ -428,7 +414,10 @@ class MacOSCodeSigner(BaseCodeSigner):
return False
# Staple.
self.notarize_staple(file)
if not self.notarize_staple(file):
return False
return True
def notarize_all_dmg(
self, files: List[AbsoluteAndRelativeFileName]) -> bool:
@@ -443,7 +432,10 @@ class MacOSCodeSigner(BaseCodeSigner):
if not self.check_file_is_to_be_signed(file):
continue
self.notarize_dmg(file)
if not self.notarize_dmg(file):
return False
return True
############################################################################
# Entry point.
@@ -451,6 +443,11 @@ class MacOSCodeSigner(BaseCodeSigner):
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# TODO(sergey): Handle errors somehow.
self.codesign_all_files(files)
self.codesign_bundles(files)
self.notarize_all_dmg(files)
if not self.codesign_all_files(files):
return
if not self.codesign_bundles(files):
return
if not self.notarize_all_dmg(files):
return

View File

@@ -29,7 +29,6 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
@@ -41,9 +40,6 @@ BLACKLIST_FILE_PREFIXES = (
'api-ms-', 'concrt', 'msvcp', 'ucrtbase', 'vcomp', 'vcruntime')
class SigntoolException(CodeSignException):
pass
class WindowsCodeSigner(BaseCodeSigner):
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
@@ -54,46 +50,12 @@ class WindowsCodeSigner(BaseCodeSigner):
return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED
def get_sign_command_prefix(self) -> List[str]:
return [
'signtool', 'sign', '/v',
'/f', self.config.WIN_CERTIFICATE_FILEPATH,
'/tr', self.config.WIN_TIMESTAMP_AUTHORITY_URL]
def run_codesign_tool(self, filepath: Path) -> None:
command = self.get_sign_command_prefix() + [filepath]
try:
codesign_output = self.check_output_or_mock(command, util.Platform.WINDOWS)
except subprocess.CalledProcessError as e:
raise SigntoolException(f'Error running signtool {e}')
logger_server.info(f'signtool output:\n{codesign_output}')
got_number_of_success = False
for line in codesign_output.split('\n'):
line_clean = line.strip()
line_clean_lower = line_clean.lower()
if line_clean_lower.startswith('number of warnings') or \
line_clean_lower.startswith('number of errors'):
number = int(line_clean_lower.split(':')[1])
if number != 0:
raise SigntoolException('Non-clean success of signtool')
if line_clean_lower.startswith('number of files successfully signed'):
got_number_of_success = True
number = int(line_clean_lower.split(':')[1])
if number != 1:
raise SigntoolException('Signtool did not consider codesign a success')
if not got_number_of_success:
raise SigntoolException('Signtool did not report number of files signed')
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# NOTE: Sign files one by one to avoid possible command line length
# overflow (which could happen if we ever decide to sign every binary
@@ -111,7 +73,12 @@ class WindowsCodeSigner(BaseCodeSigner):
file_index + 1, num_files, file.relative_filepath)
continue
command = self.get_sign_command_prefix()
command.append(file.absolute_filepath)
logger_server.info(
'Running signtool command for file [%d/%d] %s...',
file_index + 1, num_files, file.relative_filepath)
self.run_codesign_tool(file.absolute_filepath)
# TODO(sergey): Check the status somehow. With a missing certificate
# the command still exists with a zero code.
self.run_command_or_mock(command, util.Platform.WINDOWS)
# TODO(sergey): Report number of signed and ignored files.

View File

@@ -49,7 +49,7 @@ FIND_LIBRARY(PUGIXML_LIBRARY
# handle the QUIETLY and REQUIRED arguments and set PUGIXML_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(PugiXML DEFAULT_MSG
FIND_PACKAGE_HANDLE_STANDARD_ARGS(PUGIXML DEFAULT_MSG
PUGIXML_LIBRARY PUGIXML_INCLUDE_DIR)
IF(PUGIXML_FOUND)

View File

@@ -1,10 +1,7 @@
# Distributed under the OSI-approved BSD 3-Clause License,
# see accompanying file BSD-3-Clause-license.txt for details.
# Changes made to this script have been marked with "BLENDER".
# BLENDER: disable ASAN leak detection when trying to discover tests.
# Blender: disable ASAN leak detection when trying to discover tests.
set(ENV{ASAN_OPTIONS} "detect_leaks=0")
cmake_minimum_required(VERSION ${CMAKE_VERSION})

View File

@@ -13,7 +13,7 @@ Invocation:
export CLANG_BIND_DIR="/dsk/src/llvm/tools/clang/bindings/python"
export CLANG_LIB_DIR="/opt/llvm/lib"
python clang_array_check.py somefile.c -DSOME_DEFINE -I/some/include
python2 clang_array_check.py somefile.c -DSOME_DEFINE -I/some/include
... defines and includes are optional
@@ -76,32 +76,6 @@ defs_precalc = {
"glNormal3bv": {0: 3},
"glNormal3iv": {0: 3},
"glNormal3sv": {0: 3},
# GPU immediate mode.
"immVertex2iv": {1: 2},
"immVertex2fv": {1: 2},
"immVertex3fv": {1: 3},
"immAttr2fv": {1: 2},
"immAttr3fv": {1: 3},
"immAttr4fv": {1: 4},
"immAttr3ubv": {1: 3},
"immAttr4ubv": {1: 4},
"immUniform2fv": {1: 2},
"immUniform3fv": {1: 3},
"immUniform4fv": {1: 4},
"immUniformColor3fv": {0: 3},
"immUniformColor4fv": {0: 4},
"immUniformColor3ubv": {1: 3},
"immUniformColor4ubv": {1: 4},
"immUniformColor3fvAlpha": {0: 3},
"immUniformColor4fvAlpha": {0: 4},
}
# -----------------------------------------------------------------------------
@@ -126,8 +100,7 @@ else:
if CLANG_LIB_DIR is None:
print("$CLANG_LIB_DIR clang lib dir not set")
if CLANG_BIND_DIR:
sys.path.append(CLANG_BIND_DIR)
sys.path.append(CLANG_BIND_DIR)
import clang
import clang.cindex
@@ -135,8 +108,7 @@ from clang.cindex import (CursorKind,
TypeKind,
TokenKind)
if CLANG_LIB_DIR:
clang.cindex.Config.set_library_path(CLANG_LIB_DIR)
clang.cindex.Config.set_library_path(CLANG_LIB_DIR)
index = clang.cindex.Index.create()

View File

@@ -32,7 +32,7 @@ CHECKER_IGNORE_PREFIX = [
"intern/moto",
]
CHECKER_BIN = "python3"
CHECKER_BIN = "python2"
CHECKER_ARGS = [
os.path.join(os.path.dirname(__file__), "clang_array_check.py"),

View File

@@ -44,7 +44,6 @@ set(WITH_OPENMP ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)

View File

@@ -51,7 +51,6 @@ set(WITH_OPENIMAGEIO OFF CACHE BOOL "" FORCE)
set(WITH_OPENMP OFF CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW OFF CACHE BOOL "" FORCE)
set(WITH_SDL OFF CACHE BOOL "" FORCE)
set(WITH_TBB OFF CACHE BOOL "" FORCE)

View File

@@ -45,7 +45,6 @@ set(WITH_OPENMP ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)

View File

@@ -28,7 +28,6 @@ set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
# Depends on Python install, do this to quiet warning.

View File

@@ -1139,7 +1139,6 @@ endfunction()
function(find_python_package
package
relative_include_dir
)
string(TOUPPER ${package} _upper_package)
@@ -1171,10 +1170,7 @@ function(find_python_package
dist-packages
vendor-packages
NO_DEFAULT_PATH
DOC
"Path to python site-packages or dist-packages containing '${package}' module"
)
mark_as_advanced(PYTHON_${_upper_package}_PATH)
if(NOT EXISTS "${PYTHON_${_upper_package}_PATH}")
message(WARNING
@@ -1192,50 +1188,6 @@ function(find_python_package
set(WITH_PYTHON_INSTALL_${_upper_package} OFF PARENT_SCOPE)
else()
message(STATUS "${package} found at '${PYTHON_${_upper_package}_PATH}'")
if(NOT "${relative_include_dir}" STREQUAL "")
set(_relative_include_dir "${package}/${relative_include_dir}")
unset(PYTHON_${_upper_package}_INCLUDE_DIRS CACHE)
find_path(PYTHON_${_upper_package}_INCLUDE_DIRS
NAMES
"${_relative_include_dir}"
HINTS
"${PYTHON_LIBPATH}/"
"${PYTHON_LIBPATH}/python${PYTHON_VERSION}/"
"${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/"
PATH_SUFFIXES
"site-packages/"
"dist-packages/"
"vendor-packages/"
NO_DEFAULT_PATH
DOC
"Path to python site-packages or dist-packages containing '${package}' module header files"
)
mark_as_advanced(PYTHON_${_upper_package}_INCLUDE_DIRS)
if(NOT EXISTS "${PYTHON_${_upper_package}_INCLUDE_DIRS}")
message(WARNING
"Python package '${package}' include dir path could not be found in:\n"
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/site-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/site-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/dist-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/dist-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/vendor-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/vendor-packages/${_relative_include_dir}', "
"\n"
"The 'WITH_PYTHON_${_upper_package}' option will be disabled.\n"
"The build will be usable, only add-ons that depend on this package won't be functional."
)
set(WITH_PYTHON_${_upper_package} OFF PARENT_SCOPE)
else()
set(_temp "${PYTHON_${_upper_package}_INCLUDE_DIRS}/${package}/${relative_include_dir}")
unset(PYTHON_${_upper_package}_INCLUDE_DIRS CACHE)
set(PYTHON_${_upper_package}_INCLUDE_DIRS "${_temp}"
CACHE PATH "Path to the include directory of the ${package} module")
message(STATUS "${package} include files found at '${PYTHON_${_upper_package}_INCLUDE_DIRS}'")
endif()
endif()
endif()
endif()
endfunction()

View File

@@ -73,6 +73,9 @@ endif()
if(NOT DEFINED LIBDIR)
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin)
# Prefer lib directory paths
file(GLOB LIB_SUBDIRS ${LIBDIR}/*)
set(CMAKE_PREFIX_PATH ${LIB_SUBDIRS})
else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif()
@@ -80,10 +83,6 @@ if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "Mac OSX requires pre-compiled libs at: '${LIBDIR}'")
endif()
# Prefer lib directory paths
file(GLOB LIB_SUBDIRS ${LIBDIR}/*)
set(CMAKE_PREFIX_PATH ${LIB_SUBDIRS})
# -------------------------------------------------------------------------
# Find precompiled libraries, and avoid system or user-installed ones.
@@ -270,14 +269,6 @@ if(WITH_INTERNATIONAL OR WITH_CODEC_FFMPEG)
string(APPEND PLATFORM_LINKFLAGS " -liconv") # boost_locale and ffmpeg needs it !
endif()
if(WITH_PUGIXML)
find_package(PugiXML)
if(NOT PUGIXML_FOUND)
message(WARNING "PugiXML not found, disabling WITH_PUGIXML")
set(WITH_PUGIXML OFF)
endif()
endif()
if(WITH_OPENIMAGEIO)
find_package(OpenImageIO)
list(APPEND OPENIMAGEIO_LIBRARIES
@@ -346,7 +337,7 @@ if(WITH_CYCLES_EMBREE)
find_package(Embree 3.8.0 REQUIRED)
# Increase stack size for Embree, only works for executables.
if(NOT WITH_PYTHON_MODULE)
string(APPEND PLATFORM_LINKFLAGS " -Wl,-stack_size,0x100000")
string(APPEND PLATFORM_LINKFLAGS " -Xlinker -stack_size -Xlinker 0x100000")
endif()
# Embree static library linking can mix up SSE and AVX symbols, causing
@@ -458,8 +449,8 @@ endif()
# Avoid conflicts with Luxrender, and other plug-ins that may use the same
# libraries as Blender with a different version or build options.
string(APPEND PLATFORM_LINKFLAGS
" -Wl,-unexported_symbols_list,'${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map'"
set(PLATFORM_LINKFLAGS
"${PLATFORM_LINKFLAGS} -Xlinker -unexported_symbols_list -Xlinker '${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map'"
)
string(APPEND CMAKE_CXX_FLAGS " -stdlib=libc++")
@@ -470,17 +461,3 @@ set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_C_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")
set(CMAKE_CXX_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")
if(WITH_COMPILER_CCACHE)
if(NOT CMAKE_GENERATOR STREQUAL "Xcode")
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
# Makefiles and ninja
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif()
endif()

View File

@@ -154,32 +154,3 @@ if(NOT ${CMAKE_GENERATOR} MATCHES "Xcode")
string(APPEND CMAKE_CXX_FLAGS " -mmacosx-version-min=${CMAKE_OSX_DEPLOYMENT_TARGET}")
add_definitions("-DMACOSX_DEPLOYMENT_TARGET=${CMAKE_OSX_DEPLOYMENT_TARGET}")
endif()
if(WITH_COMPILER_CCACHE)
if(CMAKE_GENERATOR STREQUAL "Xcode")
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
get_filename_component(ccompiler "${CMAKE_C_COMPILER}" NAME)
get_filename_component(cxxcompiler "${CMAKE_CXX_COMPILER}" NAME)
# Ccache can figure out which compiler to use if it's invoked from
# a symlink with the name of the compiler.
# https://ccache.dev/manual/4.1.html#_run_modes
set(_fake_compiler_dir "${CMAKE_BINARY_DIR}/ccache")
execute_process(COMMAND ${CMAKE_COMMAND} -E make_directory ${_fake_compiler_dir})
set(_fake_C_COMPILER "${_fake_compiler_dir}/${ccompiler}")
set(_fake_CXX_COMPILER "${_fake_compiler_dir}/${cxxcompiler}")
execute_process(COMMAND ${CMAKE_COMMAND} -E create_symlink "${CCACHE_PROGRAM}" ${_fake_C_COMPILER})
execute_process(COMMAND ${CMAKE_COMMAND} -E create_symlink "${CCACHE_PROGRAM}" ${_fake_CXX_COMPILER})
set(CMAKE_XCODE_ATTRIBUTE_CC ${_fake_C_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_CXX ${_fake_CXX_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_LD ${_fake_C_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_LDPLUSPLUS ${_fake_CXX_COMPILER} CACHE STRING "" FORCE)
unset(_fake_compiler_dir)
unset(_fake_C_COMPILER)
unset(_fake_CXX_COMPILER)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif()
endif()

View File

@@ -350,12 +350,15 @@ if(WITH_BOOST)
endif()
endif()
if(WITH_PUGIXML)
find_package_wrapper(PugiXML)
endif()
if(WITH_OPENIMAGEIO)
find_package_wrapper(OpenImageIO)
if(NOT OPENIMAGEIO_PUGIXML_FOUND AND WITH_CYCLES_STANDALONE)
find_package_wrapper(PugiXML)
else()
set(PUGIXML_INCLUDE_DIR "${OPENIMAGEIO_INCLUDE_DIR/OpenImageIO}")
set(PUGIXML_LIBRARIES "")
endif()
set(OPENIMAGEIO_LIBRARIES
${OPENIMAGEIO_LIBRARIES}
${PNG_LIBRARIES}
@@ -684,15 +687,3 @@ set(PLATFORM_LINKFLAGS
if(WITH_INSTALL_PORTABLE)
string(APPEND CMAKE_EXE_LINKER_FLAGS " -no-pie")
endif()
if(WITH_COMPILER_CCACHE)
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
# Makefiles and ninja
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif()

View File

@@ -239,24 +239,9 @@ if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "\n\nWindows requires pre-compiled libs at: '${LIBDIR}'. Please run `make update` in the blender source folder to obtain them.")
endif()
if(CMAKE_GENERATOR MATCHES "^Visual Studio.+" AND # Only supported in the VS IDE
MSVC_VERSION GREATER_EQUAL 1924 AND # Supported for 16.4+
WITH_CLANG_TIDY # And Clang Tidy needs to be on
)
set(CMAKE_VS_GLOBALS
"RunCodeAnalysis=false"
"EnableMicrosoftCodeAnalysis=false"
"EnableClangTidyCodeAnalysis=true"
)
set(VS_CLANG_TIDY On)
endif()
# Mark libdir as system headers with a lower warn level, to resolve some warnings
# that we have very little control over
if(MSVC_VERSION GREATER_EQUAL 1914 AND # Available with 15.7+
NOT MSVC_CLANG AND # But not for clang
NOT WITH_WINDOWS_SCCACHE AND # And not when sccache is enabled
NOT VS_CLANG_TIDY) # Clang-tidy does not like these options
if(MSVC_VERSION GREATER_EQUAL 1914 AND NOT MSVC_CLANG AND NOT WITH_WINDOWS_SCCACHE)
add_compile_options(/experimental:external /external:templates- /external:I "${LIBDIR}" /external:W0)
endif()
@@ -268,11 +253,6 @@ foreach(child ${children})
endif()
endforeach()
if(WITH_PUGIXML)
set(PUGIXML_LIBRARIES optimized ${LIBDIR}/pugixml/lib/pugixml.lib debug ${LIBDIR}/pugixml/lib/pugixml_d.lib)
set(PUGIXML_INCLUDE_DIR ${LIBDIR}/pugixml/include)
endif()
set(ZLIB_INCLUDE_DIRS ${LIBDIR}/zlib/include)
set(ZLIB_LIBRARIES ${LIBDIR}/zlib/lib/libz_st.lib)
set(ZLIB_INCLUDE_DIR ${LIBDIR}/zlib/include)
@@ -671,10 +651,11 @@ if(WITH_CYCLES_OSL)
optimized ${OSL_LIB_COMP}
optimized ${OSL_LIB_EXEC}
optimized ${OSL_LIB_QUERY}
optimized ${CYCLES_OSL}/lib/pugixml.lib
debug ${OSL_LIB_EXEC_DEBUG}
debug ${OSL_LIB_COMP_DEBUG}
debug ${OSL_LIB_QUERY_DEBUG}
${PUGIXML_LIBRARIES}
debug ${CYCLES_OSL}/lib/pugixml_d.lib
)
find_path(OSL_INCLUDE_DIR OSL/oslclosure.h PATHS ${CYCLES_OSL}/include)
find_program(OSL_COMPILER NAMES oslc PATHS ${CYCLES_OSL}/bin)
@@ -739,7 +720,7 @@ if(WINDOWS_PYTHON_DEBUG)
string(REPLACE "/" "\\" _group_path "${_source_path}")
source_group("${_group_path}" FILES "${_source}")
endforeach()
# If the user scripts env var is set, include scripts from there otherwise
# include user scripts in the profile folder.
if(DEFINED ENV{BLENDER_USER_SCRIPTS})
@@ -750,7 +731,7 @@ if(WINDOWS_PYTHON_DEBUG)
# Include the user scripts from the profile folder in the blender_python_user_scripts project.
set(USER_SCRIPTS_ROOT "$ENV{appdata}/blender foundation/blender/${BLENDER_VERSION}/scripts")
endif()
file(TO_CMAKE_PATH ${USER_SCRIPTS_ROOT} USER_SCRIPTS_ROOT)
FILE(GLOB_RECURSE inFiles "${USER_SCRIPTS_ROOT}/*.*" )
ADD_CUSTOM_TARGET(blender_python_user_scripts SOURCES ${inFiles})

View File

@@ -92,5 +92,5 @@ echo if "%%VSCMD_VER%%" == "" ^( >> %BUILD_DIR%\rebuild.cmd
echo call "%VCVARS%" %BUILD_ARCH% >> %BUILD_DIR%\rebuild.cmd
echo ^) >> %BUILD_DIR%\rebuild.cmd
echo echo %%TIME%% ^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd
echo ninja install %%* >> %BUILD_DIR%\rebuild.cmd
echo ninja install >> %BUILD_DIR%\rebuild.cmd
echo echo %%TIME%% ^>^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd

View File

@@ -453,7 +453,7 @@ TYPEDEF_HIDES_STRUCT = NO
# the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0.
LOOKUP_CACHE_SIZE = 3
LOOKUP_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
@@ -1321,7 +1321,7 @@ DOCSET_PUBLISHER_NAME = Publisher
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_HTMLHELP = NO
GENERATE_HTMLHELP = YES
# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be

View File

@@ -121,10 +121,6 @@
* \ingroup editors
*/
/** \defgroup edasset asset
* \ingroup editors
*/
/** \defgroup edcurve curve
* \ingroup editors
*/

View File

@@ -1,30 +0,0 @@
"""
.. note::
Properties defined at run-time store the values of the properties as custom-properties.
This method checks if the underlying data exists, causing the property to be considered *set*.
A common pattern for operators is to calculate a value for the properties
that have not had their values explicitly set by the caller
(where the caller could be a key-binding, menu-items or Python script for example).
In the case of executing operators multiple times, values are re-used from the previous execution.
For example: subdividing a mesh with a smooth value of 1.0 will keep using
that value on subsequent calls to subdivision, unless the operator is called with
that property set to a different value.
This behavior can be disabled using the ``SKIP_SAVE`` option when the property is declared (see: :mod:`bpy.props`).
The ``ghost`` argument allows detecting how a value from a previous execution is handled.
- When true: The property is considered unset even if the value from a previous call is used.
- When false: The existence of any values causes ``is_property_set`` to return true.
While this argument should typically be omitted, there are times when
it's important to know if a value is anything besides the default.
For example, the previous value may have been scaled by the scene's unit scale.
In this case scaling the value multiple times would cause problems, so the ``ghost`` argument should be false.
"""


@@ -237,7 +237,7 @@ Examples:
{'FINISHED'}
>>> bpy.ops.mesh.hide(unselected=False)
{'FINISHED'}
>>> bpy.ops.object.transform_apply()
>>> bpy.ops.object.scale_apply()
{'FINISHED'}
.. tip::


@@ -551,7 +551,7 @@ def example_extract_docstring(filepath):
file.close()
return "", 0
for line in file:
for line in file.readlines():
line_no += 1
if line.startswith('"""'):
break
@@ -559,13 +559,6 @@ def example_extract_docstring(filepath):
text.append(line.rstrip())
line_no += 1
# Skip over blank lines so the Python code doesn't have blank lines at the top.
for line in file:
if line.strip():
break
line_no += 1
file.close()
return "\n".join(text), line_no
@@ -2195,20 +2188,9 @@ def setup_blender():
# Remove handlers since the functions get included
# in the doc-string and don't have meaningful names.
lists_to_restore = []
for var in bpy.app.handlers:
if isinstance(var, list):
lists_to_restore.append((var[:], var))
var.clear()
return {
"lists_to_restore": lists_to_restore,
}
def teardown_blender(setup_data):
for var_src, var_dst in setup_data["lists_to_restore"]:
var_dst[:] = var_src
for ls in bpy.app.handlers:
if isinstance(ls, list):
ls.clear()
def main():
@@ -2217,7 +2199,7 @@ def main():
setup_monkey_patch()
# Perform changes to Blender it's self.
setup_data = setup_blender()
setup_blender()
# eventually, create the dirs
for dir_path in [ARGS.output_dir, SPHINX_IN]:
@@ -2323,8 +2305,6 @@ def main():
shutil.copy(os.path.join(SPHINX_OUT_PDF, "contents.pdf"),
os.path.join(REFERENCE_PATH, BLENDER_PDF_FILENAME))
teardown_blender(setup_data)
sys.exit()


@@ -1,5 +1,7 @@
/* T76453: Prevent Long enum lists */
.field-list li {
/* Prevent Long enum lists */
.field-body {
display: block;
width: 100%;
max-height: 245px;
overflow-y: auto !important;
}

extern/README vendored

@@ -1,4 +0,0 @@
When updating a library remember to:
* Update the README.blender with the corresponding version.
* Update the THIRD-PARTY-LICENSE.txt document


@@ -1,5 +0,0 @@
Project: Audaspace
URL: https://audaspace.github.io/
License: Apache 2.0
Upstream version: 1.3 (Last Release)
Local modifications: None


@@ -24,6 +24,6 @@ set(JACK_FOUND ${WITH_JACK})
set(LIBSNDFILE_FOUND ${WITH_CODEC_SNDFILE})
set(OPENAL_FOUND ${WITH_OPENAL})
set(PYTHONLIBS_FOUND TRUE)
set(NUMPY_FOUND ${WITH_PYTHON_NUMPY})
set(NUMPY_FOUND TRUE)
set(NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_INCLUDE_DIRS})
set(SDL_FOUND ${WITH_SDL})


@@ -72,9 +72,6 @@ protected:
/// The channel mapper reader in between.
std::shared_ptr<ChannelMapperReader> m_mapper;
/// Whether the source is being read for the first time.
bool m_first_reading;
/// Whether to keep the source if end of it is reached.
bool m_keep;


@@ -78,7 +78,7 @@ bool SoftwareDevice::SoftwareHandle::pause(bool keep)
}
SoftwareDevice::SoftwareHandle::SoftwareHandle(SoftwareDevice* device, std::shared_ptr<IReader> reader, std::shared_ptr<PitchReader> pitch, std::shared_ptr<ResampleReader> resampler, std::shared_ptr<ChannelMapperReader> mapper, bool keep) :
m_reader(reader), m_pitch(pitch), m_resampler(resampler), m_mapper(mapper), m_first_reading(true), m_keep(keep), m_user_pitch(1.0f), m_user_volume(1.0f), m_user_pan(0.0f), m_volume(0.0f), m_old_volume(0.0f), m_loopcount(0),
m_reader(reader), m_pitch(pitch), m_resampler(resampler), m_mapper(mapper), m_keep(keep), m_user_pitch(1.0f), m_user_volume(1.0f), m_user_pan(0.0f), m_volume(0.0f), m_old_volume(0.0f), m_loopcount(0),
m_relative(true), m_volume_max(1.0f), m_volume_min(0), m_distance_max(std::numeric_limits<float>::max()),
m_distance_reference(1.0f), m_attenuation(1.0f), m_cone_angle_outer(M_PI), m_cone_angle_inner(M_PI), m_cone_volume_outer(0),
m_flags(RENDER_CONE), m_stop(nullptr), m_stop_data(nullptr), m_status(STATUS_PLAYING), m_device(device)
@@ -106,14 +106,6 @@ void SoftwareDevice::SoftwareHandle::update()
if(m_pitch->getSpecs().channels != CHANNELS_MONO)
{
m_volume = m_user_volume;
// we don't know a previous volume if this source has never been read before
if(m_first_reading)
{
m_old_volume = m_volume;
m_first_reading = false;
}
m_pitch->setPitch(m_user_pitch);
return;
}
@@ -222,13 +214,6 @@ void SoftwareDevice::SoftwareHandle::update()
m_volume *= m_user_volume;
}
// we don't know a previous volume if this source has never been read before
if(m_first_reading)
{
m_old_volume = m_volume;
m_first_reading = false;
}
// 3D Cue
Quaternion orientation;
@@ -769,8 +754,6 @@ void SoftwareDevice::mix(data_t* buffer, int length)
{
m_mixer->mix(buf, pos, len, sound->m_volume, sound->m_old_volume);
sound->m_old_volume = sound->m_volume;
pos += len;
if(sound->m_loopcount > 0)
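
As a standalone sketch of the m_first_reading bookkeeping touched in the hunks above (the struct and names here are placeholders, not the Audaspace types): on the very first read there is no previous volume to interpolate from, so the old volume is seeded with the current one instead of mixing up from a stale or zero value.

struct HandleVolumeSketch {
  bool first_reading = true;
  float volume = 0.0f;
  float old_volume = 0.0f;

  void Update(float user_volume) {
    volume = user_volume;
    if (first_reading) {
      old_volume = volume;  // no previous read, so don't fade in from a stale value
      first_reading = false;
    }
  }
};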


@@ -22,7 +22,6 @@
#include <mutex>
#define KEEP_TIME 10
#define POSITION_EPSILON (1.0 / static_cast<double>(RATE_48000))
AUD_NAMESPACE_BEGIN
@@ -65,7 +64,7 @@ bool SequenceHandle::updatePosition(double position)
if(m_handle.get())
{
// we currently have a handle, let's check where we are
if(position - POSITION_EPSILON >= m_entry->m_end)
if(position >= m_entry->m_end)
{
if(position >= m_entry->m_end + KEEP_TIME)
// far end, stopping
@@ -77,7 +76,7 @@ bool SequenceHandle::updatePosition(double position)
return true;
}
}
else if(position + POSITION_EPSILON >= m_entry->m_begin)
else if(position >= m_entry->m_begin)
{
// inside, resuming
m_handle->resume();
@@ -99,7 +98,7 @@ bool SequenceHandle::updatePosition(double position)
else
{
// we don't have a handle, let's start if we should be playing
if(position + POSITION_EPSILON >= m_entry->m_begin && position - POSITION_EPSILON <= m_entry->m_end)
if(position >= m_entry->m_begin && position <= m_entry->m_end)
{
start();
return m_valid;
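
A standalone sketch of what the epsilon in the comparisons above changes (the numbers are arbitrary): with POSITION_EPSILON, a position within one sample period of an entry boundary is still treated as inside the entry, while the strict comparison already counts it as past the end.

#include <cstdio>

int main() {
  const double kEpsilon = 1.0 / 48000.0;         // mirrors POSITION_EPSILON
  const double end = 2.0;
  const double position = end + 0.5 * kEpsilon;  // half a sample past the end

  const bool past_end_strict = (position >= end);           // true
  const bool past_end_eps = (position - kEpsilon >= end);   // false
  std::printf("strict: %d  with epsilon: %d\n", past_end_strict, past_end_eps);
  return 0;
}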


@@ -1,5 +0,0 @@
Project: Bullet Continuous Collision Detection and Physics Library
URL: http://bulletphysics.org
License: zlib
Upstream version: 3.07
Local modifications: Fixed inertia

extern/ceres/ChangeLog vendored

File diff suppressed because it is too large


@@ -1,5 +1,4 @@
Project: Ceres Solver
URL: http://ceres-solver.org/
License: BSD 3-Clause
Upstream version 2.0.0
Upstream version 1.11 (aef9c9563b08d5f39eee1576af133a84749d1b48)
Local modifications: None


@@ -8,8 +8,8 @@ else
fi
repo="https://ceres-solver.googlesource.com/ceres-solver"
#branch="master"
tag="2.0.0"
branch="master"
tag=""
tmp=`mktemp -d`
checkout="$tmp/ceres"


@@ -153,44 +153,28 @@ template <typename CostFunctor,
int... Ns> // Number of parameters in each parameter block.
class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
public:
// Takes ownership of functor by default. Uses the template-provided
// value for the number of residuals ("kNumResiduals").
explicit AutoDiffCostFunction(CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
// Takes ownership of functor. Uses the template-provided value for the
// number of residuals ("kNumResiduals").
explicit AutoDiffCostFunction(CostFunctor* functor) : functor_(functor) {
static_assert(kNumResiduals != DYNAMIC,
"Can't run the fixed-size constructor if the number of "
"residuals is set to ceres::DYNAMIC.");
}
// Takes ownership of functor by default. Ignores the template-provided
// Takes ownership of functor. Ignores the template-provided
// kNumResiduals in favor of the "num_residuals" argument provided.
//
// This allows for having autodiff cost functions which return varying
// numbers of residuals at runtime.
AutoDiffCostFunction(CostFunctor* functor,
int num_residuals,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
AutoDiffCostFunction(CostFunctor* functor, int num_residuals)
: functor_(functor) {
static_assert(kNumResiduals == DYNAMIC,
"Can't run the dynamic-size constructor if the number of "
"residuals is not ceres::DYNAMIC.");
SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
}
explicit AutoDiffCostFunction(AutoDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~AutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership rather than
// deleting only if ownership is taken.
// This is to stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
virtual ~AutoDiffCostFunction() {}
// Implementation details follow; clients of the autodiff cost function should
// not have to examine below here.
@@ -217,7 +201,6 @@ class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
private:
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
} // namespace ceres
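
For context on the constructor and ownership changes above, a minimal sketch of how an AutoDiffCostFunction is typically wired into a problem; the ExampleResidual functor and block sizes are hypothetical and not part of this diff.

#include "ceres/autodiff_cost_function.h"
#include "ceres/problem.h"

// Hypothetical functor: residual = 10 - x.
struct ExampleResidual {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

void AddExampleBlock(ceres::Problem* problem, double* x) {
  // 1 residual, one parameter block of size 1; the cost function takes
  // ownership of the functor by default on either side of this diff.
  ceres::CostFunction* cost =
      new ceres::AutoDiffCostFunction<ExampleResidual, 1, 1>(new ExampleResidual);
  problem->AddResidualBlock(cost, nullptr, x);
}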


@@ -38,10 +38,8 @@
#ifndef CERES_PUBLIC_C_API_H_
#define CERES_PUBLIC_C_API_H_
// clang-format off
#include "ceres/internal/port.h"
#include "ceres/internal/disable_warnings.h"
// clang-format on
#ifdef __cplusplus
extern "C" {


@@ -144,7 +144,8 @@ class CostFunctionToFunctor {
// Extract parameter block pointers from params.
using Indices =
std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>;
std::make_integer_sequence<int,
ParameterDims::kNumParameterBlocks>;
std::array<const T*, ParameterDims::kNumParameterBlocks> parameter_blocks =
GetParameterPointers<T>(params, Indices());


@@ -51,7 +51,7 @@ class CovarianceImpl;
// =======
// It is very easy to use this class incorrectly without understanding
// the underlying mathematics. Please read and understand the
// documentation completely before attempting to use it.
// documentation completely before attempting to use this class.
//
//
// This class allows the user to evaluate the covariance for a
@@ -73,7 +73,7 @@ class CovarianceImpl;
// the maximum likelihood estimate of x given observations y is the
// solution to the non-linear least squares problem:
//
// x* = arg min_x |f(x) - y|^2
// x* = arg min_x |f(x)|^2
//
// And the covariance of x* is given by
//
@@ -220,11 +220,11 @@ class CERES_EXPORT Covariance {
// 1. DENSE_SVD uses Eigen's JacobiSVD to perform the
// computations. It computes the singular value decomposition
//
// U * D * V' = J
// U * S * V' = J
//
// and then uses it to compute the pseudo inverse of J'J as
//
// pseudoinverse[J'J] = V * pseudoinverse[D^2] * V'
// pseudoinverse[J'J]^ = V * pseudoinverse[S] * V'
//
// It is an accurate but slow method and should only be used
// for small to moderate sized problems. It can handle
@@ -235,7 +235,7 @@ class CERES_EXPORT Covariance {
//
// Q * R = J
//
// [J'J]^-1 = [R'*R]^-1
// [J'J]^-1 = [R*R']^-1
//
// SPARSE_QR is not capable of computing the covariance if the
// Jacobian is rank deficient. Depending on the value of
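
As a hedged usage sketch of the Covariance API documented in this header (variable names and block sizes are placeholders; the exact option defaults are not shown in the hunks above):

#include <utility>
#include <vector>
#include "ceres/covariance.h"
#include "ceres/problem.h"

// Assumes x and y are parameter blocks of size 1 already added to `problem`.
bool ComputeCovarianceSketch(ceres::Problem* problem, double* x, double* y) {
  ceres::Covariance::Options options;
  ceres::Covariance covariance(options);

  std::vector<std::pair<const double*, const double*>> blocks;
  blocks.emplace_back(x, x);
  blocks.emplace_back(x, y);

  if (!covariance.Compute(blocks, problem)) {
    return false;  // e.g. a rank-deficient Jacobian with a sparse algorithm
  }
  double cov_xx[1];  // 1x1 because the block sizes are assumed to be 1
  return covariance.GetCovarianceBlock(x, x, cov_xx);
}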


@@ -40,7 +40,6 @@
#include "ceres/dynamic_cost_function.h"
#include "ceres/internal/fixed_array.h"
#include "ceres/jet.h"
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
@@ -79,24 +78,10 @@ namespace ceres {
template <typename CostFunctor, int Stride = 4>
class DynamicAutoDiffCostFunction : public DynamicCostFunction {
public:
// Takes ownership by default.
DynamicAutoDiffCostFunction(CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {}
explicit DynamicAutoDiffCostFunction(CostFunctor* functor)
: functor_(functor) {}
explicit DynamicAutoDiffCostFunction(DynamicAutoDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicAutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership
// rather than deleting only if ownership is taken. This is to
// stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
virtual ~DynamicAutoDiffCostFunction() {}
bool Evaluate(double const* const* parameters,
double* residuals,
@@ -166,9 +151,6 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
}
}
if (num_active_parameters == 0) {
return (*functor_)(parameters, residuals);
}
// When `num_active_parameters % Stride != 0` then it can be the case
// that `active_parameter_count < Stride` while parameter_cursor is less
// than the total number of parameters and with no remaining non-constant
@@ -266,7 +248,6 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
private:
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
} // namespace ceres
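
For context on the ownership-related changes above, a minimal sketch of how a DynamicAutoDiffCostFunction is typically configured; the functor and sizes are hypothetical, and AddParameterBlock/SetNumResiduals come from the DynamicCostFunction base class rather than from this hunk.

#include "ceres/dynamic_autodiff_cost_function.h"

// Hypothetical functor over two runtime-sized parameter blocks.
struct ExampleDynamicResidual {
  template <typename T>
  bool operator()(T const* const* parameters, T* residuals) const {
    residuals[0] = parameters[0][0] + parameters[1][0];
    return true;
  }
};

ceres::DynamicAutoDiffCostFunction<ExampleDynamicResidual, 4>* MakeDynamicCost() {
  auto* cost = new ceres::DynamicAutoDiffCostFunction<ExampleDynamicResidual, 4>(
      new ExampleDynamicResidual);
  cost->AddParameterBlock(5);   // block sizes are chosen at runtime
  cost->AddParameterBlock(10);
  cost->SetNumResiduals(1);
  return cost;
}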


@@ -44,7 +44,6 @@
#include "ceres/internal/numeric_diff.h"
#include "ceres/internal/parameter_dims.h"
#include "ceres/numeric_diff_options.h"
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
@@ -85,10 +84,6 @@ class DynamicNumericDiffCostFunction : public DynamicCostFunction {
const NumericDiffOptions& options = NumericDiffOptions())
: functor_(functor), ownership_(ownership), options_(options) {}
explicit DynamicNumericDiffCostFunction(
DynamicNumericDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicNumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();


@@ -62,8 +62,7 @@ class CERES_EXPORT GradientProblemSolver {
// Minimizer options ----------------------------------------
LineSearchDirectionType line_search_direction_type = LBFGS;
LineSearchType line_search_type = WOLFE;
NonlinearConjugateGradientType nonlinear_conjugate_gradient_type =
FLETCHER_REEVES;
NonlinearConjugateGradientType nonlinear_conjugate_gradient_type = FLETCHER_REEVES;
// The LBFGS hessian approximation is a low rank approximation to
// the inverse of the Hessian matrix. The rank of the


@@ -198,7 +198,7 @@ struct Make1stOrderPerturbation {
template <int N, int Offset, typename T, typename JetT>
struct Make1stOrderPerturbation<N, N, Offset, T, JetT> {
public:
static void Apply(const T* src, JetT* dst) {}
static void Apply(const T* /*src*/, JetT* /*dst*/) {}
};
// Calls Make1stOrderPerturbation for every parameter block.
@@ -229,9 +229,7 @@ struct Make1stOrderPerturbations<std::integer_sequence<int, N, Ns...>,
// End of 'recursion'. Nothing more to do.
template <int ParameterIdx, int Total>
struct Make1stOrderPerturbations<std::integer_sequence<int>,
ParameterIdx,
Total> {
struct Make1stOrderPerturbations<std::integer_sequence<int>, ParameterIdx, Total> {
template <typename T, typename JetT>
static void Apply(T const* const* /* NOT USED */, JetT* /* NOT USED */) {}
};


@@ -34,11 +34,11 @@
#define CERES_WARNINGS_DISABLED
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning( push )
// Disable the warning C4251 which is triggered by stl classes in
// Ceres' public interface. To quote MSDN: "C4251 can be ignored "
// "if you are deriving from a type in the Standard C++ Library"
#pragma warning(disable : 4251)
#pragma warning( disable : 4251 )
#endif
#endif // CERES_WARNINGS_DISABLED


@@ -36,26 +36,31 @@
namespace ceres {
typedef Eigen::Matrix<double, Eigen::Dynamic, 1> Vector;
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
Matrix;
typedef Eigen::Matrix<double,
Eigen::Dynamic,
Eigen::Dynamic,
Eigen::RowMajor> Matrix;
typedef Eigen::Map<Vector> VectorRef;
typedef Eigen::Map<Matrix> MatrixRef;
typedef Eigen::Map<const Vector> ConstVectorRef;
typedef Eigen::Map<const Matrix> ConstMatrixRef;
// Column major matrices for DenseSparseMatrix/DenseQRSolver
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor>
ColMajorMatrix;
typedef Eigen::Matrix<double,
Eigen::Dynamic,
Eigen::Dynamic,
Eigen::ColMajor> ColMajorMatrix;
typedef Eigen::Map<ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
ColMajorMatrixRef;
typedef Eigen::Map<ColMajorMatrix, 0,
Eigen::Stride<Eigen::Dynamic, 1>> ColMajorMatrixRef;
typedef Eigen::Map<const ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
ConstColMajorMatrixRef;
typedef Eigen::Map<const ColMajorMatrix,
0,
Eigen::Stride<Eigen::Dynamic, 1>> ConstColMajorMatrixRef;
// C++ does not support templated typdefs, thus the need for this
// struct so that we can support statically sized Matrix and Maps.
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
struct EigenTypes {
typedef Eigen::Matrix<double,
num_rows,


@@ -30,7 +30,6 @@
#ifndef CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#define CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#include <Eigen/Core> // For Eigen::aligned_allocator
#include <algorithm>
#include <array>
#include <cstddef>
@@ -38,6 +37,8 @@
#include <tuple>
#include <type_traits>
#include <Eigen/Core> // For Eigen::aligned_allocator
#include "ceres/internal/memory.h"
#include "glog/logging.h"


@@ -62,8 +62,7 @@ struct SumImpl;
// Strip of and sum the first number.
template <typename T, T N, T... Ns>
struct SumImpl<std::integer_sequence<T, N, Ns...>> {
static constexpr T Value =
N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
static constexpr T Value = N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
};
// Strip of and sum the first two numbers.
@@ -130,14 +129,10 @@ template <typename T, T Sum, typename SeqIn, typename SeqOut>
struct ExclusiveScanImpl;
template <typename T, T Sum, T N, T... Ns, T... Rs>
struct ExclusiveScanImpl<T,
Sum,
std::integer_sequence<T, N, Ns...>,
struct ExclusiveScanImpl<T, Sum, std::integer_sequence<T, N, Ns...>,
std::integer_sequence<T, Rs...>> {
using Type =
typename ExclusiveScanImpl<T,
Sum + N,
std::integer_sequence<T, Ns...>,
typename ExclusiveScanImpl<T, Sum + N, std::integer_sequence<T, Ns...>,
std::integer_sequence<T, Rs..., Sum>>::Type;
};
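
A standalone, compile-time illustration of the recursion pattern in SumImpl above; this re-implements the idea for clarity and is not the Ceres API itself.

#include <utility>

template <typename Seq>
struct SumSketch;

template <typename T, T N, T... Ns>
struct SumSketch<std::integer_sequence<T, N, Ns...>> {
  // Peel off the first value and recurse over the rest of the sequence.
  static constexpr T Value =
      N + SumSketch<std::integer_sequence<T, Ns...>>::Value;
};

template <typename T>
struct SumSketch<std::integer_sequence<T>> {
  static constexpr T Value = T(0);
};

static_assert(SumSketch<std::integer_sequence<int, 2, 3, 4>>::Value == 9,
              "2 + 3 + 4 == 9");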


@@ -47,17 +47,15 @@
#include "ceres/types.h"
#include "glog/logging.h"
namespace ceres {
namespace internal {
// This is split from the main class because C++ doesn't allow partial template
// specializations for member functions. The alternative is to repeat the main
// class for differing numbers of parameters, which is also unfortunate.
template <typename CostFunctor,
NumericDiffMethodType kMethod,
int kNumResiduals,
typename ParameterDims,
int kParameterBlock,
template <typename CostFunctor, NumericDiffMethodType kMethod,
int kNumResiduals, typename ParameterDims, int kParameterBlock,
int kParameterBlockSize>
struct NumericDiff {
// Mutates parameters but must restore them before return.
@@ -68,23 +66,23 @@ struct NumericDiff {
int num_residuals,
int parameter_block_index,
int parameter_block_size,
double** parameters,
double* jacobian) {
using Eigen::ColMajor;
double **parameters,
double *jacobian) {
using Eigen::Map;
using Eigen::Matrix;
using Eigen::RowMajor;
using Eigen::ColMajor;
DCHECK(jacobian);
const int num_residuals_internal =
(kNumResiduals != ceres::DYNAMIC ? kNumResiduals : num_residuals);
const int parameter_block_index_internal =
(kParameterBlock != ceres::DYNAMIC ? kParameterBlock
: parameter_block_index);
(kParameterBlock != ceres::DYNAMIC ? kParameterBlock :
parameter_block_index);
const int parameter_block_size_internal =
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize
: parameter_block_size);
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize :
parameter_block_size);
typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
@@ -99,17 +97,17 @@ struct NumericDiff {
(kParameterBlockSize == 1) ? ColMajor : RowMajor>
JacobianMatrix;
Map<JacobianMatrix> parameter_jacobian(
jacobian, num_residuals_internal, parameter_block_size_internal);
Map<JacobianMatrix> parameter_jacobian(jacobian,
num_residuals_internal,
parameter_block_size_internal);
Map<ParameterVector> x_plus_delta(
parameters[parameter_block_index_internal],
parameter_block_size_internal);
ParameterVector x(x_plus_delta);
ParameterVector step_size =
x.array().abs() * ((kMethod == RIDDERS)
? options.ridders_relative_initial_step_size
: options.relative_step_size);
ParameterVector step_size = x.array().abs() *
((kMethod == RIDDERS) ? options.ridders_relative_initial_step_size :
options.relative_step_size);
// It is not a good idea to make the step size arbitrarily
// small. This will lead to problems with round off and numerical
@@ -120,8 +118,8 @@ struct NumericDiff {
// For Ridders' method, the initial step size is required to be large,
// thus ridders_relative_initial_step_size is used.
if (kMethod == RIDDERS) {
min_step_size =
std::max(min_step_size, options.ridders_relative_initial_step_size);
min_step_size = std::max(min_step_size,
options.ridders_relative_initial_step_size);
}
// For each parameter in the parameter block, use finite differences to
@@ -135,9 +133,7 @@ struct NumericDiff {
const double delta = std::max(min_step_size, step_size(j));
if (kMethod == RIDDERS) {
if (!EvaluateRiddersJacobianColumn(functor,
j,
delta,
if (!EvaluateRiddersJacobianColumn(functor, j, delta,
options,
num_residuals_internal,
parameter_block_size_internal,
@@ -150,9 +146,7 @@ struct NumericDiff {
return false;
}
} else {
if (!EvaluateJacobianColumn(functor,
j,
delta,
if (!EvaluateJacobianColumn(functor, j, delta,
num_residuals_internal,
parameter_block_size_internal,
x.data(),
@@ -188,7 +182,8 @@ struct NumericDiff {
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -196,8 +191,9 @@ struct NumericDiff {
// Mutate 1 element at a time and then restore.
x_plus_delta(parameter_index) = x(parameter_index) + delta;
if (!VariadicEvaluate<ParameterDims>(
*functor, parameters, residuals.data())) {
if (!VariadicEvaluate<ParameterDims>(*functor,
parameters,
residuals.data())) {
return false;
}
@@ -210,8 +206,9 @@ struct NumericDiff {
// Compute the function on the other side of x(parameter_index).
x_plus_delta(parameter_index) = x(parameter_index) - delta;
if (!VariadicEvaluate<ParameterDims>(
*functor, parameters, temp_residuals.data())) {
if (!VariadicEvaluate<ParameterDims>(*functor,
parameters,
temp_residuals.data())) {
return false;
}
@@ -220,7 +217,8 @@ struct NumericDiff {
} else {
// Forward difference only; reuse existing residuals evaluation.
residuals -=
Map<const ResidualVector>(residuals_at_eval_point, num_residuals);
Map<const ResidualVector>(residuals_at_eval_point,
num_residuals);
}
// Restore x_plus_delta.
@@ -256,17 +254,17 @@ struct NumericDiff {
double* x_plus_delta_ptr,
double* temp_residuals_ptr,
double* residuals_ptr) {
using Eigen::aligned_allocator;
using Eigen::Map;
using Eigen::Matrix;
using Eigen::aligned_allocator;
typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kNumResiduals, Eigen::Dynamic>
ResidualCandidateMatrix;
typedef Matrix<double, kNumResiduals, Eigen::Dynamic> ResidualCandidateMatrix;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -277,16 +275,18 @@ struct NumericDiff {
// As the derivative is estimated, the step size decreases.
// By default, the step sizes are chosen so that the middle column
// of the Romberg tableau uses the input delta.
double current_step_size =
delta * pow(options.ridders_step_shrink_factor,
options.max_num_ridders_extrapolations / 2);
double current_step_size = delta *
pow(options.ridders_step_shrink_factor,
options.max_num_ridders_extrapolations / 2);
// Double-buffering temporary differential candidate vectors
// from previous step size.
ResidualCandidateMatrix stepsize_candidates_a(
num_residuals, options.max_num_ridders_extrapolations);
num_residuals,
options.max_num_ridders_extrapolations);
ResidualCandidateMatrix stepsize_candidates_b(
num_residuals, options.max_num_ridders_extrapolations);
num_residuals,
options.max_num_ridders_extrapolations);
ResidualCandidateMatrix* current_candidates = &stepsize_candidates_a;
ResidualCandidateMatrix* previous_candidates = &stepsize_candidates_b;
@@ -304,9 +304,7 @@ struct NumericDiff {
// 3. Extrapolation becomes numerically unstable.
for (int i = 0; i < options.max_num_ridders_extrapolations; ++i) {
// Compute the numerical derivative at this step size.
if (!EvaluateJacobianColumn(functor,
parameter_index,
current_step_size,
if (!EvaluateJacobianColumn(functor, parameter_index, current_step_size,
num_residuals,
parameter_block_size,
x.data(),
@@ -329,24 +327,23 @@ struct NumericDiff {
// Extrapolation factor for Richardson acceleration method (see below).
double richardson_factor = options.ridders_step_shrink_factor *
options.ridders_step_shrink_factor;
options.ridders_step_shrink_factor;
for (int k = 1; k <= i; ++k) {
// Extrapolate the various orders of finite differences using
// the Richardson acceleration method.
current_candidates->col(k) =
(richardson_factor * current_candidates->col(k - 1) -
previous_candidates->col(k - 1)) /
(richardson_factor - 1.0);
previous_candidates->col(k - 1)) / (richardson_factor - 1.0);
richardson_factor *= options.ridders_step_shrink_factor *
options.ridders_step_shrink_factor;
options.ridders_step_shrink_factor;
// Compute the difference between the previous value and the current.
double candidate_error = std::max(
(current_candidates->col(k) - current_candidates->col(k - 1))
.norm(),
(current_candidates->col(k) - previous_candidates->col(k - 1))
.norm());
(current_candidates->col(k) -
current_candidates->col(k - 1)).norm(),
(current_candidates->col(k) -
previous_candidates->col(k - 1)).norm());
// If the error has decreased, update results.
if (candidate_error <= norm_error) {
@@ -368,9 +365,8 @@ struct NumericDiff {
// Check to see if the current gradient estimate is numerically unstable.
// If so, bail out and return the last stable result.
if (i > 0) {
double tableau_error =
(current_candidates->col(i) - previous_candidates->col(i - 1))
.norm();
double tableau_error = (current_candidates->col(i) -
previous_candidates->col(i - 1)).norm();
// Compare current error to the chosen candidate's error.
if (tableau_error >= 2 * norm_error) {
@@ -486,18 +482,14 @@ struct EvaluateJacobianForParameterBlocks<ParameterDims,
// End of 'recursion'. Nothing more to do.
template <typename ParameterDims, int ParameterIdx>
struct EvaluateJacobianForParameterBlocks<ParameterDims,
std::integer_sequence<int>,
struct EvaluateJacobianForParameterBlocks<ParameterDims, std::integer_sequence<int>,
ParameterIdx> {
template <NumericDiffMethodType method,
int kNumResiduals,
template <NumericDiffMethodType method, int kNumResiduals,
typename CostFunctor>
static bool Apply(const CostFunctor* /* NOT USED*/,
const double* /* NOT USED*/,
const NumericDiffOptions& /* NOT USED*/,
int /* NOT USED*/,
double** /* NOT USED*/,
double** /* NOT USED*/) {
const NumericDiffOptions& /* NOT USED*/, int /* NOT USED*/,
double** /* NOT USED*/, double** /* NOT USED*/) {
return true;
}
};
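
A standalone numeric sketch of the Richardson acceleration step used by the Ridders path above: two derivative estimates taken with step sizes h and h/shrink are combined to cancel the leading error term. f(x) = x^3 at x = 1 is an arbitrary test case, and the shrink factor of 2 stands in for ridders_step_shrink_factor.

#include <cstdio>

static double CentralDiff(double (*f)(double), double x, double h) {
  return (f(x + h) - f(x - h)) / (2.0 * h);
}

int main() {
  double (*f)(double) = [](double x) { return x * x * x; };
  const double x = 1.0, h = 0.1, shrink = 2.0;
  const double coarse = CentralDiff(f, x, h);
  const double fine = CentralDiff(f, x, h / shrink);
  // Central differences have O(h^2) error, so the extrapolation factor is
  // shrink^2, matching richardson_factor = shrink_factor * shrink_factor above.
  const double factor = shrink * shrink;
  const double extrapolated = (factor * fine - coarse) / (factor - 1.0);
  std::printf("coarse %.6f  fine %.6f  extrapolated %.6f  (exact 3)\n",
              coarse, fine, extrapolated);
  return 0;
}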


@@ -35,17 +35,17 @@
#include "ceres/internal/config.h"
#if defined(CERES_USE_OPENMP)
#if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
#error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif
# if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
# error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
# endif
#elif defined(CERES_USE_CXX_THREADS)
#if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
#error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif
# if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
# error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
# endif
#elif defined(CERES_NO_THREADS)
#if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
#error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
#endif
# if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
# error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
# endif
#else
# error One of CERES_USE_OPENMP, CERES_USE_CXX_THREADS or CERES_NO_THREADS must be defined.
#endif
@@ -54,57 +54,37 @@
// compiled without any sparse back-end. Verify that it has not subsequently
// been inconsistently redefined.
#if defined(CERES_NO_SPARSE)
#if !defined(CERES_NO_SUITESPARSE)
#error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
#endif
#if !defined(CERES_NO_CXSPARSE)
#error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
#endif
#if !defined(CERES_NO_ACCELERATE_SPARSE)
#error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
#endif
#if defined(CERES_USE_EIGEN_SPARSE)
#error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
#endif
# if !defined(CERES_NO_SUITESPARSE)
# error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
# endif
# if !defined(CERES_NO_CXSPARSE)
# error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
# endif
# if !defined(CERES_NO_ACCELERATE_SPARSE)
# error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
# endif
# if defined(CERES_USE_EIGEN_SPARSE)
# error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
# endif
#endif
// A macro to signal which functions and classes are exported when
// building a shared library.
#if defined(_MSC_VER)
#define CERES_API_SHARED_IMPORT __declspec(dllimport)
#define CERES_API_SHARED_EXPORT __declspec(dllexport)
#elif defined(__GNUC__)
#define CERES_API_SHARED_IMPORT __attribute__((visibility("default")))
#define CERES_API_SHARED_EXPORT __attribute__((visibility("default")))
// building a DLL with MSVC.
//
// Note that the ordering here is important, CERES_BUILDING_SHARED_LIBRARY
// is only defined locally when Ceres is compiled, it is never exported to
// users. However, in order that we do not have to configure config.h
// separately for building vs installing, if we are using MSVC and building
// a shared library, then both CERES_BUILDING_SHARED_LIBRARY and
// CERES_USING_SHARED_LIBRARY will be defined when Ceres is compiled.
// Hence it is important that the check for CERES_BUILDING_SHARED_LIBRARY
// happens first.
#if defined(_MSC_VER) && defined(CERES_BUILDING_SHARED_LIBRARY)
# define CERES_EXPORT __declspec(dllexport)
#elif defined(_MSC_VER) && defined(CERES_USING_SHARED_LIBRARY)
# define CERES_EXPORT __declspec(dllimport)
#else
#define CERES_API_SHARED_IMPORT
#define CERES_API_SHARED_EXPORT
#endif
// CERES_BUILDING_SHARED_LIBRARY is only defined locally when Ceres itself is
// compiled as a shared library, it is never exported to users. In order that
// we do not have to configure config.h separately when building Ceres as either
// a static or dynamic library, we define both CERES_USING_SHARED_LIBRARY and
// CERES_BUILDING_SHARED_LIBRARY when building as a shared library.
#if defined(CERES_USING_SHARED_LIBRARY)
#if defined(CERES_BUILDING_SHARED_LIBRARY)
// Compiling Ceres itself as a shared library.
#define CERES_EXPORT CERES_API_SHARED_EXPORT
#else
// Using Ceres as a shared library.
#define CERES_EXPORT CERES_API_SHARED_IMPORT
#endif
#else
// Ceres was compiled as a static library, export everything.
#define CERES_EXPORT
#endif
// Unit tests reach in and test internal functionality so we need a way to make
// those symbols visible
#ifdef CERES_EXPORT_INTERNAL_SYMBOLS
#define CERES_EXPORT_INTERNAL CERES_EXPORT
#else
#define CERES_EXPORT_INTERNAL
# define CERES_EXPORT
#endif
#endif // CERES_PUBLIC_INTERNAL_PORT_H_


@@ -32,7 +32,7 @@
#undef CERES_WARNINGS_DISABLED
#ifdef _MSC_VER
#pragma warning(pop)
#pragma warning( pop )
#endif
#endif // CERES_WARNINGS_DISABLED


@@ -46,10 +46,8 @@ namespace internal {
// For fixed size cost functors
template <typename Functor, typename T, int... Indices>
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
std::false_type /*is_dynamic*/,
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, std::false_type /*is_dynamic*/,
std::integer_sequence<int, Indices...>) {
static_assert(sizeof...(Indices),
"Invalid number of parameter blocks. At least one parameter "
@@ -59,31 +57,26 @@ inline bool VariadicEvaluateImpl(const Functor& functor,
// For dynamic sized cost functors
template <typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
std::true_type /*is_dynamic*/,
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, std::true_type /*is_dynamic*/,
std::integer_sequence<int>) {
return functor(input, output);
}
// For ceres cost functors (not ceres::CostFunction)
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
T* output,
const void* /* NOT USED */) {
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output, const void* /* NOT USED */) {
using ParameterBlockIndices =
std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>;
using IsDynamic = std::integral_constant<bool, ParameterDims::kIsDynamic>;
return VariadicEvaluateImpl(
functor, input, output, IsDynamic(), ParameterBlockIndices());
return VariadicEvaluateImpl(functor, input, output, IsDynamic(),
ParameterBlockIndices());
}
// For ceres::CostFunction
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
T const* const* input,
inline bool VariadicEvaluateImpl(const Functor& functor, T const* const* input,
T* output,
const CostFunction* /* NOT USED */) {
return functor.Evaluate(input, output, nullptr);
@@ -102,8 +95,7 @@ inline bool VariadicEvaluateImpl(const Functor& functor,
// blocks. The signature of the functor must have the following signature
// 'bool()(const T* i_1, const T* i_2, ... const T* i_n, T* output)'.
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluate(const Functor& functor,
T const* const* input,
inline bool VariadicEvaluate(const Functor& functor, T const* const* input,
T* output) {
return VariadicEvaluateImpl<ParameterDims>(functor, input, output, &functor);
}


@@ -73,7 +73,7 @@ struct CERES_EXPORT IterationSummary {
bool step_is_successful = false;
// Value of the objective function.
double cost = 0.0;
double cost = 0.90;
// Change in the value of the objective function in this
// iteration. This can be positive or negative.


@@ -388,8 +388,6 @@ using std::cbrt;
using std::ceil;
using std::cos;
using std::cosh;
using std::erf;
using std::erfc;
using std::exp;
using std::exp2;
using std::floor;
@@ -575,21 +573,6 @@ inline Jet<T, N> fmin(const Jet<T, N>& x, const Jet<T, N>& y) {
return y < x ? y : x;
}
// erf is defined as an integral that cannot be expressed analyticaly
// however, the derivative is trivial to compute
// erf(x + h) = erf(x) + h * 2*exp(-x^2)/sqrt(pi)
template <typename T, int N>
inline Jet<T, N> erf(const Jet<T, N>& x) {
return Jet<T, N>(erf(x.a), x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
// erfc(x) = 1-erf(x)
// erfc(x + h) = erfc(x) + h * (-2*exp(-x^2)/sqrt(pi))
template <typename T, int N>
inline Jet<T, N> erfc(const Jet<T, N>& x) {
return Jet<T, N>(erfc(x.a), -x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
// Bessel functions of the first kind with integer order equal to 0, 1, n.
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of
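
Restating the derivative rule behind the erf/erfc Jet overloads touched above (no new behavior, just the comments written out):

\[
\frac{d}{dx}\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\,e^{-x^2},
\qquad
\operatorname{erf}(a + \epsilon v) = \operatorname{erf}(a) + \frac{2}{\sqrt{\pi}}\,e^{-a^2}\,\epsilon v,
\]
\[
\operatorname{erfc}(x) = 1 - \operatorname{erf}(x)
\;\Rightarrow\;
\operatorname{erfc}(a + \epsilon v) = \operatorname{erfc}(a) - \frac{2}{\sqrt{\pi}}\,e^{-a^2}\,\epsilon v,
\]

which is exactly what the `x.v * M_2_SQRTPI * exp(-x.a * x.a)` terms compute, since M_2_SQRTPI is the constant 2/sqrt(pi).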


@@ -90,8 +90,8 @@ namespace ceres {
//
// An example that occurs commonly in Structure from Motion problems
// is when camera rotations are parameterized using Quaternion. There,
// it is useful to only make updates orthogonal to that 4-vector
// defining the quaternion. One way to do this is to let delta be a 3
// it is useful only make updates orthogonal to that 4-vector defining
// the quaternion. One way to do this is to let delta be a 3
// dimensional vector and define Plus to be
//
// Plus(x, delta) = [cos(|delta|), sin(|delta|) delta / |delta|] * x
@@ -99,7 +99,7 @@ namespace ceres {
// The multiplication between the two 4-vectors on the RHS is the
// standard quaternion product.
//
// Given f and a point x, optimizing f can now be restated as
// Given g and a point x, optimizing f can now be restated as
//
// min f(Plus(x, delta))
// delta
@@ -306,7 +306,6 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
public:
ProductParameterization(const ProductParameterization&) = delete;
ProductParameterization& operator=(const ProductParameterization&) = delete;
virtual ~ProductParameterization() {}
//
// NOTE: The constructor takes ownership of the input local
// parameterizations.
@@ -342,8 +341,7 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const override;
bool ComputeJacobian(const double* x,
double* jacobian) const override;
bool ComputeJacobian(const double* x, double* jacobian) const override;
int GlobalSize() const override { return global_size_; }
int LocalSize() const override { return local_size_; }
@@ -356,8 +354,8 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
} // namespace ceres
// clang-format off
#include "ceres/internal/reenable_warnings.h"
#include "ceres/internal/line_parameterization.h"
#endif // CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_
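
As a hedged usage sketch of the Plus() machinery described above: the quaternion example in the comments corresponds to the stock QuaternionParameterization in this version of Ceres, which can be attached to a 4-component parameter block roughly as follows (variable names are placeholders).

#include "ceres/local_parameterization.h"
#include "ceres/problem.h"

void UseQuaternionPlus(ceres::Problem* problem, double* quaternion /* size 4 */) {
  // GlobalSize() == 4, LocalSize() == 3: the solver steps in the 3-vector
  // delta that Plus() maps back onto the unit-quaternion manifold.
  problem->SetParameterization(quaternion, new ceres::QuaternionParameterization);
}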


@@ -192,10 +192,7 @@ class NumericDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
}
}
explicit NumericDiffCostFunction(NumericDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~NumericDiffCostFunction() {
~NumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
}


@@ -453,15 +453,13 @@ class CERES_EXPORT Problem {
// problem.AddResidualBlock(new MyCostFunction, nullptr, &x);
//
// double cost = 0.0;
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
// nullptr, nullptr, nullptr);
// problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
//
// The cost is evaluated at x = 1. If you wish to evaluate the
// problem at x = 2, then
//
// x = 2;
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
// nullptr, nullptr, nullptr);
// problem.Evaluate(Problem::EvaluateOptions(), &cost, nullptr, nullptr, nullptr);
//
// is the way to do so.
//
@@ -477,7 +475,7 @@ class CERES_EXPORT Problem {
// at the end of an iteration during a solve.
//
// Note 4: If an EvaluationCallback is associated with the problem,
// then its PrepareForEvaluation method will be called every time
// then its PrepareForEvaluation method will be called everytime
// this method is called with new_point = true.
bool Evaluate(const EvaluateOptions& options,
double* cost,
@@ -511,41 +509,23 @@ class CERES_EXPORT Problem {
// apply_loss_function as the name implies allows the user to switch
// the application of the loss function on and off.
//
// If an EvaluationCallback is associated with the problem, then its
// PrepareForEvaluation method will be called every time this method
// is called with new_point = true. This conservatively assumes that
// the user may have changed the parameter values since the previous
// call to evaluate / solve. For improved efficiency, and only if
// you know that the parameter values have not changed between
// calls, see EvaluateResidualBlockAssumingParametersUnchanged().
// WARNING: If an EvaluationCallback is associated with the problem
// then it is the user's responsibility to call it before calling
// this method.
//
// This is because, if the user calls this method multiple times, we
// cannot tell if the underlying parameter blocks have changed
// between calls or not. So if EvaluateResidualBlock was responsible
// for calling the EvaluationCallback, it will have to do it
// everytime it is called. Which makes the common case where the
// parameter blocks do not change, inefficient. So we leave it to
// the user to call the EvaluationCallback as needed.
bool EvaluateResidualBlock(ResidualBlockId residual_block_id,
bool apply_loss_function,
double* cost,
double* residuals,
double** jacobians) const;
// Same as EvaluateResidualBlock except that if an
// EvaluationCallback is associated with the problem, then its
// PrepareForEvaluation method will be called every time this method
// is called with new_point = false.
//
// This means, if an EvaluationCallback is associated with the
// problem then it is the user's responsibility to call
// PrepareForEvaluation before calling this method if necessary,
// i.e. iff the parameter values have been changed since the last
// call to evaluate / solve.'
//
// This is because, as the name implies, we assume that the
// parameter blocks did not change since the last time
// PrepareForEvaluation was called (via Solve, Evaluate or
// EvaluateResidualBlock).
bool EvaluateResidualBlockAssumingParametersUnchanged(
ResidualBlockId residual_block_id,
bool apply_loss_function,
double* cost,
double* residuals,
double** jacobians) const;
private:
friend class Solver;
friend class Covariance;
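
A minimal sketch of the EvaluateResidualBlock contract discussed above, assuming a residual block with a single residual; whether the EvaluationCallback must be invoked beforehand depends on which side of this diff applies.

#include "ceres/problem.h"

double EvaluateOneBlock(const ceres::Problem& problem, ceres::ResidualBlockId id) {
  double cost = 0.0;
  double residuals[1];  // assumed size; must match the block's num_residuals
  // Passing nullptr for jacobians skips their computation; the loss function
  // is applied because apply_loss_function is true.
  problem.EvaluateResidualBlock(id, /*apply_loss_function=*/true,
                                &cost, residuals, nullptr);
  return cost;
}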


@@ -320,8 +320,8 @@ inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis) {
}
template <typename T>
void RotationMatrixToQuaternion(const T* R, T* quaternion) {
RotationMatrixToQuaternion(ColumnMajorAdapter3x3(R), quaternion);
void RotationMatrixToQuaternion(const T* R, T* angle_axis) {
RotationMatrixToQuaternion(ColumnMajorAdapter3x3(R), angle_axis);
}
// This algorithm comes from "Quaternion Calculus and Fast Animation",


@@ -360,8 +360,7 @@ class CERES_EXPORT Solver {
//
// If Solver::Options::preconditioner_type == SUBSET, then
// residual_blocks_for_subset_preconditioner must be non-empty.
std::unordered_set<ResidualBlockId>
residual_blocks_for_subset_preconditioner;
std::unordered_set<ResidualBlockId> residual_blocks_for_subset_preconditioner;
// Ceres supports using multiple dense linear algebra libraries
// for dense matrix factorizations. Currently EIGEN and LAPACK are
@@ -839,7 +838,7 @@ class CERES_EXPORT Solver {
int num_linear_solves = -1;
// Time (in seconds) spent evaluating the residual vector.
double residual_evaluation_time_in_seconds = -1.0;
double residual_evaluation_time_in_seconds = 1.0;
// Number of residual only evaluations.
int num_residual_evaluations = -1;


@@ -50,7 +50,7 @@ namespace ceres {
// delete on it upon completion.
enum Ownership {
DO_NOT_TAKE_OWNERSHIP,
TAKE_OWNERSHIP,
TAKE_OWNERSHIP
};
// TODO(keir): Considerably expand the explanations of each solver type.
@@ -185,19 +185,19 @@ enum SparseLinearAlgebraLibraryType {
enum DenseLinearAlgebraLibraryType {
EIGEN,
LAPACK,
LAPACK
};
// Logging options
// The options get progressively noisier.
enum LoggingType {
SILENT,
PER_MINIMIZER_ITERATION,
PER_MINIMIZER_ITERATION
};
enum MinimizerType {
LINE_SEARCH,
TRUST_REGION,
TRUST_REGION
};
enum LineSearchDirectionType {
@@ -412,7 +412,7 @@ enum DumpFormatType {
// specified for the number of residuals. If specified, then the
// number of residuas for that cost function can vary at runtime.
enum DimensionType {
DYNAMIC = -1,
DYNAMIC = -1
};
// The differentiation method used to compute numerical derivatives in
@@ -433,7 +433,7 @@ enum NumericDiffMethodType {
enum LineSearchInterpolationType {
BISECTION,
QUADRATIC,
CUBIC,
CUBIC
};
enum CovarianceAlgorithmType {
@@ -448,7 +448,8 @@ enum CovarianceAlgorithmType {
// did not write to that memory location.
const double kImpossibleValue = 1e302;
CERES_EXPORT const char* LinearSolverTypeToString(LinearSolverType type);
CERES_EXPORT const char* LinearSolverTypeToString(
LinearSolverType type);
CERES_EXPORT bool StringToLinearSolverType(std::string value,
LinearSolverType* type);
@@ -458,23 +459,25 @@ CERES_EXPORT bool StringToPreconditionerType(std::string value,
CERES_EXPORT const char* VisibilityClusteringTypeToString(
VisibilityClusteringType type);
CERES_EXPORT bool StringToVisibilityClusteringType(
std::string value, VisibilityClusteringType* type);
CERES_EXPORT bool StringToVisibilityClusteringType(std::string value,
VisibilityClusteringType* type);
CERES_EXPORT const char* SparseLinearAlgebraLibraryTypeToString(
SparseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToSparseLinearAlgebraLibraryType(
std::string value, SparseLinearAlgebraLibraryType* type);
std::string value,
SparseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* DenseLinearAlgebraLibraryTypeToString(
DenseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToDenseLinearAlgebraLibraryType(
std::string value, DenseLinearAlgebraLibraryType* type);
std::string value,
DenseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* TrustRegionStrategyTypeToString(
TrustRegionStrategyType type);
CERES_EXPORT bool StringToTrustRegionStrategyType(
std::string value, TrustRegionStrategyType* type);
CERES_EXPORT bool StringToTrustRegionStrategyType(std::string value,
TrustRegionStrategyType* type);
CERES_EXPORT const char* DoglegTypeToString(DoglegType type);
CERES_EXPORT bool StringToDoglegType(std::string value, DoglegType* type);
@@ -484,39 +487,41 @@ CERES_EXPORT bool StringToMinimizerType(std::string value, MinimizerType* type);
CERES_EXPORT const char* LineSearchDirectionTypeToString(
LineSearchDirectionType type);
CERES_EXPORT bool StringToLineSearchDirectionType(
std::string value, LineSearchDirectionType* type);
CERES_EXPORT bool StringToLineSearchDirectionType(std::string value,
LineSearchDirectionType* type);
CERES_EXPORT const char* LineSearchTypeToString(LineSearchType type);
CERES_EXPORT bool StringToLineSearchType(std::string value,
LineSearchType* type);
CERES_EXPORT bool StringToLineSearchType(std::string value, LineSearchType* type);
CERES_EXPORT const char* NonlinearConjugateGradientTypeToString(
NonlinearConjugateGradientType type);
CERES_EXPORT bool StringToNonlinearConjugateGradientType(
std::string value, NonlinearConjugateGradientType* type);
std::string value,
NonlinearConjugateGradientType* type);
CERES_EXPORT const char* LineSearchInterpolationTypeToString(
LineSearchInterpolationType type);
CERES_EXPORT bool StringToLineSearchInterpolationType(
std::string value, LineSearchInterpolationType* type);
std::string value,
LineSearchInterpolationType* type);
CERES_EXPORT const char* CovarianceAlgorithmTypeToString(
CovarianceAlgorithmType type);
CERES_EXPORT bool StringToCovarianceAlgorithmType(
std::string value, CovarianceAlgorithmType* type);
std::string value,
CovarianceAlgorithmType* type);
CERES_EXPORT const char* NumericDiffMethodTypeToString(
NumericDiffMethodType type);
CERES_EXPORT bool StringToNumericDiffMethodType(std::string value,
NumericDiffMethodType* type);
CERES_EXPORT bool StringToNumericDiffMethodType(
std::string value,
NumericDiffMethodType* type);
CERES_EXPORT const char* LoggingTypeToString(LoggingType type);
CERES_EXPORT bool StringtoLoggingType(std::string value, LoggingType* type);
CERES_EXPORT const char* DumpFormatTypeToString(DumpFormatType type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value,
DumpFormatType* type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value, DumpFormatType* type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value, LoggingType* type);
CERES_EXPORT const char* TerminationTypeToString(TerminationType type);


@@ -41,9 +41,8 @@
#define CERES_TO_STRING(x) CERES_TO_STRING_HELPER(x)
// The Ceres version as a string; for example "1.9.0".
#define CERES_VERSION_STRING \
CERES_TO_STRING(CERES_VERSION_MAJOR) \
"." CERES_TO_STRING(CERES_VERSION_MINOR) "." CERES_TO_STRING( \
CERES_VERSION_REVISION)
#define CERES_VERSION_STRING CERES_TO_STRING(CERES_VERSION_MAJOR) "." \
CERES_TO_STRING(CERES_VERSION_MINOR) "." \
CERES_TO_STRING(CERES_VERSION_REVISION)
#endif // CERES_PUBLIC_VERSION_H_


@@ -33,19 +33,18 @@
#ifndef CERES_NO_ACCELERATE_SPARSE
#include "ceres/accelerate_sparse.h"
#include <algorithm>
#include <string>
#include <vector>
#include "ceres/accelerate_sparse.h"
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#define CASESTR(x) \
case x: \
return #x
#define CASESTR(x) case x: return #x
namespace ceres {
namespace internal {
@@ -69,7 +68,7 @@ const char* SparseStatusToString(SparseStatus_t status) {
// aligned to kAccelerateRequiredAlignment and returns a pointer to the
// aligned start.
void* ResizeForAccelerateAlignment(const size_t required_size,
std::vector<uint8_t>* workspace) {
std::vector<uint8_t> *workspace) {
// As per the Accelerate documentation, all workspace memory passed to the
// sparse solver functions must be 16-byte aligned.
constexpr int kAccelerateRequiredAlignment = 16;
@@ -81,28 +80,29 @@ void* ResizeForAccelerateAlignment(const size_t required_size,
size_t size_from_aligned_start = workspace->size();
void* aligned_solve_workspace_start =
reinterpret_cast<void*>(workspace->data());
aligned_solve_workspace_start = std::align(kAccelerateRequiredAlignment,
required_size,
aligned_solve_workspace_start,
size_from_aligned_start);
aligned_solve_workspace_start =
std::align(kAccelerateRequiredAlignment,
required_size,
aligned_solve_workspace_start,
size_from_aligned_start);
CHECK(aligned_solve_workspace_start != nullptr)
<< "required_size: " << required_size
<< ", workspace size: " << workspace->size();
return aligned_solve_workspace_start;
}
template <typename Scalar>
template<typename Scalar>
void AccelerateSparse<Scalar>::Solve(NumericFactorization* numeric_factor,
DenseVector* rhs_and_solution) {
// From SparseSolve() documentation in Solve.h
const int required_size = numeric_factor->solveWorkspaceRequiredStatic +
numeric_factor->solveWorkspaceRequiredPerRHS;
SparseSolve(*numeric_factor,
*rhs_and_solution,
const int required_size =
numeric_factor->solveWorkspaceRequiredStatic +
numeric_factor->solveWorkspaceRequiredPerRHS;
SparseSolve(*numeric_factor, *rhs_and_solution,
ResizeForAccelerateAlignment(required_size, &solve_workspace_));
}
template <typename Scalar>
template<typename Scalar>
typename AccelerateSparse<Scalar>::ASSparseMatrix
AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
CompressedRowSparseMatrix* A) {
@@ -112,7 +112,7 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
//
// Accelerate's columnStarts is a long*, not an int*. These types might be
// different (e.g. ARM on iOS) so always make a copy.
column_starts_.resize(A->num_rows() + 1); // +1 for final column length.
column_starts_.resize(A->num_rows() +1); // +1 for final column length.
std::copy_n(A->rows(), column_starts_.size(), &column_starts_[0]);
ASSparseMatrix At;
@@ -136,31 +136,29 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
return At;
}
template <typename Scalar>
template<typename Scalar>
typename AccelerateSparse<Scalar>::SymbolicFactorization
AccelerateSparse<Scalar>::AnalyzeCholesky(ASSparseMatrix* A) {
return SparseFactor(SparseFactorizationCholesky, A->structure);
}
template <typename Scalar>
template<typename Scalar>
typename AccelerateSparse<Scalar>::NumericFactorization
AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
SymbolicFactorization* symbolic_factor) {
return SparseFactor(*symbolic_factor, *A);
}
template <typename Scalar>
template<typename Scalar>
void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
NumericFactorization* numeric_factor) {
// From SparseRefactor() documentation in Solve.h
const int required_size =
std::is_same<Scalar, double>::value
? numeric_factor->symbolicFactorization.workspaceSize_Double
: numeric_factor->symbolicFactorization.workspaceSize_Float;
return SparseRefactor(
*A,
numeric_factor,
ResizeForAccelerateAlignment(required_size, &factorization_workspace_));
const int required_size = std::is_same<Scalar, double>::value
? numeric_factor->symbolicFactorization.workspaceSize_Double
: numeric_factor->symbolicFactorization.workspaceSize_Float;
return SparseRefactor(*A, numeric_factor,
ResizeForAccelerateAlignment(required_size,
&factorization_workspace_));
}
// Instantiate only for the specific template types required/supported s/t the
@@ -168,33 +166,34 @@ void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
template class AccelerateSparse<double>;
template class AccelerateSparse<float>;
template <typename Scalar>
std::unique_ptr<SparseCholesky> AppleAccelerateCholesky<Scalar>::Create(
OrderingType ordering_type) {
template<typename Scalar>
std::unique_ptr<SparseCholesky>
AppleAccelerateCholesky<Scalar>::Create(OrderingType ordering_type) {
return std::unique_ptr<SparseCholesky>(
new AppleAccelerateCholesky<Scalar>(ordering_type));
}
template <typename Scalar>
template<typename Scalar>
AppleAccelerateCholesky<Scalar>::AppleAccelerateCholesky(
const OrderingType ordering_type)
: ordering_type_(ordering_type) {}
template <typename Scalar>
template<typename Scalar>
AppleAccelerateCholesky<Scalar>::~AppleAccelerateCholesky() {
FreeSymbolicFactorization();
FreeNumericFactorization();
}
template <typename Scalar>
template<typename Scalar>
CompressedRowSparseMatrix::StorageType
AppleAccelerateCholesky<Scalar>::StorageType() const {
return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
}
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
CompressedRowSparseMatrix* lhs, std::string* message) {
template<typename Scalar>
LinearSolverTerminationType
AppleAccelerateCholesky<Scalar>::Factorize(CompressedRowSparseMatrix* lhs,
std::string* message) {
CHECK_EQ(lhs->storage_type(), StorageType());
if (lhs == NULL) {
*message = "Failure: Input lhs is NULL.";
@@ -235,9 +234,11 @@ LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
return LINEAR_SOLVER_SUCCESS;
}
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Solve(
const double* rhs, double* solution, std::string* message) {
template<typename Scalar>
LinearSolverTerminationType
AppleAccelerateCholesky<Scalar>::Solve(const double* rhs,
double* solution,
std::string* message) {
CHECK_EQ(numeric_factor_->status, SparseStatusOK)
<< "Solve called without a call to Factorize first ("
<< SparseStatusToString(numeric_factor_->status) << ").";
@@ -261,7 +262,7 @@ LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Solve(
return LINEAR_SOLVER_SUCCESS;
}
template <typename Scalar>
template<typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
if (symbolic_factor_) {
SparseCleanup(*symbolic_factor_);
@@ -269,7 +270,7 @@ void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
}
}
template <typename Scalar>
template<typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
if (numeric_factor_) {
SparseCleanup(*numeric_factor_);
@@ -282,7 +283,7 @@ void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
template class AppleAccelerateCholesky<double>;
template class AppleAccelerateCholesky<float>;
} // namespace internal
} // namespace ceres
}
}
#endif // CERES_NO_ACCELERATE_SPARSE
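
For reference, a standalone sketch of the std::align pattern used in ResizeForAccelerateAlignment above to satisfy the 16-byte alignment requirement; the helper name and the slack handling here are illustrative, not taken from the diff.

#include <cstdint>
#include <memory>
#include <vector>

void* AlignedWorkspaceStart(std::vector<std::uint8_t>* workspace,
                            std::size_t required_size) {
  constexpr std::size_t kAlignment = 16;
  workspace->resize(required_size + kAlignment);  // slack so alignment cannot fail
  void* start = workspace->data();
  std::size_t space = workspace->size();
  // std::align advances `start` to the next 16-byte boundary (shrinking `space`
  // accordingly) or returns nullptr if the buffer were too small.
  return std::align(kAlignment, required_size, start, space);
}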


@@ -40,9 +40,9 @@
#include <string>
#include <vector>
#include "Accelerate.h"
#include "ceres/linear_solver.h"
#include "ceres/sparse_cholesky.h"
#include "Accelerate.h"
namespace ceres {
namespace internal {
@@ -50,10 +50,11 @@ namespace internal {
class CompressedRowSparseMatrix;
class TripletSparseMatrix;
template <typename Scalar>
struct SparseTypesTrait {};
template<typename Scalar>
struct SparseTypesTrait {
};
template <>
template<>
struct SparseTypesTrait<double> {
typedef DenseVector_Double DenseVector;
typedef SparseMatrix_Double SparseMatrix;
@@ -61,7 +62,7 @@ struct SparseTypesTrait<double> {
typedef SparseOpaqueFactorization_Double NumericFactorization;
};
template <>
template<>
struct SparseTypesTrait<float> {
typedef DenseVector_Float DenseVector;
typedef SparseMatrix_Float SparseMatrix;
@@ -69,16 +70,14 @@ struct SparseTypesTrait<float> {
typedef SparseOpaqueFactorization_Float NumericFactorization;
};
template <typename Scalar>
template<typename Scalar>
class AccelerateSparse {
public:
using DenseVector = typename SparseTypesTrait<Scalar>::DenseVector;
// Use ASSparseMatrix to avoid collision with ceres::internal::SparseMatrix.
using ASSparseMatrix = typename SparseTypesTrait<Scalar>::SparseMatrix;
using SymbolicFactorization =
typename SparseTypesTrait<Scalar>::SymbolicFactorization;
using NumericFactorization =
typename SparseTypesTrait<Scalar>::NumericFactorization;
using SymbolicFactorization = typename SparseTypesTrait<Scalar>::SymbolicFactorization;
using NumericFactorization = typename SparseTypesTrait<Scalar>::NumericFactorization;
// Solves a linear system given its symbolic (reference counted within
// NumericFactorization) and numeric factorization.
@@ -110,7 +109,7 @@ class AccelerateSparse {
// An implementation of SparseCholesky interface using Apple's Accelerate
// framework.
template <typename Scalar>
template<typename Scalar>
class AppleAccelerateCholesky : public SparseCholesky {
public:
// Factory
@@ -123,7 +122,7 @@ class AppleAccelerateCholesky : public SparseCholesky {
std::string* message) final;
LinearSolverTerminationType Solve(const double* rhs,
double* solution,
std::string* message) final;
std::string* message) final ;
private:
AppleAccelerateCholesky(const OrderingType ordering_type);
@@ -133,15 +132,15 @@ class AppleAccelerateCholesky : public SparseCholesky {
const OrderingType ordering_type_;
AccelerateSparse<Scalar> as_;
std::unique_ptr<typename AccelerateSparse<Scalar>::SymbolicFactorization>
symbolic_factor_;
std::unique_ptr<typename AccelerateSparse<Scalar>::NumericFactorization>
numeric_factor_;
// Copy of rhs/solution if Scalar != double (necessitating a copy).
Eigen::Matrix<Scalar, Eigen::Dynamic, 1> scalar_rhs_and_solution_;
};
} // namespace internal
} // namespace ceres
}
}
#endif // CERES_NO_ACCELERATE_SPARSE


@@ -35,7 +35,6 @@
#include <cstddef>
#include <string>
#include <vector>
#include "ceres/stringprintf.h"
#include "ceres/types.h"
namespace ceres {
@@ -46,7 +45,7 @@ using std::string;
bool IsArrayValid(const int size, const double* x) {
if (x != NULL) {
for (int i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return false;
}
}
@@ -60,7 +59,7 @@ int FindInvalidValue(const int size, const double* x) {
}
for (int i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return i;
}
}
@@ -93,13 +92,14 @@ void AppendArrayToString(const int size, const double* x, string* result) {
void MapValuesToContiguousRange(const int size, int* array) {
std::vector<int> unique_values(array, array + size);
std::sort(unique_values.begin(), unique_values.end());
unique_values.erase(std::unique(unique_values.begin(), unique_values.end()),
unique_values.erase(std::unique(unique_values.begin(),
unique_values.end()),
unique_values.end());
for (int i = 0; i < size; ++i) {
array[i] =
std::lower_bound(unique_values.begin(), unique_values.end(), array[i]) -
unique_values.begin();
array[i] = std::lower_bound(unique_values.begin(),
unique_values.end(),
array[i]) - unique_values.begin();
}
}
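The reflowed MapValuesToContiguousRange() above is easier to follow with a concrete input, so here is a small worked example; the values are illustrative and not taken from the diff.

// Each entry is replaced by its index in the sorted list of unique values.
int array[] = {9, 4, 4, 7};
ceres::internal::MapValuesToContiguousRange(4, array);
// Unique sorted values are {4, 7, 9}, so array becomes {2, 0, 0, 1}.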


@@ -44,7 +44,6 @@
#define CERES_INTERNAL_ARRAY_UTILS_H_
#include <string>
#include "ceres/internal/port.h"
namespace ceres {
@@ -52,22 +51,20 @@ namespace internal {
// Fill the array x with an impossible value that the user code is
// never expected to compute.
CERES_EXPORT_INTERNAL void InvalidateArray(int size, double* x);
void InvalidateArray(int size, double* x);
// Check if all the entries of the array x are valid, i.e. all the
// values in the array should be finite and none of them should be
// equal to the "impossible" value used by InvalidateArray.
CERES_EXPORT_INTERNAL bool IsArrayValid(int size, const double* x);
bool IsArrayValid(int size, const double* x);
// If the array contains an invalid value, return the index for it,
// otherwise return size.
CERES_EXPORT_INTERNAL int FindInvalidValue(const int size, const double* x);
int FindInvalidValue(const int size, const double* x);
// Utility routine to print an array of doubles to a string. If the
// array pointer is NULL, it is treated as an array of zeros.
CERES_EXPORT_INTERNAL void AppendArrayToString(const int size,
const double* x,
std::string* result);
void AppendArrayToString(const int size, const double* x, std::string* result);
// This routine takes an array of integer values, sorts and uniques
// them and then maps each value in the array to its position in the
@@ -82,7 +79,7 @@ CERES_EXPORT_INTERNAL void AppendArrayToString(const int size,
// gets mapped to
//
// [1 0 2 3 0 1 3]
CERES_EXPORT_INTERNAL void MapValuesToContiguousRange(int size, int* array);
void MapValuesToContiguousRange(int size, int* array);
} // namespace internal
} // namespace ceres
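A short sketch of how the array validity helpers declared above fit together, added for illustration and not part of the diff; it assumes glog's CHECK macros, which the .cc files in this diff already use.

#include "ceres/array_utils.h"  // assumed header path
#include "glog/logging.h"

void ArrayUtilsSketch() {
  double x[3];
  ceres::internal::InvalidateArray(3, x);       // fill with the "impossible" value
  CHECK(!ceres::internal::IsArrayValid(3, x));  // invalid until user code writes x
  x[0] = x[1] = x[2] = 1.0;
  CHECK_EQ(ceres::internal::FindInvalidValue(3, x), 3);  // all valid, so returns size
}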


@@ -29,7 +29,6 @@
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include "ceres/blas.h"
#include "ceres/internal/port.h"
#include "glog/logging.h"


@@ -31,7 +31,6 @@
#include "ceres/block_evaluate_preparer.h"
#include <vector>
#include "ceres/block_sparse_matrix.h"
#include "ceres/casts.h"
#include "ceres/parameter_block.h"
@@ -54,8 +53,10 @@ void BlockEvaluatePreparer::Prepare(const ResidualBlock* residual_block,
double** jacobians) {
// If the overall jacobian is not available, use the scratch space.
if (jacobian == NULL) {
scratch_evaluate_preparer_.Prepare(
residual_block, residual_block_index, jacobian, jacobians);
scratch_evaluate_preparer_.Prepare(residual_block,
residual_block_index,
jacobian,
jacobians);
return;
}


@@ -30,9 +30,9 @@
#include "ceres/block_jacobi_preconditioner.h"
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/casts.h"
#include "ceres/internal/eigen.h"
@@ -65,11 +65,13 @@ bool BlockJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
const int col_block_size = bs->cols[block_id].size;
int r, c, row_stride, col_stride;
CellInfo* cell_info =
m_->GetCell(block_id, block_id, &r, &c, &row_stride, &col_stride);
CellInfo* cell_info = m_->GetCell(block_id, block_id,
&r, &c,
&row_stride, &col_stride);
MatrixRef m(cell_info->values, row_stride, col_stride);
ConstMatrixRef b(
values + cells[j].position, row_block_size, col_block_size);
ConstMatrixRef b(values + cells[j].position,
row_block_size,
col_block_size);
m.block(r, c, col_block_size, col_block_size) += b.transpose() * b;
}
}
@@ -80,7 +82,9 @@ bool BlockJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
for (int i = 0; i < bs->cols.size(); ++i) {
const int block_size = bs->cols[i].size;
int r, c, row_stride, col_stride;
CellInfo* cell_info = m_->GetCell(i, i, &r, &c, &row_stride, &col_stride);
CellInfo* cell_info = m_->GetCell(i, i,
&r, &c,
&row_stride, &col_stride);
MatrixRef m(cell_info->values, row_stride, col_stride);
m.block(r, c, block_size, block_size).diagonal() +=
ConstVectorRef(D + position, block_size).array().square().matrix();


@@ -32,9 +32,7 @@
#define CERES_INTERNAL_BLOCK_JACOBI_PRECONDITIONER_H_
#include <memory>
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/preconditioner.h"
namespace ceres {
@@ -53,8 +51,7 @@ struct CompressedRowBlockStructure;
// update the matrix by running Update(A, D). The values of the matrix A are
// inspected to construct the preconditioner. The vector D is applied as the
// D^TD diagonal term.
class CERES_EXPORT_INTERNAL BlockJacobiPreconditioner
: public BlockSparseMatrixPreconditioner {
class BlockJacobiPreconditioner : public BlockSparseMatrixPreconditioner {
public:
// A must remain valid while the BlockJacobiPreconditioner is.
explicit BlockJacobiPreconditioner(const BlockSparseMatrix& A);
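The UpdateImpl() hunks a few files up accumulate b^T b into each diagonal block and then add the squared entries of D, which is the Update(A, D) contract the comment above describes. Below is a minimal usage sketch, added for illustration and not part of the diff; the Update() and RightMultiply() signatures are assumed from the Preconditioner base class, which is not shown here.

#include "ceres/block_jacobi_preconditioner.h"  // assumed header path
#include "ceres/block_sparse_matrix.h"

namespace ceres {
namespace internal {

// Illustrative sketch: build the block-Jacobi preconditioner from A and D,
// then apply it to a vector x of size A.num_cols().
void ApplyBlockJacobi(const BlockSparseMatrix& A,
                      const double* D,
                      const double* x,
                      double* y) {
  BlockJacobiPreconditioner preconditioner(A);
  preconditioner.Update(A, D);         // per-block J_i^T J_i plus diag(D_i)^2
  preconditioner.RightMultiply(x, y);  // applies the inverted diagonal blocks to x
}

}  // namespace internal
}  // namespace ceres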


@@ -32,11 +32,11 @@
#include "ceres/block_evaluate_preparer.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {


@@ -39,7 +39,6 @@
#define CERES_INTERNAL_BLOCK_JACOBIAN_WRITER_H_
#include <vector>
#include "ceres/evaluator.h"
#include "ceres/internal/port.h"
@@ -53,7 +52,8 @@ class SparseMatrix;
// TODO(sameeragarwal): This class needs documentation.
class BlockJacobianWriter {
public:
BlockJacobianWriter(const Evaluator::Options& options, Program* program);
BlockJacobianWriter(const Evaluator::Options& options,
Program* program);
// JacobianWriter interface.


@@ -31,7 +31,6 @@
#include "ceres/block_random_access_dense_matrix.h"
#include <vector>
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
@@ -60,7 +59,8 @@ BlockRandomAccessDenseMatrix::BlockRandomAccessDenseMatrix(
// Assume that the user does not hold any locks on any cell blocks
// when they are calling SetZero.
BlockRandomAccessDenseMatrix::~BlockRandomAccessDenseMatrix() {}
BlockRandomAccessDenseMatrix::~BlockRandomAccessDenseMatrix() {
}
CellInfo* BlockRandomAccessDenseMatrix::GetCell(const int row_block_id,
const int col_block_id,


@@ -31,10 +31,11 @@
#ifndef CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DENSE_MATRIX_H_
#define CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DENSE_MATRIX_H_
#include "ceres/block_random_access_matrix.h"
#include <memory>
#include <vector>
#include "ceres/block_random_access_matrix.h"
#include "ceres/internal/port.h"
namespace ceres {
@@ -50,8 +51,7 @@ namespace internal {
// pair.
//
// ReturnCell is a nop.
class CERES_EXPORT_INTERNAL BlockRandomAccessDenseMatrix
: public BlockRandomAccessMatrix {
class BlockRandomAccessDenseMatrix : public BlockRandomAccessMatrix {
public:
// blocks is a vector of block sizes. The resulting matrix has
// blocks.size() * blocks.size() cells.


@@ -63,8 +63,9 @@ BlockRandomAccessDiagonalMatrix::BlockRandomAccessDiagonalMatrix(
num_nonzeros += blocks_[i] * blocks_[i];
}
VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
<< num_nonzeros;
VLOG(1) << "Matrix Size [" << num_cols
<< "," << num_cols
<< "] " << num_nonzeros;
tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
tsm_->set_num_nonzeros(num_nonzeros);
@@ -115,7 +116,8 @@ CellInfo* BlockRandomAccessDiagonalMatrix::GetCell(int row_block_id,
// when they are calling SetZero.
void BlockRandomAccessDiagonalMatrix::SetZero() {
if (tsm_->num_nonzeros()) {
VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
VectorRef(tsm_->mutable_values(),
tsm_->num_nonzeros()).setZero();
}
}
@@ -124,8 +126,11 @@ void BlockRandomAccessDiagonalMatrix::Invert() {
for (int i = 0; i < blocks_.size(); ++i) {
const int block_size = blocks_[i];
MatrixRef block(values, block_size, block_size);
block = block.selfadjointView<Eigen::Upper>().llt().solve(
Matrix::Identity(block_size, block_size));
block =
block
.selfadjointView<Eigen::Upper>()
.llt()
.solve(Matrix::Identity(block_size, block_size));
values += block_size * block_size;
}
}


@@ -46,13 +46,11 @@ namespace internal {
// A thread safe block diagonal matrix implementation of
// BlockRandomAccessMatrix.
class CERES_EXPORT_INTERNAL BlockRandomAccessDiagonalMatrix
: public BlockRandomAccessMatrix {
class BlockRandomAccessDiagonalMatrix : public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes.
explicit BlockRandomAccessDiagonalMatrix(const std::vector<int>& blocks);
BlockRandomAccessDiagonalMatrix(const BlockRandomAccessDiagonalMatrix&) =
delete;
BlockRandomAccessDiagonalMatrix(const BlockRandomAccessDiagonalMatrix&) = delete;
void operator=(const BlockRandomAccessDiagonalMatrix&) = delete;
// The destructor is not thread safe. It assumes that no one is


@@ -33,7 +33,8 @@
namespace ceres {
namespace internal {
BlockRandomAccessMatrix::~BlockRandomAccessMatrix() {}
BlockRandomAccessMatrix::~BlockRandomAccessMatrix() {
}
} // namespace internal
} // namespace ceres


@@ -35,8 +35,6 @@
#include <mutex>
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -93,7 +91,7 @@ struct CellInfo {
std::mutex m;
};
class CERES_EXPORT_INTERNAL BlockRandomAccessMatrix {
class BlockRandomAccessMatrix {
public:
virtual ~BlockRandomAccessMatrix();
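The GetCell() call sites being re-wrapped in this diff all follow the same pattern: look up a cell, lock its mutex when several threads may write, and view its values through an Eigen map. A sketch of that pattern, added for illustration and not part of the diff; MatrixRef is taken to be the Eigen::Map typedef from ceres/internal/eigen.h.

#include <mutex>

#include "ceres/block_random_access_matrix.h"  // assumed header path
#include "ceres/internal/eigen.h"

namespace ceres {
namespace internal {

// Illustrative sketch: zero one stored cell of a BlockRandomAccessMatrix.
void ZeroCell(BlockRandomAccessMatrix* matrix,
              int row_block_id, int col_block_id,
              int row_block_size, int col_block_size) {
  int r, c, row_stride, col_stride;
  CellInfo* cell_info =
      matrix->GetCell(row_block_id, col_block_id, &r, &c, &row_stride, &col_stride);
  if (cell_info == NULL) {
    return;  // this (row, col) block is not stored
  }
  std::lock_guard<std::mutex> lock(cell_info->m);  // guard concurrent writers
  MatrixRef m(cell_info->values, row_stride, col_stride);
  m.block(r, c, row_block_size, col_block_size).setZero();
}

}  // namespace internal
}  // namespace ceres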


@@ -50,8 +50,10 @@ using std::set;
using std::vector;
BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
const vector<int>& blocks, const set<pair<int, int>>& block_pairs)
: kMaxRowBlocks(10 * 1000 * 1000), blocks_(blocks) {
const vector<int>& blocks,
const set<pair<int, int>>& block_pairs)
: kMaxRowBlocks(10 * 1000 * 1000),
blocks_(blocks) {
CHECK_LT(blocks.size(), kMaxRowBlocks);
// Build the row/column layout vector and count the number of scalar
@@ -73,8 +75,9 @@ BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
num_nonzeros += row_block_size * col_block_size;
}
VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
<< num_nonzeros;
VLOG(1) << "Matrix Size [" << num_cols
<< "," << num_cols
<< "] " << num_nonzeros;
tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
tsm_->set_num_nonzeros(num_nonzeros);
@@ -102,11 +105,11 @@ BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
layout_[IntPairToLong(row_block_id, col_block_id)]->values - values;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c, ++pos) {
rows[pos] = block_positions_[row_block_id] + r;
cols[pos] = block_positions_[col_block_id] + c;
values[pos] = 1.0;
DCHECK_LT(rows[pos], tsm_->num_rows());
DCHECK_LT(cols[pos], tsm_->num_rows());
}
}
}
@@ -126,7 +129,7 @@ CellInfo* BlockRandomAccessSparseMatrix::GetCell(int row_block_id,
int* col,
int* row_stride,
int* col_stride) {
const LayoutType::iterator it =
layout_.find(IntPairToLong(row_block_id, col_block_id));
if (it == layout_.end()) {
return NULL;
@@ -144,7 +147,8 @@ CellInfo* BlockRandomAccessSparseMatrix::GetCell(int row_block_id,
// when they are calling SetZero.
void BlockRandomAccessSparseMatrix::SetZero() {
if (tsm_->num_nonzeros()) {
VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
VectorRef(tsm_->mutable_values(),
tsm_->num_nonzeros()).setZero();
}
}
@@ -160,9 +164,7 @@ void BlockRandomAccessSparseMatrix::SymmetricRightMultiply(const double* x,
const int col_block_pos = block_positions_[col];
MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
cell_position_and_data.second,
row_block_size,
col_block_size,
cell_position_and_data.second, row_block_size, col_block_size,
x + col_block_pos,
y + row_block_pos);
@@ -172,9 +174,7 @@ void BlockRandomAccessSparseMatrix::SymmetricRightMultiply(const double* x,
// triangular multiply also.
if (row != col) {
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
cell_position_and_data.second,
row_block_size,
col_block_size,
cell_position_and_data.second, row_block_size, col_block_size,
x + row_block_pos,
y + col_block_pos);
}


@@ -39,10 +39,10 @@
#include <vector>
#include "ceres/block_random_access_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/small_blas.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/internal/port.h"
#include "ceres/types.h"
#include "ceres/small_blas.h"
namespace ceres {
namespace internal {
@@ -51,8 +51,7 @@ namespace internal {
// BlockRandomAccessMatrix. Internally a TripletSparseMatrix is used
// for doing the actual storage. This class augments this matrix with
// an unordered_map that allows random read/write access.
class CERES_EXPORT_INTERNAL BlockRandomAccessSparseMatrix
: public BlockRandomAccessMatrix {
class BlockRandomAccessSparseMatrix : public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes. block_pairs is a set of
// <row_block_id, col_block_id> pairs to identify the non-zero cells
@@ -111,7 +110,7 @@ class CERES_EXPORT_INTERNAL BlockRandomAccessSparseMatrix
// A mapping from <row_block_id, col_block_id> to the position in
// the values array of tsm_ where the block is stored.
typedef std::unordered_map<long int, CellInfo*> LayoutType;
typedef std::unordered_map<long int, CellInfo* > LayoutType;
LayoutType layout_;
// In order traversal of contents of the matrix. This allows us to


@@ -30,10 +30,9 @@
#include "ceres/block_sparse_matrix.h"
#include <algorithm>
#include <cstddef>
#include <algorithm>
#include <vector>
#include "ceres/block_structure.h"
#include "ceres/internal/eigen.h"
#include "ceres/random.h"
@@ -78,8 +77,8 @@ BlockSparseMatrix::BlockSparseMatrix(
CHECK_GE(num_rows_, 0);
CHECK_GE(num_cols_, 0);
CHECK_GE(num_nonzeros_, 0);
VLOG(2) << "Allocating values array with " << num_nonzeros_ * sizeof(double)
<< " bytes."; // NOLINT
VLOG(2) << "Allocating values array with "
<< num_nonzeros_ * sizeof(double) << " bytes."; // NOLINT
values_.reset(new double[num_nonzeros_]);
max_num_nonzeros_ = num_nonzeros_;
CHECK(values_ != nullptr);
@@ -89,7 +88,7 @@ void BlockSparseMatrix::SetZero() {
std::fill(values_.get(), values_.get() + num_nonzeros_, 0.0);
}
void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
@@ -102,9 +101,7 @@ void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
values_.get() + cells[j].position,
row_block_size,
col_block_size,
values_.get() + cells[j].position, row_block_size, col_block_size,
x + col_block_pos,
y + row_block_pos);
}
@@ -124,9 +121,7 @@ void BlockSparseMatrix::LeftMultiply(const double* x, double* y) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
values_.get() + cells[j].position,
row_block_size,
col_block_size,
values_.get() + cells[j].position, row_block_size, col_block_size,
x + row_block_pos,
y + col_block_pos);
}
@@ -143,8 +138,8 @@ void BlockSparseMatrix::SquaredColumnNorm(double* x) const {
int col_block_id = cells[j].block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
const MatrixRef m(
values_.get() + cells[j].position, row_block_size, col_block_size);
const MatrixRef m(values_.get() + cells[j].position,
row_block_size, col_block_size);
VectorRef(x + col_block_pos, col_block_size) += m.colwise().squaredNorm();
}
}
@@ -160,8 +155,8 @@ void BlockSparseMatrix::ScaleColumns(const double* scale) {
int col_block_id = cells[j].block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixRef m(
values_.get() + cells[j].position, row_block_size, col_block_size);
MatrixRef m(values_.get() + cells[j].position,
row_block_size, col_block_size);
m *= ConstVectorRef(scale + col_block_pos, col_block_size).asDiagonal();
}
}
@@ -183,8 +178,8 @@ void BlockSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
int jac_pos = cells[j].position;
m.block(row_block_pos, col_block_pos, row_block_size, col_block_size) +=
MatrixRef(values_.get() + jac_pos, row_block_size, col_block_size);
m.block(row_block_pos, col_block_pos, row_block_size, col_block_size)
+= MatrixRef(values_.get() + jac_pos, row_block_size, col_block_size);
}
}
}
@@ -206,7 +201,7 @@ void BlockSparseMatrix::ToTripletSparseMatrix(
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
int jac_pos = cells[j].position;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c, ++jac_pos) {
matrix->mutable_rows()[jac_pos] = row_block_pos + r;
matrix->mutable_cols()[jac_pos] = col_block_pos + c;
@@ -220,7 +215,8 @@ void BlockSparseMatrix::ToTripletSparseMatrix(
// Return a pointer to the block structure. We continue to hold
// ownership of the object though.
const CompressedRowBlockStructure* BlockSparseMatrix::block_structure() const {
const CompressedRowBlockStructure* BlockSparseMatrix::block_structure()
const {
return block_structure_.get();
}
@@ -237,8 +233,7 @@ void BlockSparseMatrix::ToTextFile(FILE* file) const {
int jac_pos = cells[j].position;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c) {
fprintf(file,
"% 10d % 10d %17f\n",
fprintf(file, "% 10d % 10d %17f\n",
row_block_pos + r,
col_block_pos + c,
values_[jac_pos++]);
@@ -374,6 +369,7 @@ BlockSparseMatrix* BlockSparseMatrix::CreateRandomMatrix(
int row_block_position = 0;
int value_position = 0;
for (int r = 0; r < options.num_row_blocks; ++r) {
const int delta_block_size =
Uniform(options.max_row_block_size - options.min_row_block_size);
const int row_block_size = options.min_row_block_size + delta_block_size;
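One thing the RightMultiply()/LeftMultiply() hunks above make easy to miss: the MatrixVectorMultiply calls appear to accumulate into the output rather than overwrite it, so callers zero the output vector first. A small sketch with that convention made explicit, added for illustration and not part of the diff.

#include "ceres/block_sparse_matrix.h"  // assumed header path
#include "ceres/internal/eigen.h"

namespace ceres {
namespace internal {

// Illustrative sketch: y = A * x and z = A^T * x, zeroing the outputs first.
void MultiplyBothWays(const BlockSparseMatrix& A,
                      const Vector& x,   // size A.num_cols()
                      Vector* y,         // size A.num_rows()
                      Vector* z) {       // size A.num_cols()
  y->setZero();
  A.RightMultiply(x.data(), y->data());  // *y += A * x
  z->setZero();
  A.LeftMultiply(x.data(), z->data());   // *z += A^T * x
}

}  // namespace internal
}  // namespace ceres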


@@ -35,11 +35,9 @@
#define CERES_INTERNAL_BLOCK_SPARSE_MATRIX_H_
#include <memory>
#include "ceres/block_structure.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/port.h"
#include "ceres/sparse_matrix.h"
#include "ceres/internal/eigen.h"
namespace ceres {
namespace internal {
@@ -54,7 +52,7 @@ class TripletSparseMatrix;
//
// internal/ceres/block_structure.h
//
class CERES_EXPORT_INTERNAL BlockSparseMatrix : public SparseMatrix {
class BlockSparseMatrix : public SparseMatrix {
public:
// Construct a block sparse matrix with a fully initialized
// CompressedRowBlockStructure objected. The matrix takes over
@@ -79,13 +77,11 @@ class CERES_EXPORT_INTERNAL BlockSparseMatrix : public SparseMatrix {
void ToDenseMatrix(Matrix* dense_matrix) const final;
void ToTextFile(FILE* file) const final;
// clang-format off
int num_rows() const final { return num_rows_; }
int num_cols() const final { return num_cols_; }
int num_nonzeros() const final { return num_nonzeros_; }
const double* values() const final { return values_.get(); }
double* mutable_values() final { return values_.get(); }
// clang-format on
void ToTripletSparseMatrix(TripletSparseMatrix* matrix) const;
const CompressedRowBlockStructure* block_structure() const;
@@ -98,7 +94,8 @@ class CERES_EXPORT_INTERNAL BlockSparseMatrix : public SparseMatrix {
void DeleteRowBlocks(int delta_row_blocks);
static BlockSparseMatrix* CreateDiagonalMatrix(
const double* diagonal, const std::vector<Block>& column_blocks);
const double* diagonal,
const std::vector<Block>& column_blocks);
struct RandomMatrixOptions {
int num_row_blocks = 0;


@@ -35,7 +35,7 @@ namespace internal {
bool CellLessThan(const Cell& lhs, const Cell& rhs) {
if (lhs.block_id == rhs.block_id) {
return (lhs.position < rhs.position);
}
return (lhs.block_id < rhs.block_id);
}


@@ -40,7 +40,6 @@
#include <cstdint>
#include <vector>
#include "ceres/internal/port.h"
namespace ceres {


@@ -34,10 +34,9 @@
#include "ceres/c_api.h"
#include <vector>
#include <iostream>
#include <string>
#include <vector>
#include "ceres/cost_function.h"
#include "ceres/loss_function.h"
#include "ceres/problem.h"
@@ -71,7 +70,8 @@ class CallbackCostFunction : public ceres::CostFunction {
int num_residuals,
int num_parameter_blocks,
int* parameter_block_sizes)
: cost_function_(cost_function), user_data_(user_data) {
: cost_function_(cost_function),
user_data_(user_data) {
set_num_residuals(num_residuals);
for (int i = 0; i < num_parameter_blocks; ++i) {
mutable_parameter_block_sizes()->push_back(parameter_block_sizes[i]);
@@ -81,10 +81,12 @@ class CallbackCostFunction : public ceres::CostFunction {
virtual ~CallbackCostFunction() {}
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
return (*cost_function_)(
user_data_, const_cast<double**>(parameters), residuals, jacobians);
double* residuals,
double** jacobians) const final {
return (*cost_function_)(user_data_,
const_cast<double**>(parameters),
residuals,
jacobians);
}
private:
@@ -98,7 +100,7 @@ class CallbackLossFunction : public ceres::LossFunction {
public:
explicit CallbackLossFunction(ceres_loss_function_t loss_function,
void* user_data)
: loss_function_(loss_function), user_data_(user_data) {}
void Evaluate(double sq_norm, double* rho) const final {
(*loss_function_)(user_data_, sq_norm, rho);
}
@@ -132,8 +134,8 @@ void ceres_free_stock_loss_function_data(void* loss_function_data) {
void ceres_stock_loss_function(void* user_data,
double squared_norm,
double out[3]) {
reinterpret_cast<ceres::LossFunction*>(user_data)->Evaluate(squared_norm,
out);
reinterpret_cast<ceres::LossFunction*>(user_data)
->Evaluate(squared_norm, out);
}
ceres_residual_block_id_t* ceres_problem_add_residual_block(
@@ -157,15 +159,16 @@ ceres_residual_block_id_t* ceres_problem_add_residual_block(
ceres::LossFunction* callback_loss_function = NULL;
if (loss_function != NULL) {
callback_loss_function =
new CallbackLossFunction(loss_function, loss_function_data);
callback_loss_function = new CallbackLossFunction(loss_function,
loss_function_data);
}
std::vector<double*> parameter_blocks(parameters,
parameters + num_parameter_blocks);
return reinterpret_cast<ceres_residual_block_id_t*>(
ceres_problem->AddResidualBlock(
callback_cost_function, callback_loss_function, parameter_blocks));
ceres_problem->AddResidualBlock(callback_cost_function,
callback_loss_function,
parameter_blocks));
}
void ceres_solve(ceres_problem_t* c_problem) {
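For readers skimming the CallbackCostFunction hunk above: the C API forwards Evaluate() to a user callback invoked as (*cost_function_)(user_data, parameters, residuals, jacobians). The sketch below shows one such callback; it is an added illustration, and since the exact ceres_cost_function_t typedef lives in c_api.h, which is not part of this diff, the signature should be treated as an assumption.

// Illustrative sketch of a residual callback matching the call made above:
// a single residual f(x) = 10 - x with one parameter block of size one.
static int MyResidual(void* user_data,
                      double** parameters,
                      double* residuals,
                      double** jacobians) {
  (void)user_data;  // unused here
  const double x = parameters[0][0];
  residuals[0] = 10.0 - x;
  if (jacobians != NULL && jacobians[0] != NULL) {
    jacobians[0][0] = -1.0;  // d f / d x
  }
  return 1;  // nonzero converts to `true` in CallbackCostFunction::Evaluate()
}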


@@ -28,10 +28,8 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include "ceres/callbacks.h"
#include <iostream> // NO LINT
#include "ceres/callbacks.h"
#include "ceres/program.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
@@ -78,7 +76,8 @@ CallbackReturnType GradientProblemSolverStateUpdatingCallback::operator()(
LoggingCallback::LoggingCallback(const MinimizerType minimizer_type,
const bool log_to_stdout)
: minimizer_type(minimizer_type), log_to_stdout_(log_to_stdout) {}
: minimizer_type(minimizer_type),
log_to_stdout_(log_to_stdout) {}
LoggingCallback::~LoggingCallback() {}
@@ -100,13 +99,11 @@ CallbackReturnType LoggingCallback::operator()(
summary.iteration_time_in_seconds,
summary.cumulative_time_in_seconds);
} else if (minimizer_type == TRUST_REGION) {
// clang-format off
if (summary.iteration == 0) {
output = "iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time\n"; // NOLINT
}
const char* kReportRowFormat =
"% 4d % 8e % 3.2e % 3.2e % 3.2e % 3.2e % 3.2e % 4d % 3.2e % 3.2e"; // NOLINT
// clang-format on
output += StringPrintf(kReportRowFormat,
summary.iteration,
summary.cost,


@@ -32,9 +32,8 @@
#define CERES_INTERNAL_CALLBACKS_H_
#include <string>
#include "ceres/internal/port.h"
#include "ceres/iteration_callback.h"
#include "ceres/internal/port.h"
namespace ceres {
namespace internal {
@@ -48,7 +47,6 @@ class StateUpdatingCallback : public IterationCallback {
StateUpdatingCallback(Program* program, double* parameters);
virtual ~StateUpdatingCallback();
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
Program* program_;
double* parameters_;
@@ -63,7 +61,6 @@ class GradientProblemSolverStateUpdatingCallback : public IterationCallback {
double* user_parameters);
virtual ~GradientProblemSolverStateUpdatingCallback();
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
int num_parameters_;
const double* internal_parameters_;


@@ -31,8 +31,8 @@
#include "ceres/canonical_views_clustering.h"
#include <unordered_map>
#include <unordered_set>
#include <unordered_map>
#include "ceres/graph.h"
#include "ceres/map_util.h"
@@ -126,7 +126,8 @@ void CanonicalViewsClustering::ComputeClustering(
// Add canonical view if quality improves, or if minimum is not
// yet met, otherwise break.
if ((best_difference <= 0) && (centers->size() >= options_.min_views)) {
if ((best_difference <= 0) &&
(centers->size() >= options_.min_views)) {
break;
}
@@ -140,7 +141,8 @@ void CanonicalViewsClustering::ComputeClustering(
// Return the set of vertices of the graph which have valid vertex
// weights.
void CanonicalViewsClustering::FindValidViews(IntSet* valid_views) const {
void CanonicalViewsClustering::FindValidViews(
IntSet* valid_views) const {
const IntSet& views = graph_->vertices();
for (const auto& view : views) {
if (graph_->VertexWeight(view) != WeightedGraph<int>::InvalidWeight()) {
@@ -152,7 +154,8 @@ void CanonicalViewsClustering::FindValidViews(IntSet* valid_views) const {
// Computes the difference in the quality score if 'candidate' were
// added to the set of canonical views.
double CanonicalViewsClustering::ComputeClusteringQualityDifference(
const int candidate, const vector<int>& centers) const {
const int candidate,
const vector<int>& centers) const {
// View score.
double difference =
options_.view_score_weight * graph_->VertexWeight(candidate);
@@ -176,7 +179,7 @@ double CanonicalViewsClustering::ComputeClusteringQualityDifference(
// Orthogonality.
for (int i = 0; i < centers.size(); ++i) {
difference -= options_.similarity_penalty_weight *
graph_->EdgeWeight(centers[i], candidate);
}
return difference;
@@ -189,7 +192,8 @@ void CanonicalViewsClustering::UpdateCanonicalViewAssignments(
for (const auto& neighbor : neighbors) {
const double old_similarity =
FindWithDefault(view_to_canonical_view_similarity_, neighbor, 0.0);
const double new_similarity = graph_->EdgeWeight(neighbor, canonical_view);
const double new_similarity =
graph_->EdgeWeight(neighbor, canonical_view);
if (new_similarity > old_similarity) {
view_to_canonical_view_[neighbor] = canonical_view;
view_to_canonical_view_similarity_[neighbor] = new_similarity;
@@ -199,7 +203,8 @@ void CanonicalViewsClustering::UpdateCanonicalViewAssignments(
// Assign a cluster id to each view.
void CanonicalViewsClustering::ComputeClusterMembership(
const vector<int>& centers, IntMap* membership) const {
const vector<int>& centers,
IntMap* membership) const {
CHECK(membership != nullptr);
membership->clear();

Some files were not shown because too many files have changed in this diff.