Compare commits

..

33 Commits

Author SHA1 Message Date
bf4b31468e Merge branch 'geometry-nodes' into geometry-nodes-transform-node 2020-10-24 12:42:24 -05:00
f5de32562e Cleanup: Remove unused variable 2020-10-24 12:42:04 -05:00
bc4e31afb6 Merge branch 'master' into geometry-nodes 2020-10-24 14:38:00 +02:00
065dfdc529 Geometry Nodes: Add transform geometry node 2020-10-23 15:25:38 -05:00
9d7672be71 Merge branch 'master' into geometry-nodes 2020-10-23 15:18:20 +02:00
994e7178bb Geometry Nodes: make some function nodes available
We might not want to have all those nodes in a final version.
Some of them have been added with particle nodes in mind.
However, to test the evaluation system it is useful to have a
couple of nodes available.

Those nodes should "just" work, because their implementation
is reused from the particle nodes project.
2020-10-23 15:13:19 +02:00
1719743066 Geometry Nodes: improve node group evaluation
This adds support for nodes that have a multi-function implementation.
That includes various function nodes like Math, Combine Vector, ...

Furthermore, there is support for implicit conversions now. So it should
work when one connects e.g. a float to an integer and vice versa.
2020-10-23 15:09:55 +02:00
8910033f57 Nodes: add utility methods 2020-10-23 15:05:01 +02:00
2a4c6c612a Functions: add utility method 2020-10-23 15:01:07 +02:00
b062b922f9 Geometry Nodes: Resolve some missing 3D viewport updates
These two functions "snode_notify" and "ED_node_tag_update_id" appear to
be mostly duplicates. However, there is already a case for each type of
built-in node tree, so it makes sense to add one for the geometry node
tree as well. This doesn't solve the update issues for changing numbers
in buttons; that must be handled somewhere else.
2020-10-22 22:59:40 -05:00
895f4620a0 Merge branch 'master' into geometry-nodes 2020-10-22 22:03:28 -05:00
fafed6234b Geometry Nodes: Add edge split node functionality 2020-10-22 13:48:56 -05:00
7ff8094a8b Geometry Nodes: expose minimum vertices input of Triangulate node 2020-10-22 18:24:05 +02:00
5aabf67a9c Fix error in previous commit
That should not have happened -.-
2020-10-22 18:23:39 +02:00
ab8c7fe946 Fix previous commit 2020-10-22 18:20:17 +02:00
a05012d500 Geometry Nodes: simplify and deduplicate callbacks on sockets
This adds a layer of abstraction between the code calling callbacks
on sockets and the implementation of those callbacks.
This abstraction layer allows some sockets to not implement all
callbacks when those can be derived from some other callback.
2020-10-22 18:08:27 +02:00
97a93566e9 Geometry Nodes: change "Node Tree" to "Node Group" 2020-10-22 15:52:15 +02:00
da4d697772 Geometry Nodes: initial support for evaluating geometry node groups
This is still very basic and does quite a few unnecessary computations.
Also the error handling is quite weak currently, so when invalid things are
connected, it will probably just crash.

Also the interface that individual nodes have to implement will have to change,
but the current solution is a good starting point.

Only the triangulate node is implemented for now.
2020-10-22 15:05:41 +02:00
87218899be Geometry Nodes: add an initial geometry class 2020-10-22 15:02:27 +02:00
ffa0a6df9d Functions: add generic pointer class
This class represents a pointer whose type is only known at runtime.
2020-10-22 15:01:31 +02:00
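A small sketch of the concept: pair a raw `void *` with a runtime type descriptor so generic code can operate on values whose C++ type is only known at runtime. The `RuntimeType` and `GenericPointer` names are stand-ins, loosely modeled on the CPPType mentioned in the next commit; the real classes carry more (size, alignment, construct/destruct hooks).

```cpp
#include <cstdio>
#include <string>

/* Minimal runtime type descriptor: a name plus one type-erased operation. */
struct RuntimeType {
  std::string name;
  void (*print)(const void *value);
};

/* Generic pointer: a raw pointer paired with the runtime type it points to,
 * so code can act on values without knowing their C++ type at compile time. */
class GenericPointer {
 public:
  GenericPointer(const RuntimeType &type, const void *data) : type_(&type), data_(data) {}

  void print() const
  {
    type_->print(data_); /* Dispatch through the runtime type. */
  }

 private:
  const RuntimeType *type_;
  const void *data_;
};

int main()
{
  static const RuntimeType float_type = {
      "float", [](const void *value) { std::printf("%g\n", *static_cast<const float *>(value)); }};
  static const RuntimeType int_type = {
      "int", [](const void *value) { std::printf("%d\n", *static_cast<const int *>(value)); }};

  const float f = 1.5f;
  const int i = 7;
  GenericPointer(float_type, &f).print();
  GenericPointer(int_type, &i).print();
  return 0;
}
```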
706fa5ad76 Functions: add move operations to CPPType 2020-10-22 15:00:07 +02:00
b7f6de490d Geometry Nodes: Add initial node definition for edge split
This is just based on rBa7dba81aab22, and contains no functionality at all.
2020-10-21 16:11:09 -05:00
7e485b4620 Merge branch 'master' into geometry-nodes 2020-10-21 08:54:39 -05:00
a7dba81aab Nodes: add initial UI for Triangulate node 2020-10-21 14:14:09 +02:00
4606e83a75 Merge branch 'master' into geometry-nodes 2020-10-21 14:00:32 +02:00
1d28de57a4 Nodes: improve dependency between modifier and node group 2020-10-21 13:16:19 +02:00
3cfcfb938d Nodes: support creating geometry node groups 2020-10-21 12:32:02 +02:00
bcdc6910a0 Nodes: show header in geometry node editor 2020-10-21 12:16:57 +02:00
7793e8c884 Modifiers: add node_tree to NodesModifierData 2020-10-21 12:13:13 +02:00
05d9bd7c4a Modifiers: rename Simulation to Nodes modifier 2020-10-21 12:03:06 +02:00
9255ce9247 Nodes: rename Simulation to Geometry node tree 2020-10-21 11:39:42 +02:00
a0ce0154e7 Merge branch 'master' into geometry-nodes 2020-10-21 11:11:16 +02:00
0cd7f7ddd1 Nodes: add geometry socket type
We still have to pick a color for this socket.

Ref T81848.
2020-10-20 15:31:59 +02:00
3556 changed files with 77366 additions and 131524 deletions

View File

@@ -15,6 +15,7 @@ Checks: >
-readability-misleading-indentation, -readability-misleading-indentation,
-readability-redundant-member-init,
-readability-use-anyofallof, -readability-use-anyofallof,
-readability-function-cognitive-complexity, -readability-function-cognitive-complexity,
@@ -29,19 +30,7 @@ Checks: >
-bugprone-sizeof-expression, -bugprone-sizeof-expression,
-bugprone-integer-division, -bugprone-integer-division,
-bugprone-exception-escape,
-bugprone-redundant-branch-condition, -bugprone-redundant-branch-condition,
modernize-*,
-modernize-use-auto,
-modernize-use-trailing-return-type,
-modernize-avoid-c-arrays,
-modernize-use-equals-default,
-modernize-use-nodiscard,
-modernize-loop-convert,
-modernize-pass-by-value,
-modernize-use-default-member-init,
-modernize-raw-string-literal,
-modernize-avoid-bind,
-modernize-use-transparent-functors,
WarningsAsErrors: '*' WarningsAsErrors: '*'

View File

@@ -1,104 +0,0 @@
# git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# After running the above, commits listed in this file will be
# ignored by git blame. The blame will be shifted to the person
# who edited the line(s) before the ignored commit.
#
# To disable this ignorance for a command, run as follows
# git blame --ignore-revs-file="" <other options>
#
# Changes that belong here:
# - Massive comment, doxy-sections, or spelling corrections.
# - Clang-format, PEP8 or other automated changes which are *strictly* "no functional change".
# - Several commits should be added to this list at once, because adding
# one extra commit (to edit this file) after every cleanup is noisy.
# - No clang-tidy changes.
#
# Note:
# - The comment above the SHA should be the first line of the commit.
# - It is fine to pack together similar commits if they have the same explanatory comment.
# - Use only 40 character git SHAs; not smaller ones, not prefixed with rB.
#
# https://git-scm.com/docs/git-blame/2.23.0
# white space commit. (2 spaces -> tab).
0a3694cd6ebec710da7110e9f168a72d47c71ee0
# Cycles: Cleanup, spacing after preprocessor
cb4b5e12abf1fc6cf9ffc0944e0a1bc406286c63
# ClangFormat: apply to source, most of intern
e12c08e8d170b7ca40f204a5b0423c23a9fbc2c1
# Code Style: use "#pragma once" in source directory
91694b9b58ab953f3b313be9389cc1303e472fc2
# Code Style: use "#pragma once" in some newer headers
8198dbb888856b8c11757586df02aca15f132f90
# Code Style: use "#pragma once" in intern/ghost
1b1129f82a9cf316b54fbc025f8cfcc1a74b8589
# Cleanup: mostly comments, use doxy syntax & typos
e0cb02587012b4b2f4b18363dc7d0a7da2c02093
# Cleanup: use C comments for descriptive text
2abfcebb0eb7989e3d1e7d03f37ecf5c088210af
# use lowercase for cmake builtin names and macros, remove contents in else() and endif() which is no longer needed.
afacd184982e58a9c830a3d5366e25983939a7ba
# Spelling: It's Versus Its
3a7fd309fce89213b0224b3c6807adb2d1fe7ca8
# Spelling: Then Versus Than
d1eefc421544e2ea632fb35cb6bcaade4c39ce6b
# Spelling: Miscellaneous
84ef3b80de4915a24a9fd2fd214d0fa44e59b854
# Spelling: Loose Versus Lose
c0a6bc19794c69843c38451c762e91bc10136e0f
# Spelling: Apart Versus A Part
3d26cd01b9ba6381eb165e11536345ae652dfb41
# Cleanup: use 2 space indentation for CMake
3076d95ba441cd32706a27d18922a30f8fd28b8a
# Cleanup: use over-line for doxy comments
4b188bb08cf5aaae3c68ab57bbcfa037eef1ac10
# Cleanup: General comment style clean up of graph_edit.c and fcurve.c
0105f146bb40bd609ccbda3d3f6aeb8e14ad3f9e
# Cleanup: pep8 (indentation, spacing, long lines)
41d2d6da0c96d351b47acb64d3e0decdba16cb16
# Cleanup: pep8, blank lines
bab9de2a52929fe2b45ecddb1eb09da3378e303b
# Cleanup: PEP8 for python changes
1e7e94588daa66483190f45a9de5e98228f80e05
# GPencil: Cleanup pep8
a09cc3ee1a99f2cd5040bbf30c8ab8c588bb2bb1
# Cleanup: trailing space, remove tabs, pep8
c42a6b77b52560d257279de2cb624b4ef2c0d24c
# Cleanup: use C style doxygen comments
8c1726918374e1d2d2123e17bae8db5aadde3433
# Cleanup: use doxy sections for imbuf
c207f7c22e1439e0b285fba5d2c072bdae23f981
# Cleanup: clang-format
c4d8f6a4a8ddc29ed27311ed7578b3c8c31399d2
b5d310b569e07a937798a2d38539cfd290149f1c
8c846cccd6bdfd3e90a695fabbf05f53e5466a57
4eac03d821fa17546f562485f7d073813a5e5943
1166110a9d66af9c5a47cee2be591f50fdc445e8
# Cleanup: clang-format.
40d4a4cb1a6b4c3c2a486e8f2868f547530e0811

View File

@@ -178,7 +178,6 @@ mark_as_advanced(BUILDINFO_OVERRIDE_TIME)
option(WITH_IK_ITASC "Enable ITASC IK solver (only disable for development & for incompatible C++ compilers)" ON) option(WITH_IK_ITASC "Enable ITASC IK solver (only disable for development & for incompatible C++ compilers)" ON)
option(WITH_IK_SOLVER "Enable Legacy IK solver (only disable for development)" ON) option(WITH_IK_SOLVER "Enable Legacy IK solver (only disable for development)" ON)
option(WITH_FFTW3 "Enable FFTW3 support (Used for smoke, ocean sim, and audio effects)" ON) option(WITH_FFTW3 "Enable FFTW3 support (Used for smoke, ocean sim, and audio effects)" ON)
option(WITH_PUGIXML "Enable PugiXML support (Used for OpenImageIO, Grease Pencil SVG export)" ON)
option(WITH_BULLET "Enable Bullet (Physics Engine)" ON) option(WITH_BULLET "Enable Bullet (Physics Engine)" ON)
option(WITH_SYSTEM_BULLET "Use the systems bullet library (currently unsupported due to missing features in upstream!)" ) option(WITH_SYSTEM_BULLET "Use the systems bullet library (currently unsupported due to missing features in upstream!)" )
mark_as_advanced(WITH_SYSTEM_BULLET) mark_as_advanced(WITH_SYSTEM_BULLET)
@@ -204,8 +203,7 @@ option(WITH_OPENVDB "Enable features relying on OpenVDB" ON)
option(WITH_OPENVDB_BLOSC "Enable blosc compression for OpenVDB, only enable if OpenVDB was built with blosc support" ON) option(WITH_OPENVDB_BLOSC "Enable blosc compression for OpenVDB, only enable if OpenVDB was built with blosc support" ON)
option(WITH_OPENVDB_3_ABI_COMPATIBLE "Assume OpenVDB library has been compiled with version 3 ABI compatibility" OFF) option(WITH_OPENVDB_3_ABI_COMPATIBLE "Assume OpenVDB library has been compiled with version 3 ABI compatibility" OFF)
mark_as_advanced(WITH_OPENVDB_3_ABI_COMPATIBLE) mark_as_advanced(WITH_OPENVDB_3_ABI_COMPATIBLE)
option(WITH_NANOVDB "Enable usage of NanoVDB data structure for rendering on the GPU" ON) option(WITH_NANOVDB "Enable usage of NanoVDB data structure for accelerated rendering on the GPU" OFF)
option(WITH_HARU "Enable features relying on Libharu (Grease pencil PDF export)" ON)
# GHOST Windowing Library Options # GHOST Windowing Library Options
option(WITH_GHOST_DEBUG "Enable debugging output for the GHOST library" OFF) option(WITH_GHOST_DEBUG "Enable debugging output for the GHOST library" OFF)
@@ -348,21 +346,16 @@ if(UNIX AND NOT APPLE)
endif() endif()
option(WITH_PYTHON_INSTALL "Copy system python into the blender install folder" ON) option(WITH_PYTHON_INSTALL "Copy system python into the blender install folder" ON)
if((WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE) OR WITH_MOD_FLUID)
option(WITH_PYTHON_NUMPY "Include NumPy in Blender (used by Audaspace and Mantaflow)" ON)
endif()
if(WIN32 OR APPLE) if(WIN32 OR APPLE)
# Windows and macOS have this bundled with Python libraries. # Windows and macOS have this bundled with Python libraries.
elseif(WITH_PYTHON_INSTALL OR WITH_PYTHON_NUMPY) elseif(WITH_PYTHON_INSTALL OR (WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE))
set(PYTHON_NUMPY_PATH "" CACHE PATH "Path to python site-packages or dist-packages containing 'numpy' module") set(PYTHON_NUMPY_PATH "" CACHE PATH "Path to python site-packages or dist-packages containing 'numpy' module")
mark_as_advanced(PYTHON_NUMPY_PATH) mark_as_advanced(PYTHON_NUMPY_PATH)
set(PYTHON_NUMPY_INCLUDE_DIRS "" CACHE PATH "Path to the include directory of the NumPy module") set(PYTHON_NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_PATH}/numpy/core/include CACHE PATH "Path to the include directory of the numpy module")
mark_as_advanced(PYTHON_NUMPY_INCLUDE_DIRS) mark_as_advanced(PYTHON_NUMPY_INCLUDE_DIRS)
endif() endif()
if(WITH_PYTHON_INSTALL) if(WITH_PYTHON_INSTALL)
option(WITH_PYTHON_INSTALL_NUMPY "Copy system NumPy into the blender install folder" ON) option(WITH_PYTHON_INSTALL_NUMPY "Copy system numpy into the blender install folder" ON)
if(UNIX AND NOT APPLE) if(UNIX AND NOT APPLE)
option(WITH_PYTHON_INSTALL_REQUESTS "Copy system requests into the blender install folder" ON) option(WITH_PYTHON_INSTALL_REQUESTS "Copy system requests into the blender install folder" ON)
@@ -384,7 +377,6 @@ option(WITH_CYCLES_CUDA_BINARIES "Build Cycles CUDA binaries" OFF)
option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF) option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF) option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL) mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
set(CYCLES_TEST_DEVICES CPU CACHE STRING "Run regression tests on the specified device types (CPU CUDA OPTIX OPENCL)" )
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75 CACHE STRING "CUDA architectures to build binaries for") set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 sm_86 compute_75 CACHE STRING "CUDA architectures to build binaries for")
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH) mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
unset(PLATFORM_DEFAULT) unset(PLATFORM_DEFAULT)
@@ -433,8 +425,8 @@ mark_as_advanced(WITH_CXX_GUARDEDALLOC)
option(WITH_ASSERT_ABORT "Call abort() when raising an assertion through BLI_assert()" ON) option(WITH_ASSERT_ABORT "Call abort() when raising an assertion through BLI_assert()" ON)
mark_as_advanced(WITH_ASSERT_ABORT) mark_as_advanced(WITH_ASSERT_ABORT)
if((UNIX AND NOT APPLE) OR (CMAKE_GENERATOR MATCHES "^Visual Studio.+")) if(UNIX AND NOT APPLE)
option(WITH_CLANG_TIDY "Use Clang Tidy to analyze the source code (only enable for development on Linux using Clang, or Windows using the Visual Studio IDE)" OFF) option(WITH_CLANG_TIDY "Use Clang Tidy to analyze the source code (only enable for development on Linux using Clang)" OFF)
mark_as_advanced(WITH_CLANG_TIDY) mark_as_advanced(WITH_CLANG_TIDY)
endif() endif()
@@ -534,10 +526,10 @@ if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
# Silence the warning that object-size is not effective in -O0. # Silence the warning that object-size is not effective in -O0.
set(_asan_defaults "${_asan_defaults}") set(_asan_defaults "${_asan_defaults}")
else() else()
string(APPEND _asan_defaults " -fsanitize=object-size") set(_asan_defaults "${_asan_defaults} -fsanitize=object-size")
endif() endif()
else() else()
string(APPEND _asan_defaults " -fsanitize=leak -fsanitize=object-size") set(_asan_defaults "${_asan_defaults} -fsanitize=leak -fsanitize=object-size")
endif() endif()
set(COMPILER_ASAN_CFLAGS "${_asan_defaults}" CACHE STRING "C flags for address sanitizer") set(COMPILER_ASAN_CFLAGS "${_asan_defaults}" CACHE STRING "C flags for address sanitizer")
@@ -578,11 +570,6 @@ if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
endif() endif()
endif() endif()
if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
option(WITH_COMPILER_SHORT_FILE_MACRO "Make paths in macros like __FILE__ relative to top level source and build directories." ON)
mark_as_advanced(WITH_COMPILER_SHORT_FILE_MACRO)
endif()
if(WIN32) if(WIN32)
# Use hardcoded paths or find_package to find externals # Use hardcoded paths or find_package to find externals
option(WITH_WINDOWS_FIND_MODULES "Use find_package to locate libraries" OFF) option(WITH_WINDOWS_FIND_MODULES "Use find_package to locate libraries" OFF)
@@ -611,11 +598,6 @@ if(WIN32)
endif() endif()
if(UNIX)
# See WITH_WINDOWS_SCCACHE for Windows.
option(WITH_COMPILER_CCACHE "Use ccache to improve rebuild times (Works with Ninja, Makefiles and Xcode)" OFF)
endif()
# The following only works with the Ninja generator in CMake >= 3.0. # The following only works with the Ninja generator in CMake >= 3.0.
if("${CMAKE_GENERATOR}" MATCHES "Ninja") if("${CMAKE_GENERATOR}" MATCHES "Ninja")
option(WITH_NINJA_POOL_JOBS option(WITH_NINJA_POOL_JOBS
@@ -710,8 +692,6 @@ set_and_warn_dependency(WITH_BOOST WITH_OPENCOLORIO OFF)
set_and_warn_dependency(WITH_BOOST WITH_QUADRIFLOW OFF) set_and_warn_dependency(WITH_BOOST WITH_QUADRIFLOW OFF)
set_and_warn_dependency(WITH_BOOST WITH_USD OFF) set_and_warn_dependency(WITH_BOOST WITH_USD OFF)
set_and_warn_dependency(WITH_BOOST WITH_ALEMBIC OFF) set_and_warn_dependency(WITH_BOOST WITH_ALEMBIC OFF)
set_and_warn_dependency(WITH_PUGIXML WITH_CYCLES_OSL OFF)
set_and_warn_dependency(WITH_PUGIXML WITH_OPENIMAGEIO OFF)
if(WITH_BOOST AND NOT (WITH_CYCLES OR WITH_OPENIMAGEIO OR WITH_INTERNATIONAL OR if(WITH_BOOST AND NOT (WITH_CYCLES OR WITH_OPENIMAGEIO OR WITH_INTERNATIONAL OR
WITH_OPENVDB OR WITH_OPENCOLORIO OR WITH_USD OR WITH_ALEMBIC)) WITH_OPENVDB OR WITH_OPENCOLORIO OR WITH_USD OR WITH_ALEMBIC))
@@ -731,9 +711,6 @@ set_and_warn_dependency(WITH_OPENVDB WITH_NANOVDB OFF)
# OpenVDB uses 'half' type from OpenEXR & fails to link without OpenEXR enabled. # OpenVDB uses 'half' type from OpenEXR & fails to link without OpenEXR enabled.
set_and_warn_dependency(WITH_IMAGE_OPENEXR WITH_OPENVDB OFF) set_and_warn_dependency(WITH_IMAGE_OPENEXR WITH_OPENVDB OFF)
# Haru needs `TIFFFaxBlackCodes` & `TIFFFaxWhiteCodes` symbols from TIFF.
set_and_warn_dependency(WITH_IMAGE_TIFF WITH_HARU OFF)
# auto enable openimageio for cycles # auto enable openimageio for cycles
if(WITH_CYCLES) if(WITH_CYCLES)
set(WITH_OPENIMAGEIO ON) set(WITH_OPENIMAGEIO ON)
@@ -881,11 +858,11 @@ if(NOT CMAKE_BUILD_TYPE MATCHES "Release")
# Since linker flags are not set, all compiler checks and `find_package` # Since linker flags are not set, all compiler checks and `find_package`
# calls that rely on `try_compile` will fail. # calls that rely on `try_compile` will fail.
# See CMP0066 also. # See CMP0066 also.
string(APPEND CMAKE_C_FLAGS_DEBUG " ${COMPILER_ASAN_CFLAGS}") set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} ${COMPILER_ASAN_CFLAGS}")
string(APPEND CMAKE_C_FLAGS_RELWITHDEBINFO " ${COMPILER_ASAN_CFLAGS}") set(CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO} ${COMPILER_ASAN_CFLAGS}")
string(APPEND CMAKE_CXX_FLAGS_DEBUG " ${COMPILER_ASAN_CXXFLAGS}") set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} ${COMPILER_ASAN_CXXFLAGS}")
string(APPEND CMAKE_CXX_FLAGS_RELWITHDEBINFO " ${COMPILER_ASAN_CXXFLAGS}") set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} ${COMPILER_ASAN_CXXFLAGS}")
endif() endif()
if(MSVC) if(MSVC)
set(COMPILER_ASAN_LINKER_FLAGS "/FUNCTIONPADMIN:6") set(COMPILER_ASAN_LINKER_FLAGS "/FUNCTIONPADMIN:6")
@@ -893,11 +870,9 @@ if(NOT CMAKE_BUILD_TYPE MATCHES "Release")
if(APPLE AND COMPILER_ASAN_LIBRARY) if(APPLE AND COMPILER_ASAN_LIBRARY)
string(REPLACE " " ";" _list_COMPILER_ASAN_CFLAGS ${COMPILER_ASAN_CFLAGS}) string(REPLACE " " ";" _list_COMPILER_ASAN_CFLAGS ${COMPILER_ASAN_CFLAGS})
set(_is_CONFIG_DEBUG "$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>") add_compile_options("$<$<NOT:$<CONFIG:Release>>:${_list_COMPILER_ASAN_CFLAGS}>")
add_compile_options("$<${_is_CONFIG_DEBUG}:${_list_COMPILER_ASAN_CFLAGS}>") add_link_options("$<$<NOT:$<CONFIG:Release>>:-fno-omit-frame-pointer;-fsanitize=address>")
add_link_options("$<${_is_CONFIG_DEBUG}:-fno-omit-frame-pointer;-fsanitize=address>")
unset(_list_COMPILER_ASAN_CFLAGS) unset(_list_COMPILER_ASAN_CFLAGS)
unset(_is_CONFIG_DEBUG)
elseif(COMPILER_ASAN_LIBRARY) elseif(COMPILER_ASAN_LIBRARY)
set(PLATFORM_LINKLIBS "${PLATFORM_LINKLIBS};${COMPILER_ASAN_LIBRARY}") set(PLATFORM_LINKLIBS "${PLATFORM_LINKLIBS};${COMPILER_ASAN_LIBRARY}")
set(PLATFORM_LINKFLAGS "${COMPILER_ASAN_LIBRARY} ${COMPILER_ASAN_LINKER_FLAGS}") set(PLATFORM_LINKFLAGS "${COMPILER_ASAN_LIBRARY} ${COMPILER_ASAN_LINKER_FLAGS}")
@@ -966,11 +941,11 @@ endif()
# Do it globally, SSE2 is required for quite some time now. # Do it globally, SSE2 is required for quite some time now.
# Doing it now allows to use SSE/SSE2 in inline headers. # Doing it now allows to use SSE/SSE2 in inline headers.
if(SUPPORT_SSE_BUILD) if(SUPPORT_SSE_BUILD)
string(PREPEND PLATFORM_CFLAGS "${COMPILER_SSE_FLAG} ") set(PLATFORM_CFLAGS " ${COMPILER_SSE_FLAG} ${PLATFORM_CFLAGS}")
add_definitions(-D__SSE__ -D__MMX__) add_definitions(-D__SSE__ -D__MMX__)
endif() endif()
if(SUPPORT_SSE2_BUILD) if(SUPPORT_SSE2_BUILD)
string(APPEND PLATFORM_CFLAGS " ${COMPILER_SSE2_FLAG}") set(PLATFORM_CFLAGS " ${PLATFORM_CFLAGS} ${COMPILER_SSE2_FLAG}")
add_definitions(-D__SSE2__) add_definitions(-D__SSE2__)
if(NOT SUPPORT_SSE_BUILD) # don't double up if(NOT SUPPORT_SSE_BUILD) # don't double up
add_definitions(-D__MMX__) add_definitions(-D__MMX__)
@@ -1182,8 +1157,8 @@ if(WITH_OPENMP)
if(OPENMP_FOUND) if(OPENMP_FOUND)
if(NOT WITH_OPENMP_STATIC) if(NOT WITH_OPENMP_STATIC)
string(APPEND CMAKE_C_FLAGS " ${OpenMP_C_FLAGS}") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
string(APPEND CMAKE_CXX_FLAGS " ${OpenMP_CXX_FLAGS}") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
else() else()
# Typically avoid adding flags as defines but we can't # Typically avoid adding flags as defines but we can't
# pass OpenMP flags to the linker for static builds, meaning # pass OpenMP flags to the linker for static builds, meaning
@@ -1500,12 +1475,10 @@ if(CMAKE_COMPILER_IS_GNUCC)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INT_IN_BOOL_CONTEXT -Wno-int-in-bool-context) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INT_IN_BOOL_CONTEXT -Wno-int-in-bool-context)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_FORMAT -Wno-format) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_FORMAT -Wno-format)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_SWITCH -Wno-switch) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_SWITCH -Wno-switch)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CLASS_MEMACCESS -Wno-class-memaccess) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CLASS_MEMACCESS -Wno-class-memaccess)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "7.0")) if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "7.0"))
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_IMPLICIT_FALLTHROUGH -Wno-implicit-fallthrough) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_IMPLICIT_FALLTHROUGH -Wno-implicit-fallthrough)
@@ -1544,7 +1517,6 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_PARAMETER -Wno-unused-parameter) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_PARAMETER -Wno-unused-parameter)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_MACROS -Wno-unused-macros) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNUSED_MACROS -Wno-unused-macros)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_MISLEADING_INDENTATION -Wno-misleading-indentation)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_MISSING_VARIABLE_DECLARATIONS -Wno-missing-variable-declarations) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_MISSING_VARIABLE_DECLARATIONS -Wno-missing-variable-declarations)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INCOMPAT_PTR_DISCARD_QUAL -Wno-incompatible-pointer-types-discards-qualifiers) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_INCOMPAT_PTR_DISCARD_QUAL -Wno-incompatible-pointer-types-discards-qualifiers)
@@ -1555,18 +1527,15 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNDEF -Wno-undef) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_UNDEF -Wno-undef)
ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_MISSING_NORETURN -Wno-missing-noreturn) ADD_CHECK_C_COMPILER_FLAG(C_REMOVE_STRICT_FLAGS C_WARN_NO_MISSING_NORETURN -Wno-missing-noreturn)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_PARAMETER -Wno-unused-parameter)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_PRIVATE_FIELD -Wno-unused-private-field) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_PRIVATE_FIELD -Wno-unused-private-field)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CXX11_NARROWING -Wno-c++11-narrowing) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_CXX11_NARROWING -Wno-c++11-narrowing)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_NON_VIRTUAL_DTOR -Wno-non-virtual-dtor) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_NON_VIRTUAL_DTOR -Wno-non-virtual-dtor)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_MACROS -Wno-unused-macros) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_MACROS -Wno-unused-macros)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_VARIABLE -Wno-unused-variable)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_REORDER -Wno-reorder) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_REORDER -Wno-reorder)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_COMMENT -Wno-comment)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNUSED_TYPEDEFS -Wno-unused-local-typedefs)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNDEFINED_VAR_TEMPLATE -Wno-undefined-var-template) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_UNDEFINED_VAR_TEMPLATE -Wno-undefined-var-template)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_INSTANTIATION_AFTER_SPECIALIZATION -Wno-instantiation-after-specialization) ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_INSTANTIATION_AFTER_SPECIALIZATION -Wno-instantiation-after-specialization)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_REMOVE_STRICT_FLAGS CXX_WARN_NO_MISLEADING_INDENTATION -Wno-misleading-indentation)
elseif(CMAKE_C_COMPILER_ID MATCHES "Intel") elseif(CMAKE_C_COMPILER_ID MATCHES "Intel")
@@ -1579,8 +1548,8 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Intel")
ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_NO_SIGN_COMPARE -Wno-sign-compare) ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_NO_SIGN_COMPARE -Wno-sign-compare)
# disable numbered, false positives # disable numbered, false positives
string(APPEND C_WARNINGS " -wd188,186,144,913,556,858,597,177,1292,167,279,592,94,2722,3199") set(C_WARNINGS "${C_WARNINGS} -wd188,186,144,913,556,858,597,177,1292,167,279,592,94,2722,3199")
string(APPEND CXX_WARNINGS " -wd188,186,144,913,556,858,597,177,1292,167,279,592,94,2722,3199") set(CXX_WARNINGS "${CXX_WARNINGS} -wd188,186,144,913,556,858,597,177,1292,167,279,592,94,2722,3199")
elseif(CMAKE_C_COMPILER_ID MATCHES "MSVC") elseif(CMAKE_C_COMPILER_ID MATCHES "MSVC")
# most msvc warnings are C & C++ # most msvc warnings are C & C++
set(_WARNINGS set(_WARNINGS
@@ -1611,7 +1580,7 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "MSVC")
if(MSVC_VERSION GREATER_EQUAL 1911) if(MSVC_VERSION GREATER_EQUAL 1911)
# see https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/c5038?view=vs-2017 # see https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/c5038?view=vs-2017
string(APPEND _WARNINGS " /w35038") # order of initialization in c++ constructors set(_WARNINGS "${_WARNINGS} /w35038") # order of initialization in c++ constructors
endif() endif()
string(REPLACE ";" " " _WARNINGS "${_WARNINGS}") string(REPLACE ";" " " _WARNINGS "${_WARNINGS}")
@@ -1635,33 +1604,36 @@ if(WITH_PYTHON)
if(WIN32 OR APPLE) if(WIN32 OR APPLE)
# Windows and macOS have this bundled with Python libraries. # Windows and macOS have this bundled with Python libraries.
elseif((WITH_PYTHON_INSTALL AND WITH_PYTHON_INSTALL_NUMPY) OR WITH_PYTHON_NUMPY) elseif((WITH_PYTHON_INSTALL AND WITH_PYTHON_INSTALL_NUMPY) OR (WITH_AUDASPACE AND NOT WITH_SYSTEM_AUDASPACE))
if(("${PYTHON_NUMPY_PATH}" STREQUAL "") OR (${PYTHON_NUMPY_PATH} MATCHES NOTFOUND)) if(("${PYTHON_NUMPY_PATH}" STREQUAL "") OR (${PYTHON_NUMPY_PATH} MATCHES NOTFOUND))
find_python_package(numpy "core/include") find_python_package(numpy)
unset(PYTHON_NUMPY_INCLUDE_DIRS CACHE)
set(PYTHON_NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_PATH}/numpy/core/include CACHE PATH "Path to the include directory of the numpy module")
mark_as_advanced(PYTHON_NUMPY_INCLUDE_DIRS)
endif() endif()
endif() endif()
if(WIN32 OR APPLE) if(WIN32 OR APPLE)
# pass, we have this in lib/python/site-packages # pass, we have this in lib/python/site-packages
elseif(WITH_PYTHON_INSTALL_REQUESTS) elseif(WITH_PYTHON_INSTALL_REQUESTS)
find_python_package(requests "") find_python_package(requests)
endif() endif()
endif() endif()
if(MSVC) if(MSVC)
string(APPEND CMAKE_CXX_FLAGS " /std:c++17") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /std:c++17")
# Make MSVC properly report the value of the __cplusplus preprocessor macro # Make MSVC properly report the value of the __cplusplus preprocessor macro
# Available MSVC 15.7 (1914) and up, without this it reports 199711L regardless # Available MSVC 15.7 (1914) and up, without this it reports 199711L regardless
# of the C++ standard chosen above # of the C++ standard chosen above
if(MSVC_VERSION GREATER 1913) if(MSVC_VERSION GREATER 1913)
string(APPEND CMAKE_CXX_FLAGS " /Zc:__cplusplus") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Zc:__cplusplus")
endif() endif()
elseif( elseif(
CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCC OR
CMAKE_C_COMPILER_ID MATCHES "Clang" OR CMAKE_C_COMPILER_ID MATCHES "Clang" OR
CMAKE_C_COMPILER_ID MATCHES "Intel" CMAKE_C_COMPILER_ID MATCHES "Intel"
) )
string(APPEND CMAKE_CXX_FLAGS " -std=c++17") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17")
else() else()
message(FATAL_ERROR "Unknown compiler ${CMAKE_C_COMPILER_ID}, can't enable C++17 build") message(FATAL_ERROR "Unknown compiler ${CMAKE_C_COMPILER_ID}, can't enable C++17 build")
endif() endif()
@@ -1674,47 +1646,12 @@ if(
(CMAKE_C_COMPILER_ID MATCHES "Intel") (CMAKE_C_COMPILER_ID MATCHES "Intel")
) )
# Use C11 + GNU extensions, works with GCC, Clang, ICC # Use C11 + GNU extensions, works with GCC, Clang, ICC
string(APPEND CMAKE_C_FLAGS " -std=gnu11") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=gnu11")
endif() endif()
if(UNIX AND NOT APPLE) if(UNIX AND NOT APPLE)
if(NOT WITH_CXX11_ABI) if(NOT WITH_CXX11_ABI)
string(APPEND PLATFORM_CFLAGS " -D_GLIBCXX_USE_CXX11_ABI=0") set(PLATFORM_CFLAGS "${PLATFORM_CFLAGS} -D_GLIBCXX_USE_CXX11_ABI=0")
endif()
endif()
if(WITH_COMPILER_SHORT_FILE_MACRO)
# Use '-fmacro-prefix-map' for Clang and GCC (MSVC doesn't support this).
ADD_CHECK_C_COMPILER_FLAG(C_PREFIX_MAP_FLAGS C_MACRO_PREFIX_MAP -fmacro-prefix-map=foo=bar)
ADD_CHECK_CXX_COMPILER_FLAG(CXX_PREFIX_MAP_FLAGS CXX_MACRO_PREFIX_MAP -fmacro-prefix-map=foo=bar)
if(C_MACRO_PREFIX_MAP AND CXX_MACRO_PREFIX_MAP)
if(APPLE)
if(XCODE AND ${XCODE_VERSION} VERSION_LESS 12.0)
# Developers may have say LLVM Clang-10.0.1 toolchain (which supports the flag)
# with Xcode-11 (the Clang of which doesn't support the flag).
message(WARNING
"-fmacro-prefix-map flag is NOT supported by Clang shipped with Xcode-${XCODE_VERSION}."
" Some Xcode functionality in Product menu may not work. Disabling WITH_COMPILER_SHORT_FILE_MACRO."
)
set(WITH_COMPILER_SHORT_FILE_MACRO OFF)
endif()
endif()
if(WITH_COMPILER_SHORT_FILE_MACRO)
path_ensure_trailing_slash(_src_dir "${CMAKE_SOURCE_DIR}")
path_ensure_trailing_slash(_bin_dir "${CMAKE_BINARY_DIR}")
# Keep this variable so it can be stripped from build-info.
set(PLATFORM_CFLAGS_FMACRO_PREFIX_MAP
"-fmacro-prefix-map=\"${_src_dir}\"=\"\" -fmacro-prefix-map=\"${_bin_dir}\"=\"\"")
string(APPEND PLATFORM_CFLAGS " ${PLATFORM_CFLAGS_FMACRO_PREFIX_MAP}")
unset(_src_dir)
unset(_bin_dir)
endif()
else()
message(WARNING
"-fmacro-prefix-map flag is NOT supported by C/C++ compiler."
" Disabling WITH_COMPILER_SHORT_FILE_MACRO."
)
set(WITH_COMPILER_SHORT_FILE_MACRO OFF)
endif() endif()
endif() endif()
@@ -1767,20 +1704,8 @@ if(WITH_BLENDER)
# internal and external library information first, for test linking # internal and external library information first, for test linking
add_subdirectory(source) add_subdirectory(source)
elseif(WITH_CYCLES_STANDALONE) elseif(WITH_CYCLES_STANDALONE)
add_subdirectory(intern/glew-mx)
add_subdirectory(intern/guardedalloc)
add_subdirectory(intern/libc_compat)
add_subdirectory(intern/numaapi)
add_subdirectory(intern/sky)
add_subdirectory(intern/cycles) add_subdirectory(intern/cycles)
add_subdirectory(extern/clew) add_subdirectory(extern/clew)
if(WITH_CYCLES_LOGGING)
if(NOT WITH_SYSTEM_GFLAGS)
add_subdirectory(extern/gflags)
endif()
add_subdirectory(extern/glog)
endif()
if(WITH_CUDA_DYNLOAD) if(WITH_CUDA_DYNLOAD)
add_subdirectory(extern/cuew) add_subdirectory(extern/cuew)
endif() endif()
@@ -1789,10 +1714,6 @@ elseif(WITH_CYCLES_STANDALONE)
endif() endif()
endif() endif()
#-----------------------------------------------------------------------------
# Testing
add_subdirectory(tests)
#----------------------------------------------------------------------------- #-----------------------------------------------------------------------------
# Blender Application # Blender Application
if(WITH_BLENDER) if(WITH_BLENDER)
@@ -1800,6 +1721,11 @@ if(WITH_BLENDER)
endif() endif()
#-----------------------------------------------------------------------------
# Testing
add_subdirectory(tests)
#----------------------------------------------------------------------------- #-----------------------------------------------------------------------------
# Define 'heavy' submodules (for Ninja builder when using pools). # Define 'heavy' submodules (for Ninja builder when using pools).
setup_heavy_lib_pool() setup_heavy_lib_pool()
@@ -1829,7 +1755,7 @@ if(FIRST_RUN)
set(_msg " - ${_setting}") set(_msg " - ${_setting}")
string(LENGTH "${_msg}" _len) string(LENGTH "${_msg}" _len)
while("32" GREATER "${_len}") while("32" GREATER "${_len}")
string(APPEND _msg " ") set(_msg "${_msg} ")
math(EXPR _len "${_len} + 1") math(EXPR _len "${_len} + 1")
endwhile() endwhile()
@@ -1847,27 +1773,24 @@ if(FIRST_RUN)
message(STATUS "C++ Compiler: \"${CMAKE_CXX_COMPILER_ID}\"") message(STATUS "C++ Compiler: \"${CMAKE_CXX_COMPILER_ID}\"")
info_cfg_text("Build Options:") info_cfg_text("Build Options:")
info_cfg_option(WITH_ALEMBIC)
info_cfg_option(WITH_BULLET) info_cfg_option(WITH_BULLET)
info_cfg_option(WITH_CYCLES)
info_cfg_option(WITH_FFTW3)
info_cfg_option(WITH_FREESTYLE)
info_cfg_option(WITH_GMP)
info_cfg_option(WITH_HARU)
info_cfg_option(WITH_IK_ITASC)
info_cfg_option(WITH_IK_SOLVER) info_cfg_option(WITH_IK_SOLVER)
info_cfg_option(WITH_INPUT_NDOF) info_cfg_option(WITH_IK_ITASC)
info_cfg_option(WITH_INTERNATIONAL)
info_cfg_option(WITH_OPENCOLLADA) info_cfg_option(WITH_OPENCOLLADA)
info_cfg_option(WITH_FFTW3)
info_cfg_option(WITH_INTERNATIONAL)
info_cfg_option(WITH_INPUT_NDOF)
info_cfg_option(WITH_CYCLES)
info_cfg_option(WITH_FREESTYLE)
info_cfg_option(WITH_OPENCOLORIO) info_cfg_option(WITH_OPENCOLORIO)
info_cfg_option(WITH_XR_OPENXR)
info_cfg_option(WITH_OPENIMAGEDENOISE) info_cfg_option(WITH_OPENIMAGEDENOISE)
info_cfg_option(WITH_OPENVDB) info_cfg_option(WITH_OPENVDB)
info_cfg_option(WITH_POTRACE) info_cfg_option(WITH_ALEMBIC)
info_cfg_option(WITH_PUGIXML)
info_cfg_option(WITH_QUADRIFLOW) info_cfg_option(WITH_QUADRIFLOW)
info_cfg_option(WITH_TBB)
info_cfg_option(WITH_USD) info_cfg_option(WITH_USD)
info_cfg_option(WITH_XR_OPENXR) info_cfg_option(WITH_TBB)
info_cfg_option(WITH_GMP)
info_cfg_text("Compiler Options:") info_cfg_text("Compiler Options:")
info_cfg_option(WITH_BUILDINFO) info_cfg_option(WITH_BUILDINFO)
@@ -1875,58 +1798,58 @@ if(FIRST_RUN)
info_cfg_text("System Options:") info_cfg_text("System Options:")
info_cfg_option(WITH_INSTALL_PORTABLE) info_cfg_option(WITH_INSTALL_PORTABLE)
info_cfg_option(WITH_MEM_JEMALLOC)
info_cfg_option(WITH_MEM_VALGRIND)
info_cfg_option(WITH_SYSTEM_GLEW)
info_cfg_option(WITH_X11_ALPHA) info_cfg_option(WITH_X11_ALPHA)
info_cfg_option(WITH_X11_XF86VMODE) info_cfg_option(WITH_X11_XF86VMODE)
info_cfg_option(WITH_X11_XFIXES) info_cfg_option(WITH_X11_XFIXES)
info_cfg_option(WITH_X11_XINPUT) info_cfg_option(WITH_X11_XINPUT)
info_cfg_option(WITH_MEM_JEMALLOC)
info_cfg_option(WITH_MEM_VALGRIND)
info_cfg_option(WITH_SYSTEM_GLEW)
info_cfg_text("Image Formats:") info_cfg_text("Image Formats:")
info_cfg_option(WITH_OPENIMAGEIO)
info_cfg_option(WITH_IMAGE_CINEON) info_cfg_option(WITH_IMAGE_CINEON)
info_cfg_option(WITH_IMAGE_DDS) info_cfg_option(WITH_IMAGE_DDS)
info_cfg_option(WITH_IMAGE_HDR) info_cfg_option(WITH_IMAGE_HDR)
info_cfg_option(WITH_IMAGE_OPENEXR) info_cfg_option(WITH_IMAGE_OPENEXR)
info_cfg_option(WITH_IMAGE_OPENJPEG) info_cfg_option(WITH_IMAGE_OPENJPEG)
info_cfg_option(WITH_IMAGE_TIFF) info_cfg_option(WITH_IMAGE_TIFF)
info_cfg_option(WITH_OPENIMAGEIO)
info_cfg_text("Audio:") info_cfg_text("Audio:")
info_cfg_option(WITH_CODEC_AVI)
info_cfg_option(WITH_CODEC_FFMPEG)
info_cfg_option(WITH_CODEC_SNDFILE)
info_cfg_option(WITH_JACK)
info_cfg_option(WITH_JACK_DYNLOAD)
info_cfg_option(WITH_OPENAL) info_cfg_option(WITH_OPENAL)
info_cfg_option(WITH_SDL) info_cfg_option(WITH_SDL)
info_cfg_option(WITH_SDL_DYNLOAD) info_cfg_option(WITH_SDL_DYNLOAD)
info_cfg_option(WITH_JACK)
info_cfg_option(WITH_JACK_DYNLOAD)
info_cfg_option(WITH_CODEC_AVI)
info_cfg_option(WITH_CODEC_FFMPEG)
info_cfg_option(WITH_CODEC_SNDFILE)
info_cfg_text("Compression:") info_cfg_text("Compression:")
info_cfg_option(WITH_LZMA) info_cfg_option(WITH_LZMA)
info_cfg_option(WITH_LZO) info_cfg_option(WITH_LZO)
info_cfg_text("Python:") info_cfg_text("Python:")
if(APPLE)
info_cfg_option(WITH_PYTHON_FRAMEWORK)
endif()
info_cfg_option(WITH_PYTHON_INSTALL) info_cfg_option(WITH_PYTHON_INSTALL)
info_cfg_option(WITH_PYTHON_INSTALL_NUMPY) info_cfg_option(WITH_PYTHON_INSTALL_NUMPY)
info_cfg_option(WITH_PYTHON_MODULE) info_cfg_option(WITH_PYTHON_MODULE)
info_cfg_option(WITH_PYTHON_SAFETY) info_cfg_option(WITH_PYTHON_SAFETY)
if(APPLE)
info_cfg_option(WITH_PYTHON_FRAMEWORK)
endif()
info_cfg_text("Modifiers:") info_cfg_text("Modifiers:")
info_cfg_option(WITH_MOD_REMESH)
info_cfg_option(WITH_MOD_FLUID) info_cfg_option(WITH_MOD_FLUID)
info_cfg_option(WITH_MOD_OCEANSIM) info_cfg_option(WITH_MOD_OCEANSIM)
info_cfg_option(WITH_MOD_REMESH)
info_cfg_text("OpenGL:") info_cfg_text("OpenGL:")
info_cfg_option(WITH_GLEW_ES)
info_cfg_option(WITH_GL_EGL)
info_cfg_option(WITH_GL_PROFILE_ES20)
if(WIN32) if(WIN32)
info_cfg_option(WITH_GL_ANGLE) info_cfg_option(WITH_GL_ANGLE)
endif() endif()
info_cfg_option(WITH_GL_EGL)
info_cfg_option(WITH_GL_PROFILE_ES20)
info_cfg_option(WITH_GLEW_ES)
info_cfg_text("") info_cfg_text("")

View File

@@ -41,7 +41,6 @@ Convenience Targets
* developer: Enable faster builds, error checking and tests, recommended for developers. * developer: Enable faster builds, error checking and tests, recommended for developers.
* config: Run cmake configuration tool to set build options. * config: Run cmake configuration tool to set build options.
* ninja: Use ninja build tool for faster builds. * ninja: Use ninja build tool for faster builds.
* ccache: Use ccache for faster rebuilds.
Note: passing the argument 'BUILD_DIR=path' when calling make will override the default build dir. Note: passing the argument 'BUILD_DIR=path' when calling make will override the default build dir.
Note: passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments. Note: passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments.
@@ -183,13 +182,8 @@ endif
ifndef DEPS_INSTALL_DIR ifndef DEPS_INSTALL_DIR
DEPS_INSTALL_DIR:=$(shell dirname "$(BLENDER_DIR)")/lib/$(OS_NCASE) DEPS_INSTALL_DIR:=$(shell dirname "$(BLENDER_DIR)")/lib/$(OS_NCASE)
# Add processor type to directory name, except for darwin x86_64 ifneq ($(OS_NCASE),darwin)
# which by convention does not have it. # Add processor type to directory name
ifeq ($(OS_NCASE),darwin)
ifneq ($(CPU),x86_64)
DEPS_INSTALL_DIR:=$(DEPS_INSTALL_DIR)_$(CPU)
endif
else
DEPS_INSTALL_DIR:=$(DEPS_INSTALL_DIR)_$(CPU) DEPS_INSTALL_DIR:=$(DEPS_INSTALL_DIR)_$(CPU)
endif endif
endif endif
@@ -203,7 +197,7 @@ endif
# in libraries, or python 2 for running make update to get it. # in libraries, or python 2 for running make update to get it.
ifeq ($(OS_NCASE),darwin) ifeq ($(OS_NCASE),darwin)
ifeq (, $(shell command -v $(PYTHON))) ifeq (, $(shell command -v $(PYTHON)))
PYTHON:=$(DEPS_INSTALL_DIR)/python/bin/python3.7m PYTHON:=../lib/darwin/python/bin/python3.7m
ifeq (, $(shell command -v $(PYTHON))) ifeq (, $(shell command -v $(PYTHON)))
PYTHON:=python PYTHON:=python
endif endif
@@ -247,10 +241,6 @@ ifneq "$(findstring developer, $(MAKECMDGOALS))" ""
CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/blender_developer.cmake" $(CMAKE_CONFIG_ARGS) CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/blender_developer.cmake" $(CMAKE_CONFIG_ARGS)
endif endif
ifneq "$(findstring ccache, $(MAKECMDGOALS))" ""
CMAKE_CONFIG_ARGS:=-DWITH_COMPILER_CCACHE=YES $(CMAKE_CONFIG_ARGS)
endif
# ----------------------------------------------------------------------------- # -----------------------------------------------------------------------------
# build tool # build tool
@@ -350,7 +340,6 @@ headless: all
bpy: all bpy: all
developer: all developer: all
ninja: all ninja: all
ccache: all
# ----------------------------------------------------------------------------- # -----------------------------------------------------------------------------
# Build dependencies # Build dependencies
@@ -525,7 +514,7 @@ format: .FORCE
# Simple version of ./doc/python_api/sphinx_doc_gen.sh with no PDF generation. # Simple version of ./doc/python_api/sphinx_doc_gen.sh with no PDF generation.
doc_py: .FORCE doc_py: .FORCE
ASAN_OPTIONS=halt_on_error=0:${ASAN_OPTIONS} \ ASAN_OPTIONS=halt_on_error=0 \
$(BLENDER_BIN) --background -noaudio --factory-startup \ $(BLENDER_BIN) --background -noaudio --factory-startup \
--python doc/python_api/sphinx_doc_gen.py --python doc/python_api/sphinx_doc_gen.py
sphinx-build -b html -j $(NPROCS) doc/python_api/sphinx-in doc/python_api/sphinx-out sphinx-build -b html -j $(NPROCS) doc/python_api/sphinx-in doc/python_api/sphinx-out

View File

@@ -85,17 +85,19 @@ include(cmake/flexbison.cmake)
include(cmake/osl.cmake) include(cmake/osl.cmake)
include(cmake/tbb.cmake) include(cmake/tbb.cmake)
include(cmake/openvdb.cmake) include(cmake/openvdb.cmake)
include(cmake/nanovdb.cmake)
include(cmake/python.cmake) include(cmake/python.cmake)
include(cmake/python_site_packages.cmake) include(cmake/python_site_packages.cmake)
include(cmake/package_python.cmake) include(cmake/package_python.cmake)
include(cmake/numpy.cmake) include(cmake/numpy.cmake)
include(cmake/usd.cmake) include(cmake/usd.cmake)
include(cmake/potrace.cmake) include(cmake/potrace.cmake)
include(cmake/haru.cmake)
# Boost needs to be included after python.cmake due to the PYTHON_BINARY variable being needed. # Boost needs to be included after python.cmake due to the PYTHON_BINARY variable being needed.
include(cmake/boost.cmake) include(cmake/boost.cmake)
include(cmake/pugixml.cmake) if(UNIX)
# Rely on PugiXML compiled with OpenImageIO
else()
include(cmake/pugixml.cmake)
endif()
if((NOT APPLE) OR ("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "x86_64")) if((NOT APPLE) OR ("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "x86_64"))
include(cmake/ispc.cmake) include(cmake/ispc.cmake)
include(cmake/openimagedenoise.cmake) include(cmake/openimagedenoise.cmake)

View File

@@ -42,14 +42,8 @@ if(UNIX)
endforeach() endforeach()
if(APPLE) if(APPLE)
# Homebrew has different default locations for ARM and Intel macOS. if(NOT EXISTS "/usr/local/opt/bison/bin/bison")
if("${CMAKE_HOST_SYSTEM_PROCESSOR}" STREQUAL "arm64") set(_software_missing "${_software_missing} bison")
set(HOMEBREW_LOCATION "/opt/homebrew")
else()
set(HOMEBREW_LOCATION "/usr/local")
endif()
if(NOT EXISTS "${HOMEBREW_LOCATION}/opt/bison/bin/bison")
string(APPEND _software_missing " bison")
endif() endif()
endif() endif()

View File

@@ -17,14 +17,13 @@
# ***** END GPL LICENSE BLOCK ***** # ***** END GPL LICENSE BLOCK *****
set(CLANG_EXTRA_ARGS set(CLANG_EXTRA_ARGS
-DLLVM_DIR="${LIBDIR}/llvm/lib/cmake/llvm/" -DCLANG_PATH_TO_LLVM_SOURCE=${BUILD_DIR}/ll/src/ll
-DCLANG_PATH_TO_LLVM_BUILD=${LIBDIR}/llvm
-DLLVM_USE_CRT_RELEASE=MD -DLLVM_USE_CRT_RELEASE=MD
-DLLVM_USE_CRT_DEBUG=MDd -DLLVM_USE_CRT_DEBUG=MDd
-DLLVM_CONFIG=${LIBDIR}/llvm/bin/llvm-config -DLLVM_CONFIG=${LIBDIR}/llvm/bin/llvm-config
) )
set(BUILD_CLANG_TOOLS OFF)
if(WIN32) if(WIN32)
set(CLANG_GENERATOR "Ninja") set(CLANG_GENERATOR "Ninja")
else() else()
@@ -32,32 +31,11 @@ else()
endif() endif()
if(APPLE) if(APPLE)
set(BUILD_CLANG_TOOLS ON)
set(CLANG_EXTRA_ARGS ${CLANG_EXTRA_ARGS} set(CLANG_EXTRA_ARGS ${CLANG_EXTRA_ARGS}
-DLIBXML2_LIBRARY=${LIBDIR}/xml2/lib/libxml2.a -DLIBXML2_LIBRARY=${LIBDIR}/xml2/lib/libxml2.a
) )
endif() endif()
if(BUILD_CLANG_TOOLS)
# ExternalProject_Add does not allow multiple tarballs to be
# downloaded. Work around this by having an empty build action
# for the extra tools, and referring the clang build to the location
# of the clang-tools-extra source.
ExternalProject_Add(external_clang_tools
URL ${CLANG_TOOLS_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH MD5=${CLANG_TOOLS_HASH}
INSTALL_DIR ${LIBDIR}/clang_tools
PREFIX ${BUILD_DIR}/clang_tools
CONFIGURE_COMMAND echo "."
BUILD_COMMAND echo "."
INSTALL_COMMAND echo "."
)
list(APPEND CLANG_EXTRA_ARGS
-DLLVM_EXTERNAL_CLANG_TOOLS_EXTRA_SOURCE_DIR=${BUILD_DIR}/clang_tools/src/external_clang_tools/
)
endif()
ExternalProject_Add(external_clang ExternalProject_Add(external_clang
URL ${CLANG_URI} URL ${CLANG_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR} DOWNLOAD_DIR ${DOWNLOAD_DIR}
@@ -87,14 +65,6 @@ add_dependencies(
ll ll
) )
if(BUILD_CLANG_TOOLS)
# `external_clang_tools` is for downloading the source, not compiling it.
add_dependencies(
external_clang
external_clang_tools
)
endif()
# We currently do not build libxml2 on Windows. # We currently do not build libxml2 on Windows.
if(NOT WIN32) if(NOT WIN32)
add_dependencies( add_dependencies(

View File

@@ -1,46 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENSE BLOCK *****
set(HARU_EXTRA_ARGS
-DLIBHPDF_SHARED=OFF
-DLIBHPDF_STATIC=ON
-DLIBHPDF_EXAMPLES=OFF
-DLIBHPDF_ENABLE_EXCEPTIONS=ON
)
ExternalProject_Add(external_haru
URL ${HARU_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH MD5=${HARU_HASH}
PREFIX ${BUILD_DIR}/haru
PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/haru/src/external_haru < ${PATCH_DIR}/haru.diff
CMAKE_ARGS
-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_INSTALL_PREFIX=${LIBDIR}/haru
${DEFAULT_CMAKE_FLAGS} ${HARU_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/haru
)
if(WIN32)
if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_haru after_install
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/haru/include ${HARVEST_TARGET}/haru/include
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/haru/lib/libhpdfs.lib ${HARVEST_TARGET}/haru/lib/libhpdfs.lib
DEPENDEES install
)
endif()
endif()

View File

@@ -98,10 +98,6 @@ harvest(jpg/include jpeg/include "*.h")
harvest(jpg/lib jpeg/lib "libjpeg.a") harvest(jpg/lib jpeg/lib "libjpeg.a")
harvest(lame/lib ffmpeg/lib "*.a") harvest(lame/lib ffmpeg/lib "*.a")
harvest(clang/bin llvm/bin "clang-format") harvest(clang/bin llvm/bin "clang-format")
if(BUILD_CLANG_TOOLS)
harvest(clang/bin llvm/bin "clang-tidy")
harvest(clang/share/clang llvm/share "run-clang-tidy.py")
endif()
harvest(clang/include llvm/include "*") harvest(clang/include llvm/include "*")
harvest(llvm/include llvm/include "*") harvest(llvm/include llvm/include "*")
harvest(llvm/bin llvm/bin "llvm-config") harvest(llvm/bin llvm/bin "llvm-config")
@@ -150,8 +146,10 @@ harvest(openjpeg/lib openjpeg/lib "*.a")
harvest(opensubdiv/include opensubdiv/include "*.h") harvest(opensubdiv/include opensubdiv/include "*.h")
harvest(opensubdiv/lib opensubdiv/lib "*.a") harvest(opensubdiv/lib opensubdiv/lib "*.a")
harvest(openvdb/include/openvdb openvdb/include/openvdb "*.h") harvest(openvdb/include/openvdb openvdb/include/openvdb "*.h")
if(WITH_NANOVDB)
harvest(openvdb/nanovdb nanovdb/include/nanovdb "*.h")
endif()
harvest(openvdb/lib openvdb/lib "*.a") harvest(openvdb/lib openvdb/lib "*.a")
harvest(nanovdb/nanovdb nanovdb/include/nanovdb "*.h")
harvest(xr_openxr_sdk/include/openxr xr_openxr_sdk/include/openxr "*.h") harvest(xr_openxr_sdk/include/openxr xr_openxr_sdk/include/openxr "*.h")
harvest(xr_openxr_sdk/lib xr_openxr_sdk/lib "*.a") harvest(xr_openxr_sdk/lib xr_openxr_sdk/lib "*.a")
harvest(osl/bin osl/bin "oslc") harvest(osl/bin osl/bin "oslc")
@@ -160,8 +158,6 @@ harvest(osl/lib osl/lib "*.a")
harvest(osl/shaders osl/shaders "*.h") harvest(osl/shaders osl/shaders "*.h")
harvest(png/include png/include "*.h") harvest(png/include png/include "*.h")
harvest(png/lib png/lib "*.a") harvest(png/lib png/lib "*.a")
harvest(pugixml/include pugixml/include "*.hpp")
harvest(pugixml/lib pugixml/lib "*.a")
harvest(python/bin python/bin "python${PYTHON_SHORT_VERSION}m") harvest(python/bin python/bin "python${PYTHON_SHORT_VERSION}m")
harvest(python/include python/include "*h") harvest(python/include python/include "*h")
harvest(python/lib python/lib "*") harvest(python/lib python/lib "*")
@@ -187,8 +183,6 @@ harvest(usd/lib/usd usd/lib/usd "*")
harvest(usd/plugin usd/plugin "*") harvest(usd/plugin usd/plugin "*")
harvest(potrace/include potrace/include "*.h") harvest(potrace/include potrace/include "*.h")
harvest(potrace/lib potrace/lib "*.a") harvest(potrace/lib potrace/lib "*.a")
harvest(haru/include haru/include "*.h")
harvest(haru/lib haru/lib "*.a")
if(UNIX AND NOT APPLE) if(UNIX AND NOT APPLE)
harvest(libglu/lib mesa/lib "*.so*") harvest(libglu/lib mesa/lib "*.so*")

View File

@@ -25,13 +25,8 @@ if(WIN32)
elseif(APPLE) elseif(APPLE)
# Use bison installed via Homebrew. # Use bison installed via Homebrew.
# The one which comes which Xcode toolset is too old. # The one which comes which Xcode toolset is too old.
if("${CMAKE_HOST_SYSTEM_PROCESSOR}" STREQUAL "arm64")
set(HOMEBREW_LOCATION "/opt/homebrew")
else()
set(HOMEBREW_LOCATION "/usr/local")
endif()
set(ISPC_EXTRA_ARGS_APPLE set(ISPC_EXTRA_ARGS_APPLE
-DBISON_EXECUTABLE=${HOMEBREW_LOCATION}/opt/bison/bin/bison -DBISON_EXECUTABLE=/usr/local/opt/bison/bin/bison
) )
elseif(UNIX) elseif(UNIX)
set(ISPC_EXTRA_ARGS_UNIX set(ISPC_EXTRA_ARGS_UNIX

View File

@@ -1,54 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ***** END GPL LICENSE BLOCK *****
set(NANOVDB_EXTRA_ARGS
# NanoVDB is header-only, so only need the install target
-DNANOVDB_BUILD_UNITTESTS=OFF
-DNANOVDB_BUILD_EXAMPLES=OFF
-DNANOVDB_BUILD_BENCHMARK=OFF
-DNANOVDB_BUILD_DOCS=OFF
-DNANOVDB_BUILD_TOOLS=OFF
-DNANOVDB_CUDA_KEEP_PTX=OFF
# Do not need to include any of the dependencies because of this
-DNANOVDB_USE_OPENVDB=OFF
-DNANOVDB_USE_OPENGL=OFF
-DNANOVDB_USE_OPENCL=OFF
-DNANOVDB_USE_CUDA=OFF
-DNANOVDB_USE_TBB=OFF
-DNANOVDB_USE_BLOSC=OFF
-DNANOVDB_USE_ZLIB=OFF
-DNANOVDB_USE_OPTIX=OFF
-DNANOVDB_ALLOW_FETCHCONTENT=OFF
)
ExternalProject_Add(nanovdb
URL ${NANOVDB_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH MD5=${NANOVDB_HASH}
PREFIX ${BUILD_DIR}/nanovdb
SOURCE_SUBDIR nanovdb
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/nanovdb ${DEFAULT_CMAKE_FLAGS} ${NANOVDB_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/nanovdb
)
if(WIN32)
ExternalProject_Add_Step(nanovdb after_install
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/nanovdb/nanovdb ${HARVEST_TARGET}/nanovdb/include/nanovdb
DEPENDEES install
)
endif()

View File

@@ -112,9 +112,6 @@ set(OPENIMAGEIO_EXTRA_ARGS
-DOPENEXR_IEX_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}Iex${OPENEXR_VERSION_POSTFIX}${LIBEXT} -DOPENEXR_IEX_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}Iex${OPENEXR_VERSION_POSTFIX}${LIBEXT}
-DOPENEXR_ILMIMF_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}IlmImf${OPENEXR_VERSION_POSTFIX}${LIBEXT} -DOPENEXR_ILMIMF_LIBRARY=${LIBDIR}/openexr/lib/${LIBPREFIX}IlmImf${OPENEXR_VERSION_POSTFIX}${LIBEXT}
-DSTOP_ON_WARNING=OFF -DSTOP_ON_WARNING=OFF
-DUSE_EXTERNAL_PUGIXML=ON
-DPUGIXML_LIBRARY=${LIBDIR}/pugixml/lib/${LIBPREFIX}pugixml${LIBEXT}
-DPUGIXML_INCLUDE_DIR=${LIBDIR}/pugixml/include/
${WEBP_FLAGS} ${WEBP_FLAGS}
${OIIO_SIMD_FLAGS} ${OIIO_SIMD_FLAGS}
) )
@@ -137,7 +134,6 @@ add_dependencies(
external_jpeg external_jpeg
external_boost external_boost
external_tiff external_tiff
external_pugixml
external_openjpeg${OPENJPEG_POSTFIX} external_openjpeg${OPENJPEG_POSTFIX}
${WEBP_DEP} ${WEBP_DEP}
) )

View File

@@ -54,6 +54,20 @@ set(OPENVDB_EXTRA_ARGS
-DOPENVDB_CORE_STATIC=${OPENVDB_STATIC} -DOPENVDB_CORE_STATIC=${OPENVDB_STATIC}
-DOPENVDB_BUILD_BINARIES=Off -DOPENVDB_BUILD_BINARIES=Off
-DCMAKE_DEBUG_POSTFIX=_d -DCMAKE_DEBUG_POSTFIX=_d
# NanoVDB is header-only, so only need the install target
-DNANOVDB_BUILD_UNITTESTS=OFF
-DNANOVDB_BUILD_EXAMPLES=OFF
-DNANOVDB_BUILD_BENCHMARK=OFF
-DNANOVDB_BUILD_DOCS=OFF
-DNANOVDB_BUILD_TOOLS=OFF
-DNANOVDB_CUDA_KEEP_PTX=OFF
-DNANOVDB_USE_OPENGL=OFF
-DNANOVDB_USE_OPENGL=OFF
-DNANOVDB_USE_CUDA=OFF
-DNANOVDB_USE_TBB=OFF
-DNANOVDB_USE_OPTIX=OFF
-DNANOVDB_USE_OPENVDB=OFF
-DNANOVDB_ALLOW_FETCHCONTENT=OFF
) )
if(WIN32) if(WIN32)
@@ -74,12 +88,18 @@ else()
) )
endif() endif()
if(WITH_NANOVDB)
set(OPENVDB_PATCH_FILE openvdb_nanovdb.diff)
else()
set(OPENVDB_PATCH_FILE openvdb.diff)
endif()
ExternalProject_Add(openvdb ExternalProject_Add(openvdb
URL ${OPENVDB_URI} URL ${OPENVDB_URI}
DOWNLOAD_DIR ${DOWNLOAD_DIR} DOWNLOAD_DIR ${DOWNLOAD_DIR}
URL_HASH MD5=${OPENVDB_HASH} URL_HASH MD5=${OPENVDB_HASH}
PREFIX ${BUILD_DIR}/openvdb PREFIX ${BUILD_DIR}/openvdb
PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/openvdb/src/openvdb < ${PATCH_DIR}/openvdb.diff PATCH_COMMAND ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/openvdb/src/openvdb < ${PATCH_DIR}/${OPENVDB_PATCH_FILE}
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/openvdb ${DEFAULT_CMAKE_FLAGS} ${OPENVDB_EXTRA_ARGS} CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/openvdb ${DEFAULT_CMAKE_FLAGS} ${OPENVDB_EXTRA_ARGS}
INSTALL_DIR ${LIBDIR}/openvdb INSTALL_DIR ${LIBDIR}/openvdb
) )
@@ -101,6 +121,12 @@ if(WIN32)
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/openvdb/bin/openvdb.dll ${HARVEST_TARGET}/openvdb/bin/openvdb.dll COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/openvdb/bin/openvdb.dll ${HARVEST_TARGET}/openvdb/bin/openvdb.dll
DEPENDEES install DEPENDEES install
) )
if(WITH_NANOVDB)
ExternalProject_Add_Step(openvdb nanovdb_install
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/openvdb/nanovdb ${HARVEST_TARGET}/nanovdb/include/nanovdb
DEPENDEES after_install
)
endif()
endif() endif()
if(BUILD_MODE STREQUAL Debug) if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(openvdb after_install ExternalProject_Add_Step(openvdb after_install


@@ -21,6 +21,7 @@ if(WIN32)
endif() endif()
option(WITH_WEBP "Enable building of oiio with webp support" OFF) option(WITH_WEBP "Enable building of oiio with webp support" OFF)
option(WITH_BOOST_PYTHON "Enable building of boost with python support" OFF) option(WITH_BOOST_PYTHON "Enable building of boost with python support" OFF)
option(WITH_NANOVDB "Enable building of OpenVDB with NanoVDB included" OFF)
set(MAKE_THREADS 1 CACHE STRING "Number of threads to run make with") set(MAKE_THREADS 1 CACHE STRING "Number of threads to run make with")
if(NOT BUILD_MODE) if(NOT BUILD_MODE)
@@ -56,7 +57,7 @@ if(WIN32)
if(MSVC_VERSION GREATER 1909) if(MSVC_VERSION GREATER 1909)
set(COMMON_MSVC_FLAGS "/Wv:18") #some deps with warnings as error aren't quite ready for dealing with the new 2017 warnings. set(COMMON_MSVC_FLAGS "/Wv:18") #some deps with warnings as error aren't quite ready for dealing with the new 2017 warnings.
endif() endif()
string(APPEND COMMON_MSVC_FLAGS " /bigobj") set(COMMON_MSVC_FLAGS "${COMMON_MSVC_FLAGS} /bigobj")
if(WITH_OPTIMIZED_DEBUG) if(WITH_OPTIMIZED_DEBUG)
set(BLENDER_CMAKE_C_FLAGS_DEBUG "/MDd ${COMMON_MSVC_FLAGS} /O2 /Ob2 /DNDEBUG /DPSAPI_VERSION=1 /DOIIO_STATIC_BUILD /DTINYFORMAT_ALLOW_WCHAR_STRINGS") set(BLENDER_CMAKE_C_FLAGS_DEBUG "/MDd ${COMMON_MSVC_FLAGS} /O2 /Ob2 /DNDEBUG /DPSAPI_VERSION=1 /DOIIO_STATIC_BUILD /DTINYFORMAT_ALLOW_WCHAR_STRINGS")
else() else()


@@ -78,10 +78,14 @@ set(OSL_EXTRA_ARGS
-DINSTALL_DOCS=OFF -DINSTALL_DOCS=OFF
${OSL_SIMD_FLAGS} ${OSL_SIMD_FLAGS}
-DPARTIO_LIBRARIES= -DPARTIO_LIBRARIES=
-DPUGIXML_HOME=${LIBDIR}/pugixml
) )
if(APPLE) if(WIN32)
set(OSL_EXTRA_ARGS
${OSL_EXTRA_ARGS}
-DPUGIXML_HOME=${LIBDIR}/pugixml
)
elseif(APPLE)
# Make symbol hiding consistent with OIIO which defaults to OFF, # Make symbol hiding consistent with OIIO which defaults to OFF,
# avoids linker warnings on macOS # avoids linker warnings on macOS
set(OSL_EXTRA_ARGS set(OSL_EXTRA_ARGS
@@ -110,9 +114,17 @@ add_dependencies(
external_zlib external_zlib
external_flexbison external_flexbison
external_openimageio external_openimageio
external_pugixml
) )
if(UNIX)
# Rely on PugiXML compiled with OpenImageIO
else()
add_dependencies(
external_osl
external_pugixml
)
endif()
if(WIN32) if(WIN32)
if(BUILD_MODE STREQUAL Release) if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_osl after_install ExternalProject_Add_Step(external_osl after_install


@@ -30,13 +30,13 @@ ExternalProject_Add(external_pugixml
if(WIN32) if(WIN32)
if(BUILD_MODE STREQUAL Release) if(BUILD_MODE STREQUAL Release)
ExternalProject_Add_Step(external_pugixml after_install ExternalProject_Add_Step(external_pugixml after_install
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/pugixml ${HARVEST_TARGET}/pugixml COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/osl/lib/pugixml.lib
DEPENDEES install DEPENDEES install
) )
endif() endif()
if(BUILD_MODE STREQUAL Debug) if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(external_pugixml after_install ExternalProject_Add_Step(external_pugixml after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/pugixml/lib/pugixml_d.lib COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/pugixml/lib/pugixml.lib ${HARVEST_TARGET}/osl/lib/pugixml_d.lib
DEPENDEES install DEPENDEES install
) )
endif() endif()


@@ -42,7 +42,7 @@ if(UNIX)
-DSQLITE_MAX_VARIABLE_NUMBER=250000 \ -DSQLITE_MAX_VARIABLE_NUMBER=250000 \
-fPIC") -fPIC")
set(SQLITE_CONFIGURE_ENV ${SQLITE_CONFIGURE_ENV} && export LDFLAGS=${SQLITE_LDFLAGS} && export CFLAGS=${SQLITE_CFLAGS}) set(SQLITE_CONFIGURE_ENV ${SQLITE_CONFIGURE_ENV} && export LDFLAGS=${SQLITE_LDFLAGS} && export CFLAGS=${SQLITE_CFLAGS})
set(SQLITE_CONFIGURATION_ARGS ${SQLITE_CONFIGURATION_ARGS} --enable-threadsafe --enable-load-extension --enable-json1 --enable-fts4 --enable-fts5 --disable-tcl set(SQLITE_CONFIGURATION_ARGS ${SQLITE_CONFIGURATION_ARGS} --enable-threadsafe --enable-load-extension --enable-json1 --enable-fts4 --enable-fts5
--enable-shared=no) --enable-shared=no)
endif() endif()


@@ -120,9 +120,6 @@ set(LLVM_HASH 31eb9ce73dd2a0f8dcab8319fb03f8fc)
set(CLANG_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/clang-${LLVM_VERSION}.src.tar.xz) set(CLANG_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/clang-${LLVM_VERSION}.src.tar.xz)
set(CLANG_HASH 13468e4a44940efef1b75e8641752f90) set(CLANG_HASH 13468e4a44940efef1b75e8641752f90)
set(CLANG_TOOLS_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/clang-tools-extra-${LLVM_VERSION}.src.tar.xz)
set(CLANG_TOOLS_HASH c76293870b564c6a7968622b475b7646)
set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/openmp-${LLVM_VERSION}.src.tar.xz) set(OPENMP_URI https://github.com/llvm/llvm-project/releases/download/llvmorg-${LLVM_VERSION}/openmp-${LLVM_VERSION}.src.tar.xz)
set(OPENMP_HASH 6eade16057edbdecb3c4eef9daa2bfcf) set(OPENMP_HASH 6eade16057edbdecb3c4eef9daa2bfcf)
@@ -148,13 +145,15 @@ set(TBB_VERSION 2019_U9)
set(TBB_URI https://github.com/oneapi-src/oneTBB/archive/${TBB_VERSION}.tar.gz) set(TBB_URI https://github.com/oneapi-src/oneTBB/archive/${TBB_VERSION}.tar.gz)
set(TBB_HASH 26263622e9187212ec240dcf01b66207) set(TBB_HASH 26263622e9187212ec240dcf01b66207)
set(OPENVDB_VERSION 7.0.0) if(WITH_NANOVDB)
set(OPENVDB_URI https://github.com/AcademySoftwareFoundation/openvdb/archive/v${OPENVDB_VERSION}.tar.gz) set(OPENVDB_GIT_UID e62f7a0bf1e27397223c61ddeaaf57edf111b77f)
set(OPENVDB_HASH fd6c4f168282f7e0e494d290cd531fa8) set(OPENVDB_URI https://github.com/AcademySoftwareFoundation/openvdb/archive/${OPENVDB_GIT_UID}.tar.gz)
set(OPENVDB_HASH 90919510bc6ccd630fedc56f748cb199)
set(NANOVDB_GIT_UID e62f7a0bf1e27397223c61ddeaaf57edf111b77f) else()
set(NANOVDB_URI https://github.com/AcademySoftwareFoundation/openvdb/archive/${NANOVDB_GIT_UID}.tar.gz) set(OPENVDB_VERSION 7.0.0)
set(NANOVDB_HASH 90919510bc6ccd630fedc56f748cb199) set(OPENVDB_URI https://github.com/AcademySoftwareFoundation/openvdb/archive/v${OPENVDB_VERSION}.tar.gz)
set(OPENVDB_HASH fd6c4f168282f7e0e494d290cd531fa8)
endif()
set(IDNA_VERSION 2.9) set(IDNA_VERSION 2.9)
set(CHARDET_VERSION 3.0.4) set(CHARDET_VERSION 3.0.4)
@@ -331,7 +330,3 @@ set(GMP_HASH a325e3f09e6d91e62101e59f9bda3ec1)
set(POTRACE_VERSION 1.16) set(POTRACE_VERSION 1.16)
set(POTRACE_URI http://potrace.sourceforge.net/download/${POTRACE_VERSION}/potrace-${POTRACE_VERSION}.tar.gz) set(POTRACE_URI http://potrace.sourceforge.net/download/${POTRACE_VERSION}/potrace-${POTRACE_VERSION}.tar.gz)
set(POTRACE_HASH 5f0bd87ddd9a620b0c4e65652ef93d69) set(POTRACE_HASH 5f0bd87ddd9a620b0c4e65652ef93d69)
set(HARU_VERSION 2_3_0)
set(HARU_URI https://github.com/libharu/libharu/archive/RELEASE_${HARU_VERSION}.tar.gz)
set(HARU_HASH 4f916aa49c3069b3a10850013c507460)


@@ -51,7 +51,7 @@ ARGS=$( \
getopt \ getopt \
-o s:i:t:h \ -o s:i:t:h \
--long source:,install:,tmp:,info:,threads:,help,show-deps,no-sudo,no-build,no-confirm,\ --long source:,install:,tmp:,info:,threads:,help,show-deps,no-sudo,no-build,no-confirm,\
with-all,with-opencollada,with-jack,with-embree,with-oidn,with-nanovdb,\ with-all,with-opencollada,with-jack,with-embree,with-oidn,\
ver-ocio:,ver-oiio:,ver-llvm:,ver-osl:,ver-osd:,ver-openvdb:,ver-xr-openxr:,\ ver-ocio:,ver-oiio:,ver-llvm:,ver-osl:,ver-osd:,ver-openvdb:,ver-xr-openxr:,\
force-all,force-python,force-numpy,force-boost,force-tbb,\ force-all,force-python,force-numpy,force-boost,force-tbb,\
force-ocio,force-openexr,force-oiio,force-llvm,force-osl,force-osd,force-openvdb,\ force-ocio,force-openexr,force-oiio,force-llvm,force-osl,force-osd,force-openvdb,\
@@ -151,9 +151,6 @@ ARGUMENTS_INFO="\"COMMAND LINE ARGUMENTS:
--with-oidn --with-oidn
Build and install the OpenImageDenoise libraries. Build and install the OpenImageDenoise libraries.
--with-nanovdb
Build and install the NanoVDB branch of OpenVDB (instead of official release of OpenVDB).
--with-jack --with-jack
Install the jack libraries. Install the jack libraries.
@@ -385,25 +382,25 @@ USE_CXX11=true
CLANG_FORMAT_VERSION_MIN="6.0" CLANG_FORMAT_VERSION_MIN="6.0"
CLANG_FORMAT_VERSION_MAX="10.0" CLANG_FORMAT_VERSION_MAX="10.0"
PYTHON_VERSION="3.9.1" PYTHON_VERSION="3.7.7"
PYTHON_VERSION_SHORT="3.9" PYTHON_VERSION_SHORT="3.7"
PYTHON_VERSION_MIN="3.7" PYTHON_VERSION_MIN="3.7"
PYTHON_VERSION_MAX="3.10" PYTHON_VERSION_MAX="3.9"
PYTHON_VERSION_INSTALLED=$PYTHON_VERSION_MIN PYTHON_VERSION_INSTALLED=$PYTHON_VERSION_MIN
PYTHON_FORCE_BUILD=false PYTHON_FORCE_BUILD=false
PYTHON_FORCE_REBUILD=false PYTHON_FORCE_REBUILD=false
PYTHON_SKIP=false PYTHON_SKIP=false
NUMPY_VERSION="1.19.5" NUMPY_VERSION="1.17.5"
NUMPY_VERSION_SHORT="1.19" NUMPY_VERSION_SHORT="1.17"
NUMPY_VERSION_MIN="1.8" NUMPY_VERSION_MIN="1.8"
NUMPY_VERSION_MAX="2.0" NUMPY_VERSION_MAX="2.0"
NUMPY_FORCE_BUILD=false NUMPY_FORCE_BUILD=false
NUMPY_FORCE_REBUILD=false NUMPY_FORCE_REBUILD=false
NUMPY_SKIP=false NUMPY_SKIP=false
BOOST_VERSION="1.73.0" BOOST_VERSION="1.70.0"
BOOST_VERSION_SHORT="1.73" BOOST_VERSION_SHORT="1.70"
BOOST_VERSION_MIN="1.49" BOOST_VERSION_MIN="1.49"
BOOST_VERSION_MAX="2.0" BOOST_VERSION_MAX="2.0"
BOOST_FORCE_BUILD=false BOOST_FORCE_BUILD=false
@@ -438,8 +435,8 @@ _with_built_openexr=false
OIIO_VERSION="2.1.15.0" OIIO_VERSION="2.1.15.0"
OIIO_VERSION_SHORT="2.1" OIIO_VERSION_SHORT="2.1"
OIIO_VERSION_MIN="2.1.12" OIIO_VERSION_MIN="1.8"
OIIO_VERSION_MAX="2.2.10" OIIO_VERSION_MAX="3.0"
OIIO_FORCE_BUILD=false OIIO_FORCE_BUILD=false
OIIO_FORCE_REBUILD=false OIIO_FORCE_REBUILD=false
OIIO_SKIP=false OIIO_SKIP=false
@@ -483,7 +480,7 @@ OPENVDB_FORCE_REBUILD=false
OPENVDB_SKIP=false OPENVDB_SKIP=false
# Alembic needs to be compiled for now # Alembic needs to be compiled for now
ALEMBIC_VERSION="1.7.16" ALEMBIC_VERSION="1.7.12"
ALEMBIC_VERSION_SHORT="1.7" ALEMBIC_VERSION_SHORT="1.7"
ALEMBIC_VERSION_MIN="1.7" ALEMBIC_VERSION_MIN="1.7"
ALEMBIC_VERSION_MAX="2.0" ALEMBIC_VERSION_MAX="2.0"
@@ -679,10 +676,6 @@ while true; do
--with-oidn) --with-oidn)
WITH_OIDN=true; shift; continue WITH_OIDN=true; shift; continue
;; ;;
--with-nanovdb)
WITH_NANOVDB=true;
shift; continue
;;
--with-jack) --with-jack)
WITH_JACK=true; shift; continue; WITH_JACK=true; shift; continue;
;; ;;
@@ -964,11 +957,6 @@ if [ "$WITH_ALL" = true -a "$OIDN_SKIP" = false ]; then
fi fi
if [ "$WITH_ALL" = true ]; then if [ "$WITH_ALL" = true ]; then
WITH_JACK=true WITH_JACK=true
WITH_NANOVDB=true
fi
if [ "$WITH_NANOVDB" = true ]; then
OPENVDB_FORCE_BUILD=true
fi fi
@@ -1041,15 +1029,11 @@ OSD_SOURCE=( "https://github.com/PixarAnimationStudios/OpenSubdiv/archive/v${OSD
OPENVDB_USE_REPO=false OPENVDB_USE_REPO=false
OPENVDB_BLOSC_SOURCE=( "https://github.com/Blosc/c-blosc/archive/v${OPENVDB_BLOSC_VERSION}.tar.gz" ) OPENVDB_BLOSC_SOURCE=( "https://github.com/Blosc/c-blosc/archive/v${OPENVDB_BLOSC_VERSION}.tar.gz" )
OPENVDB_SOURCE=( "https://github.com/AcademySoftwareFoundation/openvdb/archive/v${OPENVDB_VERSION}.tar.gz" ) OPENVDB_SOURCE=( "https://github.com/dreamworksanimation/openvdb/archive/v${OPENVDB_VERSION}.tar.gz" )
#~ OPENVDB_SOURCE_REPO=( "https://github.com/AcademySoftwareFoundation/openvdb.git" ) #~ OPENVDB_SOURCE_REPO=( "https:///dreamworksanimation/openvdb.git" )
#~ OPENVDB_SOURCE_REPO_UID="404659fffa659da075d1c9416e4fc939139a84ee" #~ OPENVDB_SOURCE_REPO_UID="404659fffa659da075d1c9416e4fc939139a84ee"
#~ OPENVDB_SOURCE_REPO_BRANCH="dev" #~ OPENVDB_SOURCE_REPO_BRANCH="dev"
NANOVDB_USE_REPO=false
NANOVDB_SOURCE_REPO_UID="e62f7a0bf1e27397223c61ddeaaf57edf111b77f"
NANOVDB_SOURCE=( "https://github.com/AcademySoftwareFoundation/openvdb/archive/${NANOVDB_SOURCE_REPO_UID}.tar.gz" )
ALEMBIC_USE_REPO=false ALEMBIC_USE_REPO=false
ALEMBIC_SOURCE=( "https://github.com/alembic/alembic/archive/${ALEMBIC_VERSION}.tar.gz" ) ALEMBIC_SOURCE=( "https://github.com/alembic/alembic/archive/${ALEMBIC_VERSION}.tar.gz" )
# ALEMBIC_SOURCE_REPO=( "https://github.com/alembic/alembic.git" ) # ALEMBIC_SOURCE_REPO=( "https://github.com/alembic/alembic.git" )
@@ -2064,6 +2048,7 @@ compile_OIIO() {
cmake_d="$cmake_d -D CMAKE_PREFIX_PATH=$_inst" cmake_d="$cmake_d -D CMAKE_PREFIX_PATH=$_inst"
cmake_d="$cmake_d -D CMAKE_INSTALL_PREFIX=$_inst" cmake_d="$cmake_d -D CMAKE_INSTALL_PREFIX=$_inst"
cmake_d="$cmake_d -D STOP_ON_WARNING=OFF" cmake_d="$cmake_d -D STOP_ON_WARNING=OFF"
cmake_d="$cmake_d -D BUILDSTATIC=OFF"
cmake_d="$cmake_d -D LINKSTATIC=OFF" cmake_d="$cmake_d -D LINKSTATIC=OFF"
cmake_d="$cmake_d -D USE_SIMD=sse2" cmake_d="$cmake_d -D USE_SIMD=sse2"
@@ -2085,7 +2070,7 @@ compile_OIIO() {
cmake_d="$cmake_d -D USE_OPENCV=OFF" cmake_d="$cmake_d -D USE_OPENCV=OFF"
cmake_d="$cmake_d -D BUILD_TESTING=OFF" cmake_d="$cmake_d -D BUILD_TESTING=OFF"
cmake_d="$cmake_d -D OIIO_BUILD_TESTS=OFF" cmake_d="$cmake_d -D OIIO_BUILD_TESTS=OFF"
cmake_d="$cmake_d -D OIIO_BUILD_TOOLS=ON" cmake_d="$cmake_d -D OIIO_BUILD_TOOLS=OFF"
cmake_d="$cmake_d -D TXT2MAN=" cmake_d="$cmake_d -D TXT2MAN="
#cmake_d="$cmake_d -D CMAKE_EXPORT_COMPILE_COMMANDS=ON" #cmake_d="$cmake_d -D CMAKE_EXPORT_COMPILE_COMMANDS=ON"
#cmake_d="$cmake_d -D CMAKE_VERBOSE_MAKEFILE=ON" #cmake_d="$cmake_d -D CMAKE_VERBOSE_MAKEFILE=ON"
@@ -2098,6 +2083,9 @@ compile_OIIO() {
# if [ -d $INST/ocio ]; then # if [ -d $INST/ocio ]; then
# cmake_d="$cmake_d -D OCIO_PATH=$INST/ocio" # cmake_d="$cmake_d -D OCIO_PATH=$INST/ocio"
# fi # fi
cmake_d="$cmake_d -D USE_OCIO=OFF"
cmake_d="$cmake_d -D OIIO_BUILD_CPP11=ON"
if file /bin/cp | grep -q '32-bit'; then if file /bin/cp | grep -q '32-bit'; then
cflags="-fPIC -m32 -march=i686" cflags="-fPIC -m32 -march=i686"
@@ -2606,115 +2594,11 @@ compile_BLOSC() {
# ---------------------------------------------------------------------------- # ----------------------------------------------------------------------------
# Build OpenVDB # Build OpenVDB
_init_nanovdb() {
_src=$SRC/openvdb-$OPENVDB_VERSION/nanovdb
_inst=$INST/nanovdb-$OPENVDB_VERSION_SHORT
_inst_shortcut=$INST/nanovdb
}
_update_deps_nanovdb() {
:
}
clean_nanovdb() {
_init_nanovdb
if [ -d $_inst ]; then
_update_deps_nanovdb
fi
_git=true # Mere trick to prevent clean from removing $_src...
_clean
}
install_NanoVDB() {
# To be changed each time we make edits that would modify the compiled results!
nanovdb_magic=1
_init_nanovdb
# Clean install if needed!
magic_compile_check nanovdb-$OPENVDB_VERSION $nanovdb_magic
if [ $? -eq 1 ]; then
clean_nanovdb
fi
if [ ! -d $_inst ]; then
INFO "Installing NanoVDB v$OPENVDB_VERSION"
_is_building=true
# Rebuild dependencies as well!
_update_deps_nanovdb
prepare_inst
if [ ! -d $_src ]; then
ERROR "NanoVDB not found in openvdb-$OPENVDB_VERSION ($_src), exiting"
exit 1
fi
# Always refresh the whole build!
if [ -d build ]; then
rm -rf build
fi
mkdir build
cd build
cmake_d="-D CMAKE_BUILD_TYPE=Release"
cmake_d="$cmake_d -D CMAKE_INSTALL_PREFIX=$_inst"
# NanoVDB is header-only, so only need the install target
cmake_d="$cmake_d -D NANOVDB_BUILD_UNITTESTS=OFF"
cmake_d="$cmake_d -D NANOVDB_BUILD_EXAMPLES=OFF"
cmake_d="$cmake_d -D NANOVDB_BUILD_BENCHMARK=OFF"
cmake_d="$cmake_d -D NANOVDB_BUILD_DOCS=OFF"
cmake_d="$cmake_d -D NANOVDB_BUILD_TOOLS=OFF"
cmake_d="$cmake_d -D NANOVDB_CUDA_KEEP_PTX=OFF"
# Do not need to include any of the dependencies because of this
cmake_d="$cmake_d -D NANOVDB_USE_OPENVDB=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_OPENGL=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_OPENCL=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_CUDA=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_TBB=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_BLOSC=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_ZLIB=OFF"
cmake_d="$cmake_d -D NANOVDB_USE_OPTIX=OFF"
cmake_d="$cmake_d -D NANOVDB_ALLOW_FETCHCONTENT=OFF"
cmake $cmake_d $_src
make -j$THREADS install
make clean
#~ mkdir -p $_inst
#~ cp -r $_src/include $_inst/include
if [ -d $_inst ]; then
_create_inst_shortcut
else
ERROR "NanoVDB-v$OPENVDB_VERSION failed to install, exiting"
exit 1
fi
magic_compile_set nanovdb-$OPENVDB_VERSION $nanovdb_magic
cd $CWD
INFO "Done compiling NanoVDB-v$OPENVDB_VERSION!"
_is_building=false
else
INFO "Own NanoVDB-v$OPENVDB_VERSION is up to date, nothing to do!"
fi
}
_init_openvdb() { _init_openvdb() {
_src=$SRC/openvdb-$OPENVDB_VERSION _src=$SRC/openvdb-$OPENVDB_VERSION
_git=false _git=false
_inst=$INST/openvdb-$OPENVDB_VERSION_SHORT _inst=$INST/openvdb-$OPENVDB_VERSION_SHORT
_inst_shortcut=$INST/openvdb _inst_shortcut=$INST/openvdb
_openvdb_source=$OPENVDB_SOURCE
if [ "$WITH_NANOVDB" = true ]; then
_openvdb_source=$NANOVDB_SOURCE
fi
} }
_update_deps_openvdb() { _update_deps_openvdb() {
@@ -2739,7 +2623,7 @@ compile_OPENVDB() {
PRINT "" PRINT ""
# To be changed each time we make edits that would modify the compiled result! # To be changed each time we make edits that would modify the compiled result!
openvdb_magic=2 openvdb_magic=1
_init_openvdb _init_openvdb
# Clean install if needed! # Clean install if needed!
@@ -2749,7 +2633,7 @@ compile_OPENVDB() {
fi fi
if [ ! -d $_inst ]; then if [ ! -d $_inst ]; then
INFO "Building OpenVDB-$OPENVDB_VERSION (with NanoVDB: $WITH_NANOVDB)" INFO "Building OpenVDB-$OPENVDB_VERSION"
_is_building=true _is_building=true
# Rebuild dependencies as well! # Rebuild dependencies as well!
@@ -2757,17 +2641,12 @@ compile_OPENVDB() {
prepare_inst prepare_inst
if [ ! -d $_src ]; then if [ ! -d $_src -o true ]; then
mkdir -p $SRC mkdir -p $SRC
download _openvdb_source[@] "$_src.tar.gz" download OPENVDB_SOURCE[@] "$_src.tar.gz"
INFO "Unpacking OpenVDB-$OPENVDB_VERSION" INFO "Unpacking OpenVDB-$OPENVDB_VERSION"
if [ "$WITH_NANOVDB" = true ]; then tar -C $SRC -xf $_src.tar.gz
tar -C $SRC --transform "s,(.*/?)openvdb-$NANOVDB_SOURCE_REPO_UID[^/]*(.*),\1openvdb-$OPENVDB_VERSION\2,x" \
-xf $_src.tar.gz
else
tar -C $SRC -xf $_src.tar.gz
fi
fi fi
cd $_src cd $_src
@@ -2781,40 +2660,33 @@ compile_OPENVDB() {
#~ git reset --hard #~ git reset --hard
#~ fi #~ fi
# Always refresh the whole build! # Source builds here
if [ -d build ]; then cd openvdb
rm -rf build
fi
mkdir build
cd build
cmake_d="-D CMAKE_BUILD_TYPE=Release" make_d="DESTDIR=$_inst"
cmake_d="$cmake_d -D CMAKE_INSTALL_PREFIX=$_inst" make_d="$make_d HDSO=/usr"
cmake_d="$cmake_d -D USE_STATIC_DEPENDENCIES=OFF"
cmake_d="$cmake_d -D OPENVDB_BUILD_BINARIES=OFF"
if [ -d $INST/boost ]; then if [ -d $INST/boost ]; then
cmake_d="$cmake_d -D BOOST_ROOT=$INST/boost" make_d="$make_d BOOST_INCL_DIR=$INST/boost/include BOOST_LIB_DIR=$INST/boost/lib"
cmake_d="$cmake_d -D Boost_USE_MULTITHREADED=ON"
cmake_d="$cmake_d -D Boost_NO_SYSTEM_PATHS=ON"
cmake_d="$cmake_d -D Boost_NO_BOOST_CMAKE=ON"
fi fi
if [ -d $INST/tbb ]; then if [ -d $INST/tbb ]; then
cmake_d="$cmake_d -D TBB_ROOT=$INST/tbb" make_d="$make_d TBB_ROOT=$INST/tbb TBB_USE_STATIC_LIBS=OFF"
fi fi
if [ "$_with_built_openexr" = true ]; then if [ "$_with_built_openexr" = true ]; then
cmake_d="$cmake_d -D IlmBase_ROOT=$INST/openexr" make_d="$make_d ILMBASE_INCL_DIR=$INST/openexr/include ILMBASE_LIB_DIR=$INST/openexr/lib"
cmake_d="$cmake_d -D OpenEXR_ROOT=$INST/openexr" make_d="$make_d EXR_INCL_DIR=$INST/openexr/include EXR_LIB_DIR=$INST/openexr/lib"
INFO "ILMBASE_HOME=$INST/openexr"
fi fi
if [ -d $INST/blosc ]; then if [ -d $INST/blosc ]; then
cmake_d="$cmake_d -D Blosc_ROOT=$INST/blosc" make_d="$make_d BLOSC_INCL_DIR=$INST/blosc/include BLOSC_LIB_DIR=$INST/blosc/lib"
fi fi
cmake $cmake_d .. # Build without log4cplus, glfw, python module & docs
make_d="$make_d LOG4CPLUS_INCL_DIR= GLFW_INCL_DIR= PYTHON_VERSION= DOXYGEN="
make -j$THREADS install make -j$THREADS lib $make_d install
make clean make clean
if [ -d $_inst ]; then if [ -d $_inst ]; then
@@ -2835,10 +2707,6 @@ compile_OPENVDB() {
fi fi
run_ldconfig "openvdb" run_ldconfig "openvdb"
if [ "$WITH_NANOVDB" = true ]; then
install_NanoVDB
fi
} }
# ---------------------------------------------------------------------------- # ----------------------------------------------------------------------------
@@ -4068,7 +3936,7 @@ install_DEB() {
else else
check_package_version_ge_lt_DEB libopenimageio-dev $OIIO_VERSION_MIN $OIIO_VERSION_MAX check_package_version_ge_lt_DEB libopenimageio-dev $OIIO_VERSION_MIN $OIIO_VERSION_MAX
if [ $? -eq 0 -a "$_with_built_openexr" = false ]; then if [ $? -eq 0 -a "$_with_built_openexr" = false ]; then
install_packages_DEB libopenimageio-dev openimageio-tools install_packages_DEB libopenimageio-dev
clean_OIIO clean_OIIO
else else
compile_OIIO compile_OIIO
@@ -4710,13 +4578,13 @@ install_RPM() {
INFO "Forced OpenImageIO building, as requested..." INFO "Forced OpenImageIO building, as requested..."
compile_OIIO compile_OIIO
else else
check_package_version_ge_lt_RPM OpenImageIO-devel $OIIO_VERSION_MIN $OIIO_VERSION_MAX #check_package_version_ge_lt_RPM OpenImageIO-devel $OIIO_VERSION_MIN $OIIO_VERSION_MAX
if [ $? -eq 0 -a $_with_built_openexr == false ]; then #if [ $? -eq 0 -a $_with_built_openexr == false ]; then
install_packages_RPM OpenImageIO-devel OpenImageIO-utils # install_packages_RPM OpenImageIO-devel
clean_OIIO # clean_OIIO
else #else
compile_OIIO compile_OIIO
fi #fi
fi fi
@@ -5823,13 +5691,6 @@ print_info() {
PRINT " $_1" PRINT " $_1"
_buildargs="$_buildargs $_1" _buildargs="$_buildargs $_1"
fi fi
if [ -d $INST/nanovdb ]; then
_1="-D WITH_NANOVDB=ON"
_2="-D NANOVDB_ROOT_DIR=$INST/nanovdb"
PRINT " $_1"
PRINT " $_2"
_buildargs="$_buildargs $_1 $_2"
fi
fi fi
if [ "$WITH_OPENCOLLADA" = true ]; then if [ "$WITH_OPENCOLLADA" = true ]; then


@@ -1,12 +0,0 @@
diff --git a/src/hpdf_image_ccitt.c b/src/hpdf_image_ccitt.c
index 8672763..9be531a 100644
--- a/src/hpdf_image_ccitt.c
+++ b/src/hpdf_image_ccitt.c
@@ -21,7 +21,6 @@
#include <memory.h>
#include <assert.h>
-#define G3CODES
#include "t4.h"
typedef unsigned int uint32;


@@ -34,24 +34,3 @@ diff -Naur orig/src/include/OpenImageIO/platform.h external_openimageio/src/incl
# include <windows.h> # include <windows.h>
#endif #endif
diff -Naur orig/src/libutil/ustring.cpp external_openimageio/src/libutil/ustring.cpp
--- orig/src/libutil/ustring.cpp 2020-05-11 05:43:52.000000000 +0200
+++ external_openimageio/src/libutil/ustring.cpp 2020-11-26 12:06:08.000000000 +0100
@@ -337,6 +337,8 @@
// the std::string to make it point to our chars! In such a case, the
// destructor will be careful not to allow a deallocation.
+ // Disable internal std::string for Apple silicon based Macs
+#if !(defined(__APPLE__) && defined(__arm64__))
#if defined(__GNUC__) && !defined(_LIBCPP_VERSION) \
&& defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
// NEW gcc ABI
@@ -382,7 +384,7 @@
return;
}
#endif
-
+#endif
// Remaining cases - just assign the internal string. This may result
// in double allocation for the chars. If you care about that, do
// something special for your platform, much like we did for gcc and


@@ -0,0 +1,135 @@
diff -Naur orig/cmake/FindIlmBase.cmake openvdb/cmake/FindIlmBase.cmake
--- orig/cmake/FindIlmBase.cmake 2019-12-06 12:11:33 -0700
+++ openvdb/cmake/FindIlmBase.cmake 2020-08-12 12:48:44 -0600
@@ -217,6 +217,8 @@
set(CMAKE_FIND_LIBRARY_SUFFIXES ".lib")
endif()
list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "${_IlmBase_Version_Suffix}.lib")
+ list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "_s.lib")
+ list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "_s_d.lib")
else()
if(ILMBASE_USE_STATIC_LIBS)
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")
diff -Naur orig/cmake/FindOpenEXR.cmake openvdb/cmake/FindOpenEXR.cmake
--- orig/cmake/FindOpenEXR.cmake 2019-12-06 12:11:33 -0700
+++ openvdb/cmake/FindOpenEXR.cmake 2020-08-12 12:48:44 -0600
@@ -210,6 +210,8 @@
set(CMAKE_FIND_LIBRARY_SUFFIXES ".lib")
endif()
list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "${_OpenEXR_Version_Suffix}.lib")
+ list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "_s.lib")
+ list(APPEND CMAKE_FIND_LIBRARY_SUFFIXES "_s_d.lib")
else()
if(OPENEXR_USE_STATIC_LIBS)
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")
diff -Naur orig/openvdb/openvdb/CMakeLists.txt openvdb/openvdb/openvdb/CMakeLists.txt
--- orig/openvdb/openvdb/CMakeLists.txt 2019-12-06 12:11:33 -0700
+++ openvdb/openvdb/openvdb/CMakeLists.txt 2020-08-12 14:12:26 -0600
@@ -105,7 +105,9 @@
# http://boost.2283326.n4.nabble.com/CMake-config-scripts-broken-in-1-70-td4708957.html
# https://github.com/boostorg/boost_install/commit/160c7cb2b2c720e74463865ef0454d4c4cd9ae7c
set(BUILD_SHARED_LIBS ON)
- set(Boost_USE_STATIC_LIBS OFF)
+ if(NOT WIN32) # blender links boost statically on windows
+ set(Boost_USE_STATIC_LIBS OFF)
+ endif()
endif()
find_package(Boost ${MINIMUM_BOOST_VERSION} REQUIRED COMPONENTS iostreams system)
@@ -193,6 +195,7 @@
if(OPENVDB_DISABLE_BOOST_IMPLICIT_LINKING)
add_definitions(-DBOOST_ALL_NO_LIB)
endif()
+ add_definitions(-D__TBB_NO_IMPLICIT_LINKAGE -DOPENVDB_OPENEXR_STATICLIB)
endif()
# @todo Should be target definitions
@@ -383,7 +386,12 @@
# imported targets.
if(OPENVDB_CORE_SHARED)
- add_library(openvdb_shared SHARED ${OPENVDB_LIBRARY_SOURCE_FILES})
+ if(WIN32)
+ configure_file(version.rc.in ${CMAKE_CURRENT_BINARY_DIR}/version.rc @ONLY)
+ add_library(openvdb_shared SHARED ${OPENVDB_LIBRARY_SOURCE_FILES} ${CMAKE_CURRENT_BINARY_DIR}/version.rc)
+ else()
+ add_library(openvdb_shared SHARED ${OPENVDB_LIBRARY_SOURCE_FILES})
+ endif()
endif()
if(OPENVDB_CORE_STATIC)
diff -Naur orig/openvdb/openvdb/version.rc.in openvdb/openvdb/openvdb/version.rc.in
--- orig/openvdb/openvdb/version.rc.in 1969-12-31 17:00:00 -0700
+++ openvdb/openvdb/openvdb/version.rc.in 2020-08-12 14:15:01 -0600
@@ -0,0 +1,48 @@
+#include <winver.h>
+
+#define VER_FILEVERSION @OpenVDB_MAJOR_VERSION@,@OpenVDB_MINOR_VERSION@,@OpenVDB_PATCH_VERSION@,0
+#define VER_FILEVERSION_STR "@OpenVDB_MAJOR_VERSION@.@OpenVDB_MINOR_VERSION@.@OpenVDB_PATCH_VERSION@.0\0"
+
+#define VER_PRODUCTVERSION @OpenVDB_MAJOR_VERSION@,@OpenVDB_MINOR_VERSION@,@OpenVDB_PATCH_VERSION@,0
+#define VER_PRODUCTVERSION_STR "@OpenVDB_MAJOR_VERSION@.@OpenVDB_MINOR_VERSION@\0"
+
+#ifndef DEBUG
+#define VER_DEBUG 0
+#else
+#define VER_DEBUG VS_FF_DEBUG
+#endif
+
+VS_VERSION_INFO VERSIONINFO
+FILEVERSION VER_FILEVERSION
+PRODUCTVERSION VER_PRODUCTVERSION
+FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
+FILEFLAGS (VER_DEBUG)
+FILEOS VOS__WINDOWS32
+FILETYPE VFT_DLL
+FILESUBTYPE VFT2_UNKNOWN
+BEGIN
+ BLOCK "StringFileInfo"
+ BEGIN
+ BLOCK "040904E4"
+ BEGIN
+ VALUE "FileDescription", "OpenVDB"
+ VALUE "FileVersion", VER_FILEVERSION_STR
+ VALUE "InternalName", "OpenVDB"
+ VALUE "ProductName", "OpenVDB"
+ VALUE "ProductVersion", VER_PRODUCTVERSION_STR
+ END
+ END
+
+ BLOCK "VarFileInfo"
+ BEGIN
+ /* The following line should only be modified for localized versions. */
+ /* It consists of any number of WORD,WORD pairs, with each pair */
+ /* describing a language,codepage combination supported by the file. */
+ /* */
+ /* For example, a file might have values "0x409,1252" indicating that it */
+ /* supports English language (0x409) in the Windows ANSI codepage (1252). */
+
+ VALUE "Translation", 0x409, 1252
+
+ END
+END
diff -Naur openvdb-original/CMakeLists.txt openvdb/CMakeLists.txt
--- openvdb-original/CMakeLists.txt 2020-08-27 03:34:02.000000000 +0200
+++ openvdb/CMakeLists.txt 2020-09-02 10:56:21.665735244 +0200
@@ -68,6 +68,7 @@
option(OPENVDB_INSTALL_HOUDINI_PYTHONRC [=[Install a Houdini startup script that sets
the visibilty of OpenVDB nodes and their native equivalents.]=] OFF)
option(OPENVDB_BUILD_MAYA_PLUGIN "Build the Maya plugin" OFF)
+option(OPENVDB_BUILD_NANOVDB "Build nanovdb" ON)
option(OPENVDB_ENABLE_RPATH "Build with RPATH information" ON)
option(OPENVDB_CXX_STRICT "Enable or disable pre-defined compiler warnings" OFF)
option(OPENVDB_CODE_COVERAGE "Enable code coverage. This also overrides CMAKE_BUILD_TYPE to Debug" OFF)
@@ -740,6 +741,10 @@
add_subdirectory(openvdb_maya)
endif()
+if(OPENVDB_BUILD_NANOVDB)
+ add_subdirectory(nanovdb)
+endif()
+
##########################################################################
add_custom_target(uninstall


@@ -88,6 +88,7 @@ class VersionInfo:
self.short_version = "%d.%02d" % (version_numbers[0], version_numbers[1]) self.short_version = "%d.%02d" % (version_numbers[0], version_numbers[1])
self.version = "%d.%02d.%d" % version_numbers self.version = "%d.%02d.%d" % version_numbers
self.version_cycle = self._parse_header_file(blender_h, 'BLENDER_VERSION_CYCLE') self.version_cycle = self._parse_header_file(blender_h, 'BLENDER_VERSION_CYCLE')
self.version_cycle_number = self._parse_header_file(blender_h, 'BLENDER_VERSION_CYCLE_NUMBER')
self.hash = self._parse_header_file(buildinfo_h, 'BUILD_HASH')[1:-1] self.hash = self._parse_header_file(buildinfo_h, 'BUILD_HASH')[1:-1]
if self.version_cycle == "release": if self.version_cycle == "release":
@@ -96,7 +97,8 @@ class VersionInfo:
self.is_development_build = False self.is_development_build = False
elif self.version_cycle == "rc": elif self.version_cycle == "rc":
# Release candidate # Release candidate
self.full_version = self.version + self.version_cycle version_cycle = self.version_cycle + self.version_cycle_number
self.full_version = self.version + version_cycle
self.is_development_build = False self.is_development_build = False
else: else:
# Development build # Development build
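
One side of the VersionInfo hunk appends only the cycle name to a release candidate's version, the other also appends the newly parsed BLENDER_VERSION_CYCLE_NUMBER. A minimal sketch of that composition with invented inputs (the non-rc branches are placeholders, not taken from the hunk):

def full_version(version: str, cycle: str, cycle_number: str) -> str:
    if cycle == "release":
        return version                          # assumed: plain version, e.g. "2.92.0"
    if cycle == "rc":
        return version + cycle + cycle_number   # e.g. "2.92.0rc3"
    return version + "-" + cycle                # placeholder for development builds

print(full_version("2.92.0", "rc", "3"))        # -> 2.92.0rc3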


@@ -18,72 +18,12 @@
# <pep8 compliant> # <pep8 compliant>
import dataclasses
import json
import os import os
from pathlib import Path from pathlib import Path
from typing import Optional
import codesign.util as util import codesign.util as util
class ArchiveStateError(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)
@dataclasses.dataclass
class ArchiveState:
"""
Additional information (state) of the archive
Includes information like expected file size of the archive file in the case
the archive file is expected to be successfully created.
If the archive can not be created, this state will contain error message
indicating details of error.
"""
# Size in bytes of the corresponding archive.
file_size: Optional[int] = None
# Non-empty value indicates that error has happenned.
error_message: str = ''
def has_error(self) -> bool:
"""
Check whether the archive is at error state
"""
return self.error_message
def serialize_to_string(self) -> str:
payload = dataclasses.asdict(self)
return json.dumps(payload, sort_keys=True, indent=4)
def serialize_to_file(self, filepath: Path) -> None:
string = self.serialize_to_string()
filepath.write_text(string)
@classmethod
def deserialize_from_string(cls, string: str) -> 'ArchiveState':
try:
object_as_dict = json.loads(string)
except json.decoder.JSONDecodeError:
raise ArchiveStateError('Error parsing JSON')
return cls(**object_as_dict)
@classmethod
def deserialize_from_file(cls, filepath: Path):
string = filepath.read_text()
return cls.deserialize_from_string(string)
class ArchiveWithIndicator: class ArchiveWithIndicator:
""" """
The idea of this class is to wrap around logic which takes care of keeping The idea of this class is to wrap around logic which takes care of keeping
@@ -139,19 +79,6 @@ class ArchiveWithIndicator:
if not self.ready_indicator_filepath.exists(): if not self.ready_indicator_filepath.exists():
return False return False
try:
archive_state = ArchiveState.deserialize_from_file(
self.ready_indicator_filepath)
except ArchiveStateError as error:
print(f'Error deserializing archive state: {error.message}')
return False
if archive_state.has_error():
# If the error did happen during codesign procedure there will be no
# corresponding archive file.
# The caller code will deal with the error check further.
return True
# Sometimes on macOS indicator file appears prior to the actual archive # Sometimes on macOS indicator file appears prior to the actual archive
# despite the order of creation and os.sync() used in tag_ready(). # despite the order of creation and os.sync() used in tag_ready().
# So consider archive not ready if there is an indicator without an # So consider archive not ready if there is an indicator without an
@@ -161,11 +88,23 @@ class ArchiveWithIndicator:
f'({self.archive_filepath}) to appear.') f'({self.archive_filepath}) to appear.')
return False return False
# Read archive size from indicator/
#
# Assume that file is either empty or is fully written. This is being checked
# by performing ValueError check since empty string will throw this exception
# when attempted to be converted to int.
expected_archive_size_str = self.ready_indicator_filepath.read_text()
try:
expected_archive_size = int(expected_archive_size_str)
except ValueError:
print(f'Invalid archive size "{expected_archive_size_str}"')
return False
# Wait for until archive is fully stored. # Wait for until archive is fully stored.
actual_archive_size = self.archive_filepath.stat().st_size actual_archive_size = self.archive_filepath.stat().st_size
if actual_archive_size != archive_state.file_size: if actual_archive_size != expected_archive_size:
print('Partial/invalid archive size (expected ' print('Partial/invalid archive size (expected '
f'{archive_state.file_size} got {actual_archive_size})') f'{expected_archive_size} got {actual_archive_size})')
return False return False
return True return True
@@ -190,7 +129,7 @@ class ArchiveWithIndicator:
print(f'Exception checking archive: {e}') print(f'Exception checking archive: {e}')
return False return False
def tag_ready(self, error_message='') -> None: def tag_ready(self) -> None:
""" """
Tag the archive as ready by creating the corresponding indication file. Tag the archive as ready by creating the corresponding indication file.
@@ -199,34 +138,13 @@ class ArchiveWithIndicator:
If it is violated, an assert will fail. If it is violated, an assert will fail.
""" """
assert not self.is_ready() assert not self.is_ready()
# Try the best to make sure everything is synced to the file system, # Try the best to make sure everything is synced to the file system,
# to avoid any possibility of stamp appearing on a network share prior to # to avoid any possibility of stamp appearing on a network share prior to
# an actual file. # an actual file.
if util.get_current_platform() != util.Platform.WINDOWS: if util.get_current_platform() != util.Platform.WINDOWS:
os.sync() os.sync()
archive_size = self.archive_filepath.stat().st_size
archive_size = -1 self.ready_indicator_filepath.write_text(str(archive_size))
if self.archive_filepath.exists():
archive_size = self.archive_filepath.stat().st_size
archive_info = ArchiveState(
file_size=archive_size, error_message=error_message)
self.ready_indicator_filepath.write_text(
archive_info.serialize_to_string())
def get_state(self) -> ArchiveState:
"""
Get state object for this archive
The state is read from the corresponding state file.
"""
try:
return ArchiveState.deserialize_from_file(self.ready_indicator_filepath)
except ArchiveStateError as error:
return ArchiveState(error_message=f'Error in information format: {error}')
def clean(self) -> None: def clean(self) -> None:
""" """


@@ -58,7 +58,6 @@ import codesign.util as util
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.archive_with_indicator import ArchiveWithIndicator from codesign.archive_with_indicator import ArchiveWithIndicator
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -146,13 +145,13 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
def cleanup_environment_for_builder(self) -> None: def cleanup_environment_for_builder(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files. # TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients # In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky. # talking to the same server it becomes even mor etricky.
pass pass
def cleanup_environment_for_signing_server(self) -> None: def cleanup_environment_for_signing_server(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files. # TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients # In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky. # talking to the same server it becomes even mor etricky.
pass pass
def generate_request_id(self) -> str: def generate_request_id(self) -> str:
@@ -221,15 +220,9 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
""" """
Wait until archive with signed files is available. Wait until archive with signed files is available.
Will only return if the archive with signed files is available. If there
was an error during code sign procedure the SystemExit exception is
raised, with the message set to the error reported by the codesign
server.
Will only wait for the configured time. If that time exceeds and there Will only wait for the configured time. If that time exceeds and there
is still no responce from the signing server the application will exit is still no responce from the signing server the application will exit
with a non-zero exit code. with a non-zero exit code.
""" """
signed_archive_info = self.signed_archive_info_for_request_id( signed_archive_info = self.signed_archive_info_for_request_id(
@@ -243,17 +236,9 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
time.sleep(1) time.sleep(1)
time_slept_in_seconds = time.monotonic() - time_start time_slept_in_seconds = time.monotonic() - time_start
if time_slept_in_seconds > timeout_in_seconds: if time_slept_in_seconds > timeout_in_seconds:
signed_archive_info.clean()
unsigned_archive_info.clean() unsigned_archive_info.clean()
raise SystemExit("Signing server didn't finish signing in " raise SystemExit("Signing server didn't finish signing in "
f'{timeout_in_seconds} seconds, dying :(') f"{timeout_in_seconds} seconds, dying :(")
archive_state = signed_archive_info.get_state()
if archive_state.has_error():
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit(
f'Error happenned during codesign procedure: {archive_state.error_message}')
def copy_signed_files_to_directory( def copy_signed_files_to_directory(
self, signed_dir: Path, destination_dir: Path) -> None: self, signed_dir: Path, destination_dir: Path) -> None:
@@ -411,13 +396,7 @@ class BaseCodeSigner(metaclass=abc.ABCMeta):
temp_dir) temp_dir)
logger_server.info('Signing all requested files...') logger_server.info('Signing all requested files...')
try: self.sign_all_files(files)
self.sign_all_files(files)
except CodeSignException as error:
signed_archive_info.tag_ready(error_message=error.message)
unsigned_archive_info.clean()
logger_server.info('Signing is complete with errors.')
return
logger_server.info('Packing signed files...') logger_server.info('Packing signed files...')
pack_files(files=files, pack_files(files=files,


@@ -1,26 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
class CodeSignException(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)


@@ -33,7 +33,6 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
logger_server = logger.getChild('server') logger_server = logger.getChild('server')
@@ -46,10 +45,6 @@ EXTENSIONS_TO_BE_SIGNED = {'.dylib', '.so', '.dmg'}
NAME_PREFIXES_TO_BE_SIGNED = {'python'} NAME_PREFIXES_TO_BE_SIGNED = {'python'}
class NotarizationException(CodeSignException):
pass
def is_file_from_bundle(file: AbsoluteAndRelativeFileName) -> bool: def is_file_from_bundle(file: AbsoluteAndRelativeFileName) -> bool:
""" """
Check whether file is coming from an .app bundle Check whether file is coming from an .app bundle
@@ -191,7 +186,7 @@ class MacOSCodeSigner(BaseCodeSigner):
file.absolute_filepath] file.absolute_filepath]
self.run_command_or_mock(command, util.Platform.MACOS) self.run_command_or_mock(command, util.Platform.MACOS)
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None: def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> bool:
""" """
Run codesign tool on all eligible files in the given list. Run codesign tool on all eligible files in the given list.
@@ -230,6 +225,8 @@ class MacOSCodeSigner(BaseCodeSigner):
file_index + 1, num_signed_files, file_index + 1, num_signed_files,
signed_file.relative_filepath) signed_file.relative_filepath)
return True
def codesign_bundles( def codesign_bundles(
self, files: List[AbsoluteAndRelativeFileName]) -> None: self, files: List[AbsoluteAndRelativeFileName]) -> None:
""" """
@@ -276,6 +273,8 @@ class MacOSCodeSigner(BaseCodeSigner):
files.extend(extra_files) files.extend(extra_files)
return True
############################################################################ ############################################################################
# Notarization. # Notarization.
@@ -335,40 +334,7 @@ class MacOSCodeSigner(BaseCodeSigner):
logger_server.error('xcrun command did not report RequestUUID') logger_server.error('xcrun command did not report RequestUUID')
return None return None
def notarize_review_status(self, xcrun_output: str) -> bool: def notarize_wait_result(self, request_uuid: str) -> bool:
"""
Review status returned by xcrun's notarization info
Returns truth if the notarization process has finished.
If there are errors during notarization, a NotarizationException()
exception is thrown with status message from the notarial office.
"""
# Parse status and message
status = xcrun_field_value_from_output('Status', xcrun_output)
status_message = xcrun_field_value_from_output(
'Status Message', xcrun_output)
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
if status == 'invalid':
logger_server.error(xcrun_output)
logger_server.error(
'Package notarization has failed: %s', status_message)
raise NotarizationException(status_message)
if status == 'in progress':
return False
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
return False
def notarize_wait_result(self, request_uuid: str) -> None:
""" """
Wait for until notarial office have a reply Wait for until notarial office have a reply
""" """
@@ -385,11 +351,29 @@ class MacOSCodeSigner(BaseCodeSigner):
timeout_in_seconds = self.config.MACOS_NOTARIZE_TIMEOUT_IN_SECONDS timeout_in_seconds = self.config.MACOS_NOTARIZE_TIMEOUT_IN_SECONDS
while True: while True:
xcrun_output = self.check_output_or_mock( output = self.check_output_or_mock(
command, util.Platform.MACOS, allow_nonzero_exit_code=True) command, util.Platform.MACOS, allow_nonzero_exit_code=True)
# Parse status and message
status = xcrun_field_value_from_output('Status', output)
status_message = xcrun_field_value_from_output(
'Status Message', output)
if self.notarize_review_status(xcrun_output): # Review status.
break if status:
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
elif status == 'invalid':
logger_server.error(output)
logger_server.error(
'Package notarization has failed: %s', status_message)
return False
elif status == 'in progress':
pass
else:
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
logger_server.info('Keep waiting for notarization office.') logger_server.info('Keep waiting for notarization office.')
time.sleep(30) time.sleep(30)
@@ -410,6 +394,8 @@ class MacOSCodeSigner(BaseCodeSigner):
command = ['xcrun', 'stapler', 'staple', '-v', file.absolute_filepath] command = ['xcrun', 'stapler', 'staple', '-v', file.absolute_filepath]
self.check_output_or_mock(command, util.Platform.MACOS) self.check_output_or_mock(command, util.Platform.MACOS)
return True
def notarize_dmg(self, file: AbsoluteAndRelativeFileName) -> bool: def notarize_dmg(self, file: AbsoluteAndRelativeFileName) -> bool:
""" """
Run entire pipeline to get DMG notarized. Run entire pipeline to get DMG notarized.
@@ -428,7 +414,10 @@ class MacOSCodeSigner(BaseCodeSigner):
return False return False
# Staple. # Staple.
self.notarize_staple(file) if not self.notarize_staple(file):
return False
return True
def notarize_all_dmg( def notarize_all_dmg(
self, files: List[AbsoluteAndRelativeFileName]) -> bool: self, files: List[AbsoluteAndRelativeFileName]) -> bool:
@@ -443,7 +432,10 @@ class MacOSCodeSigner(BaseCodeSigner):
if not self.check_file_is_to_be_signed(file): if not self.check_file_is_to_be_signed(file):
continue continue
self.notarize_dmg(file) if not self.notarize_dmg(file):
return False
return True
############################################################################ ############################################################################
# Entry point. # Entry point.
@@ -451,6 +443,11 @@ class MacOSCodeSigner(BaseCodeSigner):
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None: def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# TODO(sergey): Handle errors somehow. # TODO(sergey): Handle errors somehow.
self.codesign_all_files(files) if not self.codesign_all_files(files):
self.codesign_bundles(files) return
self.notarize_all_dmg(files)
if not self.codesign_bundles(files):
return
if not self.notarize_all_dmg(files):
return
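
Both variants of notarize_wait_result poll the notarization service through xcrun and branch on the parsed 'Status' / 'Status Message' fields. A simplified stand-in for that parsing and the resulting poll decision (the helper below is illustrative, not the real xcrun_field_value_from_output, and the sample output is fabricated):

from typing import Optional

def field_value_from_output(field: str, output: str) -> Optional[str]:
    # Return the value of a "Field: value" line from xcrun's output, if any.
    prefix = field + ":"
    for line in output.splitlines():
        line = line.strip()
        if line.startswith(prefix):
            return line[len(prefix):].strip()
    return None

sample = "RequestUUID: 1234\nStatus: in progress\nStatus Message: \n"
status = field_value_from_output("Status", sample)
if status == "success":
    print("notarized")
elif status == "invalid":
    print("notarization failed")
else:
    print("keep polling")   # 'in progress' or unknown status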


@@ -29,7 +29,6 @@ from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
logger_server = logger.getChild('server') logger_server = logger.getChild('server')
@@ -41,9 +40,6 @@ BLACKLIST_FILE_PREFIXES = (
'api-ms-', 'concrt', 'msvcp', 'ucrtbase', 'vcomp', 'vcruntime') 'api-ms-', 'concrt', 'msvcp', 'ucrtbase', 'vcomp', 'vcruntime')
class SigntoolException(CodeSignException):
pass
class WindowsCodeSigner(BaseCodeSigner): class WindowsCodeSigner(BaseCodeSigner):
def check_file_is_to_be_signed( def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool: self, file: AbsoluteAndRelativeFileName) -> bool:
@@ -54,46 +50,12 @@ class WindowsCodeSigner(BaseCodeSigner):
return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED
def get_sign_command_prefix(self) -> List[str]: def get_sign_command_prefix(self) -> List[str]:
return [ return [
'signtool', 'sign', '/v', 'signtool', 'sign', '/v',
'/f', self.config.WIN_CERTIFICATE_FILEPATH, '/f', self.config.WIN_CERTIFICATE_FILEPATH,
'/tr', self.config.WIN_TIMESTAMP_AUTHORITY_URL] '/tr', self.config.WIN_TIMESTAMP_AUTHORITY_URL]
def run_codesign_tool(self, filepath: Path) -> None:
command = self.get_sign_command_prefix() + [filepath]
try:
codesign_output = self.check_output_or_mock(command, util.Platform.WINDOWS)
except subprocess.CalledProcessError as e:
raise SigntoolException(f'Error running signtool {e}')
logger_server.info(f'signtool output:\n{codesign_output}')
got_number_of_success = False
for line in codesign_output.split('\n'):
line_clean = line.strip()
line_clean_lower = line_clean.lower()
if line_clean_lower.startswith('number of warnings') or \
line_clean_lower.startswith('number of errors'):
number = int(line_clean_lower.split(':')[1])
if number != 0:
raise SigntoolException('Non-clean success of signtool')
if line_clean_lower.startswith('number of files successfully signed'):
got_number_of_success = True
number = int(line_clean_lower.split(':')[1])
if number != 1:
raise SigntoolException('Signtool did not consider codesign a success')
if not got_number_of_success:
raise SigntoolException('Signtool did not report number of files signed')
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None: def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# NOTE: Sign files one by one to avoid possible command line length # NOTE: Sign files one by one to avoid possible command line length
# overflow (which could happen if we ever decide to sign every binary # overflow (which could happen if we ever decide to sign every binary
@@ -111,7 +73,12 @@ class WindowsCodeSigner(BaseCodeSigner):
file_index + 1, num_files, file.relative_filepath) file_index + 1, num_files, file.relative_filepath)
continue continue
command = self.get_sign_command_prefix()
command.append(file.absolute_filepath)
logger_server.info( logger_server.info(
'Running signtool command for file [%d/%d] %s...', 'Running signtool command for file [%d/%d] %s...',
file_index + 1, num_files, file.relative_filepath) file_index + 1, num_files, file.relative_filepath)
self.run_codesign_tool(file.absolute_filepath) # TODO(sergey): Check the status somehow. With a missing certificate
# the command still exists with a zero code.
self.run_command_or_mock(command, util.Platform.WINDOWS)
# TODO(sergey): Report number of signed and ignored files.
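
One side of this diff validates signtool's own summary counters ('Number of errors', 'Number of warnings', 'Number of files successfully signed') rather than trusting the process exit code, which the other side only flags as a TODO. A small Python sketch of that check, run against fabricated signtool output:

def signtool_run_ok(output: str) -> bool:
    # Success only if zero errors/warnings and exactly one file was signed.
    signed_ok = False
    for line in output.splitlines():
        line = line.strip().lower()
        if line.startswith(("number of errors", "number of warnings")):
            if int(line.split(":")[1]) != 0:
                return False
        elif line.startswith("number of files successfully signed"):
            signed_ok = int(line.split(":")[1]) == 1
    return signed_ok

sample = ("Successfully signed: blender.exe\n"
          "Number of files successfully signed: 1\n"
          "Number of warnings: 0\n"
          "Number of errors: 0\n")
assert signtool_run_ok(sample)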


@@ -42,7 +42,7 @@ def get_cmake_options(builder):
elif builder.platform == 'linux': elif builder.platform == 'linux':
config_file = "build_files/buildbot/config/blender_linux.cmake" config_file = "build_files/buildbot/config/blender_linux.cmake"
optix_sdk_dir = os.path.join(builder.blender_dir, '..', '..', 'NVIDIA-Optix-SDK-7.1') optix_sdk_dir = os.path.join(builder.blender_dir, '..', '..', 'NVIDIA-Optix-SDK')
options.append('-DOPTIX_ROOT_DIR:PATH=' + optix_sdk_dir) options.append('-DOPTIX_ROOT_DIR:PATH=' + optix_sdk_dir)
# Workaround to build sm_30 kernels with CUDA 10, since CUDA 11 no longer supports that architecture # Workaround to build sm_30 kernels with CUDA 10, since CUDA 11 no longer supports that architecture


@@ -34,8 +34,6 @@ set(_clang_tidy_SEARCH_DIRS
# TODO(sergey): Find more reliable way of finding the latest clang-tidy. # TODO(sergey): Find more reliable way of finding the latest clang-tidy.
find_program(CLANG_TIDY_EXECUTABLE find_program(CLANG_TIDY_EXECUTABLE
NAMES NAMES
clang-tidy-12
clang-tidy-11
clang-tidy-10 clang-tidy-10
clang-tidy-9 clang-tidy-9
clang-tidy-8 clang-tidy-8
@@ -45,10 +43,7 @@ find_program(CLANG_TIDY_EXECUTABLE
${_clang_tidy_SEARCH_DIRS} ${_clang_tidy_SEARCH_DIRS}
) )
if(CLANG_TIDY_EXECUTABLE AND NOT EXISTS ${CLANG_TIDY_EXECUTABLE}) if(CLANG_TIDY_EXECUTABLE)
message(WARNING "Cached or directly specified Clang-Tidy executable does not exist.")
set(CLANG_TIDY_FOUND FALSE)
elseif(CLANG_TIDY_EXECUTABLE)
# Mark clang-tidy as found. # Mark clang-tidy as found.
set(CLANG_TIDY_FOUND TRUE) set(CLANG_TIDY_FOUND TRUE)


@@ -1,64 +0,0 @@
# - Find HARU library
# Find the native Haru includes and library
# This module defines
# HARU_INCLUDE_DIRS, where to find hpdf.h, set when
# HARU_INCLUDE_DIR is found.
# HARU_LIBRARIES, libraries to link against to use Haru.
# HARU_ROOT_DIR, The base directory to search for Haru.
# This can also be an environment variable.
# HARU_FOUND, If false, do not try to use Haru.
#
# also defined, but not for general use are
# HARU_LIBRARY, where to find the Haru library.
#=============================================================================
# Copyright 2021 Blender Foundation.
#
# Distributed under the OSI-approved BSD 3-Clause License,
# see accompanying file BSD-3-Clause-license.txt for details.
#=============================================================================
# If HARU_ROOT_DIR was defined in the environment, use it.
if(NOT HARU_ROOT_DIR AND NOT $ENV{HARU_ROOT_DIR} STREQUAL "")
set(HARU_ROOT_DIR $ENV{HARU_ROOT_DIR})
endif()
set(_haru_SEARCH_DIRS
${HARU_ROOT_DIR}
/opt/lib/haru
)
find_path(HARU_INCLUDE_DIR
NAMES
hpdf.h
HINTS
${_haru_SEARCH_DIRS}
PATH_SUFFIXES
include/haru
)
find_library(HARU_LIBRARY
NAMES
hpdfs
HINTS
${_haru_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
# Handle the QUIETLY and REQUIRED arguments and set HARU_FOUND to TRUE if
# all listed variables are TRUE.
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Haru DEFAULT_MSG HARU_LIBRARY HARU_INCLUDE_DIR)
if(HARU_FOUND)
set(HARU_LIBRARIES ${HARU_LIBRARY})
set(HARU_INCLUDE_DIRS ${HARU_INCLUDE_DIR})
endif()
mark_as_advanced(
HARU_INCLUDE_DIR
HARU_LIBRARY
)
unset(_haru_SEARCH_DIRS)


@@ -49,7 +49,7 @@ FIND_LIBRARY(PUGIXML_LIBRARY
# handle the QUIETLY and REQUIRED arguments and set PUGIXML_FOUND to TRUE if # handle the QUIETLY and REQUIRED arguments and set PUGIXML_FOUND to TRUE if
# all listed variables are TRUE # all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs) INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(PugiXML DEFAULT_MSG FIND_PACKAGE_HANDLE_STANDARD_ARGS(PUGIXML DEFAULT_MSG
PUGIXML_LIBRARY PUGIXML_INCLUDE_DIR) PUGIXML_LIBRARY PUGIXML_INCLUDE_DIR)
IF(PUGIXML_FOUND) IF(PUGIXML_FOUND)


@@ -330,9 +330,6 @@ function(gtest_add_tests)
set(gtest_case_name_regex ".*\\( *([A-Za-z_0-9]+) *, *([A-Za-z_0-9]+) *\\).*") set(gtest_case_name_regex ".*\\( *([A-Za-z_0-9]+) *, *([A-Za-z_0-9]+) *\\).*")
set(gtest_test_type_regex "(TYPED_TEST|TEST_?[FP]?)") set(gtest_test_type_regex "(TYPED_TEST|TEST_?[FP]?)")
# This will get a filter for each test suite.
set(test_filters "")
foreach(source IN LISTS ARGS_SOURCES) foreach(source IN LISTS ARGS_SOURCES)
if(NOT ARGS_SKIP_DEPENDENCY) if(NOT ARGS_SKIP_DEPENDENCY)
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS ${source}) set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS ${source})
@@ -379,32 +376,175 @@ function(gtest_add_tests)
list(APPEND testList ${ctest_test_name}) list(APPEND testList ${ctest_test_name})
endif() endif()
else() else()
# BLENDER: collect tests named "suite.testcase" as list of "suite.*" filters. set(ctest_test_name ${ARGS_TEST_PREFIX}${gtest_test_name}${ARGS_TEST_SUFFIX})
string(REGEX REPLACE "\\..*$" "" gtest_suite_name ${gtest_test_name}) add_test(NAME ${ctest_test_name}
list(APPEND test_filters "${gtest_suite_name}.*") ${workDir}
COMMAND ${ARGS_TARGET}
--gtest_filter=${gtest_test_name}
${ARGS_EXTRA_ARGS}
)
list(APPEND testList ${ctest_test_name})
endif() endif()
endforeach() endforeach()
endforeach() endforeach()
# Join all found GTest suite names into one big filter.
list(REMOVE_DUPLICATES test_filters)
list(JOIN test_filters ":" gtest_filter)
add_test(NAME ${ARGS_TEST_PREFIX}
${workDir}
COMMAND ${ARGS_TARGET}
--gtest_filter=${gtest_filter}
${ARGS_EXTRA_ARGS}
)
list(APPEND testList ${ARGS_TEST_PREFIX})
if(ARGS_TEST_LIST) if(ARGS_TEST_LIST)
set(${ARGS_TEST_LIST} ${testList} PARENT_SCOPE) set(${ARGS_TEST_LIST} ${testList} PARENT_SCOPE)
endif() endif()
endfunction() endfunction()
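
The BLENDER-specific branch above collapses individual 'Suite.TestCase' names into per-suite 'Suite.*' filters, deduplicates them, and registers one ctest entry with the joined --gtest_filter. A short Python sketch of that grouping, with example test names:

def build_gtest_filter(test_names):
    # Map "Suite.TestCase" -> "Suite.*", keeping order and dropping duplicates.
    suites = []
    for name in test_names:
        suite = name.split(".", 1)[0] + ".*"
        if suite not in suites:
            suites.append(suite)
    return ":".join(suites)

names = ["ArrayTest.Append", "ArrayTest.Remove", "MathTest.Clamp"]
print(build_gtest_filter(names))   # ArrayTest.*:MathTest.*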
# BLENDER: remove the discovery function gtest_discover_tests(). It's not used, #------------------------------------------------------------------------------
# as it generates too many test invocations.
function(gtest_discover_tests TARGET)
cmake_parse_arguments(
""
"NO_PRETTY_TYPES;NO_PRETTY_VALUES"
"TEST_PREFIX;TEST_SUFFIX;WORKING_DIRECTORY;TEST_LIST;DISCOVERY_TIMEOUT;XML_OUTPUT_DIR;DISCOVERY_MODE"
"EXTRA_ARGS;PROPERTIES"
${ARGN}
)
if(NOT _WORKING_DIRECTORY)
set(_WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
endif()
if(NOT _TEST_LIST)
set(_TEST_LIST ${TARGET}_TESTS)
endif()
if(NOT _DISCOVERY_TIMEOUT)
set(_DISCOVERY_TIMEOUT 5)
endif()
if(NOT _DISCOVERY_MODE)
if(NOT CMAKE_GTEST_DISCOVER_TESTS_DISCOVERY_MODE)
set(CMAKE_GTEST_DISCOVER_TESTS_DISCOVERY_MODE "POST_BUILD")
endif()
set(_DISCOVERY_MODE ${CMAKE_GTEST_DISCOVER_TESTS_DISCOVERY_MODE})
endif()
get_property(
has_counter
TARGET ${TARGET}
PROPERTY CTEST_DISCOVERED_TEST_COUNTER
SET
)
if(has_counter)
get_property(
counter
TARGET ${TARGET}
PROPERTY CTEST_DISCOVERED_TEST_COUNTER
)
math(EXPR counter "${counter} + 1")
else()
set(counter 1)
endif()
set_property(
TARGET ${TARGET}
PROPERTY CTEST_DISCOVERED_TEST_COUNTER
${counter}
)
# Define rule to generate test list for aforementioned test executable
# Blender: use _ instead of [] to avoid problems with zsh regex.
set(ctest_file_base "${CMAKE_CURRENT_BINARY_DIR}/${TARGET}_${counter}_")
set(ctest_include_file "${ctest_file_base}_include.cmake")
set(ctest_tests_file "${ctest_file_base}_tests.cmake")
get_property(crosscompiling_emulator
TARGET ${TARGET}
PROPERTY CROSSCOMPILING_EMULATOR
)
if(_DISCOVERY_MODE STREQUAL "POST_BUILD")
add_custom_command(
TARGET ${TARGET} POST_BUILD
BYPRODUCTS "${ctest_tests_file}"
COMMAND "${CMAKE_COMMAND}"
-D "TEST_TARGET=${TARGET}"
-D "TEST_EXECUTABLE=$<TARGET_FILE:${TARGET}>"
-D "TEST_EXECUTOR=${crosscompiling_emulator}"
-D "TEST_WORKING_DIR=${_WORKING_DIRECTORY}"
-D "TEST_EXTRA_ARGS=${_EXTRA_ARGS}"
-D "TEST_PROPERTIES=${_PROPERTIES}"
-D "TEST_PREFIX=${_TEST_PREFIX}"
-D "TEST_SUFFIX=${_TEST_SUFFIX}"
-D "NO_PRETTY_TYPES=${_NO_PRETTY_TYPES}"
-D "NO_PRETTY_VALUES=${_NO_PRETTY_VALUES}"
-D "TEST_LIST=${_TEST_LIST}"
-D "CTEST_FILE=${ctest_tests_file}"
-D "TEST_DISCOVERY_TIMEOUT=${_DISCOVERY_TIMEOUT}"
-D "TEST_XML_OUTPUT_DIR=${_XML_OUTPUT_DIR}"
-P "${_GOOGLETEST_DISCOVER_TESTS_SCRIPT}"
VERBATIM
)
file(WRITE "${ctest_include_file}"
"if(EXISTS \"${ctest_tests_file}\")\n"
" include(\"${ctest_tests_file}\")\n"
"else()\n"
" add_test(${TARGET}_NOT_BUILT ${TARGET}_NOT_BUILT)\n"
"endif()\n"
)
elseif(_DISCOVERY_MODE STREQUAL "PRE_TEST")
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL
PROPERTY GENERATOR_IS_MULTI_CONFIG
)
if(GENERATOR_IS_MULTI_CONFIG)
set(ctest_tests_file "${ctest_file_base}_tests-$<CONFIG>.cmake")
endif()
string(CONCAT ctest_include_content
"if(EXISTS \"$<TARGET_FILE:${TARGET}>\")" "\n"
" if(\"$<TARGET_FILE:${TARGET}>\" IS_NEWER_THAN \"${ctest_tests_file}\")" "\n"
" include(\"${_GOOGLETEST_DISCOVER_TESTS_SCRIPT}\")" "\n"
" gtest_discover_tests_impl(" "\n"
" TEST_EXECUTABLE" " [==[" "$<TARGET_FILE:${TARGET}>" "]==]" "\n"
" TEST_EXECUTOR" " [==[" "${crosscompiling_emulator}" "]==]" "\n"
" TEST_WORKING_DIR" " [==[" "${_WORKING_DIRECTORY}" "]==]" "\n"
" TEST_EXTRA_ARGS" " [==[" "${_EXTRA_ARGS}" "]==]" "\n"
" TEST_PROPERTIES" " [==[" "${_PROPERTIES}" "]==]" "\n"
" TEST_PREFIX" " [==[" "${_TEST_PREFIX}" "]==]" "\n"
" TEST_SUFFIX" " [==[" "${_TEST_SUFFIX}" "]==]" "\n"
" NO_PRETTY_TYPES" " [==[" "${_NO_PRETTY_TYPES}" "]==]" "\n"
" NO_PRETTY_VALUES" " [==[" "${_NO_PRETTY_VALUES}" "]==]" "\n"
" TEST_LIST" " [==[" "${_TEST_LIST}" "]==]" "\n"
" CTEST_FILE" " [==[" "${ctest_tests_file}" "]==]" "\n"
" TEST_DISCOVERY_TIMEOUT" " [==[" "${_DISCOVERY_TIMEOUT}" "]==]" "\n"
" TEST_XML_OUTPUT_DIR" " [==[" "${_XML_OUTPUT_DIR}" "]==]" "\n"
" )" "\n"
" endif()" "\n"
" include(\"${ctest_tests_file}\")" "\n"
"else()" "\n"
" add_test(${TARGET}_NOT_BUILT ${TARGET}_NOT_BUILT)" "\n"
"endif()" "\n"
)
if(GENERATOR_IS_MULTI_CONFIG)
foreach(_config ${CMAKE_CONFIGURATION_TYPES})
file(GENERATE OUTPUT "${ctest_file_base}_include-${_config}.cmake" CONTENT "${ctest_include_content}" CONDITION $<CONFIG:${_config}>)
endforeach()
file(WRITE "${ctest_include_file}" "include(\"${ctest_file_base}_include-\${CTEST_CONFIGURATION_TYPE}.cmake\")")
else()
file(GENERATE OUTPUT "${ctest_file_base}_include.cmake" CONTENT "${ctest_include_content}")
file(WRITE "${ctest_include_file}" "include(\"${ctest_file_base}_include.cmake\")")
endif()
else()
message(FATAL_ERROR "Unknown DISCOVERY_MODE: ${_DISCOVERY_MODE}")
endif()
# Add discovered tests to directory TEST_INCLUDE_FILES
set_property(DIRECTORY
APPEND PROPERTY TEST_INCLUDE_FILES "${ctest_include_file}"
)
endfunction()
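For reference, the discovery mode handled near the top of the function above can be chosen per call or globally; a minimal sketch with a hypothetical target my_tests:

    # Re-run discovery at test time instead of right after the build:
    gtest_discover_tests(my_tests DISCOVERY_MODE PRE_TEST)
    # Or set the default for every call that omits DISCOVERY_MODE:
    set(CMAKE_GTEST_DISCOVER_TESTS_DISCOVERY_MODE "PRE_TEST")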
###############################################################################
set(_GOOGLETEST_DISCOVER_TESTS_SCRIPT
${CMAKE_CURRENT_LIST_DIR}/GTestAddTests.cmake
)
# Restore project's policies # Restore project's policies
cmake_policy(POP) cmake_policy(POP)


@@ -0,0 +1,191 @@
# Distributed under the OSI-approved BSD 3-Clause License,
# see accompanying file BSD-3-Clause-license.txt for details.
# Blender: disable ASAN leak detection when trying to discover tests.
set(ENV{ASAN_OPTIONS} "detect_leaks=0")
cmake_minimum_required(VERSION ${CMAKE_VERSION})
# Overwrite possibly existing ${_CTEST_FILE} with empty file
set(flush_tests_MODE WRITE)
# Flushes script to ${_CTEST_FILE}
macro(flush_script)
file(${flush_tests_MODE} "${_CTEST_FILE}" "${script}")
set(flush_tests_MODE APPEND)
set(script "")
endmacro()
# Flushes tests_buffer to tests
macro(flush_tests_buffer)
list(APPEND tests "${tests_buffer}")
set(tests_buffer "")
endmacro()
macro(add_command NAME)
set(_args "")
foreach(_arg ${ARGN})
if(_arg MATCHES "[^-./:a-zA-Z0-9_]")
string(APPEND _args " [==[${_arg}]==]")
else()
string(APPEND _args " ${_arg}")
endif()
endforeach()
string(APPEND script "${NAME}(${_args})\n")
string(LENGTH "${script}" _script_len)
if(${_script_len} GREATER "50000")
flush_script()
endif()
# Unsets macro local variables to prevent leakage outside of this macro.
unset(_args)
unset(_script_len)
endmacro()
function(gtest_discover_tests_impl)
cmake_parse_arguments(
""
""
"NO_PRETTY_TYPES;NO_PRETTY_VALUES;TEST_EXECUTABLE;TEST_EXECUTOR;TEST_WORKING_DIR;TEST_PREFIX;TEST_SUFFIX;TEST_LIST;CTEST_FILE;TEST_DISCOVERY_TIMEOUT;TEST_XML_OUTPUT_DIR"
"TEST_EXTRA_ARGS;TEST_PROPERTIES"
${ARGN}
)
set(prefix "${_TEST_PREFIX}")
set(suffix "${_TEST_SUFFIX}")
set(extra_args ${_TEST_EXTRA_ARGS})
set(properties ${_TEST_PROPERTIES})
set(script)
set(suite)
set(tests)
set(tests_buffer)
# Run test executable to get list of available tests
if(NOT EXISTS "${_TEST_EXECUTABLE}")
message(FATAL_ERROR
"Specified test executable does not exist.\n"
" Path: '${_TEST_EXECUTABLE}'"
)
endif()
execute_process(
COMMAND ${_TEST_EXECUTOR} "${_TEST_EXECUTABLE}" --gtest_list_tests
WORKING_DIRECTORY "${_TEST_WORKING_DIR}"
TIMEOUT ${_TEST_DISCOVERY_TIMEOUT}
OUTPUT_VARIABLE output
RESULT_VARIABLE result
)
if(NOT ${result} EQUAL 0)
string(REPLACE "\n" "\n " output "${output}")
message(FATAL_ERROR
"Error running test executable.\n"
" Path: '${_TEST_EXECUTABLE}'\n"
" Result: ${result}\n"
" Output:\n"
" ${output}\n"
)
endif()
# Preserve semicolon in test-parameters
string(REPLACE [[;]] [[\;]] output "${output}")
string(REPLACE "\n" ";" output "${output}")
# Parse output
foreach(line ${output})
# Skip header
if(NOT line MATCHES "gtest_main\\.cc")
# Do we have a module name or a test name?
if(NOT line MATCHES "^ ")
# Module; remove trailing '.' to get just the name...
string(REGEX REPLACE "\\.( *#.*)?" "" suite "${line}")
if(line MATCHES "#" AND NOT _NO_PRETTY_TYPES)
string(REGEX REPLACE "/[0-9]\\.+ +#.*= +" "/" pretty_suite "${line}")
else()
set(pretty_suite "${suite}")
endif()
string(REGEX REPLACE "^DISABLED_" "" pretty_suite "${pretty_suite}")
else()
# Test name; strip spaces and comments to get just the name...
string(REGEX REPLACE " +" "" test "${line}")
if(test MATCHES "#" AND NOT _NO_PRETTY_VALUES)
string(REGEX REPLACE "/[0-9]+#GetParam..=" "/" pretty_test "${test}")
else()
string(REGEX REPLACE "#.*" "" pretty_test "${test}")
endif()
string(REGEX REPLACE "^DISABLED_" "" pretty_test "${pretty_test}")
string(REGEX REPLACE "#.*" "" test "${test}")
if(NOT "${_TEST_XML_OUTPUT_DIR}" STREQUAL "")
set(TEST_XML_OUTPUT_PARAM "--gtest_output=xml:${_TEST_XML_OUTPUT_DIR}/${prefix}${suite}.${test}${suffix}.xml")
else()
unset(TEST_XML_OUTPUT_PARAM)
endif()
# sanitize test name for further processing downstream
set(testname "${prefix}${pretty_suite}.${pretty_test}${suffix}")
# escape \
string(REPLACE [[\]] [[\\]] testname "${testname}")
# escape ;
string(REPLACE [[;]] [[\;]] testname "${testname}")
# escape $
string(REPLACE [[$]] [[\$]] testname "${testname}")
# ...and add to script
add_command(add_test
"${testname}"
${_TEST_EXECUTOR}
"${_TEST_EXECUTABLE}"
"--gtest_filter=${suite}.${test}"
"--gtest_also_run_disabled_tests"
${TEST_XML_OUTPUT_PARAM}
${extra_args}
)
if(suite MATCHES "^DISABLED" OR test MATCHES "^DISABLED")
add_command(set_tests_properties
"${testname}"
PROPERTIES DISABLED TRUE
)
endif()
add_command(set_tests_properties
"${testname}"
PROPERTIES
WORKING_DIRECTORY "${_TEST_WORKING_DIR}"
SKIP_REGULAR_EXPRESSION "\\\\[ SKIPPED \\\\]"
${properties}
)
list(APPEND tests_buffer "${testname}")
list(LENGTH tests_buffer tests_buffer_length)
if(${tests_buffer_length} GREATER "250")
flush_tests_buffer()
endif()
endif()
endif()
endforeach()
# Create a list of all discovered tests, which users may use to e.g. set
# properties on the tests
flush_tests_buffer()
add_command(set ${_TEST_LIST} ${tests})
# Write CTest script
flush_script()
endfunction()
if(CMAKE_SCRIPT_MODE_FILE)
gtest_discover_tests_impl(
NO_PRETTY_TYPES ${NO_PRETTY_TYPES}
NO_PRETTY_VALUES ${NO_PRETTY_VALUES}
TEST_EXECUTABLE ${TEST_EXECUTABLE}
TEST_EXECUTOR ${TEST_EXECUTOR}
TEST_WORKING_DIR ${TEST_WORKING_DIR}
TEST_PREFIX ${TEST_PREFIX}
TEST_SUFFIX ${TEST_SUFFIX}
TEST_LIST ${TEST_LIST}
CTEST_FILE ${CTEST_FILE}
TEST_DISCOVERY_TIMEOUT ${TEST_DISCOVERY_TIMEOUT}
TEST_XML_OUTPUT_DIR ${TEST_XML_OUTPUT_DIR}
TEST_EXTRA_ARGS ${TEST_EXTRA_ARGS}
TEST_PROPERTIES ${TEST_PROPERTIES}
)
endif()
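In script mode (the branch just above) this helper runs the test binary with --gtest_list_tests and writes a CTest include file made of plain add_test()/set_tests_properties() calls. With a hypothetical test BLI_array.append and build path, the generated file comes out roughly as:

    add_test("BLI_array.append" "/build/bin/tests/blender_test"
             "--gtest_filter=BLI_array.append" "--gtest_also_run_disabled_tests")
    set_tests_properties("BLI_array.append" PROPERTIES
                         WORKING_DIRECTORY "/build/bin/tests"
                         SKIP_REGULAR_EXPRESSION "\\[ SKIPPED \\]")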


@@ -8,17 +8,6 @@
# #
#============================================================================= #=============================================================================
function(GET_BLENDER_TEST_INSTALL_DIR VARIABLE_NAME)
get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if(GENERATOR_IS_MULTI_CONFIG)
string(REPLACE "\${BUILD_TYPE}" "$<CONFIG>" TEST_INSTALL_DIR ${CMAKE_INSTALL_PREFIX})
else()
string(REPLACE "\${BUILD_TYPE}" "" TEST_INSTALL_DIR ${CMAKE_INSTALL_PREFIX})
endif()
set(${VARIABLE_NAME} "${TEST_INSTALL_DIR}" PARENT_SCOPE)
endfunction()
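The helper above only expands the literal ${BUILD_TYPE} placeholder that Blender's default install prefix may contain. A short sketch, assuming a hypothetical prefix:

    # CMAKE_INSTALL_PREFIX = ".../build_windows/bin/${BUILD_TYPE}"
    GET_BLENDER_TEST_INSTALL_DIR(TEST_INSTALL_DIR)
    # Multi-config generators  -> ".../build_windows/bin/$<CONFIG>"
    # Single-config generators -> ".../build_windows/bin/"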
macro(BLENDER_SRC_GTEST_EX) macro(BLENDER_SRC_GTEST_EX)
if(WITH_GTESTS) if(WITH_GTESTS)
set(options SKIP_ADD_TEST) set(options SKIP_ADD_TEST)
@@ -86,7 +75,13 @@ macro(BLENDER_SRC_GTEST_EX)
target_link_libraries(${TARGET_NAME} ${GMP_LIBRARIES}) target_link_libraries(${TARGET_NAME} ${GMP_LIBRARIES})
endif() endif()
GET_BLENDER_TEST_INSTALL_DIR(TEST_INSTALL_DIR) get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
if(GENERATOR_IS_MULTI_CONFIG)
string(REPLACE "\${BUILD_TYPE}" "$<CONFIG>" TEST_INSTALL_DIR ${CMAKE_INSTALL_PREFIX})
else()
string(REPLACE "\${BUILD_TYPE}" "" TEST_INSTALL_DIR ${CMAKE_INSTALL_PREFIX})
endif()
set_target_properties(${TARGET_NAME} PROPERTIES set_target_properties(${TARGET_NAME} PROPERTIES
RUNTIME_OUTPUT_DIRECTORY "${TESTS_OUTPUT_DIR}" RUNTIME_OUTPUT_DIRECTORY "${TESTS_OUTPUT_DIR}"
RUNTIME_OUTPUT_DIRECTORY_RELEASE "${TESTS_OUTPUT_DIR}" RUNTIME_OUTPUT_DIRECTORY_RELEASE "${TESTS_OUTPUT_DIR}"
@@ -99,12 +94,9 @@ macro(BLENDER_SRC_GTEST_EX)
# Don't fail tests on leaks since these often happen in external libraries # Don't fail tests on leaks since these often happen in external libraries
# that we can't fix. # that we can't fix.
set_tests_properties(${TARGET_NAME} PROPERTIES set_tests_properties(${TARGET_NAME} PROPERTIES ENVIRONMENT LSAN_OPTIONS=exitcode=0)
ENVIRONMENT LSAN_OPTIONS=exitcode=0:$ENV{LSAN_OPTIONS}
)
endif() endif()
if(WIN32) if(WIN32)
set_target_properties(${TARGET_NAME} PROPERTIES VS_GLOBAL_VcpkgEnabled "false")
unset(MANIFEST) unset(MANIFEST)
endif() endif()
unset(TEST_INC) unset(TEST_INC)


@@ -128,7 +128,7 @@ if(EXISTS ${SOURCE_DIR}/.git)
OUTPUT_STRIP_TRAILING_WHITESPACE) OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT _git_changed_files STREQUAL "") if(NOT _git_changed_files STREQUAL "")
string(APPEND MY_WC_BRANCH " (modified)") set(MY_WC_BRANCH "${MY_WC_BRANCH} (modified)")
else() else()
# Unpushed commits are also considered local modifications # Unpushed commits are also considered local modifications
execute_process(COMMAND git log @{u}.. execute_process(COMMAND git log @{u}..
@@ -137,7 +137,7 @@ if(EXISTS ${SOURCE_DIR}/.git)
OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_STRIP_TRAILING_WHITESPACE
ERROR_QUIET) ERROR_QUIET)
if(NOT _git_unpushed_log STREQUAL "") if(NOT _git_unpushed_log STREQUAL "")
string(APPEND MY_WC_BRANCH " (modified)") set(MY_WC_BRANCH "${MY_WC_BRANCH} (modified)")
endif() endif()
unset(_git_unpushed_log) unset(_git_unpushed_log)
endif() endif()
@@ -161,7 +161,6 @@ file(WRITE buildinfo.h.txt
"#define BUILD_BRANCH \"${MY_WC_BRANCH}\"\n" "#define BUILD_BRANCH \"${MY_WC_BRANCH}\"\n"
"#define BUILD_DATE \"${BUILD_DATE}\"\n" "#define BUILD_DATE \"${BUILD_DATE}\"\n"
"#define BUILD_TIME \"${BUILD_TIME}\"\n" "#define BUILD_TIME \"${BUILD_TIME}\"\n"
"#include \"buildinfo_static.h\"\n"
) )
# cleanup # cleanup


@@ -1,8 +0,0 @@
/* CMake expanded values that won't change between CMake execution (unlike date/time).
* This is included by `buildinfo.h` generated by `buildinfo.cmake`. */
#define BUILD_PLATFORM "@BUILD_PLATFORM@"
#define BUILD_TYPE "@BUILD_TYPE@"
#define BUILD_CFLAGS "@BUILD_CFLAGS@"
#define BUILD_CXXFLAGS "@BUILD_CXXFLAGS@"
#define BUILD_LINKFLAGS "@BUILD_LINKFLAGS@"
#define BUILD_SYSTEM "@BUILD_SYSTEM@"


@@ -13,7 +13,7 @@ Invocation:
export CLANG_BIND_DIR="/dsk/src/llvm/tools/clang/bindings/python" export CLANG_BIND_DIR="/dsk/src/llvm/tools/clang/bindings/python"
export CLANG_LIB_DIR="/opt/llvm/lib" export CLANG_LIB_DIR="/opt/llvm/lib"
python clang_array_check.py somefile.c -DSOME_DEFINE -I/some/include python2 clang_array_check.py somefile.c -DSOME_DEFINE -I/some/include
... defines and includes are optional ... defines and includes are optional
@@ -76,32 +76,6 @@ defs_precalc = {
"glNormal3bv": {0: 3}, "glNormal3bv": {0: 3},
"glNormal3iv": {0: 3}, "glNormal3iv": {0: 3},
"glNormal3sv": {0: 3}, "glNormal3sv": {0: 3},
# GPU immediate mode.
"immVertex2iv": {1: 2},
"immVertex2fv": {1: 2},
"immVertex3fv": {1: 3},
"immAttr2fv": {1: 2},
"immAttr3fv": {1: 3},
"immAttr4fv": {1: 4},
"immAttr3ubv": {1: 3},
"immAttr4ubv": {1: 4},
"immUniform2fv": {1: 2},
"immUniform3fv": {1: 3},
"immUniform4fv": {1: 4},
"immUniformColor3fv": {0: 3},
"immUniformColor4fv": {0: 4},
"immUniformColor3ubv": {1: 3},
"immUniformColor4ubv": {1: 4},
"immUniformColor3fvAlpha": {0: 3},
"immUniformColor4fvAlpha": {0: 4},
} }
# ----------------------------------------------------------------------------- # -----------------------------------------------------------------------------
@@ -126,8 +100,7 @@ else:
if CLANG_LIB_DIR is None: if CLANG_LIB_DIR is None:
print("$CLANG_LIB_DIR clang lib dir not set") print("$CLANG_LIB_DIR clang lib dir not set")
if CLANG_BIND_DIR: sys.path.append(CLANG_BIND_DIR)
sys.path.append(CLANG_BIND_DIR)
import clang import clang
import clang.cindex import clang.cindex
@@ -135,8 +108,7 @@ from clang.cindex import (CursorKind,
TypeKind, TypeKind,
TokenKind) TokenKind)
if CLANG_LIB_DIR: clang.cindex.Config.set_library_path(CLANG_LIB_DIR)
clang.cindex.Config.set_library_path(CLANG_LIB_DIR)
index = clang.cindex.Index.create() index = clang.cindex.Index.create()


@@ -20,8 +20,6 @@
# <pep8 compliant> # <pep8 compliant>
# Note: this code should be cleaned up / refactored.
import sys import sys
if sys.version_info.major < 3: if sys.version_info.major < 3:
print("\nPython3.x needed, found %s.\nAborting!\n" % print("\nPython3.x needed, found %s.\nAborting!\n" %
@@ -39,23 +37,12 @@ from cmake_consistency_check_config import (
import os import os
from os.path import ( from os.path import join, dirname, normpath, splitext
dirname,
join,
normpath,
splitext,
)
global_h = set() global_h = set()
global_c = set() global_c = set()
global_refs = {} global_refs = {}
# Flatten `IGNORE_SOURCE_MISSING` to avoid nested looping.
IGNORE_SOURCE_MISSING = [
(k, ignore_path) for k, ig_list in IGNORE_SOURCE_MISSING
for ignore_path in ig_list
]
# Ignore cmake file, path pairs. # Ignore cmake file, path pairs.
global_ignore_source_missing = {} global_ignore_source_missing = {}
for k, v in IGNORE_SOURCE_MISSING: for k, v in IGNORE_SOURCE_MISSING:
@@ -191,8 +178,6 @@ def cmake_get_src(f):
if not l: if not l:
pass pass
elif l in local_ignore_source_missing:
local_ignore_source_missing.remove(l)
elif l.startswith("$"): elif l.startswith("$"):
if context_name == "SRC": if context_name == "SRC":
# assume if it ends with context_name we know about it # assume if it ends with context_name we know about it
@@ -242,7 +227,10 @@ def cmake_get_src(f):
# replace_line(f, i - 1, new_path_rel) # replace_line(f, i - 1, new_path_rel)
else: else:
raise Exception("non existent include %s:%d -> %s" % (f, i, new_file)) if l in local_ignore_source_missing:
local_ignore_source_missing.remove(l)
else:
raise Exception("non existent include %s:%d -> %s" % (f, i, new_file))
# print(new_file) # print(new_file)
@@ -270,16 +258,16 @@ def cmake_get_src(f):
def is_ignore_source(f, ignore_used): def is_ignore_source(f, ignore_used):
for index, ignore_path in enumerate(IGNORE_SOURCE): for index, ig in enumerate(IGNORE_SOURCE):
if ignore_path in f: if ig in f:
ignore_used[index] = True ignore_used[index] = True
return True return True
return False return False
def is_ignore_cmake(f, ignore_used): def is_ignore_cmake(f, ignore_used):
for index, ignore_path in enumerate(IGNORE_CMAKE): for index, ig in enumerate(IGNORE_CMAKE):
if ignore_path in f: if ig in f:
ignore_used[index] = True ignore_used[index] = True
return True return True
return False return False
@@ -310,7 +298,7 @@ def main():
for cf, i in refs: for cf, i in refs:
errs.append((cf, i)) errs.append((cf, i))
else: else:
raise Exception("CMake references missing, internal error, aborting!") raise Exception("CMake referenecs missing, internal error, aborting!")
is_err = True is_err = True
errs.sort() errs.sort()
@@ -321,7 +309,7 @@ def main():
# print("sed '%dd' '%s' > '%s.tmp' ; mv '%s.tmp' '%s'" % (i, cf, cf, cf, cf)) # print("sed '%dd' '%s' > '%s.tmp' ; mv '%s.tmp' '%s'" % (i, cf, cf, cf, cf))
if is_err: if is_err:
raise Exception("CMake references missing files, aborting!") raise Exception("CMake referenecs missing files, aborting!")
del is_err del is_err
del errs del errs
@@ -332,7 +320,7 @@ def main():
if cf not in global_c: if cf not in global_c:
print("missing_c: ", cf) print("missing_c: ", cf)
# Check if automake builds a corresponding .o file. # check if automake builds a corrasponding .o file.
''' '''
if cf in global_c: if cf in global_c:
out1 = os.path.splitext(cf)[0] + ".o" out1 = os.path.splitext(cf)[0] + ".o"
@@ -368,21 +356,21 @@ def main():
# Check ignores aren't stale # Check ignores aren't stale
print("\nCheck for unused 'IGNORE_SOURCE' paths...") print("\nCheck for unused 'IGNORE_SOURCE' paths...")
for index, ignore_path in enumerate(IGNORE_SOURCE): for index, ig in enumerate(IGNORE_SOURCE):
if not ignore_used_source[index]: if not ignore_used_source[index]:
print("unused ignore: %r" % ignore_path) print("unused ignore: %r" % ig)
# Check ignores aren't stale # Check ignores aren't stale
print("\nCheck for unused 'IGNORE_SOURCE_MISSING' paths...") print("\nCheck for unused 'IGNORE_SOURCE_MISSING' paths...")
for k, v in sorted(global_ignore_source_missing.items()): for k, v in sorted(global_ignore_source_missing.items()):
for ignore_path in v: for ig in v:
print("unused ignore: %r -> %r" % (ignore_path, k)) print("unused ignore: %r -> %r" % (ig, k))
# Check ignores aren't stale # Check ignores aren't stale
print("\nCheck for unused 'IGNORE_CMAKE' paths...") print("\nCheck for unused 'IGNORE_CMAKE' paths...")
for index, ignore_path in enumerate(IGNORE_CMAKE): for index, ig in enumerate(IGNORE_CMAKE):
if not ignore_used_cmake[index]: if not ignore_used_cmake[index]:
print("unused ignore: %r" % ignore_path) print("unused ignore: %r" % ig)
if __name__ == "__main__": if __name__ == "__main__":


@@ -34,18 +34,8 @@ IGNORE_SOURCE = (
# Ignore cmake file, path pairs. # Ignore cmake file, path pairs.
IGNORE_SOURCE_MISSING = ( IGNORE_SOURCE_MISSING = (
( # Use for cycles stand-alone. # Use for cycles stand-alone.
"intern/cycles/util/CMakeLists.txt", ( ("intern/cycles/util/CMakeLists.txt", "../../third_party/numaapi/include"),
"../../third_party/numaapi/include",
)),
( # Use for `WITH_NANOVDB`.
"intern/cycles/kernel/CMakeLists.txt", (
"nanovdb/util/CSampleFromVoxels.h",
"nanovdb/util/SampleFromVoxels.h",
"nanovdb/NanoVDB.h",
"nanovdb/CNanoVDB.h",
),
),
) )
IGNORE_CMAKE = ( IGNORE_CMAKE = (


@@ -32,7 +32,7 @@ CHECKER_IGNORE_PREFIX = [
"intern/moto", "intern/moto",
] ]
CHECKER_BIN = "python3" CHECKER_BIN = "python2"
CHECKER_ARGS = [ CHECKER_ARGS = [
os.path.join(os.path.dirname(__file__), "clang_array_check.py"), os.path.join(os.path.dirname(__file__), "clang_array_check.py"),


@@ -19,7 +19,6 @@ set(WITH_DRACO ON CACHE BOOL "" FORCE)
set(WITH_FFTW3 ON CACHE BOOL "" FORCE) set(WITH_FFTW3 ON CACHE BOOL "" FORCE)
set(WITH_FREESTYLE ON CACHE BOOL "" FORCE) set(WITH_FREESTYLE ON CACHE BOOL "" FORCE)
set(WITH_GMP ON CACHE BOOL "" FORCE) set(WITH_GMP ON CACHE BOOL "" FORCE)
set(WITH_HARU ON CACHE BOOL "" FORCE)
set(WITH_IK_ITASC ON CACHE BOOL "" FORCE) set(WITH_IK_ITASC ON CACHE BOOL "" FORCE)
set(WITH_IK_SOLVER ON CACHE BOOL "" FORCE) set(WITH_IK_SOLVER ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_CINEON ON CACHE BOOL "" FORCE) set(WITH_IMAGE_CINEON ON CACHE BOOL "" FORCE)
@@ -46,9 +45,6 @@ set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE) set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE) set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE) set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PUGIXML ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE) set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE) set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)
set(WITH_SDL ON CACHE BOOL "" FORCE) set(WITH_SDL ON CACHE BOOL "" FORCE)


@@ -24,7 +24,6 @@ set(WITH_DRACO OFF CACHE BOOL "" FORCE)
set(WITH_FFTW3 OFF CACHE BOOL "" FORCE) set(WITH_FFTW3 OFF CACHE BOOL "" FORCE)
set(WITH_FREESTYLE OFF CACHE BOOL "" FORCE) set(WITH_FREESTYLE OFF CACHE BOOL "" FORCE)
set(WITH_GMP OFF CACHE BOOL "" FORCE) set(WITH_GMP OFF CACHE BOOL "" FORCE)
set(WITH_HARU OFF CACHE BOOL "" FORCE)
set(WITH_IK_ITASC OFF CACHE BOOL "" FORCE) set(WITH_IK_ITASC OFF CACHE BOOL "" FORCE)
set(WITH_IK_SOLVER OFF CACHE BOOL "" FORCE) set(WITH_IK_SOLVER OFF CACHE BOOL "" FORCE)
set(WITH_IMAGE_CINEON OFF CACHE BOOL "" FORCE) set(WITH_IMAGE_CINEON OFF CACHE BOOL "" FORCE)
@@ -52,9 +51,6 @@ set(WITH_OPENIMAGEIO OFF CACHE BOOL "" FORCE)
set(WITH_OPENMP OFF CACHE BOOL "" FORCE) set(WITH_OPENMP OFF CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE) set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE) set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_POTRACE OFF CACHE BOOL "" FORCE)
set(WITH_PUGIXML OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW OFF CACHE BOOL "" FORCE) set(WITH_QUADRIFLOW OFF CACHE BOOL "" FORCE)
set(WITH_SDL OFF CACHE BOOL "" FORCE) set(WITH_SDL OFF CACHE BOOL "" FORCE)
set(WITH_TBB OFF CACHE BOOL "" FORCE) set(WITH_TBB OFF CACHE BOOL "" FORCE)


@@ -20,7 +20,6 @@ set(WITH_DRACO ON CACHE BOOL "" FORCE)
set(WITH_FFTW3 ON CACHE BOOL "" FORCE) set(WITH_FFTW3 ON CACHE BOOL "" FORCE)
set(WITH_FREESTYLE ON CACHE BOOL "" FORCE) set(WITH_FREESTYLE ON CACHE BOOL "" FORCE)
set(WITH_GMP ON CACHE BOOL "" FORCE) set(WITH_GMP ON CACHE BOOL "" FORCE)
set(WITH_HARU ON CACHE BOOL "" FORCE)
set(WITH_IK_SOLVER ON CACHE BOOL "" FORCE) set(WITH_IK_SOLVER ON CACHE BOOL "" FORCE)
set(WITH_IK_ITASC ON CACHE BOOL "" FORCE) set(WITH_IK_ITASC ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_CINEON ON CACHE BOOL "" FORCE) set(WITH_IMAGE_CINEON ON CACHE BOOL "" FORCE)
@@ -47,9 +46,6 @@ set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE) set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE) set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE) set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PUGIXML ON CACHE BOOL "" FORCE)
set(WITH_NANOVDB ON CACHE BOOL "" FORCE)
set(WITH_POTRACE ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE) set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE) set(WITH_QUADRIFLOW ON CACHE BOOL "" FORCE)
set(WITH_SDL ON CACHE BOOL "" FORCE) set(WITH_SDL ON CACHE BOOL "" FORCE)


@@ -28,7 +28,6 @@ set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE) set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE) set(WITH_BULLET OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE) set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE) set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
# Depends on Python install, do this to quiet warning. # Depends on Python install, do this to quiet warning.


@@ -60,19 +60,6 @@ function(list_assert_duplicates
unset(_len_after) unset(_len_after)
endfunction() endfunction()
# Adds a native path separator to the end of the path:
#
# - 'example' -> 'example/'
# - '/example///' -> '/example/'
#
macro(path_ensure_trailing_slash
path_new path_input
)
file(TO_NATIVE_PATH "/" _path_sep)
string(REGEX REPLACE "[${_path_sep}]+$" "" ${path_new} ${path_input})
set(${path_new} "${${path_new}}${_path_sep}")
unset(_path_sep)
endmacro()
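A one-line usage sketch for the macro above, with hypothetical values:

    path_ensure_trailing_slash(_dest_dir "/opt/blender///")  # _dest_dir becomes "/opt/blender/"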
# foo_bar.spam --> foo_barMySuffix.spam # foo_bar.spam --> foo_barMySuffix.spam
macro(file_suffix macro(file_suffix
@@ -196,7 +183,7 @@ function(blender_user_header_search_paths
foreach(_INC ${includes}) foreach(_INC ${includes})
get_filename_component(_ABS_INC ${_INC} ABSOLUTE) get_filename_component(_ABS_INC ${_INC} ABSOLUTE)
# _ALL_INCS is a space-separated string of file paths in quotes. # _ALL_INCS is a space-separated string of file paths in quotes.
string(APPEND _ALL_INCS " \"${_ABS_INC}\"") set(_ALL_INCS "${_ALL_INCS} \"${_ABS_INC}\"")
endforeach() endforeach()
set_target_properties(${name} PROPERTIES XCODE_ATTRIBUTE_USER_HEADER_SEARCH_PATHS "${_ALL_INCS}") set_target_properties(${name} PROPERTIES XCODE_ATTRIBUTE_USER_HEADER_SEARCH_PATHS "${_ALL_INCS}")
endif() endif()
@@ -263,11 +250,11 @@ macro(add_cc_flags_custom_test
string(TOUPPER ${name} _name_upper) string(TOUPPER ${name} _name_upper)
if(DEFINED CMAKE_C_FLAGS_${_name_upper}) if(DEFINED CMAKE_C_FLAGS_${_name_upper})
message(STATUS "Using custom CFLAGS: CMAKE_C_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"") message(STATUS "Using custom CFLAGS: CMAKE_C_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"")
string(APPEND CMAKE_C_FLAGS " ${CMAKE_C_FLAGS_${_name_upper}}" ${ARGV1}) set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${CMAKE_C_FLAGS_${_name_upper}}" ${ARGV1})
endif() endif()
if(DEFINED CMAKE_CXX_FLAGS_${_name_upper}) if(DEFINED CMAKE_CXX_FLAGS_${_name_upper})
message(STATUS "Using custom CXXFLAGS: CMAKE_CXX_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"") message(STATUS "Using custom CXXFLAGS: CMAKE_CXX_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"")
string(APPEND CMAKE_CXX_FLAGS " ${CMAKE_CXX_FLAGS_${_name_upper}}" ${ARGV1}) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${_name_upper}}" ${ARGV1})
endif() endif()
unset(_name_upper) unset(_name_upper)
@@ -388,43 +375,6 @@ function(blender_add_lib
set_property(GLOBAL APPEND PROPERTY BLENDER_LINK_LIBS ${name}) set_property(GLOBAL APPEND PROPERTY BLENDER_LINK_LIBS ${name})
endfunction() endfunction()
function(blender_add_test_suite)
if (ARGC LESS 1)
message(FATAL_ERROR "No arguments supplied to blender_add_test_suite()")
endif()
# Parse the arguments
set(oneValueArgs TARGET SUITE_NAME)
set(multiValueArgs SOURCES)
cmake_parse_arguments(ARGS "" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
# Figure out the release dir, as some tests need files from there.
GET_BLENDER_TEST_INSTALL_DIR(TEST_INSTALL_DIR)
if(APPLE)
set(_test_release_dir ${TEST_INSTALL_DIR}/Blender.app/Contents/Resources/${BLENDER_VERSION})
else()
if(WIN32 OR WITH_INSTALL_PORTABLE)
set(_test_release_dir ${TEST_INSTALL_DIR}/${BLENDER_VERSION})
else()
set(_test_release_dir ${TEST_INSTALL_DIR}/share/blender/${BLENDER_VERSION})
endif()
endif()
# Define a test case with our custom gtest_add_tests() command.
include(GTest)
gtest_add_tests(
TARGET ${ARGS_TARGET}
SOURCES "${ARGS_SOURCES}"
TEST_PREFIX ${ARGS_SUITE_NAME}
WORKING_DIRECTORY "${TEST_INSTALL_DIR}"
EXTRA_ARGS
--test-assets-dir "${CMAKE_SOURCE_DIR}/../lib/tests"
--test-release-dir "${_test_release_dir}"
)
unset(_test_release_dir)
endfunction()
# Add tests for a Blender library, to be called in tandem with blender_add_lib(). # Add tests for a Blender library, to be called in tandem with blender_add_lib().
# The tests will be part of the blender_test executable (see tests/gtests/runner). # The tests will be part of the blender_test executable (see tests/gtests/runner).
function(blender_add_test_lib function(blender_add_test_lib
@@ -458,12 +408,6 @@ function(blender_add_test_lib
blender_add_lib__impl(${name} "${sources}" "${includes}" "${includes_sys}" "${library_deps}") blender_add_lib__impl(${name} "${sources}" "${includes}" "${includes_sys}" "${library_deps}")
set_property(GLOBAL APPEND PROPERTY BLENDER_TEST_LIBS ${name}) set_property(GLOBAL APPEND PROPERTY BLENDER_TEST_LIBS ${name})
blender_add_test_suite(
TARGET blender_test
SUITE_NAME ${name}
SOURCES "${sources}"
)
endfunction() endfunction()
@@ -497,10 +441,14 @@ function(blender_add_test_executable
SKIP_ADD_TEST SKIP_ADD_TEST
) )
blender_add_test_suite( include(GTest)
TARGET ${name}_test set(_GOOGLETEST_DISCOVER_TESTS_SCRIPT
SUITE_NAME ${name} ${CMAKE_SOURCE_DIR}/build_files/cmake/Modules/GTestAddTests.cmake
SOURCES "${sources}" )
gtest_discover_tests(${name}_test
DISCOVERY_MODE PRE_TEST
WORKING_DIRECTORY "${TEST_INSTALL_DIR}"
) )
endfunction() endfunction()
@@ -727,14 +675,14 @@ endmacro()
macro(add_c_flag macro(add_c_flag
flag) flag)
string(APPEND CMAKE_C_FLAGS " ${flag}") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${flag}")
string(APPEND CMAKE_CXX_FLAGS " ${flag}") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${flag}")
endmacro() endmacro()
macro(add_cxx_flag macro(add_cxx_flag
flag) flag)
string(APPEND CMAKE_CXX_FLAGS " ${flag}") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${flag}")
endmacro() endmacro()
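Much of the churn in this compare is the mechanical substitution visible in these two macros: string(APPEND) (available since CMake 3.4) versus rebuilding the variable with set(). The two spellings are equivalent:

    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -funsigned-char")  # older spelling
    string(APPEND CMAKE_C_FLAGS " -funsigned-char")        # same effect, less repetition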
macro(remove_strict_flags) macro(remove_strict_flags)
@@ -1178,7 +1126,6 @@ endfunction()
function(find_python_package function(find_python_package
package package
relative_include_dir
) )
string(TOUPPER ${package} _upper_package) string(TOUPPER ${package} _upper_package)
@@ -1210,10 +1157,7 @@ function(find_python_package
dist-packages dist-packages
vendor-packages vendor-packages
NO_DEFAULT_PATH NO_DEFAULT_PATH
DOC
"Path to python site-packages or dist-packages containing '${package}' module"
) )
mark_as_advanced(PYTHON_${_upper_package}_PATH)
if(NOT EXISTS "${PYTHON_${_upper_package}_PATH}") if(NOT EXISTS "${PYTHON_${_upper_package}_PATH}")
message(WARNING message(WARNING
@@ -1231,50 +1175,6 @@ function(find_python_package
set(WITH_PYTHON_INSTALL_${_upper_package} OFF PARENT_SCOPE) set(WITH_PYTHON_INSTALL_${_upper_package} OFF PARENT_SCOPE)
else() else()
message(STATUS "${package} found at '${PYTHON_${_upper_package}_PATH}'") message(STATUS "${package} found at '${PYTHON_${_upper_package}_PATH}'")
if(NOT "${relative_include_dir}" STREQUAL "")
set(_relative_include_dir "${package}/${relative_include_dir}")
unset(PYTHON_${_upper_package}_INCLUDE_DIRS CACHE)
find_path(PYTHON_${_upper_package}_INCLUDE_DIRS
NAMES
"${_relative_include_dir}"
HINTS
"${PYTHON_LIBPATH}/"
"${PYTHON_LIBPATH}/python${PYTHON_VERSION}/"
"${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/"
PATH_SUFFIXES
"site-packages/"
"dist-packages/"
"vendor-packages/"
NO_DEFAULT_PATH
DOC
"Path to python site-packages or dist-packages containing '${package}' module header files"
)
mark_as_advanced(PYTHON_${_upper_package}_INCLUDE_DIRS)
if(NOT EXISTS "${PYTHON_${_upper_package}_INCLUDE_DIRS}")
message(WARNING
"Python package '${package}' include dir path could not be found in:\n"
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/site-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/site-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/dist-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/dist-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/vendor-packages/${_relative_include_dir}', "
"'${PYTHON_LIBPATH}/python${_PY_VER_MAJOR}/vendor-packages/${_relative_include_dir}', "
"\n"
"The 'WITH_PYTHON_${_upper_package}' option will be disabled.\n"
"The build will be usable, only add-ons that depend on this package won't be functional."
)
set(WITH_PYTHON_${_upper_package} OFF PARENT_SCOPE)
else()
set(_temp "${PYTHON_${_upper_package}_INCLUDE_DIRS}/${package}/${relative_include_dir}")
unset(PYTHON_${_upper_package}_INCLUDE_DIRS CACHE)
set(PYTHON_${_upper_package}_INCLUDE_DIRS "${_temp}"
CACHE PATH "Path to the include directory of the ${package} module")
message(STATUS "${package} include files found at '${PYTHON_${_upper_package}_INCLUDE_DIRS}'")
endif()
endif()
endif() endif()
endif() endif()
endfunction() endfunction()
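One side of this hunk extends find_python_package() with a second relative_include_dir argument that also locates the package's headers and caches them in PYTHON_<PACKAGE>_INCLUDE_DIRS. Hypothetical calls against that signature:

    find_python_package(requests "")           # module only, no headers needed
    find_python_package(numpy "core/include")  # also sets PYTHON_NUMPY_INCLUDE_DIRS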


@@ -60,23 +60,11 @@ if(WITH_OPENAL)
endif() endif()
endif() endif()
if(WITH_JACK)
find_library(JACK_FRAMEWORK
NAMES jackmp
)
if(NOT JACK_FRAMEWORK)
set(WITH_JACK OFF)
else()
set(JACK_INCLUDE_DIRS ${JACK_FRAMEWORK}/headers)
endif()
endif()
if(NOT DEFINED LIBDIR) if(NOT DEFINED LIBDIR)
if("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "x86_64") set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin)
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin) # Prefer lib directory paths
else() file(GLOB LIB_SUBDIRS ${LIBDIR}/*)
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin_${CMAKE_OSX_ARCHITECTURES}) set(CMAKE_PREFIX_PATH ${LIB_SUBDIRS})
endif()
else() else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}") message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif() endif()
@@ -84,10 +72,6 @@ if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "Mac OSX requires pre-compiled libs at: '${LIBDIR}'") message(FATAL_ERROR "Mac OSX requires pre-compiled libs at: '${LIBDIR}'")
endif() endif()
# Prefer lib directory paths
file(GLOB LIB_SUBDIRS ${LIBDIR}/*)
set(CMAKE_PREFIX_PATH ${LIB_SUBDIRS})
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# Find precompiled libraries, and avoid system or user-installed ones. # Find precompiled libraries, and avoid system or user-installed ones.
@@ -110,6 +94,17 @@ if(WITH_OPENSUBDIV)
find_package(OpenSubdiv) find_package(OpenSubdiv)
endif() endif()
if(WITH_JACK)
find_library(JACK_FRAMEWORK
NAMES jackmp
)
if(NOT JACK_FRAMEWORK)
set(WITH_JACK OFF)
else()
set(JACK_INCLUDE_DIRS ${JACK_FRAMEWORK}/headers)
endif()
endif()
if(WITH_CODEC_SNDFILE) if(WITH_CODEC_SNDFILE)
find_package(SndFile) find_package(SndFile)
find_library(_sndfile_FLAC_LIBRARY NAMES flac HINTS ${LIBDIR}/sndfile/lib) find_library(_sndfile_FLAC_LIBRARY NAMES flac HINTS ${LIBDIR}/sndfile/lib)
@@ -199,7 +194,7 @@ if(SYSTEMSTUBS_LIBRARY)
list(APPEND PLATFORM_LINKLIBS SystemStubs) list(APPEND PLATFORM_LINKLIBS SystemStubs)
endif() endif()
string(APPEND PLATFORM_CFLAGS " -pipe -funsigned-char -fno-strict-aliasing") set(PLATFORM_CFLAGS "${PLATFORM_CFLAGS} -pipe -funsigned-char")
set(PLATFORM_LINKFLAGS set(PLATFORM_LINKFLAGS
"-fexceptions -framework CoreServices -framework Foundation -framework IOKit -framework AppKit -framework Cocoa -framework Carbon -framework AudioUnit -framework AudioToolbox -framework CoreAudio -framework Metal -framework QuartzCore" "-fexceptions -framework CoreServices -framework Foundation -framework IOKit -framework AppKit -framework Cocoa -framework Carbon -framework AudioUnit -framework AudioToolbox -framework CoreAudio -framework Metal -framework QuartzCore"
) )
@@ -207,12 +202,12 @@ set(PLATFORM_LINKFLAGS
list(APPEND PLATFORM_LINKLIBS c++) list(APPEND PLATFORM_LINKLIBS c++)
if(WITH_JACK) if(WITH_JACK)
string(APPEND PLATFORM_LINKFLAGS " -F/Library/Frameworks -weak_framework jackmp") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -F/Library/Frameworks -weak_framework jackmp")
endif() endif()
if(WITH_PYTHON_MODULE OR WITH_PYTHON_FRAMEWORK) if(WITH_PYTHON_MODULE OR WITH_PYTHON_FRAMEWORK)
# force cmake to link right framework # force cmake to link right framework
string(APPEND PLATFORM_LINKFLAGS " /Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}/Python") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}/Python")
endif() endif()
if(WITH_OPENCOLLADA) if(WITH_OPENCOLLADA)
@@ -227,7 +222,7 @@ if(WITH_SDL)
find_package(SDL2) find_package(SDL2)
set(SDL_INCLUDE_DIR ${SDL2_INCLUDE_DIRS}) set(SDL_INCLUDE_DIR ${SDL2_INCLUDE_DIRS})
set(SDL_LIBRARY ${SDL2_LIBRARIES}) set(SDL_LIBRARY ${SDL2_LIBRARIES})
string(APPEND PLATFORM_LINKFLAGS " -framework ForceFeedback") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -framework ForceFeedback")
endif() endif()
set(PNG_ROOT ${LIBDIR}/png) set(PNG_ROOT ${LIBDIR}/png)
@@ -271,15 +266,7 @@ if(WITH_BOOST)
endif() endif()
if(WITH_INTERNATIONAL OR WITH_CODEC_FFMPEG) if(WITH_INTERNATIONAL OR WITH_CODEC_FFMPEG)
string(APPEND PLATFORM_LINKFLAGS " -liconv") # boost_locale and ffmpeg needs it ! set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -liconv") # boost_locale and ffmpeg needs it !
endif()
if(WITH_PUGIXML)
find_package(PugiXML)
if(NOT PUGIXML_FOUND)
message(WARNING "PugiXML not found, disabling WITH_PUGIXML")
set(WITH_PUGIXML OFF)
endif()
endif() endif()
if(WITH_OPENIMAGEIO) if(WITH_OPENIMAGEIO)
@@ -350,7 +337,7 @@ if(WITH_CYCLES_EMBREE)
find_package(Embree 3.8.0 REQUIRED) find_package(Embree 3.8.0 REQUIRED)
# Increase stack size for Embree, only works for executables. # Increase stack size for Embree, only works for executables.
if(NOT WITH_PYTHON_MODULE) if(NOT WITH_PYTHON_MODULE)
string(APPEND PLATFORM_LINKFLAGS " -Wl,-stack_size,0x100000") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -Xlinker -stack_size -Xlinker 0x100000")
endif() endif()
# Embree static library linking can mix up SSE and AVX symbols, causing # Embree static library linking can mix up SSE and AVX symbols, causing
@@ -394,7 +381,7 @@ if(WITH_OPENMP)
set(OPENMP_FOUND ON) set(OPENMP_FOUND ON)
set(OpenMP_C_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'") set(OpenMP_C_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
set(OpenMP_CXX_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'") set(OpenMP_CXX_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
string(APPEND CMAKE_EXE_LINKER_FLAGS " -L'${LIBDIR}/openmp/lib' -lomp") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -L'${LIBDIR}/openmp/lib' -lomp")
# Copy libomp.dylib to allow executables like datatoc and tests to work. # Copy libomp.dylib to allow executables like datatoc and tests to work.
# `@executable_path/../Resources/lib/` is a default dylib search path. # `@executable_path/../Resources/lib/` is a default dylib search path.
@@ -432,14 +419,6 @@ if(WITH_GMP)
endif() endif()
endif() endif()
if(WITH_HARU)
find_package(Haru)
if(NOT HARU_FOUND)
message(WARNING "Haru not found, disabling WITH_HARU")
set(WITH_HARU OFF)
endif()
endif()
if(EXISTS ${LIBDIR}) if(EXISTS ${LIBDIR})
without_system_libs_end() without_system_libs_end()
endif() endif()
@@ -449,50 +428,36 @@ endif()
set(EXETYPE MACOSX_BUNDLE) set(EXETYPE MACOSX_BUNDLE)
set(CMAKE_C_FLAGS_DEBUG "-g") set(CMAKE_C_FLAGS_DEBUG "-fno-strict-aliasing -g")
set(CMAKE_CXX_FLAGS_DEBUG "-g") set(CMAKE_CXX_FLAGS_DEBUG "-fno-strict-aliasing -g")
if(CMAKE_OSX_ARCHITECTURES MATCHES "x86_64" OR CMAKE_OSX_ARCHITECTURES MATCHES "i386") if(CMAKE_OSX_ARCHITECTURES MATCHES "x86_64" OR CMAKE_OSX_ARCHITECTURES MATCHES "i386")
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3") set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3")
set(CMAKE_C_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3") set(CMAKE_C_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3")
if(NOT CMAKE_C_COMPILER_ID MATCHES "Clang") if(NOT CMAKE_C_COMPILER_ID MATCHES "Clang")
string(APPEND CMAKE_C_FLAGS_RELEASE " -ftree-vectorize -fvariable-expansion-in-unroller") set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ftree-vectorize -fvariable-expansion-in-unroller")
string(APPEND CMAKE_CXX_FLAGS_RELEASE " -ftree-vectorize -fvariable-expansion-in-unroller") set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ftree-vectorize -fvariable-expansion-in-unroller")
endif() endif()
else() else()
set(CMAKE_C_FLAGS_RELEASE "-O2 -mdynamic-no-pic") set(CMAKE_C_FLAGS_RELEASE "-O2 -mdynamic-no-pic -fno-strict-aliasing")
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic") set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic -fno-strict-aliasing")
endif() endif()
if(${XCODE_VERSION} VERSION_EQUAL 5 OR ${XCODE_VERSION} VERSION_GREATER 5) if(${XCODE_VERSION} VERSION_EQUAL 5 OR ${XCODE_VERSION} VERSION_GREATER 5)
# Xcode 5 is always using CLANG, which has too low template depth of 128 for libmv # Xcode 5 is always using CLANG, which has too low template depth of 128 for libmv
string(APPEND CMAKE_CXX_FLAGS " -ftemplate-depth=1024") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ftemplate-depth=1024")
endif() endif()
# Avoid conflicts with Luxrender, and other plug-ins that may use the same # Avoid conflicts with Luxrender, and other plug-ins that may use the same
# libraries as Blender with a different version or build options. # libraries as Blender with a different version or build options.
string(APPEND PLATFORM_LINKFLAGS set(PLATFORM_LINKFLAGS
" -Wl,-unexported_symbols_list,'${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map'" "${PLATFORM_LINKFLAGS} -Xlinker -unexported_symbols_list -Xlinker '${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map'"
) )
string(APPEND CMAKE_CXX_FLAGS " -stdlib=libc++") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
string(APPEND PLATFORM_LINKFLAGS " -stdlib=libc++") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -stdlib=libc++")
# Suppress ranlib "has no symbols" warnings (workaround for T48250) # Suppress ranlib "has no symbols" warnings (workaround for T48250)
set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>") set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>") set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_C_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>") set(CMAKE_C_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")
set(CMAKE_CXX_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>") set(CMAKE_CXX_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")
if(WITH_COMPILER_CCACHE)
if(NOT CMAKE_GENERATOR STREQUAL "Xcode")
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
# Makefiles and ninja
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif()
endif()
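Outside of Xcode, the ccache block above only wires WITH_COMPILER_CCACHE up to CMake's standard launcher variables; the effect is the same as setting them by hand with a hypothetical ccache path:

    set(CMAKE_C_COMPILER_LAUNCHER   "/usr/local/bin/ccache" CACHE STRING "" FORCE)
    set(CMAKE_CXX_COMPILER_LAUNCHER "/usr/local/bin/ccache" CACHE STRING "" FORCE)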


@@ -150,36 +150,7 @@ endif()
if(NOT ${CMAKE_GENERATOR} MATCHES "Xcode") if(NOT ${CMAKE_GENERATOR} MATCHES "Xcode")
# Force CMAKE_OSX_DEPLOYMENT_TARGET for makefiles, will not work else (CMake bug?) # Force CMAKE_OSX_DEPLOYMENT_TARGET for makefiles, will not work else (CMake bug?)
string(APPEND CMAKE_C_FLAGS " -mmacosx-version-min=${CMAKE_OSX_DEPLOYMENT_TARGET}") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mmacosx-version-min=${CMAKE_OSX_DEPLOYMENT_TARGET}")
string(APPEND CMAKE_CXX_FLAGS " -mmacosx-version-min=${CMAKE_OSX_DEPLOYMENT_TARGET}") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mmacosx-version-min=${CMAKE_OSX_DEPLOYMENT_TARGET}")
add_definitions("-DMACOSX_DEPLOYMENT_TARGET=${CMAKE_OSX_DEPLOYMENT_TARGET}") add_definitions("-DMACOSX_DEPLOYMENT_TARGET=${CMAKE_OSX_DEPLOYMENT_TARGET}")
endif() endif()
if(WITH_COMPILER_CCACHE)
if(CMAKE_GENERATOR STREQUAL "Xcode")
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
get_filename_component(ccompiler "${CMAKE_C_COMPILER}" NAME)
get_filename_component(cxxcompiler "${CMAKE_CXX_COMPILER}" NAME)
# Ccache can figure out which compiler to use if it's invoked from
# a symlink with the name of the compiler.
# https://ccache.dev/manual/4.1.html#_run_modes
set(_fake_compiler_dir "${CMAKE_BINARY_DIR}/ccache")
execute_process(COMMAND ${CMAKE_COMMAND} -E make_directory ${_fake_compiler_dir})
set(_fake_C_COMPILER "${_fake_compiler_dir}/${ccompiler}")
set(_fake_CXX_COMPILER "${_fake_compiler_dir}/${cxxcompiler}")
execute_process(COMMAND ${CMAKE_COMMAND} -E create_symlink "${CCACHE_PROGRAM}" ${_fake_C_COMPILER})
execute_process(COMMAND ${CMAKE_COMMAND} -E create_symlink "${CCACHE_PROGRAM}" ${_fake_CXX_COMPILER})
set(CMAKE_XCODE_ATTRIBUTE_CC ${_fake_C_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_CXX ${_fake_CXX_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_LD ${_fake_C_COMPILER} CACHE STRING "" FORCE)
set(CMAKE_XCODE_ATTRIBUTE_LDPLUSPLUS ${_fake_CXX_COMPILER} CACHE STRING "" FORCE)
unset(_fake_compiler_dir)
unset(_fake_C_COMPILER)
unset(_fake_CXX_COMPILER)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif()
endif()


@@ -73,7 +73,7 @@ if(EXISTS ${LIBDIR})
endif() endif()
if(WITH_STATIC_LIBS) if(WITH_STATIC_LIBS)
string(APPEND CMAKE_EXE_LINKER_FLAGS " -static-libstdc++") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libstdc++")
endif() endif()
# Wrapper to prefer static libraries # Wrapper to prefer static libraries
@@ -350,17 +350,15 @@ if(WITH_BOOST)
endif() endif()
endif() endif()
if(WITH_PUGIXML)
find_package_wrapper(PugiXML)
if (NOT PUGIXML_FOUND)
set(WITH_PUGIXML OFF)
message(STATUS "PugiXML not found, disabling WITH_PUGIXML")
endif()
endif()
if(WITH_OPENIMAGEIO) if(WITH_OPENIMAGEIO)
find_package_wrapper(OpenImageIO) find_package_wrapper(OpenImageIO)
if(NOT OPENIMAGEIO_PUGIXML_FOUND AND WITH_CYCLES_STANDALONE)
find_package_wrapper(PugiXML)
else()
set(PUGIXML_INCLUDE_DIR "${OPENIMAGEIO_INCLUDE_DIR/OpenImageIO}")
set(PUGIXML_LIBRARIES "")
endif()
set(OPENIMAGEIO_LIBRARIES set(OPENIMAGEIO_LIBRARIES
${OPENIMAGEIO_LIBRARIES} ${OPENIMAGEIO_LIBRARIES}
${PNG_LIBRARIES} ${PNG_LIBRARIES}
@@ -615,20 +613,14 @@ endif()
# GNU Compiler # GNU Compiler
if(CMAKE_COMPILER_IS_GNUCC) if(CMAKE_COMPILER_IS_GNUCC)
# ffp-contract=off: set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
# Automatically turned on when building with "-march=native". This is
# explicitly turned off here as it will make floating point math give a bit
# different results. This will lead to automated test failures. So disable
# this until we support it. Seems to default to off in clang and the intel
# compiler.
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing -ffp-contract=off")
# `maybe-uninitialized` is unreliable in release builds, but fine in debug builds. # `maybe-uninitialized` is unreliable in release builds, but fine in debug builds.
set(GCC_EXTRA_FLAGS_RELEASE "-Wno-maybe-uninitialized") set(GCC_EXTRA_FLAGS_RELEASE "-Wno-maybe-uninitialized")
set(CMAKE_C_FLAGS_RELEASE "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_C_FLAGS_RELEASE}") set(CMAKE_C_FLAGS_RELEASE "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_C_FLAGS_RELEASE}")
set(CMAKE_C_FLAGS_RELWITHDEBINFO "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_C_FLAGS_RELWITHDEBINFO}") set(CMAKE_C_FLAGS_RELWITHDEBINFO "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_C_FLAGS_RELWITHDEBINFO}")
set(CMAKE_CXX_FLAGS_RELEASE "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_RELEASE}") set(CMAKE_CXX_FLAGS_RELEASE "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_RELEASE}")
string(PREPEND CMAKE_CXX_FLAGS_RELWITHDEBINFO "${GCC_EXTRA_FLAGS_RELEASE} ") set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${GCC_EXTRA_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
unset(GCC_EXTRA_FLAGS_RELEASE) unset(GCC_EXTRA_FLAGS_RELEASE)
if(WITH_LINKER_GOLD) if(WITH_LINKER_GOLD)
@@ -636,8 +628,8 @@ if(CMAKE_COMPILER_IS_GNUCC)
COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION) ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if("${LD_VERSION}" MATCHES "GNU gold") if("${LD_VERSION}" MATCHES "GNU gold")
string(APPEND CMAKE_C_FLAGS " -fuse-ld=gold") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold")
string(APPEND CMAKE_CXX_FLAGS " -fuse-ld=gold") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fuse-ld=gold")
else() else()
message(STATUS "GNU gold linker isn't available, using the default system linker.") message(STATUS "GNU gold linker isn't available, using the default system linker.")
endif() endif()
@@ -649,8 +641,8 @@ if(CMAKE_COMPILER_IS_GNUCC)
COMMAND ${CMAKE_C_COMPILER} -fuse-ld=lld -Wl,--version COMMAND ${CMAKE_C_COMPILER} -fuse-ld=lld -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION) ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if("${LD_VERSION}" MATCHES "LLD") if("${LD_VERSION}" MATCHES "LLD")
string(APPEND CMAKE_C_FLAGS " -fuse-ld=lld") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=lld")
string(APPEND CMAKE_CXX_FLAGS " -fuse-ld=lld") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fuse-ld=lld")
else() else()
message(STATUS "LLD linker isn't available, using the default system linker.") message(STATUS "LLD linker isn't available, using the default system linker.")
endif() endif()
@@ -675,12 +667,12 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Intel")
endif() endif()
mark_as_advanced(XILD) mark_as_advanced(XILD)
string(APPEND CMAKE_C_FLAGS " -fp-model precise -prec_div -parallel") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fp-model precise -prec_div -parallel")
string(APPEND CMAKE_CXX_FLAGS " -fp-model precise -prec_div -parallel") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fp-model precise -prec_div -parallel")
# string(APPEND PLATFORM_CFLAGS " -diag-enable sc3") # set(PLATFORM_CFLAGS "${PLATFORM_CFLAGS} -diag-enable sc3")
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing") set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
string(APPEND PLATFORM_LINKFLAGS " -static-intel") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -static-intel")
endif() endif()
# Avoid conflicts with Mesa llvmpipe, Luxrender, and other plug-ins that may # Avoid conflicts with Mesa llvmpipe, Luxrender, and other plug-ins that may
@@ -693,17 +685,5 @@ set(PLATFORM_LINKFLAGS
# browsers can't properly detect blender as an executable then. Still enabled # browsers can't properly detect blender as an executable then. Still enabled
# for non-portable installs as typically used by Linux distributions. # for non-portable installs as typically used by Linux distributions.
if(WITH_INSTALL_PORTABLE) if(WITH_INSTALL_PORTABLE)
string(APPEND CMAKE_EXE_LINKER_FLAGS " -no-pie") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -no-pie")
endif()
if(WITH_COMPILER_CCACHE)
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
# Makefiles and ninja
set(CMAKE_C_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}" CACHE STRING "" FORCE)
else()
message(WARNING "Ccache NOT found, disabling WITH_COMPILER_CCACHE")
set(WITH_COMPILER_CCACHE OFF)
endif()
endif() endif()


@@ -49,7 +49,7 @@ if(CMAKE_C_COMPILER_ID MATCHES "Clang")
if(NOT EXISTS "${CLANG_OPENMP_DLL}") if(NOT EXISTS "${CLANG_OPENMP_DLL}")
message(FATAL_ERROR "Clang OpenMP library (${CLANG_OPENMP_DLL}) not found.") message(FATAL_ERROR "Clang OpenMP library (${CLANG_OPENMP_DLL}) not found.")
endif() endif()
string(APPEND CMAKE_EXE_LINKER_FLAGS " \"${CLANG_OPENMP_LIB}\"") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} \"${CLANG_OPENMP_LIB}\"")
endif() endif()
if(WITH_WINDOWS_STRIPPED_PDB) if(WITH_WINDOWS_STRIPPED_PDB)
message(WARNING "stripped pdb not supported with clang, disabling..") message(WARNING "stripped pdb not supported with clang, disabling..")
@@ -112,9 +112,9 @@ unset(_min_ver)
# needed for some MSVC installations # needed for some MSVC installations
# 4099 : PDB 'filename' was not found with 'object/library' # 4099 : PDB 'filename' was not found with 'object/library'
string(APPEND CMAKE_EXE_LINKER_FLAGS " /SAFESEH:NO /ignore:4099") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /SAFESEH:NO /ignore:4099")
string(APPEND CMAKE_SHARED_LINKER_FLAGS " /SAFESEH:NO /ignore:4099") set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /SAFESEH:NO /ignore:4099")
string(APPEND CMAKE_MODULE_LINKER_FLAGS " /SAFESEH:NO /ignore:4099") set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /SAFESEH:NO /ignore:4099")
list(APPEND PLATFORM_LINKLIBS list(APPEND PLATFORM_LINKLIBS
ws2_32 vfw32 winmm kernel32 user32 gdi32 comdlg32 Comctl32 version ws2_32 vfw32 winmm kernel32 user32 gdi32 comdlg32 Comctl32 version
@@ -154,18 +154,18 @@ if(WITH_WINDOWS_PDB)
endif() endif()
if(MSVC_CLANG) # Clangs version of cl doesn't support all flags if(MSVC_CLANG) # Clangs version of cl doesn't support all flags
string(APPEND CMAKE_CXX_FLAGS " ${CXX_WARN_FLAGS} /nologo /J /Gd /EHsc -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference ") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CXX_WARN_FLAGS} /nologo /J /Gd /EHsc -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference ")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd -Wno-unused-command-line-argument -Wno-microsoft-enum-forward-reference")
else() else()
string(APPEND CMAKE_CXX_FLAGS " /nologo /J /Gd /MP /EHsc /bigobj") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /nologo /J /Gd /MP /EHsc /bigobj")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd /MP /bigobj") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd /MP /bigobj")
endif() endif()
# C++ standards conformace (/permissive-) is available on msvc 15.5 (1912) and up # C++ standards conformace (/permissive-) is available on msvc 15.5 (1912) and up
if(MSVC_VERSION GREATER 1911 AND NOT MSVC_CLANG) if(MSVC_VERSION GREATER 1911 AND NOT MSVC_CLANG)
string(APPEND CMAKE_CXX_FLAGS " /permissive-") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /permissive-")
# Two-phase name lookup does not place nicely with OpenMP yet, so disable for now # Two-phase name lookup does not place nicely with OpenMP yet, so disable for now
string(APPEND CMAKE_CXX_FLAGS " /Zc:twoPhase-") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Zc:twoPhase-")
endif() endif()
if(WITH_WINDOWS_SCCACHE AND CMAKE_VS_MSBUILD_COMMAND) if(WITH_WINDOWS_SCCACHE AND CMAKE_VS_MSBUILD_COMMAND)
@@ -183,33 +183,33 @@ else()
set(SYMBOL_FORMAT /ZI) set(SYMBOL_FORMAT /ZI)
endif() endif()
string(APPEND CMAKE_CXX_FLAGS_DEBUG " /MDd ${SYMBOL_FORMAT}") set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MDd ${SYMBOL_FORMAT}")
string(APPEND CMAKE_C_FLAGS_DEBUG " /MDd ${SYMBOL_FORMAT}") set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /MDd ${SYMBOL_FORMAT}")
string(APPEND CMAKE_CXX_FLAGS_RELEASE " /MD ${PDB_INFO_OVERRIDE_FLAGS}") set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MD ${PDB_INFO_OVERRIDE_FLAGS}")
string(APPEND CMAKE_C_FLAGS_RELEASE " /MD ${PDB_INFO_OVERRIDE_FLAGS}") set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /MD ${PDB_INFO_OVERRIDE_FLAGS}")
string(APPEND CMAKE_CXX_FLAGS_MINSIZEREL " /MD ${PDB_INFO_OVERRIDE_FLAGS}") set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /MD ${PDB_INFO_OVERRIDE_FLAGS}")
string(APPEND CMAKE_C_FLAGS_MINSIZEREL " /MD ${PDB_INFO_OVERRIDE_FLAGS}") set(CMAKE_C_FLAGS_MINSIZEREL "${CMAKE_C_FLAGS_MINSIZEREL} /MD ${PDB_INFO_OVERRIDE_FLAGS}")
string(APPEND CMAKE_CXX_FLAGS_RELWITHDEBINFO " /MD ${SYMBOL_FORMAT}") set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /MD ${SYMBOL_FORMAT}")
string(APPEND CMAKE_C_FLAGS_RELWITHDEBINFO " /MD ${SYMBOL_FORMAT}") set(CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO} /MD ${SYMBOL_FORMAT}")
unset(SYMBOL_FORMAT) unset(SYMBOL_FORMAT)
# JMC is available on msvc 15.8 (1915) and up # JMC is available on msvc 15.8 (1915) and up
if(MSVC_VERSION GREATER 1914 AND NOT MSVC_CLANG) if(MSVC_VERSION GREATER 1914 AND NOT MSVC_CLANG)
string(APPEND CMAKE_CXX_FLAGS_DEBUG " /JMC") set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /JMC")
endif() endif()
string(APPEND PLATFORM_LINKFLAGS " /SUBSYSTEM:CONSOLE /STACK:2097152") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /SUBSYSTEM:CONSOLE /STACK:2097152")
set(PLATFORM_LINKFLAGS_RELEASE "/NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:libcmtd.lib /NODEFAULTLIB:msvcrtd.lib") set(PLATFORM_LINKFLAGS_RELEASE "/NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:libcmtd.lib /NODEFAULTLIB:msvcrtd.lib")
string(APPEND PLATFORM_LINKFLAGS_DEBUG " /IGNORE:4099 /NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:msvcrt.lib /NODEFAULTLIB:libcmtd.lib") set(PLATFORM_LINKFLAGS_DEBUG "${PLATFORM_LINKFLAGS_DEBUG} /IGNORE:4099 /NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:msvcrt.lib /NODEFAULTLIB:libcmtd.lib")
# Ignore meaningless for us linker warnings. # Ignore meaningless for us linker warnings.
string(APPEND PLATFORM_LINKFLAGS " /ignore:4049 /ignore:4217 /ignore:4221") set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /ignore:4049 /ignore:4217 /ignore:4221")
set(PLATFORM_LINKFLAGS_RELEASE "${PLATFORM_LINKFLAGS} ${PDB_INFO_OVERRIDE_LINKER_FLAGS}") set(PLATFORM_LINKFLAGS_RELEASE "${PLATFORM_LINKFLAGS} ${PDB_INFO_OVERRIDE_LINKER_FLAGS}")
string(APPEND CMAKE_STATIC_LINKER_FLAGS " /ignore:4221") set(CMAKE_STATIC_LINKER_FLAGS "${CMAKE_STATIC_LINKER_FLAGS} /ignore:4221")
if(CMAKE_CL_64) if(CMAKE_CL_64)
string(PREPEND PLATFORM_LINKFLAGS "/MACHINE:X64 ") set(PLATFORM_LINKFLAGS "/MACHINE:X64 ${PLATFORM_LINKFLAGS}")
else() else()
string(PREPEND PLATFORM_LINKFLAGS "/MACHINE:IX86 /LARGEADDRESSAWARE ") set(PLATFORM_LINKFLAGS "/MACHINE:IX86 /LARGEADDRESSAWARE ${PLATFORM_LINKFLAGS}")
endif() endif()
if(NOT DEFINED LIBDIR) if(NOT DEFINED LIBDIR)
@@ -239,24 +239,9 @@ if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "\n\nWindows requires pre-compiled libs at: '${LIBDIR}'. Please run `make update` in the blender source folder to obtain them.") message(FATAL_ERROR "\n\nWindows requires pre-compiled libs at: '${LIBDIR}'. Please run `make update` in the blender source folder to obtain them.")
endif() endif()
if(CMAKE_GENERATOR MATCHES "^Visual Studio.+" AND # Only supported in the VS IDE
MSVC_VERSION GREATER_EQUAL 1924 AND # Supported for 16.4+
WITH_CLANG_TIDY # And Clang Tidy needs to be on
)
set(CMAKE_VS_GLOBALS
"RunCodeAnalysis=false"
"EnableMicrosoftCodeAnalysis=false"
"EnableClangTidyCodeAnalysis=true"
)
set(VS_CLANG_TIDY On)
endif()
# Mark libdir as system headers with a lower warn level, to resolve some warnings # Mark libdir as system headers with a lower warn level, to resolve some warnings
# that we have very little control over # that we have very little control over
if(MSVC_VERSION GREATER_EQUAL 1914 AND # Available with 15.7+ if(MSVC_VERSION GREATER_EQUAL 1914 AND NOT MSVC_CLANG AND NOT WITH_WINDOWS_SCCACHE)
NOT MSVC_CLANG AND # But not for clang
NOT WITH_WINDOWS_SCCACHE AND # And not when sccache is enabled
NOT VS_CLANG_TIDY) # Clang-tidy does not like these options
add_compile_options(/experimental:external /external:templates- /external:I "${LIBDIR}" /external:W0) add_compile_options(/experimental:external /external:templates- /external:I "${LIBDIR}" /external:W0)
endif() endif()
@@ -268,11 +253,6 @@ foreach(child ${children})
endif() endif()
endforeach() endforeach()
if(WITH_PUGIXML)
set(PUGIXML_LIBRARIES optimized ${LIBDIR}/pugixml/lib/pugixml.lib debug ${LIBDIR}/pugixml/lib/pugixml_d.lib)
set(PUGIXML_INCLUDE_DIR ${LIBDIR}/pugixml/include)
endif()
set(ZLIB_INCLUDE_DIRS ${LIBDIR}/zlib/include) set(ZLIB_INCLUDE_DIRS ${LIBDIR}/zlib/include)
set(ZLIB_LIBRARIES ${LIBDIR}/zlib/lib/libz_st.lib) set(ZLIB_LIBRARIES ${LIBDIR}/zlib/lib/libz_st.lib)
set(ZLIB_INCLUDE_DIR ${LIBDIR}/zlib/include) set(ZLIB_INCLUDE_DIR ${LIBDIR}/zlib/include)
@@ -671,10 +651,11 @@ if(WITH_CYCLES_OSL)
optimized ${OSL_LIB_COMP} optimized ${OSL_LIB_COMP}
optimized ${OSL_LIB_EXEC} optimized ${OSL_LIB_EXEC}
optimized ${OSL_LIB_QUERY} optimized ${OSL_LIB_QUERY}
optimized ${CYCLES_OSL}/lib/pugixml.lib
debug ${OSL_LIB_EXEC_DEBUG} debug ${OSL_LIB_EXEC_DEBUG}
debug ${OSL_LIB_COMP_DEBUG} debug ${OSL_LIB_COMP_DEBUG}
debug ${OSL_LIB_QUERY_DEBUG} debug ${OSL_LIB_QUERY_DEBUG}
${PUGIXML_LIBRARIES} debug ${CYCLES_OSL}/lib/pugixml_d.lib
) )
find_path(OSL_INCLUDE_DIR OSL/oslclosure.h PATHS ${CYCLES_OSL}/include) find_path(OSL_INCLUDE_DIR OSL/oslclosure.h PATHS ${CYCLES_OSL}/include)
find_program(OSL_COMPILER NAMES oslc PATHS ${CYCLES_OSL}/bin) find_program(OSL_COMPILER NAMES oslc PATHS ${CYCLES_OSL}/bin)
@@ -800,15 +781,3 @@ if(WITH_POTRACE)
set(POTRACE_LIBRARIES ${LIBDIR}/potrace/lib/potrace.lib) set(POTRACE_LIBRARIES ${LIBDIR}/potrace/lib/potrace.lib)
set(POTRACE_FOUND On) set(POTRACE_FOUND On)
endif() endif()
if(WITH_HARU)
if(EXISTS ${LIBDIR}/haru)
set(HARU_FOUND On)
set(HARU_ROOT_DIR ${LIBDIR}/haru)
set(HARU_INCLUDE_DIRS ${HARU_ROOT_DIR}/include)
set(HARU_LIBRARIES ${HARU_ROOT_DIR}/lib/libhpdfs.lib)
else()
message(WARNING "Haru was not found, disabling WITH_HARU")
set(WITH_HARU OFF)
endif()
endif()
@@ -31,7 +31,7 @@ if(WITH_WINDOWS_BUNDLE_CRT)
foreach(lib ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS}) foreach(lib ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS})
get_filename_component(filename ${lib} NAME) get_filename_component(filename ${lib} NAME)
file(SHA1 "${lib}" sha1_file) file(SHA1 "${lib}" sha1_file)
string(APPEND CRTLIBS " <file name=\"${filename}\" hash=\"${sha1_file}\" hashalg=\"SHA1\" />\n") set(CRTLIBS "${CRTLIBS} <file name=\"${filename}\" hash=\"${sha1_file}\" hashalg=\"SHA1\" />\n")
endforeach() endforeach()
configure_file(${CMAKE_SOURCE_DIR}/release/windows/manifest/blender.crt.manifest.in ${CMAKE_CURRENT_BINARY_DIR}/blender.crt.manifest @ONLY) configure_file(${CMAKE_SOURCE_DIR}/release/windows/manifest/blender.crt.manifest.in ${CMAKE_CURRENT_BINARY_DIR}/blender.crt.manifest @ONLY)
file(TOUCH ${manifest_trigger_file}) file(TOUCH ${manifest_trigger_file})
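The loop in this hunk hashes each bundled CRT runtime library and appends one manifest <file> element per DLL; only the string-building call changes between the two sides. A minimal Python sketch of the same idea, useful for checking a generated blender.crt.manifest by hand (the function name and the example path are hypothetical):

    import hashlib
    import os

    def crt_manifest_entries(dll_paths):
        # Hash each runtime DLL and emit one manifest <file> element per library,
        # mirroring what the CMake foreach() loop above assembles into CRTLIBS.
        entries = []
        for lib in dll_paths:
            with open(lib, "rb") as f:
                sha1 = hashlib.sha1(f.read()).hexdigest()
            entries.append('  <file name="%s" hash="%s" hashalg="SHA1" />'
                           % (os.path.basename(lib), sha1))
        return "\n".join(entries)

    # print(crt_manifest_entries([r"C:\path\to\msvcp140.dll"]))  # path is a placeholder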
@@ -8,7 +8,6 @@
import argparse import argparse
import os import os
import platform
import shutil import shutil
import sys import sys
@@ -50,12 +49,7 @@ def svn_update(args, release_version):
# Checkout precompiled libraries # Checkout precompiled libraries
if sys.platform == 'darwin': if sys.platform == 'darwin':
if platform.machine() == 'x86_64': lib_platform = "darwin"
lib_platform = "darwin"
elif platform.machine() == 'arm64':
lib_platform = "darwin_arm64"
else:
lib_platform = None
elif sys.platform == 'win32': elif sys.platform == 'win32':
# Windows checkout is usually handled by bat scripts since python3 to run # Windows checkout is usually handled by bat scripts since python3 to run
# this script is bundled as part of the precompiled libraries. However it # this script is bundled as part of the precompiled libraries. However it
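The new branch picks the precompiled-library directory from the host architecture instead of hard-coding "darwin". A stand-alone sketch of that selection logic (not the actual make_update.py function; names are illustrative):

    import platform
    import sys

    def macos_lib_platform():
        # Mirrors the svn_update() change above: choose the library folder
        # by CPU architecture, or skip the checkout on unknown machines.
        if sys.platform != 'darwin':
            return None
        machine = platform.machine()
        if machine == 'x86_64':
            return "darwin"
        if machine == 'arm64':
            return "darwin_arm64"
        return None

    print(macos_lib_platform())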
@@ -92,5 +92,5 @@ echo if "%%VSCMD_VER%%" == "" ^( >> %BUILD_DIR%\rebuild.cmd
echo call "%VCVARS%" %BUILD_ARCH% >> %BUILD_DIR%\rebuild.cmd echo call "%VCVARS%" %BUILD_ARCH% >> %BUILD_DIR%\rebuild.cmd
echo ^) >> %BUILD_DIR%\rebuild.cmd echo ^) >> %BUILD_DIR%\rebuild.cmd
echo echo %%TIME%% ^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd echo echo %%TIME%% ^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd
echo ninja install %%* >> %BUILD_DIR%\rebuild.cmd echo ninja install >> %BUILD_DIR%\rebuild.cmd
echo echo %%TIME%% ^>^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd echo echo %%TIME%% ^>^> buildtime.txt >> %BUILD_DIR%\rebuild.cmd
@@ -38,7 +38,7 @@ PROJECT_NAME = Blender
# could be handy for archiving the generated documentation or if some version # could be handy for archiving the generated documentation or if some version
# control system is used. # control system is used.
PROJECT_NUMBER = "V2.93" PROJECT_NUMBER = "V2.92"
# Using the PROJECT_BRIEF tag one can provide an optional one line description # Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a # for a project that appears at the top of each page and should give viewer a
@@ -453,7 +453,7 @@ TYPEDEF_HIDES_STRUCT = NO
# the optimal cache size from a speed point of view. # the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0. # Minimum value: 0, maximum value: 9, default value: 0.
LOOKUP_CACHE_SIZE = 3 LOOKUP_CACHE_SIZE = 0
#--------------------------------------------------------------------------- #---------------------------------------------------------------------------
# Build related configuration options # Build related configuration options
@@ -1321,7 +1321,7 @@ DOCSET_PUBLISHER_NAME = Publisher
# The default value is: NO. # The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES. # This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_HTMLHELP = NO GENERATE_HTMLHELP = YES
# The CHM_FILE tag can be used to specify the file name of the resulting .chm # The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be # file. You can add a path in front of the file if the result should not be
@@ -121,10 +121,6 @@
* \ingroup editors * \ingroup editors
*/ */
/** \defgroup edasset asset
* \ingroup editors
*/
/** \defgroup edcurve curve /** \defgroup edcurve curve
* \ingroup editors * \ingroup editors
*/ */
@@ -52,11 +52,10 @@ outfilename = sys.argv[2]
cmd = [blender_bin, "--help"] cmd = [blender_bin, "--help"]
print(" executing:", " ".join(cmd)) print(" executing:", " ".join(cmd))
ASAN_OPTIONS = "exitcode=0:" + os.environ.get("ASAN_OPTIONS", "")
blender_help = subprocess.run( blender_help = subprocess.run(
cmd, env={"ASAN_OPTIONS": ASAN_OPTIONS}, check=True, stdout=subprocess.PIPE).stdout.decode(encoding="utf-8") cmd, env={"ASAN_OPTIONS": "exitcode=0"}, check=True, stdout=subprocess.PIPE).stdout.decode(encoding="utf-8")
blender_version = subprocess.run( blender_version = subprocess.run(
[blender_bin, "--version"], env={"ASAN_OPTIONS": ASAN_OPTIONS}, check=True, stdout=subprocess.PIPE).stdout.decode(encoding="utf-8").strip() [blender_bin, "--version"], env={"ASAN_OPTIONS": "exitcode=0"}, check=True, stdout=subprocess.PIPE).stdout.decode(encoding="utf-8").strip()
blender_version, blender_date = (blender_version.split("build") + [None, None])[0:2] blender_version, blender_date = (blender_version.split("build") + [None, None])[0:2]
blender_version = blender_version.rstrip().partition(" ")[2] # remove 'Blender' prefix. blender_version = blender_version.rstrip().partition(" ")[2] # remove 'Blender' prefix.
if blender_date is None: if blender_date is None:
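The updated script prepends exitcode=0: to whatever ASAN_OPTIONS the caller already exported rather than replacing the variable outright. A small sketch of that merge; unlike the script above, this sketch also keeps the rest of the environment, just to show one way of passing it on:

    import os

    # Preserve any caller-provided sanitizer options and force a zero exit code,
    # as in the hunk above; AddressSanitizer options are colon-separated.
    asan_options = "exitcode=0:" + os.environ.get("ASAN_OPTIONS", "")
    env = dict(os.environ, ASAN_OPTIONS=asan_options)
    # subprocess.run(cmd, env=env, check=True) would then see both settings.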
@@ -1,30 +0,0 @@
"""
.. note::
Properties defined at run-time store the values of the properties as custom-properties.
This method checks if the underlying data exists, causing the property to be considered *set*.
A common pattern for operators is to calculate a value for the properties
that have not had their values explicitly set by the caller
(where the caller could be a key-binding, menu-items or Python script for example).
In the case of executing operators multiple times, values are re-used from the previous execution.
For example: subdividing a mesh with a smooth value of 1.0 will keep using
that value on subsequent calls to subdivision, unless the operator is called with
that property set to a different value.
This behavior can be disabled using the ``SKIP_SAVE`` option when the property is declared (see: :mod:`bpy.props`).
The ``ghost`` argument allows detecting how a value from a previous execution is handled.
- When true: The property is considered unset even if the value from a previous call is used.
- When false: The existence of any values causes ``is_property_set`` to return true.
While this argument should typically be omitted, there are times when
it's important to know if a value is anything besides the default.
For example, the previous value may have been scaled by the scene's unit scale.
In this case scaling the value multiple times would cause problems, so the ``ghost`` argument should be false.
"""
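The note above describes the ``ghost`` argument of ``is_property_set``. A hedged usage sketch, assuming a Blender build where the argument exists as documented; the operator, its id and the property are made up for illustration:

    import bpy

    class MESH_OT_example_scale(bpy.types.Operator):
        """Hypothetical operator showing the pattern described above."""
        bl_idname = "mesh.example_scale"  # made-up identifier
        bl_label = "Example Scale"

        factor: bpy.props.FloatProperty(default=1.0)

        def invoke(self, context, event):
            # ghost=False: a value merely re-used from a previous execution
            # still counts as "not set", so the default gets recomputed.
            if not self.properties.is_property_set("factor", ghost=False):
                self.factor = context.scene.unit_settings.scale_length
            return self.execute(context)

        def execute(self, context):
            self.report({'INFO'}, "factor=%.3f" % self.factor)
            return {'FINISHED'}

    bpy.utils.register_class(MESH_OT_example_scale)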
@@ -163,13 +163,13 @@ Now in the button's context menu select *Copy Data Path*, then paste the result
.. code-block:: python .. code-block:: python
bpy.context.active_object.modifiers["Subdivision"].levels bpy.context.active_object.modifiers["Subsurf"].levels
Press :kbd:`Return` and you'll get the current value of 1. Now try changing the value to 2: Press :kbd:`Return` and you'll get the current value of 1. Now try changing the value to 2:
.. code-block:: python .. code-block:: python
bpy.context.active_object.modifiers["Subdivision"].levels = 2 bpy.context.active_object.modifiers["Subsurf"].levels = 2
You can see the value update in the Subdivision Surface modifier's UI as well as the cube. You can see the value update in the Subdivision Surface modifier's UI as well as the cube.
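Since the copied data path is just a string, it can also be resolved programmatically; a small, hedged example using bpy_struct.path_resolve (the modifier name follows the "Subdivision" spelling used on one side of this diff):

    import bpy

    obj = bpy.context.active_object
    # The string is exactly what "Copy Data Path" places on the clipboard,
    # relative to the object that owns the modifier.
    levels = obj.path_resolve('modifiers["Subdivision"].levels')
    print(levels)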
@@ -185,31 +185,43 @@ For example, if you want to access the texture of a brush via Python to adjust i
#. Start in the default scene and enable Sculpt Mode from the 3D Viewport header. #. Start in the default scene and enable Sculpt Mode from the 3D Viewport header.
#. From the Sidebar expand the Brush Settings panel's *Texture* subpanel and add a new texture. #. From the Sidebar expand the Brush Settings panel's *Texture* subpanel and add a new texture.
*Notice the texture data-block menu itself doesn't have very useful links (you can check the tooltips).* *Notice the texture data-block menu itself doesn't have very useful links (you can check the tooltips).*
#. The contrast setting isn't exposed in the Sidebar, so view the texture in the #. The contrast setting isn't exposed in the Sidebar, so view the texture in the properties editor:
:ref:`Properties Editor <blender_manual:bpy.types.Texture.contrast`
- In the properties editor select the Texture tab.
- Select brush texture.
- Expand the *Colors* panel to locate the *Contrast* number field.
#. Open the context menu of the contrast field and select *Online Python Reference*. #. Open the context menu of the contrast field and select *Online Python Reference*.
This takes you to ``bpy.types.Texture.contrast``. Now you can see that ``contrast`` is a property of texture. This takes you to ``bpy.types.Texture.contrast``. Now you can see that ``contrast`` is a property of texture.
#. To find out how to access the texture from the brush check on the references at the bottom of the page. #. To find out how to access the texture from the brush check on the references at the bottom of the page.
Sometimes there are many references, and it may take some guesswork to find the right one, Sometimes there are many references, and it may take some guesswork to find the right one,
but in this case it's ``tool_settings.sculpt.brush.texture``. but in this case it's ``Brush.texture``.
#. Now you know that the texture can be accessed from ``bpy.data.brushes["BrushName"].texture`` #. Now you know that the texture can be accessed from ``bpy.data.brushes["BrushName"].texture``
but normally you *won't* want to access the brush by name, instead you want to access the active brush. but normally you *won't* want to access the brush by name, instead you want to access the active brush.
So the next step is to check on where brushes are accessed from via the references. So the next step is to check on where brushes are accessed from via the references.
In this case there it is simply ``bpy.context.brush``.
Now you can use the Python console to form the nested properties needed to access brush textures contrast: Now you can use the Python console to form the nested properties needed to access brush textures contrast:
:menuselection:`Context --> Tool Settings --> Sculpt --> Brush --> Texture --> Contrast`. *Context -> Brush -> Texture -> Contrast*.
Since the attribute for each is given along the way you can compose the data path in the Python console: Since the attribute for each is given along the way you can compose the data path in the Python console:
.. code-block:: python .. code-block:: python
bpy.context.tool_settings.sculpt.brush.texture.contrast bpy.context.brush.texture.contrast
There can be multiple ways to access the same data, which you choose often depends on the task.
An alternate path to access the same setting is:
.. code-block:: python
bpy.context.sculpt.brush.texture.contrast
Or access the brush directly: Or access the brush directly:
.. code-block:: python .. code-block:: python
bpy.data.textures["Texture"].contrast bpy.data.brushes["BrushName"].texture.contrast
If you are writing a user tool normally you want to use the :mod:`bpy.context` since the user normally expects If you are writing a user tool normally you want to use the :mod:`bpy.context` since the user normally expects
@@ -35,13 +35,12 @@ but not to fully cover each topic.
A quick list of helpful things to know before starting: A quick list of helpful things to know before starting:
- Enable :ref:`Developer Extra <blender_manual:prefs-interface-dev-extras` - Blender uses Python 3.x; some online documentation still assumes version 2.x.
and :ref:`Python Tooltips <blender_manual:prefs-interface-tooltips-python>`. - The interactive console is great for testing one-liners.
- The :ref:`Python Console <blender_manual:bpy.types.SpaceConsole>` It also has autocompletion so you can inspect the API quickly.
is great for testing one-liners; it has autocompletion so you can inspect the API quickly. - Button tooltips show Python attributes and operator names.
- Button tooltips show Python attributes and operator names (when enabled see above). - The context menu of buttons directly links to this API documentation.
- The context menu of buttons directly links to this API documentation (when enabled see above). - More operator examples can be found in the text editor's template menu.
- Many python examples can be found in the text editor's template menu.
- To examine further scripts distributed with Blender, see: - To examine further scripts distributed with Blender, see:
- ``scripts/startup/bl_ui`` for the user interface. - ``scripts/startup/bl_ui`` for the user interface.
@@ -238,7 +237,7 @@ Examples:
{'FINISHED'} {'FINISHED'}
>>> bpy.ops.mesh.hide(unselected=False) >>> bpy.ops.mesh.hide(unselected=False)
{'FINISHED'} {'FINISHED'}
>>> bpy.ops.object.transform_apply() >>> bpy.ops.object.scale_apply()
{'FINISHED'} {'FINISHED'}
.. tip:: .. tip::
@@ -24,9 +24,10 @@ The three main use cases for the terminal are:
- If the script runs for too long or you accidentally enter an infinite loop, - If the script runs for too long or you accidentally enter an infinite loop,
:kbd:`Ctrl-C` in the terminal (:kbd:`Ctrl-Break` on Windows) will quit the script early. :kbd:`Ctrl-C` in the terminal (:kbd:`Ctrl-Break` on Windows) will quit the script early.
.. seealso:: .. note::
:ref:`blender_manual:command_line-launch-index`. For Linux and macOS users this means starting the terminal first, then running Blender from within it.
On Windows the terminal can be enabled from the Help menu.
Interface Tricks Interface Tricks
@@ -551,7 +551,7 @@ def example_extract_docstring(filepath):
file.close() file.close()
return "", 0 return "", 0
for line in file: for line in file.readlines():
line_no += 1 line_no += 1
if line.startswith('"""'): if line.startswith('"""'):
break break
@@ -559,13 +559,6 @@ def example_extract_docstring(filepath):
text.append(line.rstrip()) text.append(line.rstrip())
line_no += 1 line_no += 1
# Skip over blank lines so the Python code doesn't have blank lines at the top.
for line in file:
if line.strip():
break
line_no += 1
file.close() file.close()
return "\n".join(text), line_no return "\n".join(text), line_no
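For readers following this change: the function reads up to the closing triple quote and, in the newer version, also skips the blank lines that follow so the example code starts immediately. A self-contained sketch of that logic (not the actual sphinx_doc_gen.py function):

    def extract_docstring(filepath):
        # Return (docstring_text, line_number_where_the_code_starts).
        text = []
        line_no = 1
        with open(filepath, encoding="utf-8") as file:
            if not file.readline().startswith('"""'):
                return "", 0
            for line in file:              # collect until the closing '"""'
                line_no += 1
                if line.startswith('"""'):
                    break
                text.append(line.rstrip())
            for line in file:              # skip blank lines after the docstring
                if line.strip():
                    break
                line_no += 1
        return "\n".join(text), line_no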
@@ -2195,20 +2188,9 @@ def setup_blender():
# Remove handlers since the functions get included # Remove handlers since the functions get included
# in the doc-string and don't have meaningful names. # in the doc-string and don't have meaningful names.
lists_to_restore = [] for ls in bpy.app.handlers:
for var in bpy.app.handlers: if isinstance(ls, list):
if isinstance(var, list): ls.clear()
lists_to_restore.append((var[:], var))
var.clear()
return {
"lists_to_restore": lists_to_restore,
}
def teardown_blender(setup_data):
for var_src, var_dst in setup_data["lists_to_restore"]:
var_dst[:] = var_src
def main(): def main():
@@ -2217,7 +2199,7 @@ def main():
setup_monkey_patch() setup_monkey_patch()
# Perform changes to Blender it's self. # Perform changes to Blender it's self.
setup_data = setup_blender() setup_blender()
# eventually, create the dirs # eventually, create the dirs
for dir_path in [ARGS.output_dir, SPHINX_IN]: for dir_path in [ARGS.output_dir, SPHINX_IN]:
@@ -2323,8 +2305,6 @@ def main():
shutil.copy(os.path.join(SPHINX_OUT_PDF, "contents.pdf"), shutil.copy(os.path.join(SPHINX_OUT_PDF, "contents.pdf"),
os.path.join(REFERENCE_PATH, BLENDER_PDF_FILENAME)) os.path.join(REFERENCE_PATH, BLENDER_PDF_FILENAME))
teardown_blender(setup_data)
sys.exit() sys.exit()
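The newer version snapshots each handler list before clearing it and restores the snapshots at the end of main(), instead of clearing the lists for good. The save/clear/restore pattern on its own, with placeholder data:

    def clear_lists_with_backup(candidates):
        # Keep a copy of every list alongside the live object so it can be
        # restored in place later, as setup_blender()/teardown_blender() do above.
        backups = []
        for item in candidates:
            if isinstance(item, list):
                backups.append((item[:], item))
                item.clear()
        return backups

    def restore_lists(backups):
        for saved, live in backups:
            live[:] = saved  # restore in place; existing references stay valid

    a, b = [1, 2], ["x"]
    backups = clear_lists_with_backup([a, b, "not a list"])
    assert a == [] and b == []
    restore_lists(backups)
    assert a == [1, 2] and b == ["x"]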
@@ -55,7 +55,7 @@ if $DO_EXE_BLENDER ; then
# Don't delete existing docs, now partial updates are used for quick builds. # Don't delete existing docs, now partial updates are used for quick builds.
# #
# Disable ASAN error halt since it results in nonzero exit code on any minor issue. # Disable ASAN error halt since it results in nonzero exit code on any minor issue.
ASAN_OPTIONS=halt_on_error=0:${ASAN_OPTIONS} \ ASAN_OPTIONS=halt_on_error=0 \
$BLENDER_BIN \ $BLENDER_BIN \
--background \ --background \
-noaudio \ -noaudio \
@@ -1,5 +1,7 @@
/* T76453: Prevent Long enum lists */ /* Prevent Long enum lists */
.field-list li { .field-body {
display: block;
width: 100%;
max-height: 245px; max-height: 245px;
overflow-y: auto !important; overflow-y: auto !important;
} }
extern/README
@@ -1,4 +0,0 @@
When updating a library remember to:
* Update the README.blender with the corresponding version.
* Update the THIRD-PARTY-LICENSE.txt document
@@ -1,5 +0,0 @@
Project: Audaspace
URL: https://audaspace.github.io/
License: Apache 2.0
Upstream version: 1.3 (Last Release)
Local modifications: None
@@ -24,6 +24,6 @@ set(JACK_FOUND ${WITH_JACK})
set(LIBSNDFILE_FOUND ${WITH_CODEC_SNDFILE}) set(LIBSNDFILE_FOUND ${WITH_CODEC_SNDFILE})
set(OPENAL_FOUND ${WITH_OPENAL}) set(OPENAL_FOUND ${WITH_OPENAL})
set(PYTHONLIBS_FOUND TRUE) set(PYTHONLIBS_FOUND TRUE)
set(NUMPY_FOUND ${WITH_PYTHON_NUMPY}) set(NUMPY_FOUND TRUE)
set(NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_INCLUDE_DIRS}) set(NUMPY_INCLUDE_DIRS ${PYTHON_NUMPY_INCLUDE_DIRS})
set(SDL_FOUND ${WITH_SDL}) set(SDL_FOUND ${WITH_SDL})
@@ -72,9 +72,6 @@ protected:
/// The channel mapper reader in between. /// The channel mapper reader in between.
std::shared_ptr<ChannelMapperReader> m_mapper; std::shared_ptr<ChannelMapperReader> m_mapper;
/// Whether the source is being read for the first time.
bool m_first_reading;
/// Whether to keep the source if end of it is reached. /// Whether to keep the source if end of it is reached.
bool m_keep; bool m_keep;
@@ -78,7 +78,7 @@ bool SoftwareDevice::SoftwareHandle::pause(bool keep)
} }
SoftwareDevice::SoftwareHandle::SoftwareHandle(SoftwareDevice* device, std::shared_ptr<IReader> reader, std::shared_ptr<PitchReader> pitch, std::shared_ptr<ResampleReader> resampler, std::shared_ptr<ChannelMapperReader> mapper, bool keep) : SoftwareDevice::SoftwareHandle::SoftwareHandle(SoftwareDevice* device, std::shared_ptr<IReader> reader, std::shared_ptr<PitchReader> pitch, std::shared_ptr<ResampleReader> resampler, std::shared_ptr<ChannelMapperReader> mapper, bool keep) :
m_reader(reader), m_pitch(pitch), m_resampler(resampler), m_mapper(mapper), m_first_reading(true), m_keep(keep), m_user_pitch(1.0f), m_user_volume(1.0f), m_user_pan(0.0f), m_volume(0.0f), m_old_volume(0.0f), m_loopcount(0), m_reader(reader), m_pitch(pitch), m_resampler(resampler), m_mapper(mapper), m_keep(keep), m_user_pitch(1.0f), m_user_volume(1.0f), m_user_pan(0.0f), m_volume(0.0f), m_old_volume(0.0f), m_loopcount(0),
m_relative(true), m_volume_max(1.0f), m_volume_min(0), m_distance_max(std::numeric_limits<float>::max()), m_relative(true), m_volume_max(1.0f), m_volume_min(0), m_distance_max(std::numeric_limits<float>::max()),
m_distance_reference(1.0f), m_attenuation(1.0f), m_cone_angle_outer(M_PI), m_cone_angle_inner(M_PI), m_cone_volume_outer(0), m_distance_reference(1.0f), m_attenuation(1.0f), m_cone_angle_outer(M_PI), m_cone_angle_inner(M_PI), m_cone_volume_outer(0),
m_flags(RENDER_CONE), m_stop(nullptr), m_stop_data(nullptr), m_status(STATUS_PLAYING), m_device(device) m_flags(RENDER_CONE), m_stop(nullptr), m_stop_data(nullptr), m_status(STATUS_PLAYING), m_device(device)
@@ -106,14 +106,6 @@ void SoftwareDevice::SoftwareHandle::update()
if(m_pitch->getSpecs().channels != CHANNELS_MONO) if(m_pitch->getSpecs().channels != CHANNELS_MONO)
{ {
m_volume = m_user_volume; m_volume = m_user_volume;
// we don't know a previous volume if this source has never been read before
if(m_first_reading)
{
m_old_volume = m_volume;
m_first_reading = false;
}
m_pitch->setPitch(m_user_pitch); m_pitch->setPitch(m_user_pitch);
return; return;
} }
@@ -222,13 +214,6 @@ void SoftwareDevice::SoftwareHandle::update()
m_volume *= m_user_volume; m_volume *= m_user_volume;
} }
// we don't know a previous volume if this source has never been read before
if(m_first_reading)
{
m_old_volume = m_volume;
m_first_reading = false;
}
// 3D Cue // 3D Cue
Quaternion orientation; Quaternion orientation;
@@ -769,8 +754,6 @@ void SoftwareDevice::mix(data_t* buffer, int length)
{ {
m_mixer->mix(buf, pos, len, sound->m_volume, sound->m_old_volume); m_mixer->mix(buf, pos, len, sound->m_volume, sound->m_old_volume);
sound->m_old_volume = sound->m_volume;
pos += len; pos += len;
if(sound->m_loopcount > 0) if(sound->m_loopcount > 0)
@@ -22,7 +22,6 @@
#include <mutex> #include <mutex>
#define KEEP_TIME 10 #define KEEP_TIME 10
#define POSITION_EPSILON (1.0 / static_cast<double>(RATE_48000))
AUD_NAMESPACE_BEGIN AUD_NAMESPACE_BEGIN
@@ -65,7 +64,7 @@ bool SequenceHandle::updatePosition(double position)
if(m_handle.get()) if(m_handle.get())
{ {
// we currently have a handle, let's check where we are // we currently have a handle, let's check where we are
if(position - POSITION_EPSILON >= m_entry->m_end) if(position >= m_entry->m_end)
{ {
if(position >= m_entry->m_end + KEEP_TIME) if(position >= m_entry->m_end + KEEP_TIME)
// far end, stopping // far end, stopping
@@ -77,7 +76,7 @@ bool SequenceHandle::updatePosition(double position)
return true; return true;
} }
} }
else if(position + POSITION_EPSILON >= m_entry->m_begin) else if(position >= m_entry->m_begin)
{ {
// inside, resuming // inside, resuming
m_handle->resume(); m_handle->resume();
@@ -99,7 +98,7 @@ bool SequenceHandle::updatePosition(double position)
else else
{ {
// we don't have a handle, let's start if we should be playing // we don't have a handle, let's start if we should be playing
if(position + POSITION_EPSILON >= m_entry->m_begin && position - POSITION_EPSILON <= m_entry->m_end) if(position >= m_entry->m_begin && position <= m_entry->m_end)
{ {
start(); start();
return m_valid; return m_valid;
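The epsilon introduced here is one sample period at 48 kHz, so position comparisons tolerate floating-point values that land a hair before or after an entry's boundaries. A Python illustration of the tolerant range test (constants mirror the C++ defines; the function name is made up):

    RATE_48000 = 48000.0
    POSITION_EPSILON = 1.0 / RATE_48000  # one sample period

    def inside_entry(position, begin, end):
        # Positions within one sample of either boundary count as inside.
        return (position + POSITION_EPSILON >= begin and
                position - POSITION_EPSILON <= end)

    print(inside_entry(9.99999, 10.0, 12.0))   # True: just before 'begin' still plays
    print(inside_entry(12.00001, 10.0, 12.0))  # True: just past 'end' still plays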
@@ -423,7 +423,7 @@ set(LIB
if(CMAKE_COMPILER_IS_GNUCXX) if(CMAKE_COMPILER_IS_GNUCXX)
# needed for gcc 4.6+ # needed for gcc 4.6+
string(APPEND CMAKE_CXX_FLAGS " -fpermissive") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fpermissive")
endif() endif()
if(MSVC) if(MSVC)
@@ -1,5 +0,0 @@
Project: Bullet Continuous Collision Detection and Physics Library
URL: http://bulletphysics.org
License: zlib
Upstream version: 3.07
Local modifications: Fixed inertia
extern/ceres/ChangeLog (diff suppressed because it is too large)
@@ -1,5 +1,4 @@
Project: Ceres Solver Project: Ceres Solver
URL: http://ceres-solver.org/ URL: http://ceres-solver.org/
License: BSD 3-Clause Upstream version 1.11 (aef9c9563b08d5f39eee1576af133a84749d1b48)
Upstream version 2.0.0
Local modifications: None Local modifications: None
@@ -8,8 +8,8 @@ else
fi fi
repo="https://ceres-solver.googlesource.com/ceres-solver" repo="https://ceres-solver.googlesource.com/ceres-solver"
#branch="master" branch="master"
tag="2.0.0" tag=""
tmp=`mktemp -d` tmp=`mktemp -d`
checkout="$tmp/ceres" checkout="$tmp/ceres"
@@ -153,44 +153,28 @@ template <typename CostFunctor,
int... Ns> // Number of parameters in each parameter block. int... Ns> // Number of parameters in each parameter block.
class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> { class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
public: public:
// Takes ownership of functor by default. Uses the template-provided // Takes ownership of functor. Uses the template-provided value for the
// value for the number of residuals ("kNumResiduals"). // number of residuals ("kNumResiduals").
explicit AutoDiffCostFunction(CostFunctor* functor, explicit AutoDiffCostFunction(CostFunctor* functor) : functor_(functor) {
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
static_assert(kNumResiduals != DYNAMIC, static_assert(kNumResiduals != DYNAMIC,
"Can't run the fixed-size constructor if the number of " "Can't run the fixed-size constructor if the number of "
"residuals is set to ceres::DYNAMIC."); "residuals is set to ceres::DYNAMIC.");
} }
// Takes ownership of functor by default. Ignores the template-provided // Takes ownership of functor. Ignores the template-provided
// kNumResiduals in favor of the "num_residuals" argument provided. // kNumResiduals in favor of the "num_residuals" argument provided.
// //
// This allows for having autodiff cost functions which return varying // This allows for having autodiff cost functions which return varying
// numbers of residuals at runtime. // numbers of residuals at runtime.
AutoDiffCostFunction(CostFunctor* functor, AutoDiffCostFunction(CostFunctor* functor, int num_residuals)
int num_residuals, : functor_(functor) {
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {
static_assert(kNumResiduals == DYNAMIC, static_assert(kNumResiduals == DYNAMIC,
"Can't run the dynamic-size constructor if the number of " "Can't run the dynamic-size constructor if the number of "
"residuals is not ceres::DYNAMIC."); "residuals is not ceres::DYNAMIC.");
SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals); SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
} }
explicit AutoDiffCostFunction(AutoDiffCostFunction&& other) virtual ~AutoDiffCostFunction() {}
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~AutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership rather than
// deleting only if ownership is taken.
// This is to stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
// Implementation details follow; clients of the autodiff cost function should // Implementation details follow; clients of the autodiff cost function should
// not have to examine below here. // not have to examine below here.
@@ -217,7 +201,6 @@ class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
private: private:
std::unique_ptr<CostFunctor> functor_; std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
}; };
} // namespace ceres } // namespace ceres
@@ -38,10 +38,8 @@
#ifndef CERES_PUBLIC_C_API_H_ #ifndef CERES_PUBLIC_C_API_H_
#define CERES_PUBLIC_C_API_H_ #define CERES_PUBLIC_C_API_H_
// clang-format off
#include "ceres/internal/port.h" #include "ceres/internal/port.h"
#include "ceres/internal/disable_warnings.h" #include "ceres/internal/disable_warnings.h"
// clang-format on
#ifdef __cplusplus #ifdef __cplusplus
extern "C" { extern "C" {
@@ -144,7 +144,8 @@ class CostFunctionToFunctor {
// Extract parameter block pointers from params. // Extract parameter block pointers from params.
using Indices = using Indices =
std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>; std::make_integer_sequence<int,
ParameterDims::kNumParameterBlocks>;
std::array<const T*, ParameterDims::kNumParameterBlocks> parameter_blocks = std::array<const T*, ParameterDims::kNumParameterBlocks> parameter_blocks =
GetParameterPointers<T>(params, Indices()); GetParameterPointers<T>(params, Indices());
@@ -51,7 +51,7 @@ class CovarianceImpl;
// ======= // =======
// It is very easy to use this class incorrectly without understanding // It is very easy to use this class incorrectly without understanding
// the underlying mathematics. Please read and understand the // the underlying mathematics. Please read and understand the
// documentation completely before attempting to use it. // documentation completely before attempting to use this class.
// //
// //
// This class allows the user to evaluate the covariance for a // This class allows the user to evaluate the covariance for a
@@ -73,7 +73,7 @@ class CovarianceImpl;
// the maximum likelihood estimate of x given observations y is the // the maximum likelihood estimate of x given observations y is the
// solution to the non-linear least squares problem: // solution to the non-linear least squares problem:
// //
// x* = arg min_x |f(x) - y|^2 // x* = arg min_x |f(x)|^2
// //
// And the covariance of x* is given by // And the covariance of x* is given by
// //
@@ -220,11 +220,11 @@ class CERES_EXPORT Covariance {
// 1. DENSE_SVD uses Eigen's JacobiSVD to perform the // 1. DENSE_SVD uses Eigen's JacobiSVD to perform the
// computations. It computes the singular value decomposition // computations. It computes the singular value decomposition
// //
// U * D * V' = J // U * S * V' = J
// //
// and then uses it to compute the pseudo inverse of J'J as // and then uses it to compute the pseudo inverse of J'J as
// //
// pseudoinverse[J'J] = V * pseudoinverse[D^2] * V' // pseudoinverse[J'J]^ = V * pseudoinverse[S] * V'
// //
// It is an accurate but slow method and should only be used // It is an accurate but slow method and should only be used
// for small to moderate sized problems. It can handle // for small to moderate sized problems. It can handle
@@ -235,7 +235,7 @@ class CERES_EXPORT Covariance {
// //
// Q * R = J // Q * R = J
// //
// [J'J]^-1 = [R'*R]^-1 // [J'J]^-1 = [R*R']^-1
// //
// SPARSE_QR is not capable of computing the covariance if the // SPARSE_QR is not capable of computing the covariance if the
// Jacobian is rank deficient. Depending on the value of // Jacobian is rank deficient. Depending on the value of
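For reference, the relationships these comments describe can be written out; this is the standard Gauss-Newton covariance, stated here only to make the edited formulas easier to compare:

    x^* = \arg\min_x \; \lVert f(x) - y \rVert^2,
    \qquad
    \operatorname{Cov}(x^*) \approx (J^\top J)^{-1},
    \qquad
    J = \left.\frac{\partial f}{\partial x}\right|_{x = x^*}

    % With the SVD J = U D V^\top used by DENSE_SVD:
    (J^\top J)^{\dagger} = V \, \operatorname{pinv}(D^2) \, V^\top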
@@ -40,7 +40,6 @@
#include "ceres/dynamic_cost_function.h" #include "ceres/dynamic_cost_function.h"
#include "ceres/internal/fixed_array.h" #include "ceres/internal/fixed_array.h"
#include "ceres/jet.h" #include "ceres/jet.h"
#include "ceres/types.h"
#include "glog/logging.h" #include "glog/logging.h"
namespace ceres { namespace ceres {
@@ -79,24 +78,10 @@ namespace ceres {
template <typename CostFunctor, int Stride = 4> template <typename CostFunctor, int Stride = 4>
class DynamicAutoDiffCostFunction : public DynamicCostFunction { class DynamicAutoDiffCostFunction : public DynamicCostFunction {
public: public:
// Takes ownership by default. explicit DynamicAutoDiffCostFunction(CostFunctor* functor)
DynamicAutoDiffCostFunction(CostFunctor* functor, : functor_(functor) {}
Ownership ownership = TAKE_OWNERSHIP)
: functor_(functor), ownership_(ownership) {}
explicit DynamicAutoDiffCostFunction(DynamicAutoDiffCostFunction&& other) virtual ~DynamicAutoDiffCostFunction() {}
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicAutoDiffCostFunction() {
// Manually release pointer if configured to not take ownership
// rather than deleting only if ownership is taken. This is to
// stay maximally compatible to old user code which may have
// forgotten to implement a virtual destructor, from when the
// AutoDiffCostFunction always took ownership.
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
functor_.release();
}
}
bool Evaluate(double const* const* parameters, bool Evaluate(double const* const* parameters,
double* residuals, double* residuals,
@@ -166,9 +151,6 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
} }
} }
if (num_active_parameters == 0) {
return (*functor_)(parameters, residuals);
}
// When `num_active_parameters % Stride != 0` then it can be the case // When `num_active_parameters % Stride != 0` then it can be the case
// that `active_parameter_count < Stride` while parameter_cursor is less // that `active_parameter_count < Stride` while parameter_cursor is less
// than the total number of parameters and with no remaining non-constant // than the total number of parameters and with no remaining non-constant
@@ -266,7 +248,6 @@ class DynamicAutoDiffCostFunction : public DynamicCostFunction {
private: private:
std::unique_ptr<CostFunctor> functor_; std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
}; };
} // namespace ceres } // namespace ceres
@@ -44,7 +44,6 @@
#include "ceres/internal/numeric_diff.h" #include "ceres/internal/numeric_diff.h"
#include "ceres/internal/parameter_dims.h" #include "ceres/internal/parameter_dims.h"
#include "ceres/numeric_diff_options.h" #include "ceres/numeric_diff_options.h"
#include "ceres/types.h"
#include "glog/logging.h" #include "glog/logging.h"
namespace ceres { namespace ceres {
@@ -85,10 +84,6 @@ class DynamicNumericDiffCostFunction : public DynamicCostFunction {
const NumericDiffOptions& options = NumericDiffOptions()) const NumericDiffOptions& options = NumericDiffOptions())
: functor_(functor), ownership_(ownership), options_(options) {} : functor_(functor), ownership_(ownership), options_(options) {}
explicit DynamicNumericDiffCostFunction(
DynamicNumericDiffCostFunction&& other)
: functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
virtual ~DynamicNumericDiffCostFunction() { virtual ~DynamicNumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) { if (ownership_ != TAKE_OWNERSHIP) {
functor_.release(); functor_.release();
@@ -62,8 +62,7 @@ class CERES_EXPORT GradientProblemSolver {
// Minimizer options ---------------------------------------- // Minimizer options ----------------------------------------
LineSearchDirectionType line_search_direction_type = LBFGS; LineSearchDirectionType line_search_direction_type = LBFGS;
LineSearchType line_search_type = WOLFE; LineSearchType line_search_type = WOLFE;
NonlinearConjugateGradientType nonlinear_conjugate_gradient_type = NonlinearConjugateGradientType nonlinear_conjugate_gradient_type = FLETCHER_REEVES;
FLETCHER_REEVES;
// The LBFGS hessian approximation is a low rank approximation to // The LBFGS hessian approximation is a low rank approximation to
// the inverse of the Hessian matrix. The rank of the // the inverse of the Hessian matrix. The rank of the
@@ -198,7 +198,7 @@ struct Make1stOrderPerturbation {
template <int N, int Offset, typename T, typename JetT> template <int N, int Offset, typename T, typename JetT>
struct Make1stOrderPerturbation<N, N, Offset, T, JetT> { struct Make1stOrderPerturbation<N, N, Offset, T, JetT> {
public: public:
static void Apply(const T* src, JetT* dst) {} static void Apply(const T* /*src*/, JetT* /*dst*/) {}
}; };
// Calls Make1stOrderPerturbation for every parameter block. // Calls Make1stOrderPerturbation for every parameter block.
@@ -229,9 +229,7 @@ struct Make1stOrderPerturbations<std::integer_sequence<int, N, Ns...>,
// End of 'recursion'. Nothing more to do. // End of 'recursion'. Nothing more to do.
template <int ParameterIdx, int Total> template <int ParameterIdx, int Total>
struct Make1stOrderPerturbations<std::integer_sequence<int>, struct Make1stOrderPerturbations<std::integer_sequence<int>, ParameterIdx, Total> {
ParameterIdx,
Total> {
template <typename T, typename JetT> template <typename T, typename JetT>
static void Apply(T const* const* /* NOT USED */, JetT* /* NOT USED */) {} static void Apply(T const* const* /* NOT USED */, JetT* /* NOT USED */) {}
}; };
@@ -34,11 +34,11 @@
#define CERES_WARNINGS_DISABLED #define CERES_WARNINGS_DISABLED
#ifdef _MSC_VER #ifdef _MSC_VER
#pragma warning(push) #pragma warning( push )
// Disable the warning C4251 which is triggered by stl classes in // Disable the warning C4251 which is triggered by stl classes in
// Ceres' public interface. To quote MSDN: "C4251 can be ignored " // Ceres' public interface. To quote MSDN: "C4251 can be ignored "
// "if you are deriving from a type in the Standard C++ Library" // "if you are deriving from a type in the Standard C++ Library"
#pragma warning(disable : 4251) #pragma warning( disable : 4251 )
#endif #endif
#endif // CERES_WARNINGS_DISABLED #endif // CERES_WARNINGS_DISABLED
@@ -36,26 +36,31 @@
namespace ceres { namespace ceres {
typedef Eigen::Matrix<double, Eigen::Dynamic, 1> Vector; typedef Eigen::Matrix<double, Eigen::Dynamic, 1> Vector;
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> typedef Eigen::Matrix<double,
Matrix; Eigen::Dynamic,
Eigen::Dynamic,
Eigen::RowMajor> Matrix;
typedef Eigen::Map<Vector> VectorRef; typedef Eigen::Map<Vector> VectorRef;
typedef Eigen::Map<Matrix> MatrixRef; typedef Eigen::Map<Matrix> MatrixRef;
typedef Eigen::Map<const Vector> ConstVectorRef; typedef Eigen::Map<const Vector> ConstVectorRef;
typedef Eigen::Map<const Matrix> ConstMatrixRef; typedef Eigen::Map<const Matrix> ConstMatrixRef;
// Column major matrices for DenseSparseMatrix/DenseQRSolver // Column major matrices for DenseSparseMatrix/DenseQRSolver
typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor> typedef Eigen::Matrix<double,
ColMajorMatrix; Eigen::Dynamic,
Eigen::Dynamic,
Eigen::ColMajor> ColMajorMatrix;
typedef Eigen::Map<ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>> typedef Eigen::Map<ColMajorMatrix, 0,
ColMajorMatrixRef; Eigen::Stride<Eigen::Dynamic, 1>> ColMajorMatrixRef;
typedef Eigen::Map<const ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>> typedef Eigen::Map<const ColMajorMatrix,
ConstColMajorMatrixRef; 0,
Eigen::Stride<Eigen::Dynamic, 1>> ConstColMajorMatrixRef;
// C++ does not support templated typdefs, thus the need for this // C++ does not support templated typdefs, thus the need for this
// struct so that we can support statically sized Matrix and Maps. // struct so that we can support statically sized Matrix and Maps.
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic> template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
struct EigenTypes { struct EigenTypes {
typedef Eigen::Matrix<double, typedef Eigen::Matrix<double,
num_rows, num_rows,
@@ -30,7 +30,6 @@
#ifndef CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_ #ifndef CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#define CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_ #define CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
#include <Eigen/Core> // For Eigen::aligned_allocator
#include <algorithm> #include <algorithm>
#include <array> #include <array>
#include <cstddef> #include <cstddef>
@@ -38,6 +37,8 @@
#include <tuple> #include <tuple>
#include <type_traits> #include <type_traits>
#include <Eigen/Core> // For Eigen::aligned_allocator
#include "ceres/internal/memory.h" #include "ceres/internal/memory.h"
#include "glog/logging.h" #include "glog/logging.h"
@@ -62,8 +62,7 @@ struct SumImpl;
// Strip of and sum the first number. // Strip of and sum the first number.
template <typename T, T N, T... Ns> template <typename T, T N, T... Ns>
struct SumImpl<std::integer_sequence<T, N, Ns...>> { struct SumImpl<std::integer_sequence<T, N, Ns...>> {
static constexpr T Value = static constexpr T Value = N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
}; };
// Strip of and sum the first two numbers. // Strip of and sum the first two numbers.
@@ -130,14 +129,10 @@ template <typename T, T Sum, typename SeqIn, typename SeqOut>
struct ExclusiveScanImpl; struct ExclusiveScanImpl;
template <typename T, T Sum, T N, T... Ns, T... Rs> template <typename T, T Sum, T N, T... Ns, T... Rs>
struct ExclusiveScanImpl<T, struct ExclusiveScanImpl<T, Sum, std::integer_sequence<T, N, Ns...>,
Sum,
std::integer_sequence<T, N, Ns...>,
std::integer_sequence<T, Rs...>> { std::integer_sequence<T, Rs...>> {
using Type = using Type =
typename ExclusiveScanImpl<T, typename ExclusiveScanImpl<T, Sum + N, std::integer_sequence<T, Ns...>,
Sum + N,
std::integer_sequence<T, Ns...>,
std::integer_sequence<T, Rs..., Sum>>::Type; std::integer_sequence<T, Rs..., Sum>>::Type;
}; };
@@ -47,17 +47,15 @@
#include "ceres/types.h" #include "ceres/types.h"
#include "glog/logging.h" #include "glog/logging.h"
namespace ceres { namespace ceres {
namespace internal { namespace internal {
// This is split from the main class because C++ doesn't allow partial template // This is split from the main class because C++ doesn't allow partial template
// specializations for member functions. The alternative is to repeat the main // specializations for member functions. The alternative is to repeat the main
// class for differing numbers of parameters, which is also unfortunate. // class for differing numbers of parameters, which is also unfortunate.
template <typename CostFunctor, template <typename CostFunctor, NumericDiffMethodType kMethod,
NumericDiffMethodType kMethod, int kNumResiduals, typename ParameterDims, int kParameterBlock,
int kNumResiduals,
typename ParameterDims,
int kParameterBlock,
int kParameterBlockSize> int kParameterBlockSize>
struct NumericDiff { struct NumericDiff {
// Mutates parameters but must restore them before return. // Mutates parameters but must restore them before return.
@@ -68,23 +66,23 @@ struct NumericDiff {
int num_residuals, int num_residuals,
int parameter_block_index, int parameter_block_index,
int parameter_block_size, int parameter_block_size,
double** parameters, double **parameters,
double* jacobian) { double *jacobian) {
using Eigen::ColMajor;
using Eigen::Map; using Eigen::Map;
using Eigen::Matrix; using Eigen::Matrix;
using Eigen::RowMajor; using Eigen::RowMajor;
using Eigen::ColMajor;
DCHECK(jacobian); DCHECK(jacobian);
const int num_residuals_internal = const int num_residuals_internal =
(kNumResiduals != ceres::DYNAMIC ? kNumResiduals : num_residuals); (kNumResiduals != ceres::DYNAMIC ? kNumResiduals : num_residuals);
const int parameter_block_index_internal = const int parameter_block_index_internal =
(kParameterBlock != ceres::DYNAMIC ? kParameterBlock (kParameterBlock != ceres::DYNAMIC ? kParameterBlock :
: parameter_block_index); parameter_block_index);
const int parameter_block_size_internal = const int parameter_block_size_internal =
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize (kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize :
: parameter_block_size); parameter_block_size);
typedef Matrix<double, kNumResiduals, 1> ResidualVector; typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector; typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
@@ -99,17 +97,17 @@ struct NumericDiff {
(kParameterBlockSize == 1) ? ColMajor : RowMajor> (kParameterBlockSize == 1) ? ColMajor : RowMajor>
JacobianMatrix; JacobianMatrix;
Map<JacobianMatrix> parameter_jacobian( Map<JacobianMatrix> parameter_jacobian(jacobian,
jacobian, num_residuals_internal, parameter_block_size_internal); num_residuals_internal,
parameter_block_size_internal);
Map<ParameterVector> x_plus_delta( Map<ParameterVector> x_plus_delta(
parameters[parameter_block_index_internal], parameters[parameter_block_index_internal],
parameter_block_size_internal); parameter_block_size_internal);
ParameterVector x(x_plus_delta); ParameterVector x(x_plus_delta);
ParameterVector step_size = ParameterVector step_size = x.array().abs() *
x.array().abs() * ((kMethod == RIDDERS) ((kMethod == RIDDERS) ? options.ridders_relative_initial_step_size :
? options.ridders_relative_initial_step_size options.relative_step_size);
: options.relative_step_size);
// It is not a good idea to make the step size arbitrarily // It is not a good idea to make the step size arbitrarily
// small. This will lead to problems with round off and numerical // small. This will lead to problems with round off and numerical
@@ -120,8 +118,8 @@ struct NumericDiff {
// For Ridders' method, the initial step size is required to be large, // For Ridders' method, the initial step size is required to be large,
// thus ridders_relative_initial_step_size is used. // thus ridders_relative_initial_step_size is used.
if (kMethod == RIDDERS) { if (kMethod == RIDDERS) {
min_step_size = min_step_size = std::max(min_step_size,
std::max(min_step_size, options.ridders_relative_initial_step_size); options.ridders_relative_initial_step_size);
} }
// For each parameter in the parameter block, use finite differences to // For each parameter in the parameter block, use finite differences to
@@ -135,9 +133,7 @@ struct NumericDiff {
const double delta = std::max(min_step_size, step_size(j)); const double delta = std::max(min_step_size, step_size(j));
if (kMethod == RIDDERS) { if (kMethod == RIDDERS) {
if (!EvaluateRiddersJacobianColumn(functor, if (!EvaluateRiddersJacobianColumn(functor, j, delta,
j,
delta,
options, options,
num_residuals_internal, num_residuals_internal,
parameter_block_size_internal, parameter_block_size_internal,
@@ -150,9 +146,7 @@ struct NumericDiff {
return false; return false;
} }
} else { } else {
if (!EvaluateJacobianColumn(functor, if (!EvaluateJacobianColumn(functor, j, delta,
j,
delta,
num_residuals_internal, num_residuals_internal,
parameter_block_size_internal, parameter_block_size_internal,
x.data(), x.data(),
@@ -188,7 +182,8 @@ struct NumericDiff {
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector; typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size); Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size); Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals); Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals); Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -196,8 +191,9 @@ struct NumericDiff {
// Mutate 1 element at a time and then restore. // Mutate 1 element at a time and then restore.
x_plus_delta(parameter_index) = x(parameter_index) + delta; x_plus_delta(parameter_index) = x(parameter_index) + delta;
if (!VariadicEvaluate<ParameterDims>( if (!VariadicEvaluate<ParameterDims>(*functor,
*functor, parameters, residuals.data())) { parameters,
residuals.data())) {
return false; return false;
} }
@@ -210,8 +206,9 @@ struct NumericDiff {
// Compute the function on the other side of x(parameter_index). // Compute the function on the other side of x(parameter_index).
x_plus_delta(parameter_index) = x(parameter_index) - delta; x_plus_delta(parameter_index) = x(parameter_index) - delta;
if (!VariadicEvaluate<ParameterDims>( if (!VariadicEvaluate<ParameterDims>(*functor,
*functor, parameters, temp_residuals.data())) { parameters,
temp_residuals.data())) {
return false; return false;
} }
@@ -220,7 +217,8 @@ struct NumericDiff {
} else { } else {
// Forward difference only; reuse existing residuals evaluation. // Forward difference only; reuse existing residuals evaluation.
residuals -= residuals -=
Map<const ResidualVector>(residuals_at_eval_point, num_residuals); Map<const ResidualVector>(residuals_at_eval_point,
num_residuals);
} }
// Restore x_plus_delta. // Restore x_plus_delta.
@@ -256,17 +254,17 @@ struct NumericDiff {
double* x_plus_delta_ptr, double* x_plus_delta_ptr,
double* temp_residuals_ptr, double* temp_residuals_ptr,
double* residuals_ptr) { double* residuals_ptr) {
using Eigen::aligned_allocator;
using Eigen::Map; using Eigen::Map;
using Eigen::Matrix; using Eigen::Matrix;
using Eigen::aligned_allocator;
typedef Matrix<double, kNumResiduals, 1> ResidualVector; typedef Matrix<double, kNumResiduals, 1> ResidualVector;
typedef Matrix<double, kNumResiduals, Eigen::Dynamic> typedef Matrix<double, kNumResiduals, Eigen::Dynamic> ResidualCandidateMatrix;
ResidualCandidateMatrix;
typedef Matrix<double, kParameterBlockSize, 1> ParameterVector; typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
Map<const ParameterVector> x(x_ptr, parameter_block_size); Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size); Map<ParameterVector> x_plus_delta(x_plus_delta_ptr,
parameter_block_size);
Map<ResidualVector> residuals(residuals_ptr, num_residuals); Map<ResidualVector> residuals(residuals_ptr, num_residuals);
Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals); Map<ResidualVector> temp_residuals(temp_residuals_ptr, num_residuals);
@@ -277,16 +275,18 @@ struct NumericDiff {
// As the derivative is estimated, the step size decreases. // As the derivative is estimated, the step size decreases.
// By default, the step sizes are chosen so that the middle column // By default, the step sizes are chosen so that the middle column
// of the Romberg tableau uses the input delta. // of the Romberg tableau uses the input delta.
double current_step_size = double current_step_size = delta *
delta * pow(options.ridders_step_shrink_factor, pow(options.ridders_step_shrink_factor,
options.max_num_ridders_extrapolations / 2); options.max_num_ridders_extrapolations / 2);
// Double-buffering temporary differential candidate vectors // Double-buffering temporary differential candidate vectors
// from previous step size. // from previous step size.
ResidualCandidateMatrix stepsize_candidates_a( ResidualCandidateMatrix stepsize_candidates_a(
num_residuals, options.max_num_ridders_extrapolations); num_residuals,
options.max_num_ridders_extrapolations);
ResidualCandidateMatrix stepsize_candidates_b( ResidualCandidateMatrix stepsize_candidates_b(
num_residuals, options.max_num_ridders_extrapolations); num_residuals,
options.max_num_ridders_extrapolations);
ResidualCandidateMatrix* current_candidates = &stepsize_candidates_a; ResidualCandidateMatrix* current_candidates = &stepsize_candidates_a;
ResidualCandidateMatrix* previous_candidates = &stepsize_candidates_b; ResidualCandidateMatrix* previous_candidates = &stepsize_candidates_b;
@@ -304,9 +304,7 @@ struct NumericDiff {
// 3. Extrapolation becomes numerically unstable. // 3. Extrapolation becomes numerically unstable.
for (int i = 0; i < options.max_num_ridders_extrapolations; ++i) { for (int i = 0; i < options.max_num_ridders_extrapolations; ++i) {
// Compute the numerical derivative at this step size. // Compute the numerical derivative at this step size.
if (!EvaluateJacobianColumn(functor, if (!EvaluateJacobianColumn(functor, parameter_index, current_step_size,
parameter_index,
current_step_size,
num_residuals, num_residuals,
parameter_block_size, parameter_block_size,
x.data(), x.data(),
@@ -329,24 +327,23 @@ struct NumericDiff {
      // Extrapolation factor for Richardson acceleration method (see below).
      double richardson_factor = options.ridders_step_shrink_factor *
                                 options.ridders_step_shrink_factor;
      for (int k = 1; k <= i; ++k) {
        // Extrapolate the various orders of finite differences using
        // the Richardson acceleration method.
        current_candidates->col(k) =
            (richardson_factor * current_candidates->col(k - 1) -
             previous_candidates->col(k - 1)) /
            (richardson_factor - 1.0);
        richardson_factor *= options.ridders_step_shrink_factor *
                             options.ridders_step_shrink_factor;
        // Compute the difference between the previous value and the current.
        double candidate_error = std::max(
            (current_candidates->col(k) - current_candidates->col(k - 1))
                .norm(),
            (current_candidates->col(k) - previous_candidates->col(k - 1))
                .norm());
        // If the error has decreased, update results.
        if (candidate_error <= norm_error) {
@@ -368,9 +365,8 @@ struct NumericDiff {
      // Check to see if the current gradient estimate is numerically unstable.
      // If so, bail out and return the last stable result.
      if (i > 0) {
        double tableau_error =
            (current_candidates->col(i) - previous_candidates->col(i - 1))
                .norm();
        // Compare current error to the chosen candidate's error.
        if (tableau_error >= 2 * norm_error) {
@@ -486,18 +482,14 @@ struct EvaluateJacobianForParameterBlocks<ParameterDims,
// End of 'recursion'. Nothing more to do.
template <typename ParameterDims, int ParameterIdx>
struct EvaluateJacobianForParameterBlocks<ParameterDims,
                                          std::integer_sequence<int>,
                                          ParameterIdx> {
  template <NumericDiffMethodType method,
            int kNumResiduals,
            typename CostFunctor>
  static bool Apply(const CostFunctor* /* NOT USED*/,
                    const double* /* NOT USED*/,
                    const NumericDiffOptions& /* NOT USED*/,
                    int /* NOT USED*/,
                    double** /* NOT USED*/,
                    double** /* NOT USED*/) {
    return true;
  }
};
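The hunks above are Ceres' Ridders scheme: central differences are evaluated at geometrically shrinking step sizes, combined column by column with Richardson extrapolation, and the candidate with the smallest successive difference is kept until the tableau turns unstable. A minimal standalone sketch of the same idea; the shrink factor, tableau depth and test function are illustrative, not Ceres code.

#include <cmath>
#include <cstdio>
#include <vector>

double TestFunction(double x) { return std::exp(x); }  // d/dx exp(x) = exp(x)

double CentralDifference(double (*f)(double), double x, double h) {
  return (f(x + h) - f(x - h)) / (2.0 * h);
}

int main() {
  const double kShrink = 2.0;         // step shrink factor per extrapolation
  const int kMaxExtrapolations = 10;  // depth of the Romberg/Ridders tableau
  const double x = 1.0;

  std::vector<std::vector<double>> tableau(kMaxExtrapolations);
  double h = 0.5;
  double best = CentralDifference(TestFunction, x, h);
  double best_error = 1e300;
  for (int i = 0; i < kMaxExtrapolations; ++i, h /= kShrink) {
    tableau[i].assign(i + 1, 0.0);
    tableau[i][0] = CentralDifference(TestFunction, x, h);
    // Richardson acceleration: column k cancels one more error term by
    // combining the current and previous rows of the tableau.
    double r = kShrink * kShrink;
    for (int k = 1; k <= i; ++k) {
      tableau[i][k] =
          (r * tableau[i][k - 1] - tableau[i - 1][k - 1]) / (r - 1.0);
      r *= kShrink * kShrink;
      // Keep the candidate whose successive difference is smallest.
      const double error = std::abs(tableau[i][k] - tableau[i][k - 1]);
      if (error <= best_error) {
        best_error = error;
        best = tableau[i][k];
      }
    }
  }
  std::printf("estimate %.12f  exact %.12f\n", best, std::exp(x));
}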

View File

@@ -35,17 +35,17 @@
#include "ceres/internal/config.h" #include "ceres/internal/config.h"
#if defined(CERES_USE_OPENMP) #if defined(CERES_USE_OPENMP)
#if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS) # if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
#error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS # error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif # endif
#elif defined(CERES_USE_CXX_THREADS) #elif defined(CERES_USE_CXX_THREADS)
#if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS) # if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
#error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS # error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
#endif # endif
#elif defined(CERES_NO_THREADS) #elif defined(CERES_NO_THREADS)
#if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS) # if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
#error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS # error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
#endif # endif
#else #else
# error One of CERES_USE_OPENMP, CERES_USE_CXX_THREADS or CERES_NO_THREADS must be defined. # error One of CERES_USE_OPENMP, CERES_USE_CXX_THREADS or CERES_NO_THREADS must be defined.
#endif #endif
@@ -54,57 +54,37 @@
// compiled without any sparse back-end. Verify that it has not subsequently
// been inconsistently redefined.
#if defined(CERES_NO_SPARSE)
#if !defined(CERES_NO_SUITESPARSE)
#error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
#endif
#if !defined(CERES_NO_CXSPARSE)
#error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
#endif
#if !defined(CERES_NO_ACCELERATE_SPARSE)
#error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
#endif
#if defined(CERES_USE_EIGEN_SPARSE)
#error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
#endif
#endif
// A macro to signal which functions and classes are exported when
// building a shared library.
#if defined(_MSC_VER)
#define CERES_API_SHARED_IMPORT __declspec(dllimport)
#define CERES_API_SHARED_EXPORT __declspec(dllexport)
#elif defined(__GNUC__)
#define CERES_API_SHARED_IMPORT __attribute__((visibility("default")))
#define CERES_API_SHARED_EXPORT __attribute__((visibility("default")))
#else
#define CERES_API_SHARED_IMPORT
#define CERES_API_SHARED_EXPORT
#endif
// CERES_BUILDING_SHARED_LIBRARY is only defined locally when Ceres itself is
// compiled as a shared library, it is never exported to users. In order that
// we do not have to configure config.h separately when building Ceres as either
// a static or dynamic library, we define both CERES_USING_SHARED_LIBRARY and
// CERES_BUILDING_SHARED_LIBRARY when building as a shared library.
#if defined(CERES_USING_SHARED_LIBRARY)
#if defined(CERES_BUILDING_SHARED_LIBRARY)
// Compiling Ceres itself as a shared library.
#define CERES_EXPORT CERES_API_SHARED_EXPORT
#else
// Using Ceres as a shared library.
#define CERES_EXPORT CERES_API_SHARED_IMPORT
#endif
#else
// Ceres was compiled as a static library, export everything.
#define CERES_EXPORT
#endif
// Unit tests reach in and test internal functionality so we need a way to make
// those symbols visible
#ifdef CERES_EXPORT_INTERNAL_SYMBOLS
#define CERES_EXPORT_INTERNAL CERES_EXPORT
#else
#define CERES_EXPORT_INTERNAL
#endif
#endif  // CERES_PUBLIC_INTERNAL_PORT_H_
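The export machinery above boils down to a single macro, CERES_EXPORT, whose expansion depends on whether the library is being built or merely consumed, and whether the build is shared or static. A minimal sketch of the same pattern for a hypothetical library; all MYLIB_* names and the Frobnicator class are made up, only the structure mirrors the hunk above.

// Hypothetical mylib_export.h, mirroring the CERES_EXPORT scheme above.
#if defined(_MSC_VER)
#define MYLIB_API_IMPORT __declspec(dllimport)
#define MYLIB_API_EXPORT __declspec(dllexport)
#elif defined(__GNUC__)
#define MYLIB_API_IMPORT __attribute__((visibility("default")))
#define MYLIB_API_EXPORT __attribute__((visibility("default")))
#else
#define MYLIB_API_IMPORT
#define MYLIB_API_EXPORT
#endif

// The build system defines MYLIB_BUILDING_SHARED_LIBRARY only while compiling
// the library itself, and MYLIB_USING_SHARED_LIBRARY for both the library and
// its users whenever a shared build is configured.
#if defined(MYLIB_USING_SHARED_LIBRARY)
#if defined(MYLIB_BUILDING_SHARED_LIBRARY)
#define MYLIB_EXPORT MYLIB_API_EXPORT  // compiling the library: export symbols
#else
#define MYLIB_EXPORT MYLIB_API_IMPORT  // using the library: import symbols
#endif
#else
#define MYLIB_EXPORT  // static build: no decoration needed
#endif

// Public types are then annotated the same way Ceres annotates its classes
// (e.g. "class CERES_EXPORT Problem").
class MYLIB_EXPORT Frobnicator {
 public:
  int Frobnicate(int x) const { return x + 1; }
};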

View File

@@ -32,7 +32,7 @@
#undef CERES_WARNINGS_DISABLED
#ifdef _MSC_VER
#pragma warning(pop)
#endif
#endif  // CERES_WARNINGS_DISABLED

View File

@@ -46,10 +46,8 @@ namespace internal {
// For fixed size cost functors
template <typename Functor, typename T, int... Indices>
inline bool VariadicEvaluateImpl(const Functor& functor,
                                 T const* const* input,
                                 T* output,
                                 std::false_type /*is_dynamic*/,
                                 std::integer_sequence<int, Indices...>) {
  static_assert(sizeof...(Indices),
                "Invalid number of parameter blocks. At least one parameter "
@@ -59,31 +57,26 @@ inline bool VariadicEvaluateImpl(const Functor& functor,
// For dynamic sized cost functors
template <typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
                                 T const* const* input,
                                 T* output,
                                 std::true_type /*is_dynamic*/,
                                 std::integer_sequence<int>) {
  return functor(input, output);
}
// For ceres cost functors (not ceres::CostFunction)
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
                                 T const* const* input,
                                 T* output,
                                 const void* /* NOT USED */) {
  using ParameterBlockIndices =
      std::make_integer_sequence<int, ParameterDims::kNumParameterBlocks>;
  using IsDynamic = std::integral_constant<bool, ParameterDims::kIsDynamic>;
  return VariadicEvaluateImpl(
      functor, input, output, IsDynamic(), ParameterBlockIndices());
}
// For ceres::CostFunction
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluateImpl(const Functor& functor,
                                 T const* const* input,
                                 T* output,
                                 const CostFunction* /* NOT USED */) {
  return functor.Evaluate(input, output, nullptr);
@@ -102,8 +95,7 @@ inline bool VariadicEvaluateImpl(const Functor& functor,
// blocks. The signature of the functor must have the following signature
// 'bool()(const T* i_1, const T* i_2, ... const T* i_n, T* output)'.
template <typename ParameterDims, typename Functor, typename T>
inline bool VariadicEvaluate(const Functor& functor,
                             T const* const* input,
                             T* output) {
  return VariadicEvaluateImpl<ParameterDims>(functor, input, output, &functor);
}
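The overload set above never branches at runtime: the static type of the unused trailing argument (std::true_type / std::false_type, or a CostFunction pointer) routes the call at compile time. A standalone sketch of that tag-dispatch idiom with made-up functor types; nothing here is Ceres code.

#include <cstdio>
#include <type_traits>

struct FixedFunctor {
  static constexpr bool kIsDynamic = false;
  bool operator()(const double* x, double* y) const { *y = 2.0 * *x; return true; }
};

struct DynamicFunctor {
  static constexpr bool kIsDynamic = true;
  bool operator()(const double* x, double* y) const { *y = *x + 1.0; return true; }
};

template <typename Functor>
bool EvaluateImpl(const Functor& f, const double* x, double* y,
                  std::false_type /*is_dynamic*/) {
  std::puts("fixed-size path");
  return f(x, y);
}

template <typename Functor>
bool EvaluateImpl(const Functor& f, const double* x, double* y,
                  std::true_type /*is_dynamic*/) {
  std::puts("dynamic-size path");
  return f(x, y);
}

template <typename Functor>
bool Evaluate(const Functor& f, const double* x, double* y) {
  // The integral_constant becomes a distinct type, so overload resolution,
  // not a runtime branch, picks the implementation.
  using IsDynamic = std::integral_constant<bool, Functor::kIsDynamic>;
  return EvaluateImpl(f, x, y, IsDynamic());
}

int main() {
  double x = 3.0, y = 0.0;
  Evaluate(FixedFunctor{}, &x, &y);    // prints "fixed-size path"
  Evaluate(DynamicFunctor{}, &x, &y);  // prints "dynamic-size path"
}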

View File

@@ -73,7 +73,7 @@ struct CERES_EXPORT IterationSummary {
  bool step_is_successful = false;
  // Value of the objective function.
  double cost = 0.0;
  // Change in the value of the objective function in this
  // iteration. This can be positive or negative.

View File

@@ -388,8 +388,6 @@ using std::cbrt;
using std::ceil;
using std::cos;
using std::cosh;
using std::erf;
using std::erfc;
using std::exp;
using std::exp2;
using std::floor;
@@ -575,21 +573,6 @@ inline Jet<T, N> fmin(const Jet<T, N>& x, const Jet<T, N>& y) {
  return y < x ? y : x;
}
// erf is defined as an integral that cannot be expressed analytically;
// however, the derivative is trivial to compute:
// erf(x + h) = erf(x) + h * 2*exp(-x^2)/sqrt(pi)
template <typename T, int N>
inline Jet<T, N> erf(const Jet<T, N>& x) {
  return Jet<T, N>(erf(x.a), x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
// erfc(x) = 1-erf(x)
// erfc(x + h) = erfc(x) + h * (-2*exp(-x^2)/sqrt(pi))
template <typename T, int N>
inline Jet<T, N> erfc(const Jet<T, N>& x) {
  return Jet<T, N>(erfc(x.a), -x.v * M_2_SQRTPI * exp(-x.a * x.a));
}
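The comment above is the whole derivation: erf has no closed form, but its derivative is 2/sqrt(pi) * exp(-x^2), so the Jet overload only has to scale the infinitesimal part. A self-contained sketch (not Ceres code) applying the same rule to a one-variable dual number and checking it against a central difference.

#include <cmath>
#include <cstdio>

struct Dual {
  double a;  // value
  double v;  // derivative with respect to the single independent variable
};

Dual Erf(const Dual& x) {
  // d/dx erf(x) = 2/sqrt(pi) * exp(-x^2); this is M_2_SQRTPI in the Jet code.
  const double k2OverSqrtPi = 2.0 / std::sqrt(std::acos(-1.0));
  return {std::erf(x.a), x.v * k2OverSqrtPi * std::exp(-x.a * x.a)};
}

int main() {
  const Dual x{0.7, 1.0};  // seed derivative dx/dx = 1
  const Dual y = Erf(x);
  const double h = 1e-6;
  const double fd = (std::erf(x.a + h) - std::erf(x.a - h)) / (2.0 * h);
  std::printf("dual: %.9f  finite difference: %.9f\n", y.v, fd);
}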
// Bessel functions of the first kind with integer order equal to 0, 1, n.
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of

View File

@@ -90,8 +90,8 @@ namespace ceres {
//
// An example that occurs commonly in Structure from Motion problems
// is when camera rotations are parameterized using Quaternion. There,
// it is useful to only make updates orthogonal to that 4-vector
// defining the quaternion. One way to do this is to let delta be a 3
// dimensional vector and define Plus to be
//
//   Plus(x, delta) = [cos(|delta|), sin(|delta|) delta / |delta|] * x
@@ -99,7 +99,7 @@ namespace ceres {
// The multiplication between the two 4-vectors on the RHS is the
// standard quaternion product.
//
// Given f and a point x, optimizing f can now be restated as
//
//     min  f(Plus(x, delta))
//    delta
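A standalone sketch of the Plus operation described in the comment above, with the quaternion stored as [w, x, y, z]. The helper names are illustrative; this is not Ceres' own QuaternionParameterization implementation.

#include <cmath>
#include <cstdio>

void QuaternionProduct(const double a[4], const double b[4], double c[4]) {
  c[0] = a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3];
  c[1] = a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2];
  c[2] = a[0] * b[2] - a[1] * b[3] + a[2] * b[0] + a[3] * b[1];
  c[3] = a[0] * b[3] + a[1] * b[2] - a[2] * b[1] + a[3] * b[0];
}

// x_plus_delta = [cos(|delta|), sin(|delta|) * delta / |delta|] * x
void Plus(const double x[4], const double delta[3], double x_plus_delta[4]) {
  const double norm = std::sqrt(delta[0] * delta[0] + delta[1] * delta[1] +
                                delta[2] * delta[2]);
  if (norm > 0.0) {
    const double s = std::sin(norm) / norm;
    const double q[4] = {std::cos(norm), s * delta[0], s * delta[1],
                         s * delta[2]};
    QuaternionProduct(q, x, x_plus_delta);
  } else {
    // A zero delta leaves the quaternion unchanged.
    for (int i = 0; i < 4; ++i) x_plus_delta[i] = x[i];
  }
}

int main() {
  const double x[4] = {1.0, 0.0, 0.0, 0.0};  // identity rotation
  const double delta[3] = {0.1, 0.0, 0.0};   // small update about the x axis
  double y[4];
  Plus(x, delta, y);
  std::printf("[%f, %f, %f, %f]\n", y[0], y[1], y[2], y[3]);
}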
@@ -306,7 +306,6 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
 public:
  ProductParameterization(const ProductParameterization&) = delete;
  ProductParameterization& operator=(const ProductParameterization&) = delete;
  virtual ~ProductParameterization() {}
  //
  // NOTE: The constructor takes ownership of the input local
  // parameterizations.
@@ -342,8 +341,7 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
  bool Plus(const double* x,
            const double* delta,
            double* x_plus_delta) const override;
  bool ComputeJacobian(const double* x, double* jacobian) const override;
  int GlobalSize() const override { return global_size_; }
  int LocalSize() const override { return local_size_; }
@@ -356,8 +354,8 @@ class CERES_EXPORT ProductParameterization : public LocalParameterization {
}  // namespace ceres
// clang-format off
#include "ceres/internal/reenable_warnings.h"
#include "ceres/internal/line_parameterization.h"
#endif  // CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_

View File

@@ -192,10 +192,7 @@ class NumericDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
    }
  }
  explicit NumericDiffCostFunction(NumericDiffCostFunction&& other)
      : functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
  virtual ~NumericDiffCostFunction() {
    if (ownership_ != TAKE_OWNERSHIP) {
      functor_.release();
    }
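A hedged usage sketch of how the ownership_ flag consumed by the destructor above is usually set. The AffineResidual functor and the dimensions are made up; NumericDiffCostFunction, CENTRAL, TAKE_OWNERSHIP and DO_NOT_TAKE_OWNERSHIP are the Ceres names that appear in this header.

#include "ceres/numeric_diff_cost_function.h"

struct AffineResidual {
  bool operator()(const double* x, double* residual) const {
    residual[0] = 3.0 * x[0] - 1.0;
    return true;
  }
};

void BuildCostFunctions() {
  // Default: the cost function takes ownership and deletes the functor.
  auto* owning = new ceres::NumericDiffCostFunction<AffineResidual,
                                                    ceres::CENTRAL, 1, 1>(
      new AffineResidual);

  // With DO_NOT_TAKE_OWNERSHIP the destructor release()s instead of deleting,
  // so the caller keeps responsibility for the functor's lifetime.
  AffineResidual functor;
  auto* non_owning = new ceres::NumericDiffCostFunction<AffineResidual,
                                                        ceres::CENTRAL, 1, 1>(
      &functor, ceres::DO_NOT_TAKE_OWNERSHIP);

  delete owning;      // also deletes its AffineResidual
  delete non_owning;  // leaves 'functor' alone
}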

View File

@@ -453,15 +453,13 @@ class CERES_EXPORT Problem {
  //   problem.AddResidualBlock(new MyCostFunction, nullptr, &x);
  //
  //   double cost = 0.0;
  //   problem.Evaluate(Problem::EvaluateOptions(), &cost,
  //                    nullptr, nullptr, nullptr);
  //
  // The cost is evaluated at x = 1. If you wish to evaluate the
  // problem at x = 2, then
  //
  //   x = 2;
  //   problem.Evaluate(Problem::EvaluateOptions(), &cost,
  //                    nullptr, nullptr, nullptr);
  //
  // is the way to do so.
  //
@@ -477,7 +475,7 @@ class CERES_EXPORT Problem {
  // at the end of an iteration during a solve.
  //
  // Note 4: If an EvaluationCallback is associated with the problem,
  // then its PrepareForEvaluation method will be called every time
  // this method is called with new_point = true.
  bool Evaluate(const EvaluateOptions& options,
                double* cost,
@@ -511,41 +509,23 @@ class CERES_EXPORT Problem {
  // apply_loss_function as the name implies allows the user to switch
  // the application of the loss function on and off.
  //
  // If an EvaluationCallback is associated with the problem, then its
  // PrepareForEvaluation method will be called every time this method
  // is called with new_point = true. This conservatively assumes that
  // the user may have changed the parameter values since the previous
  // call to evaluate / solve. For improved efficiency, and only if
  // you know that the parameter values have not changed between
  // calls, see EvaluateResidualBlockAssumingParametersUnchanged().
  bool EvaluateResidualBlock(ResidualBlockId residual_block_id,
                             bool apply_loss_function,
                             double* cost,
                             double* residuals,
                             double** jacobians) const;
  // Same as EvaluateResidualBlock except that if an
  // EvaluationCallback is associated with the problem, then its
  // PrepareForEvaluation method will be called every time this method
  // is called with new_point = false.
  //
  // This means, if an EvaluationCallback is associated with the
  // problem then it is the user's responsibility to call
  // PrepareForEvaluation before calling this method if necessary,
  // i.e. iff the parameter values have been changed since the last
  // call to evaluate / solve.
  //
  // This is because, as the name implies, we assume that the
  // parameter blocks did not change since the last time
  // PrepareForEvaluation was called (via Solve, Evaluate or
  // EvaluateResidualBlock).
  bool EvaluateResidualBlockAssumingParametersUnchanged(
      ResidualBlockId residual_block_id,
      bool apply_loss_function,
      double* cost,
      double* residuals,
      double** jacobians) const;
 private:
  friend class Solver;
  friend class Covariance;
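A compilable version of the usage sketched in the comments above, assuming the Evaluate and EvaluateResidualBlock signatures shown in this header; the cost functor and the numbers are illustrative.

#include <cstdio>
#include "ceres/ceres.h"

struct QuadraticCost {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

int main() {
  double x = 1.0;
  ceres::Problem problem;
  ceres::ResidualBlockId id = problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<QuadraticCost, 1, 1>(new QuadraticCost),
      nullptr,
      &x);

  // Whole-problem evaluation at the current x.
  double cost = 0.0;
  problem.Evaluate(ceres::Problem::EvaluateOptions(), &cost, nullptr, nullptr,
                   nullptr);
  std::printf("cost at x = 1: %f\n", cost);  // 0.5 * (10 - 1)^2 = 40.5

  // A single residual block, with the loss function applied.
  double residual = 0.0;
  problem.EvaluateResidualBlock(id, /*apply_loss_function=*/true, &cost,
                                &residual, nullptr);
  std::printf("residual: %f\n", residual);  // 10 - 1 = 9
}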

View File

@@ -320,8 +320,8 @@ inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis) {
}
template <typename T>
void RotationMatrixToQuaternion(const T* R, T* quaternion) {
  RotationMatrixToQuaternion(ColumnMajorAdapter3x3(R), quaternion);
}
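A small usage sketch of the overload above. The input is a diagonal rotation matrix, so the row-major versus column-major question does not arise, and the expected output follows the [w, x, y, z] quaternion storage order used by ceres/rotation.h; the values are illustrative.

#include <cstdio>
#include "ceres/rotation.h"

int main() {
  // 180 degree rotation about the z axis.
  const double R[9] = {-1, 0, 0,
                        0, -1, 0,
                        0, 0, 1};
  double q[4];  // stored as [w, x, y, z]
  ceres::RotationMatrixToQuaternion(R, q);
  std::printf("[%f, %f, %f, %f]\n", q[0], q[1], q[2], q[3]);  // ~[0, 0, 0, 1]
}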
// This algorithm comes from "Quaternion Calculus and Fast Animation",

View File

@@ -360,8 +360,7 @@ class CERES_EXPORT Solver {
  //
  // If Solver::Options::preconditioner_type == SUBSET, then
  // residual_blocks_for_subset_preconditioner must be non-empty.
  std::unordered_set<ResidualBlockId>
      residual_blocks_for_subset_preconditioner;
  // Ceres supports using multiple dense linear algebra libraries
  // for dense matrix factorizations. Currently EIGEN and LAPACK are
@@ -839,7 +838,7 @@ class CERES_EXPORT Solver {
  int num_linear_solves = -1;
  // Time (in seconds) spent evaluating the residual vector.
  double residual_evaluation_time_in_seconds = -1.0;
  // Number of residual only evaluations.
  int num_residual_evaluations = -1;
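A hedged sketch of satisfying the requirement stated above, i.e. filling residual_blocks_for_subset_preconditioner when the SUBSET preconditioner is selected. The problem setup, the choice of CGNR and the helper name are assumptions to be checked against the solver documentation.

#include <vector>
#include "ceres/ceres.h"

void SolveWithSubsetPreconditioner(
    ceres::Problem* problem,
    const std::vector<ceres::ResidualBlockId>& well_conditioned_blocks) {
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::CGNR;
  options.preconditioner_type = ceres::SUBSET;
  // Must be non-empty, per the comment in Solver::Options above.
  for (ceres::ResidualBlockId id : well_conditioned_blocks) {
    options.residual_blocks_for_subset_preconditioner.insert(id);
  }
  ceres::Solver::Summary summary;
  ceres::Solve(options, problem, &summary);
}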

View File

@@ -50,7 +50,7 @@ namespace ceres {
// delete on it upon completion.
enum Ownership {
  DO_NOT_TAKE_OWNERSHIP,
  TAKE_OWNERSHIP,
};
// TODO(keir): Considerably expand the explanations of each solver type.
@@ -185,19 +185,19 @@ enum SparseLinearAlgebraLibraryType {
enum DenseLinearAlgebraLibraryType {
  EIGEN,
  LAPACK,
};
// Logging options
// The options get progressively noisier.
enum LoggingType {
  SILENT,
  PER_MINIMIZER_ITERATION,
};
enum MinimizerType {
  LINE_SEARCH,
  TRUST_REGION,
};
enum LineSearchDirectionType {
@@ -412,7 +412,7 @@ enum DumpFormatType {
// specified for the number of residuals. If specified, then the
// number of residuals for that cost function can vary at runtime.
enum DimensionType {
  DYNAMIC = -1,
};
// The differentiation method used to compute numerical derivatives in
@@ -433,7 +433,7 @@ enum NumericDiffMethodType {
enum LineSearchInterpolationType {
  BISECTION,
  QUADRATIC,
  CUBIC,
};
enum CovarianceAlgorithmType {
@@ -448,7 +448,8 @@ enum CovarianceAlgorithmType {
// did not write to that memory location.
const double kImpossibleValue = 1e302;
CERES_EXPORT const char* LinearSolverTypeToString(LinearSolverType type);
CERES_EXPORT bool StringToLinearSolverType(std::string value,
                                           LinearSolverType* type);
@@ -458,23 +459,25 @@ CERES_EXPORT bool StringToPreconditionerType(std::string value,
CERES_EXPORT const char* VisibilityClusteringTypeToString(
    VisibilityClusteringType type);
CERES_EXPORT bool StringToVisibilityClusteringType(
    std::string value, VisibilityClusteringType* type);
CERES_EXPORT const char* SparseLinearAlgebraLibraryTypeToString(
    SparseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToSparseLinearAlgebraLibraryType(
    std::string value, SparseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* DenseLinearAlgebraLibraryTypeToString(
    DenseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToDenseLinearAlgebraLibraryType(
    std::string value, DenseLinearAlgebraLibraryType* type);
CERES_EXPORT const char* TrustRegionStrategyTypeToString(
    TrustRegionStrategyType type);
CERES_EXPORT bool StringToTrustRegionStrategyType(
    std::string value, TrustRegionStrategyType* type);
CERES_EXPORT const char* DoglegTypeToString(DoglegType type);
CERES_EXPORT bool StringToDoglegType(std::string value, DoglegType* type);
@@ -484,39 +487,41 @@ CERES_EXPORT bool StringToMinimizerType(std::string value, MinimizerType* type);
CERES_EXPORT const char* LineSearchDirectionTypeToString(
    LineSearchDirectionType type);
CERES_EXPORT bool StringToLineSearchDirectionType(
    std::string value, LineSearchDirectionType* type);
CERES_EXPORT const char* LineSearchTypeToString(LineSearchType type);
CERES_EXPORT bool StringToLineSearchType(std::string value,
                                         LineSearchType* type);
CERES_EXPORT const char* NonlinearConjugateGradientTypeToString(
    NonlinearConjugateGradientType type);
CERES_EXPORT bool StringToNonlinearConjugateGradientType(
    std::string value, NonlinearConjugateGradientType* type);
CERES_EXPORT const char* LineSearchInterpolationTypeToString(
    LineSearchInterpolationType type);
CERES_EXPORT bool StringToLineSearchInterpolationType(
    std::string value, LineSearchInterpolationType* type);
CERES_EXPORT const char* CovarianceAlgorithmTypeToString(
    CovarianceAlgorithmType type);
CERES_EXPORT bool StringToCovarianceAlgorithmType(
    std::string value, CovarianceAlgorithmType* type);
CERES_EXPORT const char* NumericDiffMethodTypeToString(
    NumericDiffMethodType type);
CERES_EXPORT bool StringToNumericDiffMethodType(std::string value,
                                                NumericDiffMethodType* type);
CERES_EXPORT const char* LoggingTypeToString(LoggingType type);
CERES_EXPORT bool StringtoLoggingType(std::string value, LoggingType* type);
CERES_EXPORT const char* DumpFormatTypeToString(DumpFormatType type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value,
                                         DumpFormatType* type);
CERES_EXPORT bool StringtoDumpFormatType(std::string value, LoggingType* type);
CERES_EXPORT const char* TerminationTypeToString(TerminationType type);
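A tiny usage sketch of the conversion helpers declared above, assuming the usual case-insensitive spellings of the enum names are accepted by the StringTo* functions.

#include <cstdio>
#include "ceres/types.h"

int main() {
  std::printf("%s\n", ceres::LinearSolverTypeToString(ceres::DENSE_QR));

  ceres::LinearSolverType type;
  if (ceres::StringToLinearSolverType("sparse_normal_cholesky", &type)) {
    std::printf("parsed: %s\n", ceres::LinearSolverTypeToString(type));
  }
}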

View File

@@ -41,9 +41,8 @@
#define CERES_TO_STRING(x) CERES_TO_STRING_HELPER(x)
// The Ceres version as a string; for example "1.9.0".
#define CERES_VERSION_STRING                    \
  CERES_TO_STRING(CERES_VERSION_MAJOR)          \
  "." CERES_TO_STRING(CERES_VERSION_MINOR) "." CERES_TO_STRING( \
      CERES_VERSION_REVISION)
#endif  // CERES_PUBLIC_VERSION_H_
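The double indirection above (CERES_TO_STRING forwarding to CERES_TO_STRING_HELPER) is what forces the numeric version macros to expand before they are stringized. A standalone sketch of the same trick with made-up MYLIB_* names.

#include <cstdio>

#define MYLIB_TO_STRING_HELPER(x) #x
#define MYLIB_TO_STRING(x) MYLIB_TO_STRING_HELPER(x)

#define MYLIB_VERSION_MAJOR 2
#define MYLIB_VERSION_MINOR 0
#define MYLIB_VERSION_REVISION 0

#define MYLIB_VERSION_STRING              \
  MYLIB_TO_STRING(MYLIB_VERSION_MAJOR)    \
  "." MYLIB_TO_STRING(MYLIB_VERSION_MINOR) "." MYLIB_TO_STRING( \
      MYLIB_VERSION_REVISION)

int main() {
  // Adjacent string literals concatenate, yielding "2.0.0". Without the
  // helper indirection the #x would stringize the macro name itself.
  std::printf("%s\n", MYLIB_VERSION_STRING);
}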

View File

@@ -33,19 +33,18 @@
#ifndef CERES_NO_ACCELERATE_SPARSE
#include "ceres/accelerate_sparse.h"
#include <algorithm>
#include <string>
#include <vector>
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#define CASESTR(x) \
  case x:          \
    return #x
namespace ceres {
namespace internal {
@@ -69,7 +68,7 @@ const char* SparseStatusToString(SparseStatus_t status) {
// aligned to kAccelerateRequiredAlignment and returns a pointer to the
// aligned start.
void* ResizeForAccelerateAlignment(const size_t required_size,
                                   std::vector<uint8_t>* workspace) {
  // As per the Accelerate documentation, all workspace memory passed to the
  // sparse solver functions must be 16-byte aligned.
  constexpr int kAccelerateRequiredAlignment = 16;
@@ -81,28 +80,29 @@ void* ResizeForAccelerateAlignment(const size_t required_size,
  size_t size_from_aligned_start = workspace->size();
  void* aligned_solve_workspace_start =
      reinterpret_cast<void*>(workspace->data());
  aligned_solve_workspace_start = std::align(kAccelerateRequiredAlignment,
                                             required_size,
                                             aligned_solve_workspace_start,
                                             size_from_aligned_start);
  CHECK(aligned_solve_workspace_start != nullptr)
      << "required_size: " << required_size
      << ", workspace size: " << workspace->size();
  return aligned_solve_workspace_start;
}
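A standalone sketch of the std::align pattern used above: over-allocate a byte buffer, then advance the start pointer to the next boundary that satisfies the required alignment. The buffer size and alignment below are illustrative.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

void* AlignedStart(std::size_t required_size, std::size_t alignment,
                   std::vector<uint8_t>* workspace) {
  // Over-allocate so that an aligned region of required_size always fits.
  workspace->resize(required_size + alignment);
  void* ptr = workspace->data();
  std::size_t space = workspace->size();
  // std::align advances ptr to the next aligned address (shrinking space),
  // or returns nullptr if the region cannot fit.
  return std::align(alignment, required_size, ptr, space);
}

int main() {
  std::vector<uint8_t> workspace;
  void* p = AlignedStart(/*required_size=*/256, /*alignment=*/16, &workspace);
  std::printf("aligned: %s\n",
              (reinterpret_cast<std::uintptr_t>(p) % 16 == 0) ? "yes" : "no");
}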
template <typename Scalar>
void AccelerateSparse<Scalar>::Solve(NumericFactorization* numeric_factor,
                                     DenseVector* rhs_and_solution) {
  // From SparseSolve() documentation in Solve.h
  const int required_size = numeric_factor->solveWorkspaceRequiredStatic +
                            numeric_factor->solveWorkspaceRequiredPerRHS;
  SparseSolve(*numeric_factor,
              *rhs_and_solution,
              ResizeForAccelerateAlignment(required_size, &solve_workspace_));
}
template <typename Scalar>
typename AccelerateSparse<Scalar>::ASSparseMatrix
AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
    CompressedRowSparseMatrix* A) {
@@ -112,7 +112,7 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
  //
  // Accelerate's columnStarts is a long*, not an int*. These types might be
  // different (e.g. ARM on iOS) so always make a copy.
  column_starts_.resize(A->num_rows() + 1);  // +1 for final column length.
  std::copy_n(A->rows(), column_starts_.size(), &column_starts_[0]);
  ASSparseMatrix At;
@@ -136,31 +136,29 @@ AccelerateSparse<Scalar>::CreateSparseMatrixTransposeView(
  return At;
}
template <typename Scalar>
typename AccelerateSparse<Scalar>::SymbolicFactorization
AccelerateSparse<Scalar>::AnalyzeCholesky(ASSparseMatrix* A) {
  return SparseFactor(SparseFactorizationCholesky, A->structure);
}
template <typename Scalar>
typename AccelerateSparse<Scalar>::NumericFactorization
AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
                                   SymbolicFactorization* symbolic_factor) {
  return SparseFactor(*symbolic_factor, *A);
}
template <typename Scalar>
void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
                                        NumericFactorization* numeric_factor) {
  // From SparseRefactor() documentation in Solve.h
  const int required_size =
      std::is_same<Scalar, double>::value
          ? numeric_factor->symbolicFactorization.workspaceSize_Double
          : numeric_factor->symbolicFactorization.workspaceSize_Float;
  return SparseRefactor(
      *A,
      numeric_factor,
      ResizeForAccelerateAlignment(required_size, &factorization_workspace_));
}
// Instantiate only for the specific template types required/supported s/t the
@@ -168,33 +166,34 @@ void AccelerateSparse<Scalar>::Cholesky(ASSparseMatrix* A,
template class AccelerateSparse<double>;
template class AccelerateSparse<float>;
template <typename Scalar>
std::unique_ptr<SparseCholesky> AppleAccelerateCholesky<Scalar>::Create(
    OrderingType ordering_type) {
  return std::unique_ptr<SparseCholesky>(
      new AppleAccelerateCholesky<Scalar>(ordering_type));
}
template <typename Scalar>
AppleAccelerateCholesky<Scalar>::AppleAccelerateCholesky(
    const OrderingType ordering_type)
    : ordering_type_(ordering_type) {}
template <typename Scalar>
AppleAccelerateCholesky<Scalar>::~AppleAccelerateCholesky() {
  FreeSymbolicFactorization();
  FreeNumericFactorization();
}
template <typename Scalar>
CompressedRowSparseMatrix::StorageType
AppleAccelerateCholesky<Scalar>::StorageType() const {
  return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
}
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
    CompressedRowSparseMatrix* lhs, std::string* message) {
  CHECK_EQ(lhs->storage_type(), StorageType());
  if (lhs == NULL) {
    *message = "Failure: Input lhs is NULL.";
@@ -235,9 +234,11 @@ LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
  return LINEAR_SOLVER_SUCCESS;
}
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Solve(
    const double* rhs, double* solution, std::string* message) {
  CHECK_EQ(numeric_factor_->status, SparseStatusOK)
      << "Solve called without a call to Factorize first ("
      << SparseStatusToString(numeric_factor_->status) << ").";
@@ -261,7 +262,7 @@ LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Solve(
  return LINEAR_SOLVER_SUCCESS;
}
template <typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
  if (symbolic_factor_) {
    SparseCleanup(*symbolic_factor_);
@@ -269,7 +270,7 @@ void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
  }
}
template <typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
  if (numeric_factor_) {
    SparseCleanup(*numeric_factor_);
@@ -282,7 +283,7 @@ void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
template class AppleAccelerateCholesky<double>;
template class AppleAccelerateCholesky<float>;
}  // namespace internal
}  // namespace ceres
#endif  // CERES_NO_ACCELERATE_SPARSE

Some files were not shown because too many files have changed in this diff.