Compare commits


57 Commits

Author SHA1 Message Date
1a5777fd5e Fix compilation 2016-09-09 23:59:55 +02:00
ab7d449299 Merge branch 'master' into temp_display_optimization 2016-09-09 23:14:13 +02:00
Julian Eisel
5c706c6496 Merge branch 'master' into temp_display_optimization 2016-04-18 21:15:47 +02:00
a3762f1fab Define GPU buffer streams for material/ui data.
Declare UV/Normal/Color buffers as deprecated. We will prefer
unified interleaved formats if we need to.

The vertex format stays separate because it is needed for passes
such as the shadow map, edge pass, or UV drawing.
2016-03-12 12:33:15 +01:00
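As a rough illustration of the unified interleaved format this commit moves toward (the struct name and exact members below are hypothetical, not Blender's actual GPU buffer definitions), a single stream per vertex could look something like:

    /* Hypothetical interleaved layout: one stream carries position,
     * normal, color and UV together instead of separate buffers. */
    typedef struct VertexStream {
        float co[3];           /* position */
        short no[4];           /* normal, padded to keep 32-bit alignment */
        unsigned char col[4];  /* vertex color, RGBA */
        float uv[2];           /* active UV layer */
    } VertexStream;

    /* Attribute pointers would then all use stride == sizeof(VertexStream)
     * and offsetof(VertexStream, member) as their offsets. */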
Antony Ryakiotakis
7fa7a0b703 WIP: Add some structs for vertex format of derivedmeshes. 2016-03-11 23:57:33 +01:00
Antony Ryakiotakis
a728572ce6 Merge branch 'master' into HEAD 2016-03-11 22:15:23 +01:00
c54b44592b Merge branch 'master' into temp_display_optimization 2015-12-28 20:55:01 +01:00
3b3cd248db editmesh VBO: support deformed vertex coordinates too. 2015-12-27 20:55:42 +01:00
613d505eab Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/intern/editderivedmesh.c
2015-12-27 20:18:15 +01:00
f713f39182 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/intern/editderivedmesh.c
	source/blender/gpu/intern/gpu_buffers.c
2015-12-06 23:53:58 +01:00
d8dd5fa42c Merge branch 'master' into temp_display_optimization 2015-11-15 21:01:47 +01:00
fa05817bb5 Merge branch 'master' into temp_display_optimization 2015-10-20 22:32:53 +03:00
7b8d680064 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/gpu/intern/gpu_buffers.c
2015-10-16 13:42:37 +03:00
8471b7145a WIP editmode colors 2015-07-30 12:41:31 +02:00
5b316690ed Merge branch 'master' into temp_display_optimization 2015-07-29 16:37:43 +02:00
388695b1f0 Mapped face selection uses VBOs properly now 2015-07-28 18:21:35 +02:00
9074d9572e Edit mode drawing
Draw all materials of the mesh.
Don't display hidden triangles.
2015-07-28 17:52:45 +02:00
25d86e4459 First basic GPU upload for edit derived meshes.
Normals, coordinates and triangles work.
2015-07-28 16:58:59 +02:00
e9c7e0ee8c Merge branch 'master' into temp_display_optimization 2015-07-28 16:58:38 +02:00
e41731286e Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/BKE_DerivedMesh.h
	source/blender/blenkernel/intern/cdderivedmesh.c
	source/blender/windowmanager/intern/wm_subwindow.c
2015-07-24 17:45:51 +02:00
0b48bfae87 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/intern/cdderivedmesh.c
2015-07-15 18:46:04 +02:00
eeb1d58167 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/intern/cdderivedmesh.c
2015-07-14 23:56:43 +02:00
4f9b58cb71 VBO offscreen selection drawing, cdderivedmesh
Get rid of legacy drawing; it's only used for selection,
in which case we can prepare a temporary color buffer and draw
at once. The code is not complete here because we still redundantly
set the draw color in the draw function and don't omit hidden
faces automatically. Still, it works 100% without immediate mode
now.
2015-07-14 22:36:34 +02:00
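A minimal sketch of the color-coded selection idea described above, assuming the usual approach of encoding a face index as an RGBA color so the whole mesh can be drawn in one call and the hit face read back from the framebuffer; the function name and encoding are illustrative, not Blender's actual helpers:

    static void face_index_to_rgba(unsigned int index, unsigned char r_col[4])
    {
        unsigned int i = index + 1;  /* reserve 0 for "nothing selected" */
        r_col[0] = (unsigned char)(i & 0xFF);
        r_col[1] = (unsigned char)((i >> 8) & 0xFF);
        r_col[2] = (unsigned char)((i >> 16) & 0xFF);
        r_col[3] = 0xFF;
    }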
94a6399a89 Merge branch 'master' into temp_display_optimization 2015-07-14 21:23:41 +02:00
f292199403 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/blenkernel/intern/cdderivedmesh.c
	source/blender/blenkernel/intern/subsurf_ccg.c
2015-07-14 15:58:39 +02:00
05634836f0 Fast drawing for sculpting multires using VBOs (the non-VBO path
won't have a fast mode, but we won't spend time on that).

With this commit the branch should be at full feature parity with master.
2015-07-14 15:26:37 +02:00
fb5ec0997e Subsurf drawing refactoring:
All textured drawing now works correctly using VBOs (with the usual CPU
overhead still, but this will be taken care of separately).
Solid texture painting should also work fine.
Renamed ccg GPU functions so they are more consistent with cddm.
2015-07-14 12:28:07 +02:00
18b81af276 Merge branch 'master' into temp_display_optimization 2015-07-13 18:13:01 +02:00
43bf4a0f51 Bah, merge change that was left uncommitted 2015-07-12 16:40:47 +02:00
5a7b0cf1c7 Merge branch 'master' into temp_display_optimization
Conflicts:
	source/blender/gpu/intern/gpu_buffers.c
2015-07-12 16:26:12 +02:00
e8b187ca14 Merge branch 'master' into temp_display_optimization 2015-07-02 21:51:29 +02:00
2edc5b807b Use shorts for uploading normals to the GPU instead of floats.
The range of shorts is acceptable for good-looking normals.

We allocate a buffer with 4 shorts to keep data aligned to 32-bit
boundaries. The total mesh cost for pure-triangle meshes is now equal
to the master branch even with index buffers, and the memory gains are
even larger for ngons.
2015-07-01 12:16:03 +02:00
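For reference, packing a float normal into four shorts (the fourth being the padding mentioned above, so each normal stays 32-bit aligned at 8 bytes) might look roughly like this; the function name is illustrative, not Blender's:

    static void normal_float_to_short4(const float no[3], short r[4])
    {
        r[0] = (short)(no[0] * 32767.0f);
        r[1] = (short)(no[1] * 32767.0f);
        r[2] = (short)(no[2] * 32767.0f);
        r[3] = 0;  /* padding: 4 shorts = 8 bytes, 32-bit aligned */
    }

The buffer would then be uploaded and bound with GL_SHORT as the normal component type.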
c255161468 Cleanup: Use the GPU material counter instead of allocating a new
counter array every time.
2015-06-30 18:51:15 +02:00
52187f826c Refactor subsurf multi-material textured drawing.
Use the same system as cdderivedmesh - the result is still not correct,
though.
2015-06-30 15:17:10 +02:00
9566b13672 GLSL: fix one more error that meant attributes would not get
uploaded correctly.
2015-06-30 12:22:53 +02:00
a5f03ee002 Merge branch 'master' into temp_display_optimization 2015-06-30 12:00:15 +02:00
60ad7c09c1 Temporarily fix crash until subsurf textured multi-material drawing
is completely supported.
2015-06-29 19:32:26 +02:00
09665a5301 GLSL drawing redesign:
Use the new upload scheme. The idea here is that materials share a VBO where
the size of each element is the maximum element size across all of the
mesh's materials.

While this will waste some space if material sizes differ, it is the
simplest scheme to use and allows easy reuse of indices, as opposed to
separating the materials into separate vertex buffers. In fact, if we do
that, management gets quite complex and the code much more error-prone.

I may write an extra blog post to explain the choices here
at some point.
2015-06-29 17:18:39 +02:00
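A hedged sketch of the addressing scheme this describes (all names below are hypothetical, not Blender's GPU API): because every material uses the same element size, the maximum across all materials, one index buffer can address any material's vertices at a predictable byte offset.

    #include <stddef.h>

    /* Shared element size: the maximum per-vertex size over all materials. */
    static size_t shared_vbo_elem_size(const size_t *mat_elem_sizes, int totmat)
    {
        size_t max_size = 0;
        for (int a = 0; a < totmat; a++) {
            if (mat_elem_sizes[a] > max_size)
                max_size = mat_elem_sizes[a];
        }
        return max_size;
    }

    /* Byte offset of vertex `v` of a material whose vertices start at
     * index `mat_start` within the shared VBO. */
    static size_t shared_vbo_offset(size_t mat_start, size_t v, size_t elem_size)
    {
        return (mat_start + v) * elem_size;
    }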
850e9ce176 Redesign of textured drawing: Get rid of the triangle_to_mface array.
Instead we can be smarter here and add an mface array to the
GPUBufferMaterial.
This will help us do fewer iterations on the CPU for quad meshes, as well as
avoid checking materials for every face (faces are now always sorted per
material, so this happens implicitly).
2015-06-29 16:45:50 +02:00
a50de5fd94 Merge branch 'master' into temp_display_optimization 2015-06-27 17:21:12 +02:00
5d592a9686 Merge branch 'master' into temp_display_optimization 2015-06-26 17:51:41 +02:00
33ddcf4dc3 Fix crash in viewport benchmark file with GLSL.
Total loops were calculated erroneously, probably due to the ngon-to-loop
conversion.
2015-06-26 17:09:49 +02:00
1951604bab Make code compile with GPU_DEBUG 2015-06-26 16:18:06 +02:00
3351ab28d9 Merge branch 'master' into temp_display_optimization 2015-06-26 14:59:37 +02:00
43eabfe5b8 Fix error with vertex color upload for subsurf 2015-06-25 15:06:23 +02:00
8a077c90d2 Subsurf VBOs now properly support colors 2015-06-24 18:44:33 +02:00
c314507cbf Semi-fix UVs not working at higher subdivision levels. It looks like
the issue is mesh colors somehow being used. Disabled for now, but the real
fix would be to investigate why they are there in the first place.
2015-06-24 17:27:39 +02:00
62f0f0607f First version of textured subsurf drawing with VBOs and index buffers.
TODOs:

* UVs fail for high subdivision
* Vertex colors still not supported
* Texture painting still not supported
2015-06-24 16:36:09 +02:00
188c1cc184 Fix crash reported on blenderartists
A really bad issue that meant we would miscount many mixed triangle/quad
meshes.
2015-06-24 11:58:12 +02:00
1b64055c2c WIP code that handles textured drawing for subsurf with vertex buffers. 2015-06-23 19:22:58 +02:00
4c553ce09f Merge branch 'master' into temp_display_optimization 2015-06-23 17:52:37 +02:00
d4ad7836b6 Subsurf drawing:
Solid shading uses indexed drawing (textured/GLSL drawing are still not
implemented).

For subsurf we might be able to squeeze more memory but it would only
work for smooth shading and it requires more elaborate counting for all
data upload functions. May be interesting to do in the future though.
2015-06-23 16:57:18 +02:00
21414b511f Cleanup: Use "elements", the OpenGL term, instead of "points"; this
makes it a bit clearer what the use is intended for.
2015-06-23 12:26:31 +02:00
22f38137d7 Merge branch 'master' into temp_display_optimization 2015-06-23 12:07:05 +02:00
f018b99b51 Merge branch 'master' into temp_display_optimization 2015-06-23 11:28:53 +02:00
0c50ba7ec3 Use index buffers for drawing with vertex buffers.
Tests show that we gain approximately 20-25% performance
from this for solid-mode drawing.

Most of the memory/performance gain is for quad meshes,
but triangle meshes gain performance as well.

This commit will not get rid of the CPU overhead in
some modes (textured/GLSL) yet, but it prepares the
code for caching changes to make things better.
2015-06-22 19:17:12 +02:00
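In plain OpenGL terms (standard GL calls, not Blender's GPU wrappers, and hypothetical variable names), the change amounts to replacing glDrawArrays over duplicated per-triangle vertices with glDrawElements over unique vertices plus an element buffer, roughly:

    #include <GL/glew.h>

    /* Draw tottri triangles using an element (index) buffer: shared
     * vertices are stored once and referenced by index. */
    static void draw_tris_indexed(GLuint vert_vbo, GLuint index_vbo, int tottri)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vert_vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_vbo);
        /* 3 unsigned-int indices per triangle, starting at offset 0 */
        glDrawElements(GL_TRIANGLES, tottri * 3, GL_UNSIGNED_INT, NULL);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }

For a quad this means four unique vertices plus six indices instead of six duplicated vertices, which is where most of the memory saving for quad meshes comes from.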
530ee08e93 Port Subsurf to VBO code from GPUDataRequest branch.
This adds support for vertex buffers in solid-shaded mode for subsurf
modifiers, making subsurf drawing much faster in this mode.

It also moves towards the goal of data-driven requests for our new
renderer system, but this will need further changes down the line.

Everything should work as before, with the exception of simplified
multires drawing in sculpting (a feature enabled while rotating the view).
2015-06-22 15:57:04 +02:00
732 changed files with 14739 additions and 91673 deletions

View File

@@ -155,7 +155,6 @@ option_defaults_init(
_init_BUILDINFO
_init_CODEC_FFMPEG
_init_CYCLES_OSL
_init_CYCLES_OPENSUBDIV
_init_IMAGE_OPENEXR
_init_INPUT_NDOF
_init_JACK
@@ -173,7 +172,6 @@ if(UNIX AND NOT APPLE)
# disable less important dependencies by default
set(_init_CODEC_FFMPEG OFF)
set(_init_CYCLES_OSL OFF)
set(_init_CYCLES_OPENSUBDIV OFF)
set(_init_IMAGE_OPENEXR OFF)
set(_init_JACK OFF)
set(_init_OPENCOLLADA OFF)
@@ -333,7 +331,7 @@ option(WITH_ALEMBIC "Enable Alembic Support" OFF)
option(WITH_ALEMBIC_HDF5 "Enable Legacy Alembic Support (not officially supported)" OFF)
if(APPLE)
option(WITH_CODEC_QUICKTIME "Enable Quicktime Support" OFF)
option(WITH_CODEC_QUICKTIME "Enable Quicktime Support" ON)
endif()
# 3D format support
@@ -343,9 +341,9 @@ option(WITH_OPENCOLLADA "Enable OpenCollada Support (http://www.opencollada.or
# Sound output
option(WITH_SDL "Enable SDL for sound and joystick support" ${_init_SDL})
option(WITH_OPENAL "Enable OpenAL Support (http://www.openal.org)" ON)
option(WITH_JACK "Enable JACK Support (http://www.jackaudio.org)" ${_init_JACK})
option(WITH_JACK "Enable Jack Support (http://www.jackaudio.org)" ${_init_JACK})
if(UNIX AND NOT APPLE)
option(WITH_JACK_DYNLOAD "Enable runtime dynamic JACK libraries loading" OFF)
option(WITH_JACK_DYNLOAD "Enable runtime dynamic Jack libraries loading" OFF)
endif()
if(UNIX AND NOT APPLE)
option(WITH_SDL_DYNLOAD "Enable runtime dynamic SDL libraries loading" OFF)
@@ -402,9 +400,9 @@ option(WITH_CYCLES "Enable Cycles Render Engine" ON)
option(WITH_CYCLES_STANDALONE "Build Cycles standalone application" OFF)
option(WITH_CYCLES_STANDALONE_GUI "Build Cycles standalone with GUI" OFF)
option(WITH_CYCLES_OSL "Build Cycles with OSL support" ${_init_CYCLES_OSL})
option(WITH_CYCLES_OPENSUBDIV "Build Cycles with OpenSubdiv support" ${_init_CYCLES_OPENSUBDIV})
option(WITH_CYCLES_OPENSUBDIV "Build Cycles with OpenSubdiv support" ON)
option(WITH_CYCLES_CUDA_BINARIES "Build Cycles CUDA binaries" OFF)
set(CYCLES_CUDA_BINARIES_ARCH sm_20 sm_21 sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 CACHE STRING "CUDA architectures to build binaries for")
set(CYCLES_CUDA_BINARIES_ARCH sm_20 sm_21 sm_30 sm_35 sm_37 sm_50 sm_52 CACHE STRING "CUDA architectures to build binaries for")
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
unset(PLATFORM_DEFAULT)
option(WITH_CYCLES_LOGGING "Build Cycles with logging support" ON)
@@ -508,12 +506,6 @@ mark_as_advanced(WITH_C11)
option(WITH_CXX11 "Build with C++11 standard enabled, for development use only!" ${_cxx11_init})
mark_as_advanced(WITH_CXX11)
# Compiler toolchain
if(CMAKE_COMPILER_IS_GNUCC)
option(WITH_LINKER_GOLD "Use ld.gold linker which is usually faster than ld.bfd" ON)
mark_as_advanced(WITH_LINKER_GOLD)
endif()
# Dependency graph
option(WITH_LEGACY_DEPSGRAPH "Build Blender with legacy dependency graph" ON)
mark_as_advanced(WITH_LEGACY_DEPSGRAPH)
@@ -635,21 +627,9 @@ if(APPLE)
set(CMAKE_FIND_ROOT_PATH ${CMAKE_OSX_SYSROOT})
endif()
if(WITH_CXX11)
# 10.9 is our min. target, if you use higher sdk, weak linking happens
if(CMAKE_OSX_DEPLOYMENT_TARGET)
if(${CMAKE_OSX_DEPLOYMENT_TARGET} VERSION_LESS 10.9)
message(STATUS "Setting deployment target to 10.9, lower versions are incompatible with WITH_CXX11")
set(CMAKE_OSX_DEPLOYMENT_TARGET "10.9" CACHE STRING "" FORCE)
endif()
else()
set(CMAKE_OSX_DEPLOYMENT_TARGET "10.9" CACHE STRING "" FORCE)
endif()
else()
if(NOT CMAKE_OSX_DEPLOYMENT_TARGET)
# 10.6 is our min. target, if you use higher sdk, weak linking happens
set(CMAKE_OSX_DEPLOYMENT_TARGET "10.6" CACHE STRING "" FORCE)
endif()
if(NOT CMAKE_OSX_DEPLOYMENT_TARGET)
# 10.6 is our min. target, if you use higher sdk, weak linking happens
set(CMAKE_OSX_DEPLOYMENT_TARGET "10.6" CACHE STRING "" FORCE)
endif()
if(NOT ${CMAKE_GENERATOR} MATCHES "Xcode")
@@ -737,7 +717,7 @@ elseif(WITH_CYCLES OR WITH_OPENIMAGEIO OR WITH_AUDASPACE OR WITH_INTERNATIONAL O
# Keep enabled
else()
# New dependency graph needs either Boost or C++11 for function bindings.
if(NOT WITH_CXX11)
if(NOT USE_CXX11)
# Enabled but we don't need it
set(WITH_BOOST OFF)
endif()
@@ -972,6 +952,11 @@ if(WITH_CYCLES)
)
endif()
endif()
if(WITH_CYCLES_OPENSUBDIV AND NOT WITH_OPENSUBDIV)
message(STATUS "WITH_CYCLES_OPENSUBDIV requires WITH_OPENSUBDIV to be ON, turning OFF")
set(WITH_CYCLES_OPENSUBDIV OFF)
endif()
endif()
if(WITH_INTERNATIONAL)
@@ -992,7 +977,7 @@ if(SUPPORT_SSE_BUILD)
add_definitions(-D__SSE__ -D__MMX__)
endif()
if(SUPPORT_SSE2_BUILD)
set(PLATFORM_CFLAGS " ${PLATFORM_CFLAGS} ${COMPILER_SSE2_FLAG}")
set(PLATFORM_CFLAGS " ${COMPILER_SSE2_FLAG} ${PLATFORM_CFLAGS}")
add_definitions(-D__SSE2__)
if(NOT SUPPORT_SSE_BUILD) # dont double up
add_definitions(-D__MMX__)

View File

@@ -25,8 +25,7 @@
ARGS=$( \
getopt \
-o s:i:t:h \
--long source:,install:,tmp:,info:,threads:,help,show-deps,no-sudo,no-build,no-confirm,use-cxx11,\
with-all,with-opencollada,\
--long source:,install:,tmp:,info:,threads:,help,show-deps,no-sudo,no-build,no-confirm,with-all,with-opencollada,\
ver-ocio:,ver-oiio:,ver-llvm:,ver-osl:,ver-osd:,ver-openvdb:,\
force-all,force-python,force-numpy,force-boost,\
force-ocio,force-openexr,force-oiio,force-llvm,force-osl,force-osd,force-openvdb,\
@@ -104,11 +103,6 @@ ARGUMENTS_INFO="\"COMMAND LINE ARGUMENTS:
--no-confirm
Disable any interaction with user (suitable for automated run).
--use-cxx11
Build all libraries in cpp11 'mode' (will be mandatory soon in blender2.8 branch).
NOTE: If your compiler is gcc-6.0 or above, you probably *want* to enable this option (since it's default
standard starting from this version).
--with-all
By default, a number of optional and not-so-often needed libraries are not installed.
This option will try to install them, at the cost of potential conflicts (depending on
@@ -287,7 +281,6 @@ SUDO="sudo"
NO_BUILD=false
NO_CONFIRM=false
USE_CXX11=false
PYTHON_VERSION="3.5.1"
PYTHON_VERSION_MIN="3.5"
@@ -499,9 +492,6 @@ while true; do
--no-confirm)
NO_CONFIRM=true; shift; continue
;;
--use-cxx11)
USE_CXX11=true; shift; continue
;;
--with-all)
WITH_ALL=true; shift; continue
;;
@@ -713,21 +703,6 @@ if [ "$WITH_ALL" = true -a "$OPENCOLLADA_SKIP" = false ]; then
fi
WARNING "****WARNING****"
PRINT "If you are experiencing issues building Blender, _*TRY A FRESH, CLEAN BUILD FIRST*_!"
PRINT "The same goes for install_deps itself, if you encounter issues, please first erase everything in $SRC and $INST"
PRINT "(provided obviously you did not add anything yourself in those dirs!), and run install_deps.sh again!"
PRINT "Often, changes in the libs built by this script, or in your distro package, cannot be handled simply, so..."
PRINT ""
PRINT "You may also try to use the '--build-foo' options to bypass your distribution's packages"
PRINT "for some troublesome/buggy libraries..."
PRINT ""
PRINT ""
PRINT "Ran with:"
PRINT " install_deps.sh $COMMANDLINE"
PRINT ""
PRINT ""
# This has to be done here, because user might force some versions...
PYTHON_SOURCE=( "https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz" )
@@ -791,20 +766,7 @@ OPENCOLLADA_REPO_BRANCH="master"
FFMPEG_SOURCE=( "http://ffmpeg.org/releases/ffmpeg-$FFMPEG_VERSION.tar.bz2" )
CXXFLAGS_BACK=$CXXFLAGS
if [ "$USE_CXX11" = true ]; then
WARNING "You are trying to use c++11, this *should* go smoothely with any very recent distribution
However, if you are experiencing linking errors (also when building Blender itself), please try the following:
* Re-run this script with `--build-all --force-all` options.
* Ensure your gcc version is at the very least 4.8, if possible you should really rather use gcc-5.1 or above.
Please note that until the transition to C++11-built libraries if completed in your distribution, situation will
remain fuzzy and incompatibilities may happen..."
PRINT ""
PRINT ""
CXXFLAGS="$CXXFLAGS -std=c++11"
export CXXFLAGS
fi
#### Show Dependencies ####
@@ -817,7 +779,7 @@ Those libraries should be available as packages in all recent distributions (opt
* libjpeg, libpng, libtiff, [libopenjpeg], [libopenal].
* libx11, libxcursor, libxi, libxrandr, libxinerama (and other libx... as needed).
* libsqlite3, libbz2, libssl, libfftw3, libxml2, libtinyxml, yasm, libyaml-cpp.
* libsdl1.2, libglew, [libglewmx].\""
* libsdl1.2, libglew, libglewmx.\""
DEPS_SPECIFIC_INFO="\"BUILDABLE DEPENDENCIES:
@@ -991,7 +953,7 @@ prepare_opt() {
# Check whether the current package needs to be recompiled, based on a dummy file containing a magic number in its name...
magic_compile_check() {
if [ -f $INST/.$1-magiccheck-$2-$USE_CXX11 ]; then
if [ -f $INST/.$1-magiccheck-$2 ]; then
return 0
else
return 1
@@ -1000,7 +962,7 @@ magic_compile_check() {
magic_compile_set() {
rm -f $INST/.$1-magiccheck-*
touch $INST/.$1-magiccheck-$2-$USE_CXX11
touch $INST/.$1-magiccheck-$2
}
# Note: should clean nicely in $INST, but not in $SRC, when we switch to a new version of a lib...
@@ -1660,10 +1622,6 @@ compile_OIIO() {
# fi
cmake_d="$cmake_d -D USE_OCIO=OFF"
if [ "$USE_CXX11" = true ]; then
cmake_d="$cmake_d -D OIIO_BUILD_CPP11=ON"
fi
if file /bin/cp | grep -q '32-bit'; then
cflags="-fPIC -m32 -march=i686"
else
@@ -1875,9 +1833,6 @@ compile_OSL() {
cmake_d="$cmake_d -D OSL_BUILD_PLUGINS=OFF"
cmake_d="$cmake_d -D OSL_BUILD_TESTS=OFF"
cmake_d="$cmake_d -D USE_SIMD=sse2"
if [ "$USE_CXX11" = true ]; then
cmake_d="$cmake_d -D OSL_BUILD_CPP11=1"
fi
#~ cmake_d="$cmake_d -D ILMBASE_VERSION=$ILMBASE_VERSION"
@@ -2607,9 +2562,8 @@ install_DEB() {
git libfreetype6-dev libx11-dev flex bison libtbb-dev libxxf86vm-dev \
libxcursor-dev libxi-dev wget libsqlite3-dev libxrandr-dev libxinerama-dev \
libbz2-dev libncurses5-dev libssl-dev liblzma-dev libreadline-dev $OPENJPEG_DEV \
libopenal-dev libglew-dev yasm $THEORA_DEV $VORBIS_DEV $OGG_DEV \
libopenal-dev libglew-dev libglewmx-dev yasm $THEORA_DEV $VORBIS_DEV $OGG_DEV \
libsdl1.2-dev libfftw3-dev patch bzip2 libxml2-dev libtinyxml-dev libjemalloc-dev"
# libglewmx-dev (broken in deb testing currently...)
OPENJPEG_USE=true
VORBIS_USE=true
@@ -4043,6 +3997,9 @@ install_OTHER() {
fi
if [ "$_do_compile_llvm" = true ]; then
install_packages_DEB libffi-dev
# LLVM can't find the debian ffi header dir
_FFI_INCLUDE_DIR=`dpkg -L libffi-dev | grep -e ".*/ffi.h" | sed -r 's/(.*)\/ffi.h/\1/'`
PRINT ""
compile_LLVM
have_llvm=true
@@ -4061,6 +4018,7 @@ install_OTHER() {
if [ "$_do_compile_osl" = true ]; then
if [ "$have_llvm" = true ]; then
install_packages_DEB flex bison libtbb-dev
PRINT ""
compile_OSL
else
@@ -4079,6 +4037,7 @@ install_OTHER() {
fi
if [ "$_do_compile_osd" = true ]; then
install_packages_DEB flex bison libtbb-dev
PRINT ""
compile_OSD
fi
@@ -4095,6 +4054,10 @@ install_OTHER() {
fi
if [ "$_do_compile_collada" = true ]; then
install_packages_DEB libpcre3-dev
# Find path to libxml shared lib...
_XML2_LIB=`dpkg -L libxml2-dev | grep -e ".*/libxml2.so"`
# No package
PRINT ""
compile_OpenCOLLADA
fi
@@ -4179,6 +4142,16 @@ print_info_ffmpeglink() {
}
print_info() {
PRINT ""
PRINT ""
WARNING "****WARNING****"
PRINT "If you are experiencing issues building Blender, _*TRY A FRESH, CLEAN BUILD FIRST*_!"
PRINT "The same goes for install_deps itself, if you encounter issues, please first erase everything in $SRC and $INST"
PRINT "(provided obviously you did not add anything yourself in those dirs!), and run install_deps.sh again!"
PRINT "Often, changes in the libs built by this script, or in your distro package, cannot be handled simply, so..."
PRINT ""
PRINT "You may also try to use the '--build-foo' options to bypass your distribution's packages"
PRINT "for some troublesome/buggy libraries..."
PRINT ""
PRINT ""
PRINT "Ran with:"
@@ -4191,12 +4164,6 @@ print_info() {
_buildargs="$_buildargs -U *OPENCOLORIO* -U *OPENEXR* -U *OPENIMAGEIO* -U *LLVM* -U *CYCLES*"
_buildargs="$_buildargs -U *OPENSUBDIV* -U *OPENVDB* -U *COLLADA* -U *FFMPEG* -U *ALEMBIC*"
if [ "$USE_CXX11" = true ]; then
_1="-D WITH_CXX11=ON"
PRINT " $_1"
_buildargs="$_buildargs $_1"
fi
_1="-D WITH_CODEC_SNDFILE=ON"
PRINT " $_1"
_buildargs="$_buildargs $_1"
@@ -4360,6 +4327,3 @@ PRINT ""
# Switch back to user language.
LANG=LANG_BACK
export LANG
CXXFLAGS=$CXXFLAGS_BACK
export CXXFLAGS

View File

@@ -94,7 +94,6 @@ all_repositories = {
r'git://git.blender.org/blender-translations.git': 'blender-translations',
r'git://git.blender.org/blender-addons.git': 'blender-addons',
r'git://git.blender.org/blender-addons-contrib.git': 'blender-addons-contrib',
r'git://git.blender.org/blender-dev-tools.git': 'blender-dev-tools',
r'https://svn.blender.org/svnroot/bf-blender/': 'lib svn',
}
@@ -129,7 +128,6 @@ def schedule_force_build(name):
forcesched.CodebaseParameter(hide=True, codebase="blender-translations"),
forcesched.CodebaseParameter(hide=True, codebase="blender-addons"),
forcesched.CodebaseParameter(hide=True, codebase="blender-addons-contrib"),
forcesched.CodebaseParameter(hide=True, codebase="blender-dev-tools"),
forcesched.CodebaseParameter(hide=True, codebase="lib svn")],
properties=[]))
@@ -145,7 +143,6 @@ def schedule_build(name, hour, minute=0):
"blender-translations": {"repository": "", "branch": "master"},
"blender-addons": {"repository": "", "branch": "master"},
"blender-addons-contrib": {"repository": "", "branch": "master"},
"blender-dev-tools": {"repository": "", "branch": "master"},
"lib svn": {"repository": "", "branch": "trunk"}},
branch=current_branch,
builderNames=[name],
@@ -267,8 +264,7 @@ def generic_builder(id, libdir='', branch='', rsync=False):
for submodule in ('blender-translations',
'blender-addons',
'blender-addons-contrib',
'blender-dev-tools'):
'blender-addons-contrib'):
f.addStep(git_submodule_step(submodule))
f.addStep(git_step(branch))
@@ -303,8 +299,7 @@ add_builder(c, 'linux_glibc219_i686_cmake', '', generic_builder, hour=3)
add_builder(c, 'linux_glibc219_x86_64_cmake', '', generic_builder, hour=4)
add_builder(c, 'win32_cmake_vc2013', 'windows_vc12', generic_builder, hour=3)
add_builder(c, 'win64_cmake_vc2013', 'win64_vc12', generic_builder, hour=4)
add_builder(c, 'win32_cmake_vc2015', 'windows_vc14', generic_builder, hour=5)
add_builder(c, 'win64_cmake_vc2015', 'win64_vc14', generic_builder, hour=6)
add_builder(c, 'win64_cmake_vc2015', 'win64_vc14', generic_builder, hour=5)
# STATUS TARGETS
#

View File

@@ -72,11 +72,8 @@ if 'cmake' in builder:
# Set up OSX architecture
if builder.endswith('x86_64_10_6_cmake'):
cmake_extra_options.append('-DCMAKE_OSX_ARCHITECTURES:STRING=x86_64')
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda8-hack/bin/nvcc')
cmake_extra_options.append('-DWITH_CODEC_QUICKTIME=OFF')
cmake_extra_options.append('-DCMAKE_OSX_DEPLOYMENT_TARGET=10.6')
build_cubins = False
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda-hack/bin/nvcc')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE=/usr/local/cuda8-hack/bin/nvcc')
elif builder.startswith('win'):
if builder.endswith('_vc2015'):
@@ -93,7 +90,8 @@ if 'cmake' in builder:
elif builder.startswith('win32'):
bits = 32
cmake_options.extend(['-G', 'Visual Studio 12 2013'])
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/bin/nvcc.exe')
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v7.5/bin/nvcc.exe')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/bin/nvcc.exe')
elif builder.startswith('linux'):
tokens = builder.split("_")
@@ -113,7 +111,8 @@ if 'cmake' in builder:
cuda_chroot_name = 'buildbot_' + deb_name + '_x86_64'
targets = ['player', 'blender', 'cuda']
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda-8.0/bin/nvcc')
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda-7.5/bin/nvcc')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE=/usr/local/cuda-8.0/bin/nvcc')
cmake_options.append("-C" + os.path.join(blender_dir, cmake_config_file))
@@ -183,8 +182,10 @@ if 'cmake' in builder:
print('Condifuration FAILED!')
sys.exit(retcode)
if 'win32' in builder or 'win64' in builder:
command = ['cmake', '--build', '.', '--target', target_name, '--config', 'Release']
if 'win32' in builder:
command = ['msbuild', 'INSTALL.vcxproj', '/Property:PlatformToolset=v120_xp', '/p:Configuration=Release']
elif 'win64' in builder:
command = ['msbuild', 'INSTALL.vcxproj', '/p:Configuration=Release']
else:
command = target_chroot_prefix + ['make', '-s', '-j2', target_name]

View File

@@ -1,15 +1,15 @@
# - Find JACK library
# Find the native JACK includes and library
# - Find Jack library
# Find the native Jack includes and library
# This module defines
# JACK_INCLUDE_DIRS, where to find jack.h, Set when
# JACK_INCLUDE_DIR is found.
# JACK_LIBRARIES, libraries to link against to use JACK.
# JACK_ROOT_DIR, The base directory to search for JACK.
# JACK_LIBRARIES, libraries to link against to use Jack.
# JACK_ROOT_DIR, The base directory to search for Jack.
# This can also be an environment variable.
# JACK_FOUND, If false, do not try to use JACK.
# JACK_FOUND, If false, do not try to use Jack.
#
# also defined, but not for general use are
# JACK_LIBRARY, where to find the JACK library.
# JACK_LIBRARY, where to find the Jack library.
#=============================================================================
# Copyright 2011 Blender Foundation.

View File

@@ -27,12 +27,13 @@ if(EXISTS ${SOURCE_DIR}/.git)
OUTPUT_VARIABLE MY_WC_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND git branch --list master blender-v* --contains ${MY_WC_HASH}
execute_process(COMMAND git branch --list master --contains ${MY_WC_HASH}
WORKING_DIRECTORY ${SOURCE_DIR}
OUTPUT_VARIABLE _git_contains_check
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(NOT _git_contains_check STREQUAL "")
STRING(REGEX REPLACE "^[ \t]+" "" _git_contains_check "${_git_contains_check}")
if(_git_contains_check STREQUAL "master")
set(MY_WC_BRANCH "master")
else()
execute_process(COMMAND git show-ref --tags -d
@@ -47,22 +48,6 @@ if(EXISTS ${SOURCE_DIR}/.git)
if(_git_tag_hashes MATCHES "${_git_head_hash}")
set(MY_WC_BRANCH "master")
else()
execute_process(COMMAND git branch --contains ${MY_WC_HASH}
WORKING_DIRECTORY ${SOURCE_DIR}
OUTPUT_VARIABLE _git_contains_branches
OUTPUT_STRIP_TRAILING_WHITESPACE)
string(REGEX REPLACE "^\\*[ \t]+" "" _git_contains_branches "${_git_contains_branches}")
string(REGEX REPLACE "[\r\n]+" ";" _git_contains_branches "${_git_contains_branches}")
string(REGEX REPLACE ";[ \t]+" ";" _git_contains_branches "${_git_contains_branches}")
foreach(_branch ${_git_contains_branches})
if (NOT "${_branch}" MATCHES "\\(HEAD.*")
set(MY_WC_BRANCH "${_branch}")
break()
endif()
endforeach()
unset(_branch)
unset(_git_contains_branches)
endif()
unset(_git_tag_hashes)

View File

@@ -12,7 +12,6 @@ set(WITH_CODEC_FFMPEG ON CACHE BOOL "" FORCE)
set(WITH_CODEC_SNDFILE ON CACHE BOOL "" FORCE)
set(WITH_CYCLES ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_OSL ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_FFTW3 ON CACHE BOOL "" FORCE)
set(WITH_LIBMV ON CACHE BOOL "" FORCE)
set(WITH_LIBMV_SCHUR_SPECIALIZATIONS ON CACHE BOOL "" FORCE)

View File

@@ -16,7 +16,6 @@ set(WITH_CODEC_FFMPEG OFF CACHE BOOL "" FORCE)
set(WITH_CODEC_SNDFILE OFF CACHE BOOL "" FORCE)
set(WITH_CYCLES OFF CACHE BOOL "" FORCE)
set(WITH_CYCLES_OSL OFF CACHE BOOL "" FORCE)
set(WITH_CYCLES_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_FFTW3 OFF CACHE BOOL "" FORCE)
set(WITH_LIBMV OFF CACHE BOOL "" FORCE)
set(WITH_LLVM OFF CACHE BOOL "" FORCE)
@@ -49,7 +48,6 @@ set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
set(WITH_OPENCOLORIO OFF CACHE BOOL "" FORCE)
set(WITH_OPENIMAGEIO OFF CACHE BOOL "" FORCE)
set(WITH_OPENMP OFF CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_RAYOPTIMIZATION OFF CACHE BOOL "" FORCE)
set(WITH_SDL OFF CACHE BOOL "" FORCE)

View File

@@ -1,79 +0,0 @@
# Turn everything ON thats expected for an official release builds.
#
# Example usage:
# cmake -C../blender/build_files/cmake/config/blender_release.cmake ../blender
#
set(WITH_ALEMBIC ON CACHE BOOL "" FORCE)
set(WITH_BUILDINFO ON CACHE BOOL "" FORCE)
set(WITH_BULLET ON CACHE BOOL "" FORCE)
set(WITH_CODEC_AVI ON CACHE BOOL "" FORCE)
set(WITH_CODEC_FFMPEG ON CACHE BOOL "" FORCE)
set(WITH_CODEC_SNDFILE ON CACHE BOOL "" FORCE)
set(WITH_CYCLES ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_OSL ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_OPENSUBDIV ON CACHE BOOL "" FORCE)
set(WITH_FFTW3 ON CACHE BOOL "" FORCE)
set(WITH_LIBMV ON CACHE BOOL "" FORCE)
set(WITH_LIBMV_SCHUR_SPECIALIZATIONS ON CACHE BOOL "" FORCE)
set(WITH_GAMEENGINE ON CACHE BOOL "" FORCE)
set(WITH_COMPOSITOR ON CACHE BOOL "" FORCE)
set(WITH_FREESTYLE ON CACHE BOOL "" FORCE)
set(WITH_GHOST_XDND ON CACHE BOOL "" FORCE)
set(WITH_IK_SOLVER ON CACHE BOOL "" FORCE)
set(WITH_IK_ITASC ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_CINEON ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_DDS ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_FRAMESERVER ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_HDR ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_OPENEXR ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_OPENJPEG ON CACHE BOOL "" FORCE)
set(WITH_IMAGE_TIFF ON CACHE BOOL "" FORCE)
set(WITH_INPUT_NDOF ON CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL ON CACHE BOOL "" FORCE)
set(WITH_JACK ON CACHE BOOL "" FORCE)
set(WITH_LZMA ON CACHE BOOL "" FORCE)
set(WITH_LZO ON CACHE BOOL "" FORCE)
set(WITH_MOD_BOOLEAN ON CACHE BOOL "" FORCE)
set(WITH_MOD_FLUID ON CACHE BOOL "" FORCE)
set(WITH_MOD_REMESH ON CACHE BOOL "" FORCE)
set(WITH_MOD_SMOKE ON CACHE BOOL "" FORCE)
set(WITH_MOD_OCEANSIM ON CACHE BOOL "" FORCE)
set(WITH_AUDASPACE ON CACHE BOOL "" FORCE)
set(WITH_OPENAL ON CACHE BOOL "" FORCE)
set(WITH_OPENCOLLADA ON CACHE BOOL "" FORCE)
set(WITH_OPENCOLORIO ON CACHE BOOL "" FORCE)
set(WITH_OPENMP ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB ON CACHE BOOL "" FORCE)
set(WITH_OPENVDB_BLOSC ON CACHE BOOL "" FORCE)
set(WITH_PYTHON_INSTALL ON CACHE BOOL "" FORCE)
set(WITH_RAYOPTIMIZATION ON CACHE BOOL "" FORCE)
set(WITH_SDL ON CACHE BOOL "" FORCE)
set(WITH_X11_XINPUT ON CACHE BOOL "" FORCE)
set(WITH_X11_XF86VMODE ON CACHE BOOL "" FORCE)
set(WITH_PLAYER ON CACHE BOOL "" FORCE)
set(WITH_MEM_JEMALLOC ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_CUDA_BINARIES ON CACHE BOOL "" FORCE)
set(CYCLES_CUDA_BINARIES_ARCH sm_20;sm_21;sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61 CACHE STRING "" FORCE)
# platform dependent options
if(UNIX AND NOT APPLE)
set(WITH_JACK ON CACHE BOOL "" FORCE)
set(WITH_DOC_MANPAGE ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
elseif(WIN32)
set(WITH_JACK OFF CACHE BOOL "" FORCE)
if(NOT CMAKE_COMPILER_IS_GNUCC)
set(WITH_OPENSUBDIV ON CACHE BOOL "" FORCE)
else()
# MinGW exceptions
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
set(WITH_CODEC_SNDFILE OFF CACHE BOOL "" FORCE)
set(WITH_CYCLES_OSL OFF CACHE BOOL "" FORCE)
endif()
elseif(APPLE)
set(WITH_JACK ON CACHE BOOL "" FORCE)
set(WITH_CODEC_QUICKTIME ON CACHE BOOL "" FORCE)
set(WITH_OPENSUBDIV OFF CACHE BOOL "" FORCE)
endif()

View File

@@ -415,7 +415,7 @@ function(setup_liblinks
if(WITH_OPENCOLORIO)
target_link_libraries(${target} ${OPENCOLORIO_LIBRARIES})
endif()
if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(WITH_OPENSUBDIV)
if(WIN32 AND NOT UNIX)
file_list_suffix(OPENSUBDIV_LIBRARIES_DEBUG "${OPENSUBDIV_LIBRARIES}" "_d")
target_link_libraries_debug(${target} "${OPENSUBDIV_LIBRARIES_DEBUG}")
@@ -518,8 +518,7 @@ function(setup_liblinks
target_link_libraries(${target}
${BLENDER_GL_LIBRARIES})
#target_link_libraries(${target} ${PLATFORM_LINKLIBS} ${CMAKE_DL_LIBS})
target_link_libraries(${target} ${PLATFORM_LINKLIBS})
target_link_libraries(${target} ${PLATFORM_LINKLIBS} ${CMAKE_DL_LIBS})
endfunction()
@@ -746,7 +745,7 @@ function(SETUP_BLENDER_SORTED_LIBS)
list(APPEND BLENDER_SORTED_LIBS bf_intern_gpudirect)
endif()
if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(WITH_OPENSUBDIV)
list(APPEND BLENDER_SORTED_LIBS bf_intern_opensubdiv)
endif()
@@ -1601,4 +1600,4 @@ MACRO(WINDOWS_SIGN_TARGET target)
)
endif()
endif()
ENDMACRO()
ENDMACRO()

View File

@@ -97,8 +97,6 @@ if(WIN32)
endif()
set(CPACK_PACKAGE_EXECUTABLES "blender" "blender")
set(CPACK_CREATE_DESKTOP_LINKS "blender" "blender")
include(CPack)
# Target for build_archive.py script, to automatically pass along

View File

@@ -24,11 +24,7 @@
# Libraries configuration for Apple.
if(NOT DEFINED LIBDIR)
if(WITH_CXX11)
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin)
else()
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin-9.x.universal)
endif()
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin-9.x.universal)
else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif()
@@ -54,14 +50,15 @@ if(WITH_ALEMBIC)
set(ALEMBIC_LIBRARIES Alembic)
endif()
if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(WITH_OPENSUBDIV)
set(OPENSUBDIV ${LIBDIR}/opensubdiv)
set(OPENSUBDIV_LIBPATH ${OPENSUBDIV}/lib)
find_library(OSD_LIB_CPU NAMES osdCPU PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSD_LIB_GPU NAMES osdGPU PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSL_LIB_UTIL NAMES osdutil PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSL_LIB_CPU NAMES osdCPU PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSL_LIB_GPU NAMES osdGPU PATHS ${OPENSUBDIV_LIBPATH})
set(OPENSUBDIV_INCLUDE_DIR ${OPENSUBDIV}/include)
set(OPENSUBDIV_INCLUDE_DIRS ${OPENSUBDIV_INCLUDE_DIR})
list(APPEND OPENSUBDIV_LIBRARIES ${OSD_LIB_CPU} ${OSD_LIB_GPU})
list(APPEND OPENSUBDIV_LIBRARIES ${OSL_LIB_UTIL} ${OSL_LIB_CPU} ${OSL_LIB_GPU})
endif()
if(WITH_JACK)
@@ -78,7 +75,7 @@ if(WITH_CODEC_SNDFILE)
set(SNDFILE ${LIBDIR}/sndfile)
set(SNDFILE_INCLUDE_DIRS ${SNDFILE}/include)
set(SNDFILE_LIBRARIES sndfile FLAC ogg vorbis vorbisenc)
set(SNDFILE_LIBPATH ${SNDFILE}/lib ${LIBDIR}/ffmpeg/lib) # TODO, deprecate
set(SNDFILE_LIBPATH ${SNDFILE}/lib ${FFMPEG}/lib) # TODO, deprecate
endif()
if(WITH_PYTHON)
@@ -136,17 +133,7 @@ if(WITH_IMAGE_OPENEXR)
set(OPENEXR ${LIBDIR}/openexr)
set(OPENEXR_INCLUDE_DIR ${OPENEXR}/include)
set(OPENEXR_INCLUDE_DIRS ${OPENEXR_INCLUDE_DIR} ${OPENEXR}/include/OpenEXR)
if(WITH_CXX11)
set(OPENEXR_POSTFIX -2_2)
else()
set(OPENEXR_POSTFIX)
endif()
set(OPENEXR_LIBRARIES
Iex${OPENEXR_POSTFIX}
Half
IlmImf${OPENEXR_POSTFIX}
Imath${OPENEXR_POSTFIX}
IlmThread${OPENEXR_POSTFIX})
set(OPENEXR_LIBRARIES Iex Half IlmImf Imath IlmThread)
set(OPENEXR_LIBPATH ${OPENEXR}/lib)
endif()
@@ -157,22 +144,9 @@ if(WITH_CODEC_FFMPEG)
avcodec avdevice avformat avutil
mp3lame swscale x264 xvidcore theora theoradec theoraenc vorbis vorbisenc vorbisfile ogg
)
if(WITH_CXX11)
set(FFMPEG_LIBRARIES ${FFMPEG_LIBRARIES} schroedinger orc vpx)
endif()
set(FFMPEG_LIBPATH ${FFMPEG}/lib)
endif()
if(WITH_OPENJPEG OR WITH_CODEC_FFMPEG)
# use openjpeg from libdir that is linked into ffmpeg
if(WITH_CXX11)
set(OPENJPEG ${LIBDIR}/openjpeg)
set(WITH_SYSTEM_OPENJPEG ON)
set(OPENJPEG_INCLUDE_DIRS ${OPENJPEG}/include)
set(OPENJPEG_LIBRARIES ${OPENJPEG}/lib/libopenjpeg.a)
endif()
endif()
find_library(SYSTEMSTUBS_LIBRARY
NAMES
SystemStubs
@@ -250,11 +224,7 @@ if(WITH_SDL)
set(SDL_INCLUDE_DIR ${SDL}/include)
set(SDL_LIBRARY SDL2)
set(SDL_LIBPATH ${SDL}/lib)
if(WITH_CXX11)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -framework ForceFeedback")
else()
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -lazy_framework ForceFeedback")
endif()
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -lazy_framework ForceFeedback")
endif()
set(PNG "${LIBDIR}/png")
@@ -275,27 +245,22 @@ endif()
if(WITH_BOOST)
set(BOOST ${LIBDIR}/boost)
set(BOOST_INCLUDE_DIR ${BOOST}/include)
if(WITH_CXX11)
set(BOOST_POSTFIX)
else()
set(BOOST_POSTFIX -mt)
endif()
set(BOOST_LIBRARIES
boost_date_time${BOOST_POSTFIX}
boost_filesystem${BOOST_POSTFIX}
boost_regex${BOOST_POSTFIX}
boost_system${BOOST_POSTFIX}
boost_thread${BOOST_POSTFIX}
boost_wave${BOOST_POSTFIX}
boost_date_time-mt
boost_filesystem-mt
boost_regex-mt
boost_system-mt
boost_thread-mt
boost_wave-mt
)
if(WITH_INTERNATIONAL)
list(APPEND BOOST_LIBRARIES boost_locale${BOOST_POSTFIX})
list(APPEND BOOST_LIBRARIES boost_locale-mt)
endif()
if(WITH_CYCLES_NETWORK)
list(APPEND BOOST_LIBRARIES boost_serialization${BOOST_POSTFIX})
list(APPEND BOOST_LIBRARIES boost_serialization-mt)
endif()
if(WITH_OPENVDB)
list(APPEND BOOST_LIBRARIES boost_iostreams${BOOST_POSTFIX})
list(APPEND BOOST_LIBRARIES boost_iostreams-mt)
endif()
set(BOOST_LIBPATH ${BOOST}/lib)
set(BOOST_DEFINITIONS)

View File

@@ -344,7 +344,7 @@ if(WITH_LLVM OR WITH_SDL_DYNLOAD)
)
endif()
if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(WITH_OPENSUBDIV)
find_package_wrapper(OpenSubdiv)
set(OPENSUBDIV_LIBRARIES ${OPENSUBDIV_LIBRARIES})
@@ -352,7 +352,6 @@ if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(NOT OPENSUBDIV_FOUND)
set(WITH_OPENSUBDIV OFF)
set(WITH_CYCLES_OPENSUBDIV OFF)
message(STATUS "OpenSubdiv not found")
endif()
endif()
@@ -384,18 +383,17 @@ add_definitions(-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE
if(CMAKE_COMPILER_IS_GNUCC)
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
if(WITH_LINKER_GOLD)
execute_process(
COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if("${LD_VERSION}" MATCHES "GNU gold")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fuse-ld=gold")
else()
message(STATUS "GNU gold linker isn't available, using the default system linker.")
endif()
unset(LD_VERSION)
# use ld.gold linker if available, could make optional
execute_process(
COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if("${LD_VERSION}" MATCHES "GNU gold")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fuse-ld=gold")
else()
message(STATUS "GNU gold linker isn't available, using the default system linker.")
endif()
unset(LD_VERSION)
# CLang is the same as GCC for now.
elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")

View File

@@ -39,22 +39,20 @@ endmacro()
add_definitions(-DWIN32)
# Minimum MSVC Version
if(CMAKE_CXX_COMPILER_ID MATCHES MSVC)
if(MSVC_VERSION EQUAL 1800)
set(_min_ver "18.0.31101")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2013 (Update 4, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
if(MSVC_VERSION EQUAL 1800)
set(_min_ver "18.0.31101")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2013 (Update 4, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
if(MSVC_VERSION EQUAL 1900)
set(_min_ver "19.0.24210")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2015 (Update 3, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
endif()
if(MSVC_VERSION EQUAL 1900)
set(_min_ver "19.0.24210")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2015 (Update 3, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
endif()
unset(_min_ver)
@@ -129,10 +127,8 @@ if(NOT DEFINED LIBDIR)
message(STATUS "32 bit compiler detected.")
set(LIBDIR_BASE "windows")
endif()
if(MSVC_VERSION EQUAL 1910)
message(STATUS "Visual Studio 2017 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc14)
elseif(MSVC_VERSION EQUAL 1900)
if(MSVC_VERSION EQUAL 1900)
message(STATUS "Visual Studio 2015 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc14)
else()
@@ -444,7 +440,7 @@ if(WITH_MOD_CLOTH_ELTOPO)
)
endif()
if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
if(WITH_OPENSUBDIV)
set(OPENSUBDIV_INCLUDE_DIR ${LIBDIR}/opensubdiv/include)
set(OPENSUBDIV_LIBPATH ${LIBDIR}/opensubdiv/lib)
set(OPENSUBDIV_LIBRARIES ${OPENSUBDIV_LIBPATH}/osdCPU.lib ${OPENSUBDIV_LIBPATH}/osdGPU.lib)

View File

@@ -6,10 +6,10 @@
BASE_DIR="$PWD"
blender_srcdir=$(dirname -- $0)/../..
blender_version=$(grep "BLENDER_VERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_char=$(grep "BLENDER_VERSION_CHAR\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_cycle=$(grep "BLENDER_VERSION_CYCLE\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_subversion=$(grep "BLENDER_SUBVERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version=$(grep "BLENDER_VERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender.h" | awk '{print $3}')
blender_version_char=$(grep "BLENDER_VERSION_CHAR\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender.h" | awk '{print $3}')
blender_version_cycle=$(grep "BLENDER_VERSION_CYCLE\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender.h" | awk '{print $3}')
blender_subversion=$(grep "BLENDER_SUBVERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender.h" | awk '{print $3}')
if [ "$blender_version_cycle" = "release" ] ; then
VERSION=$(expr $blender_version / 100).$(expr $blender_version % 100)$blender_version_char

View File

@@ -187,7 +187,7 @@ The next table describes the information in the file-header.
</table>
<p>
<a href="https://en.wikipedia.org/wiki/Endianness">Endianness</a> addresses the way values are ordered in a sequence of bytes(see the <a href="#example-endianess">example</a> below):
<a href="http://en.wikipedia.org/wiki/Endianness">Endianness</a> addresses the way values are ordered in a sequence of bytes(see the <a href="#example-endianess">example</a> below):
</p>
<ul>

View File

@@ -699,7 +699,7 @@ LAYOUT_FILE =
# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
# extension is automatically appended if omitted. This requires the bibtex tool
# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
# For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.
@@ -1145,7 +1145,7 @@ HTML_EXTRA_FILES =
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
@@ -1752,7 +1752,7 @@ LATEX_SOURCE_CODE = NO
# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. See
# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
# http://en.wikipedia.org/wiki/BibTeX and \cite for more info.
# The default value is: plain.
# This tag requires that the tag GENERATE_LATEX is set to YES.

View File

@@ -4,7 +4,7 @@ Persistent Handler Example
By default handlers are freed when loading new files, in some cases you may
wan't the handler stay running across multiple files (when the handler is
part of an add-on for example).
part of an addon for example).
For this the :data:`bpy.app.handlers.persistent` decorator needs to be used.
"""

View File

@@ -5,7 +5,7 @@ Intro
.. warning::
Most of this object should only be useful if you actually manipulate i18n stuff from Python.
If you are a regular add-on, you should only bother about :const:`contexts` member,
If you are a regular addon, you should only bother about :const:`contexts` member,
and the :func:`register`/:func:`unregister` functions! The :func:`pgettext` family of functions
should only be used in rare, specific cases (like e.g. complex "composited" UI strings...).
@@ -21,11 +21,11 @@ Intro
Then, call ``bpy.app.translations.register(__name__, your_dict)`` in your ``register()`` function, and
``bpy.app.translations.unregister(__name__)`` in your ``unregister()`` one.
The ``Manage UI translations`` add-on has several functions to help you collect strings to translate, and
The ``Manage UI translations`` addon has several functions to help you collect strings to translate, and
generate the needed python code (the translation dictionary), as well as optional intermediary po files
if you want some... See
`How to Translate Blender <https://wiki.blender.org/index.php/Dev:Doc/Process/Translate_Blender>`_ and
`Using i18n in Blender Code <https://wiki.blender.org/index.php/Dev:Source/Interface/Internationalization>`_
`How to Translate Blender <http://wiki.blender.org/index.php/Dev:Doc/Process/Translate_Blender>`_ and
`Using i18n in Blender Code <http://wiki.blender.org/index.php/Dev:Source/Interface/Internationalization>`_
for more info.
Module References

View File

@@ -1,10 +1,10 @@
bl_info = {
"name": "Example Add-on Preferences",
"name": "Example Addon Preferences",
"author": "Your Name Here",
"version": (1, 0),
"blender": (2, 65, 0),
"location": "SpaceBar Search -> Add-on Preferences Example",
"description": "Example Add-on",
"location": "SpaceBar Search -> Addon Preferences Example",
"description": "Example Addon",
"warning": "",
"wiki_url": "",
"tracker_url": "",
@@ -18,7 +18,7 @@ from bpy.props import StringProperty, IntProperty, BoolProperty
class ExampleAddonPreferences(AddonPreferences):
# this must match the add-on name, use '__package__'
# this must match the addon name, use '__package__'
# when defining this in a submodule of a python package.
bl_idname = __name__
@@ -37,7 +37,7 @@ class ExampleAddonPreferences(AddonPreferences):
def draw(self, context):
layout = self.layout
layout.label(text="This is a preferences view for our add-on")
layout.label(text="This is a preferences view for our addon")
layout.prop(self, "filepath")
layout.prop(self, "number")
layout.prop(self, "boolean")
@@ -46,7 +46,7 @@ class ExampleAddonPreferences(AddonPreferences):
class OBJECT_OT_addon_prefs_example(Operator):
"""Display example preferences"""
bl_idname = "object.addon_prefs_example"
bl_label = "Add-on Preferences Example"
bl_label = "Addon Preferences Example"
bl_options = {'REGISTER', 'UNDO'}
def execute(self, context):

View File

@@ -2,9 +2,9 @@
Extending Menus
+++++++++++++++
When creating menus for add-ons you can't reference menus
in Blender's default scripts.
Instead, the add-on can add menu items to existing menus.
When creating menus for addons you can't reference menus in Blender's default
scripts.
Instead, the addon can add menu items to existing menus.
The function menu_draw acts like :class:`Menu.draw`.
"""

View File

@@ -13,7 +13,7 @@ be animated, accessed from the user interface and from python.
definitions are not, this means whenever you load blender the class needs
to be registered too.
This is best done by creating an add-on which loads on startup and registers
This is best done by creating an addon which loads on startup and registers
your properties.
.. note::

View File

@@ -49,7 +49,7 @@ vec2d[:] = vec3d[:2]
# Vectors support 'swizzle' operations
# See https://en.wikipedia.org/wiki/Swizzling_(computer_graphics)
# See http://en.wikipedia.org/wiki/Swizzling_(computer_graphics)
vec.xyz = vec.zyx
vec.xy = vec4d.zw
vec.xyz = vec4d.wzz

File diff suppressed because it is too large.

View File

@@ -23,7 +23,7 @@ The features exposed closely follow the C API,
giving python access to the functions used by blenders own mesh editing tools.
For an overview of BMesh data types and how they reference each other see:
`BMesh Design Document <https://wiki.blender.org/index.php/Dev:Source/Modeling/BMesh/Design>`_ .
`BMesh Design Document <http://wiki.blender.org/index.php/Dev:2.6/Source/Modeling/BMesh/Design>`_ .
.. note::
@@ -31,12 +31,13 @@ For an overview of BMesh data types and how they reference each other see:
**Disk** and **Radial** data is not exposed by the python api since this is for internal use only.
.. warning:: TODO items are...
.. warning::
TODO items are...
* add access to BMesh **walkers**
* add custom-data manipulation functions add/remove/rename.
Example Script
--------------

View File

@@ -18,7 +18,7 @@ amongst our own scripts and make it easier to use python scripts from other proj
Using our style guide for your own scripts makes it easier if you eventually want to contribute them to blender.
This style guide is known as pep8 and can be found `here <https://www.python.org/dev/peps/pep-0008/>`_
This style guide is known as pep8 and can be found `here <http://www.python.org/dev/peps/pep-0008>`_
A brief listing of pep8 criteria.
@@ -316,7 +316,7 @@ use to join a list of strings (the list may be temporary). In the following exam
Join is fastest on many strings,
`string formatting <https://wiki.blender.org/index.php/Dev:Source/Modeling/BMesh/Design>`__
`string formatting <http://docs.python.org/py3k/library/string.html#string-formatting>`__
is quite fast too (better for converting data types). String arithmetic is slowest.

View File

@@ -1,4 +1,3 @@
*******
Gotchas
*******
@@ -39,6 +38,7 @@ but some operators are more picky about when they run.
In most cases you can figure out what context an operator needs
simply be seeing how it's used in Blender and thinking about what it does.
Unfortunately if you're still stuck - the only way to **really** know
whats going on is to read the source code for the poll function and see what its checking.
@@ -82,6 +82,7 @@ it should be reported to the bug tracker.
Stale Data
==========
No updates after setting values
-------------------------------
@@ -173,8 +174,8 @@ In this situation you can...
.. _info_gotcha_mesh_faces:
N-Gons and Tessellation Faces
=============================
NGons and Tessellation Faces
============================
Since 2.63 NGons are supported, this adds some complexity
since in some cases you need to access triangles/quads still (some exporters for example).
@@ -508,7 +509,7 @@ Unicode Problems
Python supports many different encodings so there is nothing stopping you from
writing a script in ``latin1`` or ``iso-8859-15``.
See `pep-0263 <https://www.python.org/dev/peps/pep-0263/>`_
See `pep-0263 <http://www.python.org/dev/peps/pep-0263/>`_
However this complicates matters for Blender's Python API because ``.blend`` files don't have an explicit encoding.
@@ -656,7 +657,7 @@ Here are some general hints to avoid running into these problems.
.. note::
To find the line of your script that crashes you can use the ``faulthandler`` module.
See the `faulthandler docs <https://docs.python.org/dev/library/faulthandler.html>`_.
See `faulthandler docs <http://docs.python.org/dev/library/faulthandler.html>`_.
While the crash may be in Blenders C/C++ code,
this can help a lot to track down the area of the script that causes the crash.

View File

@@ -43,7 +43,8 @@ scene manipulation, automation, defining your own toolset and customization.
On startup Blender scans the ``scripts/startup/`` directory for Python modules and imports them.
The exact location of this directory depends on your installation.
See the :ref:`directory layout docs <blender_manual:getting-started_installing-config-directories>`.
`See the directory layout docs
<https://www.blender.org/manual/getting_started/installing_blender/directorylayout.html>`__
Script Loading
@@ -76,22 +77,22 @@ To run as modules:
- The obvious way, ``import some_module`` command from the text window or interactive console.
- Open as a text block and tick "Register" option, this will load with the blend file.
- copy into one of the directories ``scripts/startup``, where they will be automatically imported on startup.
- define as an add-on, enabling the add-on will load it as a Python module.
- define as an addon, enabling the addon will load it as a Python module.
Add-ons
Addons
------
Some of Blenders functionality is best kept optional,
alongside scripts loaded at startup we have add-ons which are kept in their own directory ``scripts/addons``,
alongside scripts loaded at startup we have addons which are kept in their own directory ``scripts/addons``,
and only load on startup if selected from the user preferences.
The only difference between add-ons and built-in Python modules is that add-ons must contain a ``bl_info``
The only difference between addons and built-in Python modules is that addons must contain a ``bl_info``
variable which Blender uses to read metadata such as name, author, category and URL.
The User Preferences add-on listing uses **bl_info** to display information about each add-on.
The user preferences addon listing uses **bl_info** to display information about each addon.
`See Add-ons <https://wiki.blender.org/index.php/Dev:Py/Scripts/Guidelines/Addons>`__
`See Addons <http://wiki.blender.org/index.php/Dev:2.5/Py/Scripts/Guidelines/Addons>`__
for details on the ``bl_info`` dictionary.
@@ -222,7 +223,7 @@ These functions usually appear at the bottom of the script containing class regi
You can also use them for internal purposes setting up data for your own tools but take care
since register won't re-run when a new blend file is loaded.
The register/unregister calls are used so it's possible to toggle add-ons and reload scripts while Blender runs.
The register/unregister calls are used so it's possible to toggle addons and reload scripts while Blender runs.
If the register calls were placed in the body of the script, registration would be called on import,
meaning there would be no distinction between importing a module or loading its classes into Blender.

View File

@@ -51,7 +51,8 @@ A quick list of helpful things to know before starting:
| ``scripts/startup/bl_operators`` for operators.
Exact location depends on platform, see:
:ref:`Configuration and Data Paths <blender_manual:getting-started_installing-config-directories>`.
`Configuration and Data Paths
<https://www.blender.org/manual/getting_started/installing_blender/directorylayout.html>`__.
Running Scripts

View File

@@ -27,7 +27,7 @@ There are 3 main uses for the terminal, these are:
.. note::
For Linux and macOS users this means starting the terminal first, then running Blender from within it.
For Linux and OSX users this means starting the terminal first, then running Blender from within it.
On Windows the terminal can be enabled from the help menu.
@@ -306,7 +306,7 @@ Advantages include:
This is marked advanced because to run Blender as a Python module requires a special build option.
For instructions on building see
`Building Blender as a Python module <https://wiki.blender.org/index.php/User:Ideasman42/BlenderAsPyModule>`_
`Building Blender as a Python module <http://wiki.blender.org/index.php/User:Ideasman42/BlenderAsPyModule>`_
Python Safety (Build Option)

View File

@@ -1,6 +1,6 @@
Add-on Tutorial
###############
Addon Tutorial
##############
************
Introduction
@@ -36,7 +36,6 @@ Suggested reading before starting this tutorial.
To best troubleshoot any error message Python prints while writing scripts you run blender with from a terminal,
see :ref:`Use The Terminal <use_the_terminal>`.
Documentation Links
===================
@@ -47,48 +46,51 @@ While going through the tutorial you may want to look into our reference documen
- :mod:`bpy.context` api reference. -
*Handy to have a list of available items your script may operate on.*
- :class:`bpy.types.Operator`. -
*The following add-ons define operators, these docs give details and more examples of operators.*
*The following addons define operators, these docs give details and more examples of operators.*
*******
Add-ons
*******
******
Addons
******
What is an Add-on?
==================
An add-on is simply a Python module with some additional requirements so Blender can display it in a list with useful
What is an Addon?
=================
An addon is simply a Python module with some additional requirements so Blender can display it in a list with useful
information.
To give an example, here is the simplest possible add-on.
To give an example, here is the simplest possible addon.
.. code-block:: python
bl_info = {"name": "My Test Add-on", "category": "Object"}
bl_info = {"name": "My Test Addon", "category": "Object"}
def register():
print("Hello World")
def unregister():
print("Goodbye World")
- ``bl_info`` is a dictionary containing add-on metadata such as the title,
version and author to be displayed in the user preferences add-on list.
- ``register`` is a function which only runs when enabling the add-on,
this means the module can be loaded without activating the add-on.
- ``unregister`` is a function to unload anything set up by ``register``; this is called when the add-on is disabled.
- ``bl_info`` is a dictionary containing addon meta-data such as the title, version and author to be displayed in the
user preferences addon list.
- ``register`` is a function which only runs when enabling the addon, this means the module can be loaded without
activating the addon.
- ``unregister`` is a function to unload anything set up by ``register``; this is called when the addon is disabled.
Notice this add-on does not do anything related to Blender (the :mod:`bpy` module is not imported, for example).
This is a contrived example of an add-on that serves to illustrate the point
that the base requirements of an add-on are simple.
Notice this addon does not do anything related to Blender (the :mod:`bpy` module is not imported, for example).
An add-on will typically register operators, panels, menu items etc., but it's worth noting that _any_ script can do this,
This is a contrived example of an addon that serves to illustrate the point
that the base requirements of an addon are simple.
An addon will typically register operators, panels, menu items etc., but it's worth noting that _any_ script can do this,
when executed from the text editor or even the interactive console - there is nothing inherently different about an
add-on that allows it to integrate with Blender, such functionality is just provided by the :mod:`bpy` module for any
addon that allows it to integrate with Blender, such functionality is just provided by the :mod:`bpy` module for any
script to access.
So an add-on is just a way to encapsulate a Python module in a way a user can easily utilize.
So an addon is just a way to encapsulate a Python module in a way a user can easily utilize.
.. note::
@@ -97,14 +99,14 @@ So an add-on is just a way to encapsulate a Python module in a way a user can ea
Messages will be printed when enabling and disabling.
Your First Add-on
=================
Your First Addon
================
The simplest possible add-on above is useful as an example but not much else.
This next add-on is simple but shows how to integrate a script into Blender using an ``Operator``
The simplest possible addon above was useful as an example but not much else.
This next addon is simple but shows how to integrate a script into Blender using an ``Operator``
which is the typical way to define a tool accessed from menus, buttons and keyboard shortcuts.
For the first example we will make a script that simply moves all objects in a scene.
For the first example we'll make a script that simply moves all objects in a scene.
Write The Script
@@ -121,14 +123,20 @@ Add the following script to the text editor in Blender.
obj.location.x += 1.0
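Only the script's final line is visible in this hunk; assuming the standard form of the tutorial example, the whole script is roughly the following sketch.

.. code-block:: python

   import bpy

   # Move every object in the active scene one unit along the X axis.
   scene = bpy.context.scene
   for obj in scene.objects:
       obj.location.x += 1.0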
Click the :ref:`Run Script button <blender_manual:editors-text-run-script>`,
all objects in the active scene are moved by 1.0 Blender unit.
.. image:: run_script.png
:width: 924px
:align: center
:height: 574px
:alt: Run Script button
Click the Run Script button, all objects in the active scene are moved by 1.0 Blender unit.
Next we'll make this script into an addon.
Write the Add-on (Simple)
-------------------------
Write the Addon (Simple)
------------------------
This add-on takes the body of the script above, and adds it to an operator's ``execute()`` function.
This addon takes the body of the script above, and adds it to an operator's ``execute()`` function.
.. code-block:: python
@@ -165,7 +173,7 @@ This add-on takes the body of the script above, and adds them to an operator's `
# This allows you to run the script directly from Blender's text editor
# to test the add-on without having to install it.
# to test the addon without having to install it.
if __name__ == "__main__":
register()
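For reference, a sketch of the complete operator add-on this hunk belongs to; the class name and ``bl_idname`` below are assumptions for illustration, not copied from the diff.

.. code-block:: python

   import bpy

   bl_info = {"name": "Move X Axis", "category": "Object"}


   class ObjectMoveX(bpy.types.Operator):
       """Move all objects in the scene along the X axis"""
       bl_idname = "object.move_x"      # identifier used by menus and keymaps
       bl_label = "Move X by One"
       bl_options = {'REGISTER', 'UNDO'}

       def execute(self, context):
           # Same body as the plain script, wrapped in an operator.
           for obj in context.scene.objects:
               obj.location.x += 1.0
           return {'FINISHED'}


   def register():
       bpy.utils.register_class(ObjectMoveX)


   def unregister():
       bpy.utils.unregister_class(ObjectMoveX)


   # This allows you to run the script directly from Blender's text editor
   # to test the add-on without having to install it.
   if __name__ == "__main__":
       register()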
@@ -198,33 +206,33 @@ Do this by pressing :kbd:`Spacebar` to bring up the operator search dialog and t
The objects should move as before.
*Keep this add-on open in Blender for the next step - Installing.*
*Keep this addon open in Blender for the next step - Installing.*
Install The Add-on
------------------
Install The Addon
-----------------
Once you have your add-on within Blender's text editor,
Once you have your addon within Blender's text editor,
you will want to be able to install it so it can be enabled in the user preferences to load on startup.
Even though the add-on above is a test, let's go through the steps anyway so you know how to do it for later.
Even though the addon above is a test, let's go through the steps anyway so you know how to do it for later.
To install the Blender text as an add-on, you will first have to save it to disk; take care to obey the naming
To install the Blender text as an addon, you will first have to save it to disk; take care to obey the naming
restrictions that apply to Python modules and end with a ``.py`` extension.
Once the file is on disk, you can install it as you would for an add-on downloaded online.
Once the file is on disk, you can install it as you would for an addon downloaded online.
Open the user :menuselection:`File --> User Preferences`,
Select the *Add-on* section, press *Install Add-on...* and select the file.
Open the user :menuselection:`File -> User Preferences`,
Select the *Addon* section, press *Install Addon...* and select the file.
Now the add-on will be listed and you can enable it by pressing the check-box,
Now the addon will be listed and you can enable it by pressing the check-box,
if you want it to be enabled on restart, press *Save as Default*.
.. note::
The destination of the add-on depends on your Blender configuration.
When installing an add-on the source and destination path are printed in the console.
You can also find add-on path locations by running this in the Python console.
The destination of the addon depends on your Blender configuration.
When installing an addon the source and destination path are printed in the console.
You can also find addon path locations by running this in the Python console.
.. code-block:: python
@@ -232,20 +240,20 @@ if you want it to be enabled on restart, press *Save as Default*.
print(addon_utils.paths())
More is written on this topic here:
:ref:`Directory Layout <blender_manual:getting-started_installing-config-directories>`.
`Directory Layout <https://www.blender.org/manual/getting_started/installing_blender/directorylayout.html>`_
Your Second Add-on
==================
Your Second Addon
=================
For our second add-on, we will focus on object instancing - that is, making linked copies of an object in a
For our second addon, we will focus on object instancing - that is, making linked copies of an object in a
similar way to what you may have seen with the array modifier.
Write The Script
----------------
As before, first we will start with a script, develop it, then convert it into an add-on.
As before, first we will start with a script, develop it, then convert it into an addon.
.. code-block:: python
@@ -316,17 +324,17 @@ allows vectors to be multiplied by numbers and matrices.
If you are interested in this area, read into :class:`mathutils.Vector` - there are many handy utility functions
such as getting the angle between vectors, cross product, dot products
as well as more advanced functions in :mod:`mathutils.geometry` such as Bézier Spline interpolation and
as well as more advanced functions in :mod:`mathutils.geometry` such as bezier spline interpolation and
ray-triangle intersection.
For now we will focus on making this script an add-on, but it's good to know that this 3D math module is available and
For now we'll focus on making this script an addon, but it's good to know that this 3D math module is available and
can help you with more advanced functionality later on.
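A few of these functions in action, as a quick sketch with arbitrary values:

.. code-block:: python

   from mathutils import Vector, geometry

   a = Vector((1.0, 0.0, 0.0))
   b = Vector((0.0, 1.0, 0.0))

   print(a.angle(b))   # angle between the vectors, in radians
   print(a.cross(b))   # cross product
   print(a.dot(b))     # dot product

   # Ray/triangle intersection: triangle vertices, ray direction, ray origin.
   hit = geometry.intersect_ray_tri(
       Vector((0.0, 0.0, 0.0)), Vector((1.0, 0.0, 0.0)), Vector((0.0, 1.0, 0.0)),
       Vector((0.0, 0.0, -1.0)),
       Vector((0.25, 0.25, 1.0)))
   print(hit)          # intersection point, or None if the ray misses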
Write the Add-on
----------------
Write the Addon
---------------
The first step is to convert the script as-is into an add-on.
The first step is to convert the script as-is into an addon.
.. code-block:: python
@@ -373,7 +381,7 @@ The first step is to convert the script as-is into an add-on.
register()
Everything here has been covered in the previous steps; you may still want to try running the add-on
Everything here has been covered in the previous steps; you may still want to try running the addon
and consider what could be done to make it more useful.
@@ -426,7 +434,7 @@ however the link above includes examples of more advanced property usage.
Menu Item
^^^^^^^^^
Add-ons can add to the user interface of existing panels, headers and menus defined in Python.
Addons can add to the user interface of existing panels, headers and menus defined in Python.
For this example we'll add to an existing menu.
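The pattern is sketched below with a placeholder operator idname; ``object.cursor_array`` is an assumption for illustration, not taken from the diff.

.. code-block:: python

   import bpy

   def menu_func(self, context):
       # Placeholder for the add-on's own operator idname.
       self.layout.operator("object.cursor_array")

   def register():
       bpy.types.VIEW3D_MT_object.append(menu_func)

   def unregister():
       bpy.types.VIEW3D_MT_object.remove(menu_func)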
@@ -456,7 +464,7 @@ For docs on extending menus see: :doc:`bpy.types.Menu`.
Keymap
^^^^^^
In Blender, add-ons have their own keymaps so as not to interfere with Blender's built-in key-maps.
In Blender addons have their own key-maps so as not to interfere with Blender's built-in key-maps.
In the example below, a new object-mode :class:`bpy.types.KeyMap` is added,
then a :class:`bpy.types.KeyMapItem` is added to the key-map which references our newly added operator,
@@ -494,7 +502,7 @@ this allows you to have multiple keys accessing the same operator with different
.. note::
While :kbd:`Ctrl-Shift-Space` isn't a default Blender key shortcut, it's hard to make sure add-ons won't
While :kbd:`Ctrl-Shift-Space` isn't a default Blender key shortcut, it's hard to make sure addons won't
overwrite each other's keymaps. At least take care when assigning keys that they don't
conflict with important functionality within Blender.
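A sketch of how such an add-on keymap is typically set up; the operator idname and helper function names are placeholders, and the key combination follows the :kbd:`Ctrl-Shift-Space` example above.

.. code-block:: python

   import bpy

   addon_keymaps = []

   def register_keymap():
       wm = bpy.context.window_manager
       kc = wm.keyconfigs.addon
       if kc is None:   # e.g. when running in background mode
           return
       km = kc.keymaps.new(name="Object Mode", space_type="EMPTY")
       kmi = km.keymap_items.new("object.cursor_array", "SPACE", "PRESS",
                                 ctrl=True, shift=True)
       addon_keymaps.append((km, kmi))

   def unregister_keymap():
       for km, kmi in addon_keymaps:
           km.keymap_items.remove(kmi)
       addon_keymaps.clear()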
@@ -598,14 +606,14 @@ After selecting it from the menu, you can choose how many instance of the cube y
.. note::
Directly executing the script multiple times will add the menu each time too.
While not useful behavior, there's nothing to worry about since add-ons won't register themselves multiple
While not useful behavior, there's nothing to worry about since addons won't register themselves multiple
times when enabled through the user preferences.
Conclusions
===========
Add-ons can encapsulate certain functionality neatly for writing tools to improve your work-flow or for writing utilities
Addons can encapsulate certain functionality neatly for writing tools to improve your work-flow or for writing utilities
for others to use.
While there are limits to what Python can do within Blender, there is certainly a lot that can be achieved without
@@ -628,8 +636,8 @@ Here are some sites you might like to check on after completing this tutorial.
*For more background details on Blender/Python integration.*
- `How to Think Like a Computer Scientist <http://interactivepython.org/courselib/static/thinkcspy/index.html>`_ -
*Great info for those who are still learning Python.*
- `Blender Development (Wiki) <https://wiki.blender.org/index.php/Dev:Contents>`_ -
- `Blender Development (Wiki) <http://wiki.blender.org/index.php/Dev:Contents>`_ -
*Blender Development, general information and helpful links.*
- `Blender Artists (Coding Section) <https://blenderartists.org/forum/forumdisplay.php?47-Coding>`_ -
- `Blender Artists (Coding Section) <http://blenderartists.org/forum/forumdisplay.php?47-Coding>`_ -
*forum where people ask Python development questions*

View File

@@ -27,7 +27,7 @@ output from this tool should be added into "doc/python_api/rst/change_log.rst"
blender --background --python doc/python_api/sphinx_changelog_gen.py -- --dump
# create changelog
blender --background --factory-startup --python doc/python_api/sphinx_changelog_gen.py -- \
blender --background --python doc/python_api/sphinx_changelog_gen.py -- \
--api_from blender_2_63_0.py \
--api_to blender_2_64_0.py \
--api_out changes.rst
@@ -331,7 +331,7 @@ def main():
# When --help or no args are given, print this help
usage_text = "Run blender in background mode with this script: "
"blender --background --factory-startup --python %s -- [options]" % os.path.basename(__file__)
"blender --background --python %s -- [options]" % os.path.basename(__file__)
epilog = "Run this before releases"

View File

@@ -26,16 +26,16 @@ API dump in RST files
---------------------
Run this script from Blender's root path once you have compiled Blender
blender --background --factory-startup -noaudio --python doc/python_api/sphinx_doc_gen.py
./blender.bin --background -noaudio --python doc/python_api/sphinx_doc_gen.py
This will generate python files in doc/python_api/sphinx-in/
providing ./blender is or links to the blender executable
providing ./blender.bin is or links to the blender executable
To choose sphinx-in directory:
blender --background --factory-startup --python doc/python_api/sphinx_doc_gen.py -- --output ../python_api
./blender.bin --background --python doc/python_api/sphinx_doc_gen.py -- --output ../python_api
For quick builds:
blender --background --factory-startup --python doc/python_api/sphinx_doc_gen.py -- --partial bmesh.*
./blender.bin --background --python doc/python_api/sphinx_doc_gen.py -- --partial bmesh.*
Sphinx: HTML generation
@@ -46,6 +46,8 @@ Sphinx: HTML generation
cd doc/python_api
sphinx-build sphinx-in sphinx-out
This requires sphinx 1.0.7 to be installed.
Sphinx: PDF generation
----------------------
@@ -66,7 +68,7 @@ except ImportError:
import sys
sys.exit()
import rna_info # Blender module
import rna_info # Blender module
def rna_info_BuildRNAInfo_cache():
@@ -84,7 +86,7 @@ import shutil
import logging
from platform import platform
PLATFORM = platform().split('-')[0].lower() # 'linux', 'darwin', 'windows'
PLATFORM = platform().split('-')[0].lower() # 'linux', 'darwin', 'windows'
SCRIPT_DIR = os.path.abspath(os.path.dirname(__file__))
@@ -206,12 +208,12 @@ BPY_LOGGER.setLevel(logging.DEBUG)
"""
# for quick rebuilds
rm -rf /b/doc/python_api/sphinx-* && \
./blender -b -noaudio --factory-startup -P doc/python_api/sphinx_doc_gen.py && \
./blender.bin -b -noaudio --factory-startup -P doc/python_api/sphinx_doc_gen.py && \
sphinx-build doc/python_api/sphinx-in doc/python_api/sphinx-out
or
./blender -b -noaudio --factory-startup -P doc/python_api/sphinx_doc_gen.py -- -f -B
./blender.bin -b -noaudio --factory-startup -P doc/python_api/sphinx_doc_gen.py -- -f -B
"""
# Switch for quick testing so doc-builds don't take so long
@@ -363,7 +365,7 @@ INFO_DOCS = (
("info_overview.rst",
"Blender/Python API Overview: a more complete explanation of Python integration"),
("info_tutorial_addon.rst",
"Blender/Python Add-on Tutorial: a step by step guide on how to write an add-on from scratch"),
"Blender/Python Addon Tutorial: a step by step guide on how to write an addon from scratch"),
("info_api_reference.rst",
"Blender/Python API Reference Usage: examples of how to use the API reference docs"),
("info_best_practice.rst",
@@ -418,7 +420,7 @@ MODULE_GROUPING = {
blender_version_strings = [str(v) for v in bpy.app.version]
# converting bytes to strings, due to T30154
# converting bytes to strings, due to #30154
BLENDER_REVISION = str(bpy.app.build_hash, 'utf_8')
BLENDER_DATE = str(bpy.app.build_date, 'utf_8')
@@ -1565,9 +1567,9 @@ def pyrna2sphinx(basepath):
# operators
def write_ops():
API_BASEURL = "https://developer.blender.org/diffusion/B/browse/master/release/scripts/ "
API_BASEURL_ADDON = "https://developer.blender.org/diffusion/BA/"
API_BASEURL_ADDON_CONTRIB = "https://developer.blender.org/diffusion/BAC/"
API_BASEURL = "http://svn.blender.org/svnroot/bf-blender/trunk/blender/release/scripts"
API_BASEURL_ADDON = "http://svn.blender.org/svnroot/bf-extensions/trunk/py/scripts"
API_BASEURL_ADDON_CONTRIB = "http://svn.blender.org/svnroot/bf-extensions/contrib/py/scripts"
op_modules = {}
for op in ops.values():
@@ -1632,13 +1634,6 @@ def write_sphinx_conf_py(basepath):
file = open(filepath, "w", encoding="utf-8")
fw = file.write
fw("import sys, os\n")
fw("\n")
fw("extensions = ['sphinx.ext.intersphinx']\n")
fw("\n")
fw("intersphinx_mapping = {'blender_manual': ('https://www.blender.org/manual/', None)}\n")
fw("\n")
fw("project = 'Blender'\n")
# fw("master_doc = 'index'\n")
fw("copyright = u'Blender Foundation'\n")
@@ -1650,12 +1645,11 @@ def write_sphinx_conf_py(basepath):
if ARGS.sphinx_theme == "blender-org":
fw("html_theme_path = ['../']\n")
# copied with the theme, exclude else we get an error [T28873]
# copied with the theme, exclude else we get an error [#28873]
fw("html_favicon = 'favicon.ico'\n") # in <theme>/static/
# not helpful since the source is generated, adds to upload size.
fw("html_copy_source = False\n")
fw("html_split_index = True\n")
fw("\n")
# needed for latex, pdf gen

View File

@@ -108,19 +108,18 @@ def main():
subprocess.run(doc_gen_cmd)
# III) Get Blender version info.
blenver = blenver_zip = ""
blenver = ""
getver_file = os.path.join(tmp_dir, "blendver.txt")
getver_script = (""
"import sys, bpy\n"
"with open(sys.argv[-1], 'w') as f:\n"
" f.write('%d_%d%s_release\\n' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if bpy.app.version_cycle in {'rc', 'release'} else '%d_%d_%d\\n' % bpy.app.version)\n"
" f.write('%d_%d_%d' % bpy.app.version)\n")
" f.write('%d_%d%s_release' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if bpy.app.version_cycle in {'rc', 'release'} else '%d_%d_%d' % bpy.app.version)\n")
get_ver_cmd = (args.blender, "--background", "-noaudio", "--factory-startup", "--python-exit-code", "1",
"--python-expr", getver_script, "--", getver_file)
subprocess.run(get_ver_cmd)
with open(getver_file) as f:
blenver, blenver_zip = f.read().split("\n")
blenver = f.read()
os.remove(getver_file)
# IV) Build doc.
@@ -139,14 +138,11 @@ def main():
os.rename(os.path.join(tmp_dir, "sphinx-out"), api_dir)
# VI) Create zip archive.
zip_name = "blender_python_reference_%s" % blenver_zip # We can't use 'release' postfix here...
zip_name = "blender_python_reference_%s" % blenver
zip_path = os.path.join(args.mirror_dir, zip_name)
with zipfile.ZipFile(zip_path, 'w') as zf:
for dirname, _, filenames in os.walk(api_dir):
for filename in filenames:
filepath = os.path.join(dirname, filename)
zip_filepath = os.path.join(zip_name, os.path.relpath(filepath, api_dir))
zf.write(filepath, arcname=zip_filepath)
for de in os.scandir(api_dir):
zf.write(de.path, arcname=os.path.join(zip_name, de.name))
os.rename(zip_path, os.path.join(api_dir, "%s.zip" % zip_name))
# VII) Create symlinks and html redirects.

View File

@@ -73,12 +73,10 @@ set(SRC
internal/ceres/file.cc
internal/ceres/generated/partitioned_matrix_view_d_d_d.cc
internal/ceres/generated/schur_eliminator_d_d_d.cc
internal/ceres/gradient_checker.cc
internal/ceres/gradient_checking_cost_function.cc
internal/ceres/gradient_problem.cc
internal/ceres/gradient_problem_solver.cc
internal/ceres/implicit_schur_complement.cc
internal/ceres/is_close.cc
internal/ceres/iterative_schur_complement_solver.cc
internal/ceres/lapack.cc
internal/ceres/levenberg_marquardt_strategy.cc
@@ -118,7 +116,6 @@ set(SRC
internal/ceres/triplet_sparse_matrix.cc
internal/ceres/trust_region_minimizer.cc
internal/ceres/trust_region_preprocessor.cc
internal/ceres/trust_region_step_evaluator.cc
internal/ceres/trust_region_strategy.cc
internal/ceres/types.cc
internal/ceres/wall_time.cc
@@ -207,7 +204,6 @@ set(SRC
internal/ceres/householder_vector.h
internal/ceres/implicit_schur_complement.h
internal/ceres/integral_types.h
internal/ceres/is_close.h
internal/ceres/iterative_schur_complement_solver.h
internal/ceres/lapack.h
internal/ceres/levenberg_marquardt_strategy.h
@@ -252,7 +248,6 @@ set(SRC
internal/ceres/triplet_sparse_matrix.h
internal/ceres/trust_region_minimizer.h
internal/ceres/trust_region_preprocessor.h
internal/ceres/trust_region_step_evaluator.h
internal/ceres/trust_region_strategy.h
internal/ceres/visibility_based_preconditioner.h
internal/ceres/wall_time.h

extern/ceres/ChangeLog (vendored, 1091 changed lines): diff suppressed because it is too large.

View File

@@ -173,5 +173,26 @@ if(WITH_OPENMP)
)
endif()
TEST_UNORDERED_MAP_SUPPORT()
if(HAVE_STD_UNORDERED_MAP_HEADER)
if(HAVE_UNORDERED_MAP_IN_STD_NAMESPACE)
add_definitions(-DCERES_STD_UNORDERED_MAP)
else()
if(HAVE_UNORDERED_MAP_IN_TR1_NAMESPACE)
add_definitions(-DCERES_STD_UNORDERED_MAP_IN_TR1_NAMESPACE)
else()
add_definitions(-DCERES_NO_UNORDERED_MAP)
message(STATUS "Replacing unordered_map/set with map/set (warning: slower!)")
endif()
endif()
else()
if(HAVE_UNORDERED_MAP_IN_TR1_NAMESPACE)
add_definitions(-DCERES_TR1_UNORDERED_MAP)
else()
add_definitions(-DCERES_NO_UNORDERED_MAP)
message(STATUS "Replacing unordered_map/set with map/set (warning: slower!)")
endif()
endif()
blender_add_lib(extern_ceres "\${SRC}" "\${INC}" "\${INC_SYS}")
EOF

View File

@@ -149,7 +149,6 @@ internal/ceres/generated/schur_eliminator_4_4_d.cc
internal/ceres/generated/schur_eliminator_d_d_d.cc
internal/ceres/generate_eliminator_specialization.py
internal/ceres/generate_partitioned_matrix_view_specializations.py
internal/ceres/gradient_checker.cc
internal/ceres/gradient_checking_cost_function.cc
internal/ceres/gradient_checking_cost_function.h
internal/ceres/gradient_problem.cc
@@ -161,8 +160,6 @@ internal/ceres/householder_vector.h
internal/ceres/implicit_schur_complement.cc
internal/ceres/implicit_schur_complement.h
internal/ceres/integral_types.h
internal/ceres/is_close.cc
internal/ceres/is_close.h
internal/ceres/iterative_schur_complement_solver.cc
internal/ceres/iterative_schur_complement_solver.h
internal/ceres/lapack.cc
@@ -246,8 +243,6 @@ internal/ceres/trust_region_minimizer.cc
internal/ceres/trust_region_minimizer.h
internal/ceres/trust_region_preprocessor.cc
internal/ceres/trust_region_preprocessor.h
internal/ceres/trust_region_step_evaluator.cc
internal/ceres/trust_region_step_evaluator.h
internal/ceres/trust_region_strategy.cc
internal/ceres/trust_region_strategy.h
internal/ceres/types.cc

View File

@@ -130,8 +130,7 @@ class CostFunctionToFunctor {
const int num_parameter_blocks =
(N0 > 0) + (N1 > 0) + (N2 > 0) + (N3 > 0) + (N4 > 0) +
(N5 > 0) + (N6 > 0) + (N7 > 0) + (N8 > 0) + (N9 > 0);
CHECK_EQ(static_cast<int>(parameter_block_sizes.size()),
num_parameter_blocks);
CHECK_EQ(parameter_block_sizes.size(), num_parameter_blocks);
CHECK_EQ(N0, parameter_block_sizes[0]);
if (parameter_block_sizes.size() > 1) CHECK_EQ(N1, parameter_block_sizes[1]); // NOLINT

View File

@@ -357,28 +357,6 @@ class CERES_EXPORT Covariance {
const double*> >& covariance_blocks,
Problem* problem);
// Compute a part of the covariance matrix.
//
// The vector parameter_blocks contains the parameter blocks that
// are used for computing the covariance matrix. From this vector
// all covariance pairs are generated. This allows the covariance
// estimation algorithm to only compute and store these blocks.
//
// parameter_blocks cannot contain duplicates. Bad things will
// happen if they do.
//
// Note that the list of covariance_blocks is only used to determine
// what parts of the covariance matrix are computed. The full
// Jacobian is used to do the computation, i.e. they do not have an
// impact on what part of the Jacobian is used for computation.
//
// The return value indicates the success or failure of the
// covariance computation. Please see the documentation for
// Covariance::Options for more on the conditions under which this
// function returns false.
bool Compute(const std::vector<const double*>& parameter_blocks,
Problem* problem);
// Return the block of the cross-covariance matrix corresponding to
// parameter_block1 and parameter_block2.
//
@@ -416,40 +394,6 @@ class CERES_EXPORT Covariance {
const double* parameter_block2,
double* covariance_block) const;
// Return the covariance matrix corresponding to all parameter_blocks.
//
// Compute must be called before calling GetCovarianceMatrix and all
// parameter_blocks must have been present in the vector
// parameter_blocks when Compute was called. Otherwise
// GetCovarianceMatrix returns false.
//
// covariance_matrix must point to a memory location that can store
// the size of the covariance matrix. The covariance matrix will be
// a square matrix whose row and column count is equal to the sum of
// the sizes of the individual parameter blocks. The covariance
// matrix will be a row-major matrix.
bool GetCovarianceMatrix(const std::vector<const double *> &parameter_blocks,
double *covariance_matrix);
// Return the covariance matrix corresponding to parameter_blocks
// in the tangent space if a local parameterization is associated
// with one of the parameter blocks else returns the covariance
// matrix in the ambient space.
//
// Compute must be called before calling GetCovarianceMatrix and all
// parameter_blocks must have been present in the vector
// parameters_blocks when Compute was called. Otherwise
// GetCovarianceMatrix returns false.
//
// covariance_matrix must point to a memory location that can store
// the size of the covariance matrix. The covariance matrix will be
// a square matrix whose row and column count is equal to the sum of
// the sizes of the tangent spaces of the individual parameter
// blocks. The covariance matrix will be a row-major matrix.
bool GetCovarianceMatrixInTangentSpace(
const std::vector<const double*>& parameter_blocks,
double* covariance_matrix);
private:
internal::scoped_ptr<internal::CovarianceImpl> impl_;
};

View File

@@ -85,6 +85,22 @@ class DynamicNumericDiffCostFunction : public CostFunction {
options_(options) {
}
// Deprecated. New users should avoid using this constructor. Instead, use the
// constructor with NumericDiffOptions.
DynamicNumericDiffCostFunction(
const CostFunctor* functor,
Ownership ownership,
double relative_step_size)
: functor_(functor),
ownership_(ownership),
options_() {
LOG(WARNING) << "This constructor is deprecated and will be removed in "
"a future version. Please use the NumericDiffOptions "
"constructor instead.";
options_.relative_step_size = relative_step_size;
}
virtual ~DynamicNumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
@@ -122,19 +138,19 @@ class DynamicNumericDiffCostFunction : public CostFunction {
std::vector<double> parameters_copy(parameters_size);
std::vector<double*> parameters_references_copy(block_sizes.size());
parameters_references_copy[0] = &parameters_copy[0];
for (size_t block = 1; block < block_sizes.size(); ++block) {
for (int block = 1; block < block_sizes.size(); ++block) {
parameters_references_copy[block] = parameters_references_copy[block - 1]
+ block_sizes[block - 1];
}
// Copy the parameters into the local temp space.
for (size_t block = 0; block < block_sizes.size(); ++block) {
for (int block = 0; block < block_sizes.size(); ++block) {
memcpy(parameters_references_copy[block],
parameters[block],
block_sizes[block] * sizeof(*parameters[block]));
}
for (size_t block = 0; block < block_sizes.size(); ++block) {
for (int block = 0; block < block_sizes.size(); ++block) {
if (jacobians[block] != NULL &&
!NumericDiff<CostFunctor, method, DYNAMIC,
DYNAMIC, DYNAMIC, DYNAMIC, DYNAMIC, DYNAMIC,

View File

@@ -27,121 +27,194 @@
// POSSIBILITY OF SUCH DAMAGE.
// Copyright 2007 Google Inc. All Rights Reserved.
//
// Authors: wjr@google.com (William Rucklidge),
// keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
// Author: wjr@google.com (William Rucklidge)
//
// This file contains a class that exercises a cost function, to make sure
// that it is computing reasonable derivatives. It compares the Jacobians
// computed by the cost function with those obtained by finite
// differences.
#ifndef CERES_PUBLIC_GRADIENT_CHECKER_H_
#define CERES_PUBLIC_GRADIENT_CHECKER_H_
#include <cstddef>
#include <algorithm>
#include <vector>
#include <string>
#include "ceres/cost_function.h"
#include "ceres/dynamic_numeric_diff_cost_function.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/fixed_array.h"
#include "ceres/internal/macros.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/local_parameterization.h"
#include "ceres/numeric_diff_cost_function.h"
#include "glog/logging.h"
namespace ceres {
// GradientChecker compares the Jacobians returned by a cost function against
// derivatives estimated using finite differencing.
// An object that exercises a cost function, to compare the answers that it
// gives with derivatives estimated using finite differencing.
//
// The condition enforced is that
//
// (J_actual(i, j) - J_numeric(i, j))
// ------------------------------------ < relative_precision
// max(J_actual(i, j), J_numeric(i, j))
//
// where J_actual(i, j) is the jacobian as computed by the supplied cost
// function (by the user) multiplied by the local parameterization Jacobian
// and J_numeric is the jacobian as computed by finite differences, multiplied
// by the local parameterization Jacobian as well.
// The only likely usage of this is for testing.
//
// How to use: Fill in an array of pointers to parameter blocks for your
// CostFunction, and then call Probe(). Check that the return value is 'true'.
// CostFunction, and then call Probe(). Check that the return value is
// 'true'. See prober_test.cc for an example.
//
// This is templated similarly to NumericDiffCostFunction, as it internally
// uses that.
template <typename CostFunctionToProbe,
int M = 0, int N0 = 0, int N1 = 0, int N2 = 0, int N3 = 0, int N4 = 0>
class GradientChecker {
public:
// This will not take ownership of the cost function or local
// parameterizations.
//
// function: The cost function to probe.
// local_parameterization: A vector of local parameterizations for each
// parameter. May be NULL or contain NULL pointers to indicate that the
// respective parameter does not have a local parameterization.
// options: Options to use for numerical differentiation.
GradientChecker(
const CostFunction* function,
const std::vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options);
// Here we stash some results from the probe, for later
// inspection.
struct GradientCheckResults {
// Computed cost.
Vector cost;
// Contains results from a call to Probe for later inspection.
struct ProbeResults {
// The return value of the cost function.
bool return_value;
// Computed residual vector.
Vector residuals;
// The sizes of the Jacobians below are dictated by the cost function's
// parameter block size and residual block sizes. If a parameter block
// has a local parameterization associated with it, the size of the "local"
// Jacobian will be determined by the local parameterization dimension and
// residual block size, otherwise it will be identical to the regular
// Jacobian.
// The sizes of these matrices are dictated by the cost function's
// parameter and residual block sizes. Each vector's length will
// term->parameter_block_sizes().size(), and each matrix is the
// Jacobian of the residual with respect to the corresponding parameter
// block.
// Derivatives as computed by the cost function.
std::vector<Matrix> jacobians;
std::vector<Matrix> term_jacobians;
// Derivatives as computed by the cost function in local space.
std::vector<Matrix> local_jacobians;
// Derivatives as computed by finite differencing.
std::vector<Matrix> finite_difference_jacobians;
// Derivatives as computed by numerical differentiation in local space.
std::vector<Matrix> numeric_jacobians;
// Derivatives as computed by numerical differentiation in local space.
std::vector<Matrix> local_numeric_jacobians;
// Contains the maximum relative error found in the local Jacobians.
double maximum_relative_error;
// If an error was detected, this will contain a detailed description of
// that error.
std::string error_log;
// Infinity-norm of term_jacobians - finite_difference_jacobians.
double error_jacobians;
};
// Call the cost function, compute alternative Jacobians using finite
// differencing and compare results. If local parameterizations are given,
// the Jacobians will be multiplied by the local parameterization Jacobians
// before performing the check, which effectively means that all errors along
// the null space of the local parameterization will be ignored.
// Returns false if the Jacobians don't match, the cost function return false,
// or if the cost function returns different residual when called with a
// Jacobian output argument vs. calling it without. Otherwise returns true.
// Checks the Jacobian computed by a cost function.
//
// parameters: The parameter values at which to probe.
// relative_precision: A threshold for the relative difference between the
// Jacobians. If the Jacobians differ by more than this amount, then the
// probe fails.
// results: On return, the Jacobians (and other information) will be stored
// here. May be NULL.
// probe_point: The parameter values at which to probe.
// error_tolerance: A threshold for the infinity-norm difference
// between the Jacobians. If the Jacobians differ by more than
// this amount, then the probe fails.
//
// term: The cost function to test. Not retained after this call returns.
//
// results: On return, the two Jacobians (and other information)
// will be stored here. May be NULL.
//
// Returns true if no problems are detected and the difference between the
// Jacobians is less than error_tolerance.
bool Probe(double const* const* parameters,
double relative_precision,
ProbeResults* results) const;
static bool Probe(double const* const* probe_point,
double error_tolerance,
CostFunctionToProbe *term,
GradientCheckResults* results) {
CHECK_NOTNULL(probe_point);
CHECK_NOTNULL(term);
LOG(INFO) << "-------------------- Starting Probe() --------------------";
// We need a GradientCheckResults, whether or not they supplied one.
internal::scoped_ptr<GradientCheckResults> owned_results;
if (results == NULL) {
owned_results.reset(new GradientCheckResults);
results = owned_results.get();
}
// Do a consistency check between the term and the template parameters.
CHECK_EQ(M, term->num_residuals());
const int num_residuals = M;
const std::vector<int32>& block_sizes = term->parameter_block_sizes();
const int num_blocks = block_sizes.size();
CHECK_LE(num_blocks, 5) << "Unable to test functions that take more "
<< "than 5 parameter blocks";
if (N0) {
CHECK_EQ(N0, block_sizes[0]);
CHECK_GE(num_blocks, 1);
} else {
CHECK_LT(num_blocks, 1);
}
if (N1) {
CHECK_EQ(N1, block_sizes[1]);
CHECK_GE(num_blocks, 2);
} else {
CHECK_LT(num_blocks, 2);
}
if (N2) {
CHECK_EQ(N2, block_sizes[2]);
CHECK_GE(num_blocks, 3);
} else {
CHECK_LT(num_blocks, 3);
}
if (N3) {
CHECK_EQ(N3, block_sizes[3]);
CHECK_GE(num_blocks, 4);
} else {
CHECK_LT(num_blocks, 4);
}
if (N4) {
CHECK_EQ(N4, block_sizes[4]);
CHECK_GE(num_blocks, 5);
} else {
CHECK_LT(num_blocks, 5);
}
results->term_jacobians.clear();
results->term_jacobians.resize(num_blocks);
results->finite_difference_jacobians.clear();
results->finite_difference_jacobians.resize(num_blocks);
internal::FixedArray<double*> term_jacobian_pointers(num_blocks);
internal::FixedArray<double*>
finite_difference_jacobian_pointers(num_blocks);
for (int i = 0; i < num_blocks; i++) {
results->term_jacobians[i].resize(num_residuals, block_sizes[i]);
term_jacobian_pointers[i] = results->term_jacobians[i].data();
results->finite_difference_jacobians[i].resize(
num_residuals, block_sizes[i]);
finite_difference_jacobian_pointers[i] =
results->finite_difference_jacobians[i].data();
}
results->cost.resize(num_residuals, 1);
CHECK(term->Evaluate(probe_point, results->cost.data(),
term_jacobian_pointers.get()));
NumericDiffCostFunction<CostFunctionToProbe, CENTRAL, M, N0, N1, N2, N3, N4>
numeric_term(term, DO_NOT_TAKE_OWNERSHIP);
CHECK(numeric_term.Evaluate(probe_point, results->cost.data(),
finite_difference_jacobian_pointers.get()));
results->error_jacobians = 0;
for (int i = 0; i < num_blocks; i++) {
Matrix jacobian_difference = results->term_jacobians[i] -
results->finite_difference_jacobians[i];
results->error_jacobians =
std::max(results->error_jacobians,
jacobian_difference.lpNorm<Eigen::Infinity>());
}
LOG(INFO) << "========== term-computed derivatives ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "term_computed block " << i;
LOG(INFO) << "\n" << results->term_jacobians[i];
}
LOG(INFO) << "========== finite-difference derivatives ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "finite_difference block " << i;
LOG(INFO) << "\n" << results->finite_difference_jacobians[i];
}
LOG(INFO) << "========== difference ==========";
for (int i = 0; i < num_blocks; i++) {
LOG(INFO) << "difference block " << i;
LOG(INFO) << (results->term_jacobians[i] -
results->finite_difference_jacobians[i]);
}
LOG(INFO) << "||difference|| = " << results->error_jacobians;
return results->error_jacobians < error_tolerance;
}
private:
CERES_DISALLOW_IMPLICIT_CONSTRUCTORS(GradientChecker);
std::vector<const LocalParameterization*> local_parameterizations_;
const CostFunction* function_;
internal::scoped_ptr<CostFunction> finite_diff_cost_function_;
};
} // namespace ceres

View File

@@ -33,8 +33,9 @@
// This file needs to compile as c code.
#ifdef __cplusplus
#include <cstddef>
#include "ceres/internal/config.h"
#if defined(CERES_TR1_MEMORY_HEADER)
#include <tr1/memory>
#else
@@ -49,25 +50,6 @@ using std::tr1::shared_ptr;
using std::shared_ptr;
#endif
// We allocate some Eigen objects on the stack and other places they
// might not be aligned to 16-byte boundaries. If we have C++11, we
// can specify their alignment anyway, and thus can safely enable
// vectorization on those matrices; in C++99, we are out of luck. Figure out
// what case we're in and write macros that do the right thing.
#ifdef CERES_USE_CXX11
namespace port_constants {
static constexpr size_t kMaxAlignBytes =
// Work around a GCC 4.8 bug
// (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56019) where
// std::max_align_t is misplaced.
#if defined (__GNUC__) && __GNUC__ == 4 && __GNUC_MINOR__ == 8
alignof(::max_align_t);
#else
alignof(std::max_align_t);
#endif
} // namespace port_constants
#endif
} // namespace ceres
#endif // __cplusplus

View File

@@ -69,7 +69,7 @@ struct CERES_EXPORT IterationSummary {
// Step was numerically valid, i.e., all values are finite and the
// step reduces the value of the linearized model.
//
// Note: step_is_valid is always true when iteration = 0.
// Note: step_is_valid is false when iteration = 0.
bool step_is_valid;
// Step did not reduce the value of the objective function
@@ -77,7 +77,7 @@ struct CERES_EXPORT IterationSummary {
// acceptance criterion used by the non-monotonic trust region
// algorithm.
//
// Note: step_is_nonmonotonic is always false when iteration = 0;
// Note: step_is_nonmonotonic is false when iteration = 0;
bool step_is_nonmonotonic;
// Whether or not the minimizer accepted this step or not. If the
@@ -89,7 +89,7 @@ struct CERES_EXPORT IterationSummary {
// relative decrease is not sufficient, the algorithm may accept the
// step and the step is declared successful.
//
// Note: step_is_successful is always true when iteration = 0.
// Note: step_is_successful is false when iteration = 0.
bool step_is_successful;
// Value of the objective function.

View File

@@ -164,7 +164,6 @@
#include "Eigen/Core"
#include "ceres/fpclassify.h"
#include "ceres/internal/port.h"
namespace ceres {
@@ -228,23 +227,21 @@ struct Jet {
T a;
// The infinitesimal part.
// We allocate Jets on the stack and other places they
// might not be aligned to 16-byte boundaries. If we have C++11, we
// can specify their alignment anyway, and thus can safely enable
// vectorization on those matrices; in C++99, we are out of luck. Figure out
// what case we're in and do the right thing.
#ifndef CERES_USE_CXX11
// fall back to safe version:
//
// Note the Eigen::DontAlign bit is needed here because this object
// gets allocated on the stack and as part of other arrays and
// structs. Forcing the right alignment there is the source of much
// pain and suffering. Even if that works, passing Jets around to
// functions by value has problems because the C++ ABI does not
// guarantee alignment for function arguments.
//
// Setting the DontAlign bit prevents Eigen from using SSE for the
// various operations on Jets. This is a small performance penalty
// since the AutoDiff code will still expose much of the code as
// statically sized loops to the compiler. But given the subtle
// issues that arise due to alignment, especially when dealing with
// multiple platforms, it seems to be a trade off worth making.
Eigen::Matrix<T, N, 1, Eigen::DontAlign> v;
#else
static constexpr bool kShouldAlignMatrix =
16 <= ::ceres::port_constants::kMaxAlignBytes;
static constexpr int kAlignHint = kShouldAlignMatrix ?
Eigen::AutoAlign : Eigen::DontAlign;
static constexpr size_t kAlignment = kShouldAlignMatrix ? 16 : 1;
alignas(kAlignment) Eigen::Matrix<T, N, 1, kAlignHint> v;
#endif
};
// Unary +
@@ -391,8 +388,6 @@ inline double atan (double x) { return std::atan(x); }
inline double sinh (double x) { return std::sinh(x); }
inline double cosh (double x) { return std::cosh(x); }
inline double tanh (double x) { return std::tanh(x); }
inline double floor (double x) { return std::floor(x); }
inline double ceil (double x) { return std::ceil(x); }
inline double pow (double x, double y) { return std::pow(x, y); }
inline double atan2(double y, double x) { return std::atan2(y, x); }
@@ -487,51 +482,10 @@ Jet<T, N> tanh(const Jet<T, N>& f) {
return Jet<T, N>(tanh_a, tmp * f.v);
}
// The floor function should be used with extreme care as this operation will
// result in a zero derivative which provides no information to the solver.
//
// floor(a + h) ~= floor(a) + 0
template <typename T, int N> inline
Jet<T, N> floor(const Jet<T, N>& f) {
return Jet<T, N>(floor(f.a));
}
// The ceil function should be used with extreme care as this operation will
// result in a zero derivative which provides no information to the solver.
//
// ceil(a + h) ~= ceil(a) + 0
template <typename T, int N> inline
Jet<T, N> ceil(const Jet<T, N>& f) {
return Jet<T, N>(ceil(f.a));
}
// Bessel functions of the first kind with integer order equal to 0, 1, n.
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of
// _j[0,1,n](). Where available on MSVC, use _j[0,1,n]() to avoid deprecated
// function errors in client code (the specific warning is suppressed when
// Ceres itself is built).
inline double BesselJ0(double x) {
#if defined(_MSC_VER) && defined(_j0)
return _j0(x);
#else
return j0(x);
#endif
}
inline double BesselJ1(double x) {
#if defined(_MSC_VER) && defined(_j1)
return _j1(x);
#else
return j1(x);
#endif
}
inline double BesselJn(int n, double x) {
#if defined(_MSC_VER) && defined(_jn)
return _jn(n, x);
#else
return jn(n, x);
#endif
}
inline double BesselJ0(double x) { return j0(x); }
inline double BesselJ1(double x) { return j1(x); }
inline double BesselJn(int n, double x) { return jn(n, x); }
// For the formulae of the derivatives of the Bessel functions see the book:
// Olver, Lozier, Boisvert, Clark, NIST Handbook of Mathematical Functions,
@@ -789,15 +743,7 @@ template<typename T, int N> inline Jet<T, N> ei_pow (const Jet<T, N>& x,
// strange compile errors.
template <typename T, int N>
inline std::ostream &operator<<(std::ostream &s, const Jet<T, N>& z) {
s << "[" << z.a << " ; ";
for (int i = 0; i < N; ++i) {
s << z.v[i];
if (i != N - 1) {
s << ", ";
}
}
s << "]";
return s;
return s << "[" << z.a << " ; " << z.v.transpose() << "]";
}
} // namespace ceres
@@ -811,7 +757,6 @@ struct NumTraits<ceres::Jet<T, N> > {
typedef ceres::Jet<T, N> Real;
typedef ceres::Jet<T, N> NonInteger;
typedef ceres::Jet<T, N> Nested;
typedef ceres::Jet<T, N> Literal;
static typename ceres::Jet<T, N> dummy_precision() {
return ceres::Jet<T, N>(1e-12);
@@ -832,21 +777,6 @@ struct NumTraits<ceres::Jet<T, N> > {
HasFloatingPoint = 1,
RequireInitialization = 1
};
template<bool Vectorized>
struct Div {
enum {
#if defined(EIGEN_VECTORIZE_AVX)
AVX = true,
#else
AVX = false,
#endif
// Assuming that for Jets, division is as expensive as
// multiplication.
Cost = 3
};
};
};
} // namespace Eigen

View File

@@ -211,28 +211,6 @@ class CERES_EXPORT QuaternionParameterization : public LocalParameterization {
virtual int LocalSize() const { return 3; }
};
// Implements the quaternion local parameterization for Eigen's representation
// of the quaternion. Eigen uses a different internal memory layout for the
// elements of the quaternion than what is commonly used. Specifically, Eigen
// stores the elements in memory as [x, y, z, w] where the real part is last
// whereas it is typically stored first. Note, when creating an Eigen quaternion
// through the constructor the elements are accepted in w, x, y, z order. Since
// Ceres operates on parameter blocks which are raw double pointers this
// difference is important and requires a different parameterization.
//
// Plus(x, delta) = [sin(|delta|) delta / |delta|, cos(|delta|)] * x
// with * being the quaternion multiplication operator.
class EigenQuaternionParameterization : public ceres::LocalParameterization {
public:
virtual ~EigenQuaternionParameterization() {}
virtual bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const;
virtual bool ComputeJacobian(const double* x,
double* jacobian) const;
virtual int GlobalSize() const { return 4; }
virtual int LocalSize() const { return 3; }
};
// This provides a parameterization for homogeneous vectors which are commonly
// used in Structure for Motion problems. One example where they are used is

View File

@@ -206,6 +206,29 @@ class NumericDiffCostFunction
}
}
// Deprecated. New users should avoid using this constructor. Instead, use the
// constructor with NumericDiffOptions.
NumericDiffCostFunction(CostFunctor* functor,
Ownership ownership,
int num_residuals,
const double relative_step_size)
:functor_(functor),
ownership_(ownership),
options_() {
LOG(WARNING) << "This constructor is deprecated and will be removed in "
"a future version. Please use the NumericDiffOptions "
"constructor instead.";
if (kNumResiduals == DYNAMIC) {
SizedCostFunction<kNumResiduals,
N0, N1, N2, N3, N4,
N5, N6, N7, N8, N9>
::set_num_residuals(num_residuals);
}
options_.relative_step_size = relative_step_size;
}
~NumericDiffCostFunction() {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();

View File

@@ -309,9 +309,6 @@ class CERES_EXPORT Problem {
// Allow the indicated parameter block to vary during optimization.
void SetParameterBlockVariable(double* values);
// Returns true if a parameter block is set constant, and false otherwise.
bool IsParameterBlockConstant(double* values) const;
// Set the local parameterization for one of the parameter blocks.
// The local_parameterization is owned by the Problem by default. It
// is acceptable to set the same parameterization for multiple
@@ -464,10 +461,6 @@ class CERES_EXPORT Problem {
// parameter block has a local parameterization, then it contributes
// "LocalSize" entries to the gradient vector (and the number of
// columns in the jacobian).
//
// Note 3: This function cannot be called while the problem is being
// solved, for example it cannot be called from an IterationCallback
// at the end of an iteration during a solve.
bool Evaluate(const EvaluateOptions& options,
double* cost,
std::vector<double>* residuals,

View File

@@ -48,6 +48,7 @@
#include <algorithm>
#include <cmath>
#include <limits>
#include "glog/logging.h"
namespace ceres {
@@ -417,6 +418,7 @@ template <typename T>
inline void EulerAnglesToRotationMatrix(const T* euler,
const int row_stride_parameter,
T* R) {
CHECK_EQ(row_stride_parameter, 3);
EulerAnglesToRotationMatrix(euler, RowMajorAdapter3x3(R));
}
@@ -494,6 +496,7 @@ void QuaternionToRotation(const T q[4],
QuaternionToScaledRotation(q, R);
T normalizer = q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3];
CHECK_NE(normalizer, T(0));
normalizer = T(1) / normalizer;
for (int i = 0; i < 3; ++i) {

View File

@@ -134,7 +134,7 @@ class CERES_EXPORT Solver {
trust_region_problem_dump_format_type = TEXTFILE;
check_gradients = false;
gradient_check_relative_precision = 1e-8;
gradient_check_numeric_derivative_relative_step_size = 1e-6;
numeric_derivative_relative_step_size = 1e-6;
update_state_every_iteration = false;
}
@@ -701,22 +701,12 @@ class CERES_EXPORT Solver {
// this number, then the jacobian for that cost term is dumped.
double gradient_check_relative_precision;
// WARNING: This option only applies to the to the numeric
// differentiation used for checking the user provided derivatives
// when when Solver::Options::check_gradients is true. If you are
// using NumericDiffCostFunction and are interested in changing
// the step size for numeric differentiation in your cost
// function, please have a look at
// include/ceres/numeric_diff_options.h.
// Relative shift used for taking numeric derivatives. For finite
// differencing, each dimension is evaluated at slightly shifted
// values; for the case of central difference, this is what gets
// evaluated:
//
// Relative shift used for taking numeric derivatives when
// Solver::Options::check_gradients is true.
//
// For finite differencing, each dimension is evaluated at
// slightly shifted values; for the case of central difference,
// this is what gets evaluated:
//
// delta = gradient_check_numeric_derivative_relative_step_size;
// delta = numeric_derivative_relative_step_size;
// f_initial = f(x)
// f_forward = f((1 + delta) * x)
// f_backward = f((1 - delta) * x)
@@ -733,7 +723,7 @@ class CERES_EXPORT Solver {
// theory a good choice is sqrt(eps) * x, which for doubles means
// about 1e-8 * x. However, I have found this number too
// optimistic. This number should be exposed for users to change.
double gradient_check_numeric_derivative_relative_step_size;
double numeric_derivative_relative_step_size;
// If true, the user's parameter blocks are updated at the end of
// every Minimizer iteration, otherwise they are updated when the
@@ -811,13 +801,6 @@ class CERES_EXPORT Solver {
// Number of times inner iterations were performed.
int num_inner_iteration_steps;
// Total number of iterations inside the line search algorithm
// across all invocations. We call these iterations "steps" to
// distinguish them from the outer iterations of the line search
// and trust region minimizer algorithms which call the line
// search algorithm as a subroutine.
int num_line_search_steps;
// All times reported below are wall times.
// When the user calls Solve, before the actual optimization

View File

@@ -32,7 +32,7 @@
#define CERES_PUBLIC_VERSION_H_
#define CERES_VERSION_MAJOR 1
#define CERES_VERSION_MINOR 12
#define CERES_VERSION_MINOR 11
#define CERES_VERSION_REVISION 0
// Classic CPP stringifcation; the extra level of indirection allows the

View File

@@ -46,7 +46,6 @@ namespace internal {
using std::make_pair;
using std::pair;
using std::vector;
using std::adjacent_find;
void CompressedRowJacobianWriter::PopulateJacobianRowAndColumnBlockVectors(
const Program* program, CompressedRowSparseMatrix* jacobian) {
@@ -141,21 +140,12 @@ SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
// Sort the parameters by their position in the state vector.
sort(parameter_indices.begin(), parameter_indices.end());
if (adjacent_find(parameter_indices.begin(), parameter_indices.end()) !=
parameter_indices.end()) {
std::string parameter_block_description;
for (int j = 0; j < num_parameter_blocks; ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_block_description +=
parameter_block->ToString() + "\n";
}
LOG(FATAL) << "Ceres internal error: "
<< "Duplicate parameter blocks detected in a cost function. "
<< "This should never happen. Please report this to "
<< "the Ceres developers.\n"
<< "Residual Block: " << residual_block->ToString() << "\n"
<< "Parameter Blocks: " << parameter_block_description;
}
CHECK(unique(parameter_indices.begin(), parameter_indices.end()) ==
parameter_indices.end())
<< "Ceres internal error: "
<< "Duplicate parameter blocks detected in a cost function. "
<< "This should never happen. Please report this to "
<< "the Ceres developers.";
// Update the row indices.
const int num_residuals = residual_block->NumResiduals();

View File

@@ -38,7 +38,6 @@
namespace ceres {
using std::make_pair;
using std::pair;
using std::vector;
@@ -55,12 +54,6 @@ bool Covariance::Compute(
return impl_->Compute(covariance_blocks, problem->problem_impl_.get());
}
bool Covariance::Compute(
const vector<const double*>& parameter_blocks,
Problem* problem) {
return impl_->Compute(parameter_blocks, problem->problem_impl_.get());
}
bool Covariance::GetCovarianceBlock(const double* parameter_block1,
const double* parameter_block2,
double* covariance_block) const {
@@ -80,20 +73,4 @@ bool Covariance::GetCovarianceBlockInTangentSpace(
covariance_block);
}
bool Covariance::GetCovarianceMatrix(
const vector<const double*>& parameter_blocks,
double* covariance_matrix) {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
true, // ambient
covariance_matrix);
}
bool Covariance::GetCovarianceMatrixInTangentSpace(
const std::vector<const double *>& parameter_blocks,
double *covariance_matrix) {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
false, // tangent
covariance_matrix);
}
} // namespace ceres

View File

@@ -36,8 +36,6 @@
#include <algorithm>
#include <cstdlib>
#include <numeric>
#include <sstream>
#include <utility>
#include <vector>
@@ -45,7 +43,6 @@
#include "Eigen/SparseQR"
#include "Eigen/SVD"
#include "ceres/collections_port.h"
#include "ceres/compressed_col_sparse_matrix_utils.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/covariance.h"
@@ -54,7 +51,6 @@
#include "ceres/map_util.h"
#include "ceres/parameter_block.h"
#include "ceres/problem_impl.h"
#include "ceres/residual_block.h"
#include "ceres/suitesparse.h"
#include "ceres/wall_time.h"
#include "glog/logging.h"
@@ -65,7 +61,6 @@ namespace internal {
using std::make_pair;
using std::map;
using std::pair;
using std::sort;
using std::swap;
using std::vector;
@@ -91,38 +86,8 @@ CovarianceImpl::CovarianceImpl(const Covariance::Options& options)
CovarianceImpl::~CovarianceImpl() {
}
template <typename T> void CheckForDuplicates(vector<T> blocks) {
sort(blocks.begin(), blocks.end());
typename vector<T>::iterator it =
std::adjacent_find(blocks.begin(), blocks.end());
if (it != blocks.end()) {
// In case there are duplicates, we search for their location.
map<T, vector<int> > blocks_map;
for (int i = 0; i < blocks.size(); ++i) {
blocks_map[blocks[i]].push_back(i);
}
std::ostringstream duplicates;
while (it != blocks.end()) {
duplicates << "(";
for (int i = 0; i < blocks_map[*it].size() - 1; ++i) {
duplicates << blocks_map[*it][i] << ", ";
}
duplicates << blocks_map[*it].back() << ")";
it = std::adjacent_find(it + 1, blocks.end());
if (it < blocks.end()) {
duplicates << " and ";
}
}
LOG(FATAL) << "Covariance::Compute called with duplicate blocks at "
<< "indices " << duplicates.str();
}
}
bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
ProblemImpl* problem) {
CheckForDuplicates<pair<const double*, const double*> >(covariance_blocks);
problem_ = problem;
parameter_block_to_row_index_.clear();
covariance_matrix_.reset(NULL);
@@ -132,20 +97,6 @@ bool CovarianceImpl::Compute(const CovarianceBlocks& covariance_blocks,
return is_valid_;
}
bool CovarianceImpl::Compute(const vector<const double*>& parameter_blocks,
ProblemImpl* problem) {
CheckForDuplicates<const double*>(parameter_blocks);
CovarianceBlocks covariance_blocks;
for (int i = 0; i < parameter_blocks.size(); ++i) {
for (int j = i; j < parameter_blocks.size(); ++j) {
covariance_blocks.push_back(make_pair(parameter_blocks[i],
parameter_blocks[j]));
}
}
return Compute(covariance_blocks, problem);
}
bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
const double* original_parameter_block1,
const double* original_parameter_block2,
@@ -169,17 +120,9 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
ParameterBlock* block2 =
FindOrDie(parameter_map,
const_cast<double*>(original_parameter_block2));
const int block1_size = block1->Size();
const int block2_size = block2->Size();
const int block1_local_size = block1->LocalSize();
const int block2_local_size = block2->LocalSize();
if (!lift_covariance_to_ambient_space) {
MatrixRef(covariance_block, block1_local_size, block2_local_size)
.setZero();
} else {
MatrixRef(covariance_block, block1_size, block2_size).setZero();
}
MatrixRef(covariance_block, block1_size, block2_size).setZero();
return true;
}
@@ -297,94 +240,6 @@ bool CovarianceImpl::GetCovarianceBlockInTangentOrAmbientSpace(
return true;
}
bool CovarianceImpl::GetCovarianceMatrixInTangentOrAmbientSpace(
const vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double* covariance_matrix) const {
CHECK(is_computed_)
<< "Covariance::GetCovarianceMatrix called before Covariance::Compute";
CHECK(is_valid_)
<< "Covariance::GetCovarianceMatrix called when Covariance::Compute "
<< "returned false.";
const ProblemImpl::ParameterMap& parameter_map = problem_->parameter_map();
// For OpenMP compatibility we need to define these vectors in advance
const int num_parameters = parameters.size();
vector<int> parameter_sizes;
vector<int> cum_parameter_size;
parameter_sizes.reserve(num_parameters);
cum_parameter_size.resize(num_parameters + 1);
cum_parameter_size[0] = 0;
for (int i = 0; i < num_parameters; ++i) {
ParameterBlock* block =
FindOrDie(parameter_map, const_cast<double*>(parameters[i]));
if (lift_covariance_to_ambient_space) {
parameter_sizes.push_back(block->Size());
} else {
parameter_sizes.push_back(block->LocalSize());
}
}
std::partial_sum(parameter_sizes.begin(), parameter_sizes.end(),
cum_parameter_size.begin() + 1);
const int max_covariance_block_size =
*std::max_element(parameter_sizes.begin(), parameter_sizes.end());
const int covariance_size = cum_parameter_size.back();
// Assemble the blocks in the covariance matrix.
MatrixRef covariance(covariance_matrix, covariance_size, covariance_size);
const int num_threads = options_.num_threads;
scoped_array<double> workspace(
new double[num_threads * max_covariance_block_size *
max_covariance_block_size]);
bool success = true;
// The collapse() directive is only supported in OpenMP 3.0 and higher. OpenMP
// 3.0 was released in May 2008 (hence the version number).
#if _OPENMP >= 200805
# pragma omp parallel for num_threads(num_threads) schedule(dynamic) collapse(2)
#else
# pragma omp parallel for num_threads(num_threads) schedule(dynamic)
#endif
for (int i = 0; i < num_parameters; ++i) {
for (int j = 0; j < num_parameters; ++j) {
// The second loop can't start from j = i for compatibility with the OpenMP
// collapse() directive. The conditional serves as a workaround.
if (j >= i) {
int covariance_row_idx = cum_parameter_size[i];
int covariance_col_idx = cum_parameter_size[j];
int size_i = parameter_sizes[i];
int size_j = parameter_sizes[j];
#ifdef CERES_USE_OPENMP
int thread_id = omp_get_thread_num();
#else
int thread_id = 0;
#endif
double* covariance_block =
workspace.get() +
thread_id * max_covariance_block_size * max_covariance_block_size;
if (!GetCovarianceBlockInTangentOrAmbientSpace(
parameters[i], parameters[j], lift_covariance_to_ambient_space,
covariance_block)) {
success = false;
}
covariance.block(covariance_row_idx, covariance_col_idx,
size_i, size_j) =
MatrixRef(covariance_block, size_i, size_j);
if (i != j) {
covariance.block(covariance_col_idx, covariance_row_idx,
size_j, size_i) =
MatrixRef(covariance_block, size_i, size_j).transpose();
}
}
}
}
return success;
}
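A self-contained sketch of the loop shape used above (illustrative, not Ceres code): collapse(2) needs a rectangular, perfectly nested iteration space, so the inner loop covers the full range and the upper-triangular restriction is applied with a guard instead of starting the inner loop at j = i.

#include <cstdio>

// Counts the (i, j) pairs of the upper triangle of an n x n grid while
// keeping the loop nest rectangular so collapse(2) remains legal.
int CountUpperTriangle(int n) {
  int count = 0;
#if _OPENMP >= 200805  // collapse() requires OpenMP 3.0 (May 2008) or newer
#pragma omp parallel for schedule(dynamic) collapse(2) reduction(+ : count)
#endif
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j < n; ++j) {
      if (j >= i) {  // guard instead of starting the inner loop at j = i
        ++count;
      }
    }
  }
  return count;
}

int main() {
  std::printf("%d\n", CountUpperTriangle(4));  // prints 10
  return 0;
}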
// Determine the sparsity pattern of the covariance matrix based on
// the block pairs requested by the user.
bool CovarianceImpl::ComputeCovarianceSparsity(
@@ -397,28 +252,18 @@ bool CovarianceImpl::ComputeCovarianceSparsity(
vector<double*> all_parameter_blocks;
problem->GetParameterBlocks(&all_parameter_blocks);
const ProblemImpl::ParameterMap& parameter_map = problem->parameter_map();
HashSet<ParameterBlock*> parameter_blocks_in_use;
vector<ResidualBlock*> residual_blocks;
problem->GetResidualBlocks(&residual_blocks);
for (int i = 0; i < residual_blocks.size(); ++i) {
ResidualBlock* residual_block = residual_blocks[i];
parameter_blocks_in_use.insert(residual_block->parameter_blocks(),
residual_block->parameter_blocks() +
residual_block->NumParameterBlocks());
}
constant_parameter_blocks_.clear();
vector<double*>& active_parameter_blocks =
evaluate_options_.parameter_blocks;
active_parameter_blocks.clear();
for (int i = 0; i < all_parameter_blocks.size(); ++i) {
double* parameter_block = all_parameter_blocks[i];
ParameterBlock* block = FindOrDie(parameter_map, parameter_block);
if (!block->IsConstant() && (parameter_blocks_in_use.count(block) > 0)) {
active_parameter_blocks.push_back(parameter_block);
} else {
if (block->IsConstant()) {
constant_parameter_blocks_.insert(parameter_block);
} else {
active_parameter_blocks.push_back(parameter_block);
}
}
@@ -541,8 +386,8 @@ bool CovarianceImpl::ComputeCovarianceValues() {
switch (options_.algorithm_type) {
case DENSE_SVD:
return ComputeCovarianceValuesUsingDenseSVD();
case SUITE_SPARSE_QR:
#ifndef CERES_NO_SUITESPARSE
case SUITE_SPARSE_QR:
return ComputeCovarianceValuesUsingSuiteSparseQR();
#else
LOG(ERROR) << "SuiteSparse is required to use the "
@@ -779,10 +624,7 @@ bool CovarianceImpl::ComputeCovarianceValuesUsingDenseSVD() {
if (automatic_truncation) {
break;
} else {
LOG(ERROR) << "Error: Covariance matrix is near rank deficient "
<< "and the user did not specify a non-zero"
<< "Covariance::Options::null_space_rank "
<< "to enable the computation of a Pseudo-Inverse. "
LOG(ERROR) << "Cholesky factorization of J'J is not reliable. "
<< "Reciprocal condition number: "
<< singular_value_ratio * singular_value_ratio << " "
<< "min_reciprocal_condition_number: "

View File

@@ -55,21 +55,12 @@ class CovarianceImpl {
const double*> >& covariance_blocks,
ProblemImpl* problem);
bool Compute(
const std::vector<const double*>& parameter_blocks,
ProblemImpl* problem);
bool GetCovarianceBlockInTangentOrAmbientSpace(
const double* parameter_block1,
const double* parameter_block2,
bool lift_covariance_to_ambient_space,
double* covariance_block) const;
bool GetCovarianceMatrixInTangentOrAmbientSpace(
const std::vector<const double*>& parameters,
bool lift_covariance_to_ambient_space,
double *covariance_matrix) const;
bool ComputeCovarianceSparsity(
const std::vector<std::pair<const double*,
const double*> >& covariance_blocks,

View File

@@ -1,276 +0,0 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: wjr@google.com (William Rucklidge),
// keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
#include "ceres/gradient_checker.h"
#include <algorithm>
#include <cmath>
#include <numeric>
#include <string>
#include <vector>
#include "ceres/is_close.h"
#include "ceres/stringprintf.h"
#include "ceres/types.h"
namespace ceres {
using internal::IsClose;
using internal::StringAppendF;
using internal::StringPrintf;
using std::string;
using std::vector;
namespace {
// Evaluate the cost function and transform the returned Jacobians to
// the local space of the respective local parameterizations.
bool EvaluateCostFunction(
const ceres::CostFunction* function,
double const* const * parameters,
const std::vector<const ceres::LocalParameterization*>&
local_parameterizations,
Vector* residuals,
std::vector<Matrix>* jacobians,
std::vector<Matrix>* local_jacobians) {
CHECK_NOTNULL(residuals);
CHECK_NOTNULL(jacobians);
CHECK_NOTNULL(local_jacobians);
const vector<int32>& block_sizes = function->parameter_block_sizes();
const int num_parameter_blocks = block_sizes.size();
// Allocate Jacobian matrices in local space.
local_jacobians->resize(num_parameter_blocks);
vector<double*> local_jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
int block_size = block_sizes.at(i);
if (local_parameterizations.at(i) != NULL) {
block_size = local_parameterizations.at(i)->LocalSize();
}
local_jacobians->at(i).resize(function->num_residuals(), block_size);
local_jacobians->at(i).setZero();
local_jacobian_data.at(i) = local_jacobians->at(i).data();
}
// Allocate Jacobian matrices in global space.
jacobians->resize(num_parameter_blocks);
vector<double*> jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
jacobians->at(i).resize(function->num_residuals(), block_sizes.at(i));
jacobians->at(i).setZero();
jacobian_data.at(i) = jacobians->at(i).data();
}
// Compute residuals & jacobians.
CHECK_NE(0, function->num_residuals());
residuals->resize(function->num_residuals());
residuals->setZero();
if (!function->Evaluate(parameters, residuals->data(),
jacobian_data.data())) {
return false;
}
// Convert Jacobians from global to local space.
for (size_t i = 0; i < local_jacobians->size(); ++i) {
if (local_parameterizations.at(i) == NULL) {
local_jacobians->at(i) = jacobians->at(i);
} else {
int global_size = local_parameterizations.at(i)->GlobalSize();
int local_size = local_parameterizations.at(i)->LocalSize();
CHECK_EQ(jacobians->at(i).cols(), global_size);
Matrix global_J_local(global_size, local_size);
local_parameterizations.at(i)->ComputeJacobian(
parameters[i], global_J_local.data());
local_jacobians->at(i) = jacobians->at(i) * global_J_local;
}
}
return true;
}
} // namespace
GradientChecker::GradientChecker(
const CostFunction* function,
const vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options) :
function_(function) {
CHECK_NOTNULL(function);
if (local_parameterizations != NULL) {
local_parameterizations_ = *local_parameterizations;
} else {
local_parameterizations_.resize(function->parameter_block_sizes().size(),
NULL);
}
DynamicNumericDiffCostFunction<CostFunction, CENTRAL>*
finite_diff_cost_function =
new DynamicNumericDiffCostFunction<CostFunction, CENTRAL>(
function, DO_NOT_TAKE_OWNERSHIP, options);
finite_diff_cost_function_.reset(finite_diff_cost_function);
const vector<int32>& parameter_block_sizes =
function->parameter_block_sizes();
const int num_parameter_blocks = parameter_block_sizes.size();
for (int i = 0; i < num_parameter_blocks; ++i) {
finite_diff_cost_function->AddParameterBlock(parameter_block_sizes[i]);
}
finite_diff_cost_function->SetNumResiduals(function->num_residuals());
}
bool GradientChecker::Probe(double const* const * parameters,
double relative_precision,
ProbeResults* results_param) const {
int num_residuals = function_->num_residuals();
// Make sure that we have a place to store results, no matter if the user has
// provided an output argument.
ProbeResults* results;
ProbeResults results_local;
if (results_param != NULL) {
results = results_param;
results->residuals.resize(0);
results->jacobians.clear();
results->numeric_jacobians.clear();
results->local_jacobians.clear();
results->local_numeric_jacobians.clear();
results->error_log.clear();
} else {
results = &results_local;
}
results->maximum_relative_error = 0.0;
results->return_value = true;
// Evaluate the derivative using the user supplied code.
vector<Matrix>& jacobians = results->jacobians;
vector<Matrix>& local_jacobians = results->local_jacobians;
if (!EvaluateCostFunction(function_, parameters, local_parameterizations_,
&results->residuals, &jacobians, &local_jacobians)) {
results->error_log = "Function evaluation with Jacobians failed.";
results->return_value = false;
}
// Evaluate the derivative using numeric derivatives.
vector<Matrix>& numeric_jacobians = results->numeric_jacobians;
vector<Matrix>& local_numeric_jacobians = results->local_numeric_jacobians;
Vector finite_diff_residuals;
if (!EvaluateCostFunction(finite_diff_cost_function_.get(), parameters,
local_parameterizations_, &finite_diff_residuals,
&numeric_jacobians, &local_numeric_jacobians)) {
results->error_log += "\nFunction evaluation with numerical "
"differentiation failed.";
results->return_value = false;
}
if (!results->return_value) {
return false;
}
for (int i = 0; i < num_residuals; ++i) {
if (!IsClose(
results->residuals[i],
finite_diff_residuals[i],
relative_precision,
NULL,
NULL)) {
results->error_log = "Function evaluation with and without Jacobians "
"resulted in different residuals.";
LOG(INFO) << results->residuals.transpose();
LOG(INFO) << finite_diff_residuals.transpose();
return false;
}
}
// See if any elements have relative error larger than the threshold.
int num_bad_jacobian_components = 0;
double& worst_relative_error = results->maximum_relative_error;
worst_relative_error = 0;
// Accumulate the error message for all the jacobians, since it won't get
// output if there are no bad jacobian components.
string error_log;
for (int k = 0; k < function_->parameter_block_sizes().size(); k++) {
StringAppendF(&error_log,
"========== "
"Jacobian for " "block %d: (%ld by %ld)) "
"==========\n",
k,
static_cast<long>(local_jacobians[k].rows()),
static_cast<long>(local_jacobians[k].cols()));
// The funny spacing creates appropriately aligned column headers.
error_log +=
" block row col user dx/dy num diff dx/dy "
"abs error relative error parameter residual\n";
for (int i = 0; i < local_jacobians[k].rows(); i++) {
for (int j = 0; j < local_jacobians[k].cols(); j++) {
double term_jacobian = local_jacobians[k](i, j);
double finite_jacobian = local_numeric_jacobians[k](i, j);
double relative_error, absolute_error;
bool bad_jacobian_entry =
!IsClose(term_jacobian,
finite_jacobian,
relative_precision,
&relative_error,
&absolute_error);
worst_relative_error = std::max(worst_relative_error, relative_error);
StringAppendF(&error_log,
"%6d %4d %4d %17g %17g %17g %17g %17g %17g",
k, i, j,
term_jacobian, finite_jacobian,
absolute_error, relative_error,
parameters[k][j],
results->residuals[i]);
if (bad_jacobian_entry) {
num_bad_jacobian_components++;
StringAppendF(
&error_log,
" ------ (%d,%d,%d) Relative error worse than %g",
k, i, j, relative_precision);
}
error_log += "\n";
}
}
}
// Since there were some bad errors, dump comprehensive debug info.
if (num_bad_jacobian_components) {
string header = StringPrintf("\nDetected %d bad Jacobian component(s). "
"Worst relative error was %g.\n",
num_bad_jacobian_components,
worst_relative_error);
results->error_log = header + "\n" + error_log;
return false;
}
return true;
}
} // namespace ceres
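A usage sketch of the GradientChecker API defined in the file above. Assumptions: cost_function is a ceres::CostFunction* with a single parameter block, x points to that block's current values, and no local parameterizations are used.

ceres::NumericDiffOptions numeric_diff_options;
ceres::GradientChecker gradient_checker(cost_function,
                                        NULL,  // no local parameterizations
                                        numeric_diff_options);

ceres::GradientChecker::ProbeResults results;
double* parameters[] = { x };
if (!gradient_checker.Probe(parameters, 1e-9, &results)) {
  LOG(ERROR) << "Gradient check failed:\n" << results.error_log;
}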

View File

@@ -26,8 +26,7 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
// Author: keir@google.com (Keir Mierle)
#include "ceres/gradient_checking_cost_function.h"
@@ -37,7 +36,7 @@
#include <string>
#include <vector>
#include "ceres/gradient_checker.h"
#include "ceres/cost_function.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/parameter_block.h"
@@ -60,25 +59,55 @@ using std::vector;
namespace {
// True if x and y have an absolute relative difference less than
// relative_precision and false otherwise. Stores the relative and absolute
// difference in relative/absolute_error if non-NULL.
bool IsClose(double x, double y, double relative_precision,
double *relative_error,
double *absolute_error) {
double local_absolute_error;
double local_relative_error;
if (!absolute_error) {
absolute_error = &local_absolute_error;
}
if (!relative_error) {
relative_error = &local_relative_error;
}
*absolute_error = abs(x - y);
*relative_error = *absolute_error / max(abs(x), abs(y));
if (x == 0 || y == 0) {
// If x or y is exactly zero, then relative difference doesn't have any
// meaning. Take the absolute difference instead.
*relative_error = *absolute_error;
}
return abs(*relative_error) < abs(relative_precision);
}
class GradientCheckingCostFunction : public CostFunction {
public:
GradientCheckingCostFunction(
const CostFunction* function,
const std::vector<const LocalParameterization*>* local_parameterizations,
const NumericDiffOptions& options,
double relative_precision,
const string& extra_info,
GradientCheckingIterationCallback* callback)
GradientCheckingCostFunction(const CostFunction* function,
const NumericDiffOptions& options,
double relative_precision,
const string& extra_info)
: function_(function),
gradient_checker_(function, local_parameterizations, options),
relative_precision_(relative_precision),
extra_info_(extra_info),
callback_(callback) {
CHECK_NOTNULL(callback_);
extra_info_(extra_info) {
DynamicNumericDiffCostFunction<CostFunction, CENTRAL>*
finite_diff_cost_function =
new DynamicNumericDiffCostFunction<CostFunction, CENTRAL>(
function,
DO_NOT_TAKE_OWNERSHIP,
options);
const vector<int32>& parameter_block_sizes =
function->parameter_block_sizes();
for (int i = 0; i < parameter_block_sizes.size(); ++i) {
finite_diff_cost_function->AddParameterBlock(parameter_block_sizes[i]);
}
*mutable_parameter_block_sizes() = parameter_block_sizes;
set_num_residuals(function->num_residuals());
finite_diff_cost_function->SetNumResiduals(num_residuals());
finite_diff_cost_function_.reset(finite_diff_cost_function);
}
virtual ~GradientCheckingCostFunction() { }
@@ -91,92 +120,133 @@ class GradientCheckingCostFunction : public CostFunction {
return function_->Evaluate(parameters, residuals, NULL);
}
GradientChecker::ProbeResults results;
bool okay = gradient_checker_.Probe(parameters,
relative_precision_,
&results);
int num_residuals = function_->num_residuals();
// If the cost function returned false, there's nothing we can say about
// the gradients.
if (results.return_value == false) {
// Make space for the jacobians of the two methods.
const vector<int32>& block_sizes = function_->parameter_block_sizes();
vector<Matrix> term_jacobians(block_sizes.size());
vector<Matrix> finite_difference_jacobians(block_sizes.size());
vector<double*> term_jacobian_pointers(block_sizes.size());
vector<double*> finite_difference_jacobian_pointers(block_sizes.size());
for (int i = 0; i < block_sizes.size(); i++) {
term_jacobians[i].resize(num_residuals, block_sizes[i]);
term_jacobian_pointers[i] = term_jacobians[i].data();
finite_difference_jacobians[i].resize(num_residuals, block_sizes[i]);
finite_difference_jacobian_pointers[i] =
finite_difference_jacobians[i].data();
}
// Evaluate the derivative using the user supplied code.
if (!function_->Evaluate(parameters,
residuals,
&term_jacobian_pointers[0])) {
LOG(WARNING) << "Function evaluation failed.";
return false;
}
// Copy the residuals.
const int num_residuals = function_->num_residuals();
MatrixRef(residuals, num_residuals, 1) = results.residuals;
// Evaluate the derivative using numeric derivatives.
finite_diff_cost_function_->Evaluate(
parameters,
residuals,
&finite_difference_jacobian_pointers[0]);
// Copy the original jacobian blocks into the jacobians array.
const vector<int32>& block_sizes = function_->parameter_block_sizes();
// See if any elements have relative error larger than the threshold.
int num_bad_jacobian_components = 0;
double worst_relative_error = 0;
// Accumulate the error message for all the jacobians, since it won't get
// output if there are no bad jacobian components.
string m;
for (int k = 0; k < block_sizes.size(); k++) {
// Copy the original jacobian blocks into the jacobians array.
if (jacobians[k] != NULL) {
MatrixRef(jacobians[k],
results.jacobians[k].rows(),
results.jacobians[k].cols()) = results.jacobians[k];
term_jacobians[k].rows(),
term_jacobians[k].cols()) = term_jacobians[k];
}
StringAppendF(&m,
"========== "
"Jacobian for " "block %d: (%ld by %ld)) "
"==========\n",
k,
static_cast<long>(term_jacobians[k].rows()),
static_cast<long>(term_jacobians[k].cols()));
// The funny spacing creates appropriately aligned column headers.
m += " block row col user dx/dy num diff dx/dy "
"abs error relative error parameter residual\n";
for (int i = 0; i < term_jacobians[k].rows(); i++) {
for (int j = 0; j < term_jacobians[k].cols(); j++) {
double term_jacobian = term_jacobians[k](i, j);
double finite_jacobian = finite_difference_jacobians[k](i, j);
double relative_error, absolute_error;
bool bad_jacobian_entry =
!IsClose(term_jacobian,
finite_jacobian,
relative_precision_,
&relative_error,
&absolute_error);
worst_relative_error = max(worst_relative_error, relative_error);
StringAppendF(&m, "%6d %4d %4d %17g %17g %17g %17g %17g %17g",
k, i, j,
term_jacobian, finite_jacobian,
absolute_error, relative_error,
parameters[k][j],
residuals[i]);
if (bad_jacobian_entry) {
num_bad_jacobian_components++;
StringAppendF(
&m, " ------ (%d,%d,%d) Relative error worse than %g",
k, i, j, relative_precision_);
}
m += "\n";
}
}
}
if (!okay) {
std::string error_log = "Gradient Error detected!\nExtra info for "
"this residual: " + extra_info_ + "\n" + results.error_log;
callback_->SetGradientErrorDetected(error_log);
// Since there were some bad errors, dump comprehensive debug info.
if (num_bad_jacobian_components) {
string header = StringPrintf("Detected %d bad jacobian component(s). "
"Worst relative error was %g.\n",
num_bad_jacobian_components,
worst_relative_error);
if (!extra_info_.empty()) {
header += "Extra info for this residual: " + extra_info_ + "\n";
}
LOG(WARNING) << "\n" << header << m;
}
return true;
}
private:
const CostFunction* function_;
GradientChecker gradient_checker_;
internal::scoped_ptr<CostFunction> finite_diff_cost_function_;
double relative_precision_;
string extra_info_;
GradientCheckingIterationCallback* callback_;
};
} // namespace
GradientCheckingIterationCallback::GradientCheckingIterationCallback()
: gradient_error_detected_(false) {
}
CallbackReturnType GradientCheckingIterationCallback::operator()(
const IterationSummary& summary) {
if (gradient_error_detected_) {
LOG(ERROR) << "Gradient error detected. Terminating solver.";
return SOLVER_ABORT;
}
return SOLVER_CONTINUE;
}
void GradientCheckingIterationCallback::SetGradientErrorDetected(
std::string& error_log) {
mutex_.Lock();
gradient_error_detected_ = true;
error_log_ += "\n" + error_log;
mutex_.Unlock();
}
CostFunction* CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
const std::vector<const LocalParameterization*>* local_parameterizations,
CostFunction *CreateGradientCheckingCostFunction(
const CostFunction *cost_function,
double relative_step_size,
double relative_precision,
const std::string& extra_info,
GradientCheckingIterationCallback* callback) {
const string& extra_info) {
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
return new GradientCheckingCostFunction(cost_function,
local_parameterizations,
numeric_diff_options,
relative_precision, extra_info,
callback);
relative_precision,
extra_info);
}
ProblemImpl* CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback) {
CHECK_NOTNULL(callback);
ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision) {
// We create new CostFunctions by wrapping the original CostFunction
// in a gradient checking CostFunction. So it's okay for the
// ProblemImpl to take ownership of it and destroy it. The
@@ -190,9 +260,6 @@ ProblemImpl* CreateGradientCheckingProblemImpl(
gradient_checking_problem_options.local_parameterization_ownership =
DO_NOT_TAKE_OWNERSHIP;
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
ProblemImpl* gradient_checking_problem_impl = new ProblemImpl(
gradient_checking_problem_options);
@@ -227,26 +294,19 @@ ProblemImpl* CreateGradientCheckingProblemImpl(
string extra_info = StringPrintf(
"Residual block id %d; depends on parameters [", i);
vector<double*> parameter_blocks;
vector<const LocalParameterization*> local_parameterizations;
parameter_blocks.reserve(residual_block->NumParameterBlocks());
local_parameterizations.reserve(residual_block->NumParameterBlocks());
for (int j = 0; j < residual_block->NumParameterBlocks(); ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_blocks.push_back(parameter_block->mutable_user_state());
StringAppendF(&extra_info, "%p", parameter_block->mutable_user_state());
extra_info += (j < residual_block->NumParameterBlocks() - 1) ? ", " : "]";
local_parameterizations.push_back(problem_impl->GetParameterization(
parameter_block->mutable_user_state()));
}
// Wrap the original CostFunction in a GradientCheckingCostFunction.
CostFunction* gradient_checking_cost_function =
new GradientCheckingCostFunction(residual_block->cost_function(),
&local_parameterizations,
numeric_diff_options,
relative_precision,
extra_info,
callback);
CreateGradientCheckingCostFunction(residual_block->cost_function(),
relative_step_size,
relative_precision,
extra_info);
// The const_cast is necessary because
// ProblemImpl::AddResidualBlock can potentially take ownership of

View File

@@ -26,8 +26,7 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle),
// dgossow@google.com (David Gossow)
// Author: keir@google.com (Keir Mierle)
#ifndef CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
#define CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
@@ -35,76 +34,50 @@
#include <string>
#include "ceres/cost_function.h"
#include "ceres/iteration_callback.h"
#include "ceres/local_parameterization.h"
#include "ceres/mutex.h"
namespace ceres {
namespace internal {
class ProblemImpl;
// Callback that collects information about gradient checking errors, and
// will abort the solve as soon as an error occurs.
class GradientCheckingIterationCallback : public IterationCallback {
public:
GradientCheckingIterationCallback();
// Will return SOLVER_CONTINUE until a gradient error has been detected,
// then return SOLVER_ABORT.
virtual CallbackReturnType operator()(const IterationSummary& summary);
// Notify this that a gradient error has occurred (thread safe).
void SetGradientErrorDetected(std::string& error_log);
// Retrieve error status (not thread safe).
bool gradient_error_detected() const { return gradient_error_detected_; }
const std::string& error_log() const { return error_log_; }
private:
bool gradient_error_detected_;
std::string error_log_;
// Mutex protecting member variables.
ceres::internal::Mutex mutex_;
};
// Creates a CostFunction that checks the Jacobians that cost_function computes
// with finite differences. This API is only intended for unit tests that need
// to check the functionality of the GradientCheckingCostFunction
// implementation directly.
CostFunction* CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
const std::vector<const LocalParameterization*>* local_parameterizations,
double relative_step_size,
double relative_precision,
const std::string& extra_info,
GradientCheckingIterationCallback* callback);
// Create a new ProblemImpl object from the input problem_impl, where all
// cost functions are wrapped so that each time their Evaluate method is called,
// an additional check is performed that compares the Jacobians computed by
// the original cost function with alternative Jacobians computed using
// numerical differentiation. If local parameterizations are given for any
// parameters, the Jacobians will be compared in the local space instead of the
// ambient space. For details on the gradient checking procedure, see the
// documentation of the GradientChecker class. If an error is detected in any
// iteration, the respective cost function will notify the
// GradientCheckingIterationCallback.
// Creates a CostFunction that checks the jacobians that cost_function computes
// with finite differences. Bad results are logged; required precision is
// controlled by relative_precision and the numeric differentiation step size is
// controlled with relative_step_size. See solver.h for a better explanation of
// relative_step_size. Caller owns result.
//
// The caller owns the returned ProblemImpl object.
// The condition enforced is that
//
// (J_actual(i, j) - J_numeric(i, j))
// ------------------------------------ < relative_precision
// max(J_actual(i, j), J_numeric(i, j))
//
// where J_actual(i, j) is the jacobian as computed by the supplied cost
// function (by the user) and J_numeric is the jacobian as computed by finite
// differences.
//
// Note: This is quite inefficient and is intended only for debugging.
CostFunction* CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
double relative_step_size,
double relative_precision,
const std::string& extra_info);
// Create a new ProblemImpl object from the input problem_impl, where
// each CostFunction in problem_impl is wrapped inside a
// GradientCheckingCostFunction. This gives us a ProblemImpl object
// which checks its derivatives against estimates from numeric
// differentiation every time a ResidualBlock is evaluated.
//
// relative_step_size and relative_precision are parameters to control
// the numeric differentiation and the relative tolerance between the
// jacobian computed by the CostFunctions in problem_impl and
// jacobians obtained by numerically differentiating them. See the
// documentation of 'numeric_derivative_relative_step_size' in solver.h for a
// better explanation.
ProblemImpl* CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback);
// jacobians obtained by numerically differentiating them. For more
// details see the documentation for
// CreateGradientCheckingCostFunction above.
ProblemImpl* CreateGradientCheckingProblemImpl(ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision);
} // namespace internal
} // namespace ceres

View File

@@ -84,12 +84,6 @@ Solver::Options GradientProblemSolverOptionsToSolverOptions(
} // namespace
bool GradientProblemSolver::Options::IsValid(std::string* error) const {
const Solver::Options solver_options =
GradientProblemSolverOptionsToSolverOptions(*this);
return solver_options.IsValid(error);
}
GradientProblemSolver::~GradientProblemSolver() {
}
@@ -105,6 +99,8 @@ void GradientProblemSolver::Solve(const GradientProblemSolver::Options& options,
using internal::SetSummaryFinalCost;
double start_time = WallTimeInSeconds();
Solver::Options solver_options =
GradientProblemSolverOptionsToSolverOptions(options);
*CHECK_NOTNULL(summary) = Summary();
summary->num_parameters = problem.NumParameters();
@@ -116,16 +112,14 @@ void GradientProblemSolver::Solve(const GradientProblemSolver::Options& options,
summary->nonlinear_conjugate_gradient_type = options.nonlinear_conjugate_gradient_type; // NOLINT
// Check validity
if (!options.IsValid(&summary->message)) {
if (!solver_options.IsValid(&summary->message)) {
LOG(ERROR) << "Terminating: " << summary->message;
return;
}
// TODO(sameeragarwal): This is a bit convoluted, we should be able
// to convert to minimizer options directly, but this will do for
// now.
Minimizer::Options minimizer_options =
Minimizer::Options(GradientProblemSolverOptionsToSolverOptions(options));
// Assuming that the parameter blocks in the program have been
Minimizer::Options minimizer_options;
minimizer_options = Minimizer::Options(solver_options);
minimizer_options.evaluator.reset(new GradientProblemEvaluator(problem));
scoped_ptr<IterationCallback> logging_callback;
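For context on the options conversion above, a minimal end-to-end sketch of the public GradientProblemSolver API; the Quadratic function is an illustrative example, not part of this patch. It minimizes f(x) = x^2 starting from x = 3.

#include "ceres/gradient_problem.h"
#include "ceres/gradient_problem_solver.h"

class Quadratic : public ceres::FirstOrderFunction {
 public:
  virtual ~Quadratic() {}
  virtual bool Evaluate(const double* parameters,
                        double* cost,
                        double* gradient) const {
    const double x = parameters[0];
    cost[0] = x * x;
    if (gradient != NULL) {
      gradient[0] = 2.0 * x;
    }
    return true;
  }
  virtual int NumParameters() const { return 1; }
};

int main() {
  double x = 3.0;
  ceres::GradientProblem problem(new Quadratic);  // takes ownership
  ceres::GradientProblemSolver::Options options;
  ceres::GradientProblemSolver::Summary summary;
  ceres::Solve(options, problem, &x, &summary);
  // x is now close to the minimizer at 0.
  return 0;
}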

View File

@@ -1,59 +0,0 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle), dgossow@google.com (David Gossow)
#include "ceres/is_close.h"
#include <algorithm>
#include <cmath>
namespace ceres {
namespace internal {
bool IsClose(double x, double y, double relative_precision,
double *relative_error,
double *absolute_error) {
double local_absolute_error;
double local_relative_error;
if (!absolute_error) {
absolute_error = &local_absolute_error;
}
if (!relative_error) {
relative_error = &local_relative_error;
}
*absolute_error = std::fabs(x - y);
*relative_error = *absolute_error / std::max(std::fabs(x), std::fabs(y));
if (x == 0 || y == 0) {
// If x or y is exactly zero, then relative difference doesn't have any
// meaning. Take the absolute difference instead.
*relative_error = *absolute_error;
}
return *relative_error < std::fabs(relative_precision);
}
} // namespace internal
} // namespace ceres

View File

@@ -1,51 +0,0 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Authors: keir@google.com (Keir Mierle), dgossow@google.com (David Gossow)
//
// Utility routine for comparing two values.
#ifndef CERES_INTERNAL_IS_CLOSE_H_
#define CERES_INTERNAL_IS_CLOSE_H_
namespace ceres {
namespace internal {
// Returns true if x and y have a relative (unsigned) difference less than
// relative_precision and false otherwise. Stores the relative and absolute
// difference in relative/absolute_error if non-NULL. If one of the two values
// is exactly zero, the absolute difference will be compared, and relative_error
// will be set to the absolute difference.
bool IsClose(double x,
double y,
double relative_precision,
double *relative_error,
double *absolute_error);
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_IS_CLOSE_H_
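A small illustration of the IsClose contract documented above; the numeric values are only examples.

#include "ceres/is_close.h"

double relative_error, absolute_error;
bool close = ceres::internal::IsClose(1.0, 1.0 + 1e-10, 1e-6,
                                      &relative_error, &absolute_error);
// close == true; the absolute and relative errors are both about 1e-10.

bool not_close = ceres::internal::IsClose(0.0, 1e-3, 1e-6, NULL, NULL);
// One value is exactly zero, so the absolute difference (1e-3) is used as the
// relative error; it exceeds relative_precision, hence the result is false.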

View File

@@ -191,7 +191,6 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
options.line_search_sufficient_curvature_decrease;
line_search_options.max_step_expansion =
options.max_line_search_step_expansion;
line_search_options.is_silent = options.is_silent;
line_search_options.function = &line_search_function;
scoped_ptr<LineSearch>
@@ -342,12 +341,10 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
"as the step was valid when it was selected by the line search.";
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
break;
}
if (!Evaluate(evaluator,
x_plus_delta,
&current_state,
&summary->message)) {
} else if (!Evaluate(evaluator,
x_plus_delta,
&current_state,
&summary->message)) {
summary->termination_type = FAILURE;
summary->message =
"Step failed to evaluate. This should not happen as the step was "
@@ -355,17 +352,15 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
summary->message;
LOG_IF(WARNING, is_not_silent) << "Terminating: " << summary->message;
break;
} else {
x = x_plus_delta;
}
// Compute the norm of the step in the ambient space.
iteration_summary.step_norm = (x_plus_delta - x).norm();
x = x_plus_delta;
iteration_summary.gradient_max_norm = current_state.gradient_max_norm;
iteration_summary.gradient_norm = sqrt(current_state.gradient_squared_norm);
iteration_summary.cost_change = previous_state.cost - current_state.cost;
iteration_summary.cost = current_state.cost + summary->fixed_cost;
iteration_summary.step_norm = delta.norm();
iteration_summary.step_is_valid = true;
iteration_summary.step_is_successful = true;
iteration_summary.step_size = current_state.step_size;
@@ -381,13 +376,6 @@ void LineSearchMinimizer::Minimize(const Minimizer::Options& options,
WallTimeInSeconds() - start_time
+ summary->preprocessor_time_in_seconds;
// Iterations inside the line search algorithm are considered
// 'steps' in the broader context, to distinguish these inner
// iterations from from the outer iterations of the line search
// minimizer. The number of line search steps is the total number
// of inner line search iterations (or steps) across the entire
// minimization.
summary->num_line_search_steps += line_search_summary.num_iterations;
summary->line_search_cost_evaluation_time_in_seconds +=
line_search_summary.cost_evaluation_time_in_seconds;
summary->line_search_gradient_evaluation_time_in_seconds +=

View File

@@ -30,8 +30,6 @@
#include "ceres/local_parameterization.h"
#include <algorithm>
#include "Eigen/Geometry"
#include "ceres/householder_vector.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/fixed_array.h"
@@ -89,17 +87,28 @@ bool IdentityParameterization::MultiplyByJacobian(const double* x,
}
SubsetParameterization::SubsetParameterization(
int size, const vector<int>& constant_parameters)
: local_size_(size - constant_parameters.size()), constancy_mask_(size, 0) {
int size,
const vector<int>& constant_parameters)
: local_size_(size - constant_parameters.size()),
constancy_mask_(size, 0) {
CHECK_GT(constant_parameters.size(), 0)
<< "The set of constant parameters should contain at least "
<< "one element. If you do not wish to hold any parameters "
<< "constant, then do not use a SubsetParameterization";
vector<int> constant = constant_parameters;
std::sort(constant.begin(), constant.end());
CHECK_GE(constant.front(), 0)
<< "Indices indicating constant parameter must be greater than zero.";
CHECK_LT(constant.back(), size)
<< "Indices indicating constant parameter must be less than the size "
<< "of the parameter block.";
CHECK(std::adjacent_find(constant.begin(), constant.end()) == constant.end())
sort(constant.begin(), constant.end());
CHECK(unique(constant.begin(), constant.end()) == constant.end())
<< "The set of constant parameters cannot contain duplicates";
CHECK_LT(constant_parameters.size(), size)
<< "Number of parameters held constant should be less "
<< "than the size of the parameter block. If you wish "
<< "to hold the entire parameter block constant, then a "
<< "efficient way is to directly mark it as constant "
<< "instead of using a LocalParameterization to do so.";
CHECK_GE(*min_element(constant.begin(), constant.end()), 0);
CHECK_LT(*max_element(constant.begin(), constant.end()), size);
for (int i = 0; i < constant_parameters.size(); ++i) {
constancy_mask_[constant_parameters[i]] = 1;
}
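A usage sketch consistent with the checks above (it assumes a ceres::Problem named problem that owns a 4-dimensional parameter block camera): hold components 0 and 2 constant, leaving a 2-dimensional tangent space.

std::vector<int> constant_parameters;
constant_parameters.push_back(0);
constant_parameters.push_back(2);
problem.SetParameterization(
    camera, new ceres::SubsetParameterization(4, constant_parameters));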
@@ -120,10 +129,6 @@ bool SubsetParameterization::Plus(const double* x,
bool SubsetParameterization::ComputeJacobian(const double* x,
double* jacobian) const {
if (local_size_ == 0) {
return true;
}
MatrixRef m(jacobian, constancy_mask_.size(), local_size_);
m.setZero();
for (int i = 0, j = 0; i < constancy_mask_.size(); ++i) {
@@ -138,10 +143,6 @@ bool SubsetParameterization::MultiplyByJacobian(const double* x,
const int num_rows,
const double* global_matrix,
double* local_matrix) const {
if (local_size_ == 0) {
return true;
}
for (int row = 0; row < num_rows; ++row) {
for (int col = 0, j = 0; col < constancy_mask_.size(); ++col) {
if (!constancy_mask_[col]) {
@@ -183,39 +184,6 @@ bool QuaternionParameterization::ComputeJacobian(const double* x,
return true;
}
bool EigenQuaternionParameterization::Plus(const double* x_ptr,
const double* delta,
double* x_plus_delta_ptr) const {
Eigen::Map<Eigen::Quaterniond> x_plus_delta(x_plus_delta_ptr);
Eigen::Map<const Eigen::Quaterniond> x(x_ptr);
const double norm_delta =
sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
if (norm_delta > 0.0) {
const double sin_delta_by_delta = sin(norm_delta) / norm_delta;
// Note, in the constructor w is first.
Eigen::Quaterniond delta_q(cos(norm_delta),
sin_delta_by_delta * delta[0],
sin_delta_by_delta * delta[1],
sin_delta_by_delta * delta[2]);
x_plus_delta = delta_q * x;
} else {
x_plus_delta = x;
}
return true;
}
bool EigenQuaternionParameterization::ComputeJacobian(const double* x,
double* jacobian) const {
jacobian[0] = x[3]; jacobian[1] = x[2]; jacobian[2] = -x[1]; // NOLINT
jacobian[3] = -x[2]; jacobian[4] = x[3]; jacobian[5] = x[0]; // NOLINT
jacobian[6] = x[1]; jacobian[7] = -x[0]; jacobian[8] = x[3]; // NOLINT
jacobian[9] = -x[0]; jacobian[10] = -x[1]; jacobian[11] = -x[2]; // NOLINT
return true;
}
HomogeneousVectorParameterization::HomogeneousVectorParameterization(int size)
: size_(size) {
CHECK_GT(size_, 1) << "The size of the homogeneous vector needs to be "
@@ -364,9 +332,9 @@ bool ProductParameterization::ComputeJacobian(const double* x,
if (!param->ComputeJacobian(x + x_cursor, buffer.get())) {
return false;
}
jacobian.block(x_cursor, delta_cursor, global_size, local_size)
= MatrixRef(buffer.get(), global_size, local_size);
delta_cursor += local_size;
x_cursor += global_size;
}

View File

@@ -67,7 +67,7 @@ FindOrDie(const Collection& collection,
// If the key is present in the map then the value associated with that
// key is returned, otherwise the value passed as a default is returned.
template <class Collection>
const typename Collection::value_type::second_type
const typename Collection::value_type::second_type&
FindWithDefault(const Collection& collection,
const typename Collection::value_type::first_type& key,
const typename Collection::value_type::second_type& value) {

View File

@@ -161,34 +161,25 @@ class ParameterBlock {
// does not take ownership of the parameterization.
void SetParameterization(LocalParameterization* new_parameterization) {
CHECK(new_parameterization != NULL) << "NULL parameterization invalid.";
// Nothing to do if the new parameterization is the same as the
// old parameterization.
if (new_parameterization == local_parameterization_) {
return;
}
CHECK(local_parameterization_ == NULL)
<< "Can't re-set the local parameterization; it leads to "
<< "ambiguous ownership. Current local parameterization is: "
<< local_parameterization_;
CHECK(new_parameterization->GlobalSize() == size_)
<< "Invalid parameterization for parameter block. The parameter block "
<< "has size " << size_ << " while the parameterization has a global "
<< "size of " << new_parameterization->GlobalSize() << ". Did you "
<< "accidentally use the wrong parameter block or parameterization?";
CHECK_GT(new_parameterization->LocalSize(), 0)
<< "Invalid parameterization. Parameterizations must have a positive "
<< "dimensional tangent space.";
local_parameterization_ = new_parameterization;
local_parameterization_jacobian_.reset(
new double[local_parameterization_->GlobalSize() *
local_parameterization_->LocalSize()]);
CHECK(UpdateLocalParameterizationJacobian())
<< "Local parameterization Jacobian computation failed for x: "
<< ConstVectorRef(state_, Size()).transpose();
if (new_parameterization != local_parameterization_) {
CHECK(local_parameterization_ == NULL)
<< "Can't re-set the local parameterization; it leads to "
<< "ambiguous ownership.";
local_parameterization_ = new_parameterization;
local_parameterization_jacobian_.reset(
new double[local_parameterization_->GlobalSize() *
local_parameterization_->LocalSize()]);
CHECK(UpdateLocalParameterizationJacobian())
<< "Local parameterization Jacobian computation failed for x: "
<< ConstVectorRef(state_, Size()).transpose();
} else {
// Ignore the case that the parameterizations match.
}
}
void SetUpperBound(int index, double upper_bound) {

View File

@@ -174,10 +174,6 @@ void Problem::SetParameterBlockVariable(double* values) {
problem_impl_->SetParameterBlockVariable(values);
}
bool Problem::IsParameterBlockConstant(double* values) const {
return problem_impl_->IsParameterBlockConstant(values);
}
void Problem::SetParameterization(
double* values,
LocalParameterization* local_parameterization) {

View File

@@ -249,11 +249,10 @@ ResidualBlock* ProblemImpl::AddResidualBlock(
// Check for duplicate parameter blocks.
vector<double*> sorted_parameter_blocks(parameter_blocks);
sort(sorted_parameter_blocks.begin(), sorted_parameter_blocks.end());
const bool has_duplicate_items =
(std::adjacent_find(sorted_parameter_blocks.begin(),
sorted_parameter_blocks.end())
!= sorted_parameter_blocks.end());
if (has_duplicate_items) {
vector<double*>::const_iterator duplicate_items =
unique(sorted_parameter_blocks.begin(),
sorted_parameter_blocks.end());
if (duplicate_items != sorted_parameter_blocks.end()) {
string blocks;
for (int i = 0; i < parameter_blocks.size(); ++i) {
blocks += StringPrintf(" %p ", parameter_blocks[i]);
@@ -573,16 +572,6 @@ void ProblemImpl::SetParameterBlockConstant(double* values) {
parameter_block->SetConstant();
}
bool ProblemImpl::IsParameterBlockConstant(double* values) const {
const ParameterBlock* parameter_block =
FindWithDefault(parameter_block_map_, values, NULL);
CHECK(parameter_block != NULL)
<< "Parameter block not found: " << values << ". You must add the "
<< "parameter block to the problem before it can be queried.";
return parameter_block->IsConstant();
}
void ProblemImpl::SetParameterBlockVariable(double* values) {
ParameterBlock* parameter_block =
FindWithDefault(parameter_block_map_, values, NULL);

View File

@@ -128,8 +128,6 @@ class ProblemImpl {
void SetParameterBlockConstant(double* values);
void SetParameterBlockVariable(double* values);
bool IsParameterBlockConstant(double* values) const;
void SetParameterization(double* values,
LocalParameterization* local_parameterization);
const LocalParameterization* GetParameterization(double* values) const;

View File

@@ -142,11 +142,6 @@ void OrderingForSparseNormalCholeskyUsingSuiteSparse(
ordering);
}
VLOG(2) << "Block ordering stats: "
<< " flops: " << ss.mutable_cc()->fl
<< " lnz : " << ss.mutable_cc()->lnz
<< " anz : " << ss.mutable_cc()->anz;
ss.Free(block_jacobian_transpose);
#endif // CERES_NO_SUITESPARSE
}

View File

@@ -127,7 +127,7 @@ class ResidualBlock {
int index() const { return index_; }
void set_index(int index) { index_ = index; }
std::string ToString() const {
std::string ToString() {
return StringPrintf("{residual block; index=%d}", index_);
}

View File

@@ -33,7 +33,6 @@
#include <algorithm>
#include <ctime>
#include <set>
#include <sstream>
#include <vector>
#include "ceres/block_random_access_dense_matrix.h"
@@ -564,12 +563,6 @@ SparseSchurComplementSolver::SolveReducedLinearSystemUsingEigen(
// worse than the one computed using the block version of the
// algorithm.
simplicial_ldlt_->analyzePattern(eigen_lhs);
if (VLOG_IS_ON(2)) {
std::stringstream ss;
simplicial_ldlt_->dumpMemory(ss);
VLOG(2) << "Symbolic Analysis\n"
<< ss.str();
}
event_logger.AddEvent("Analysis");
if (simplicial_ldlt_->info() != Eigen::Success) {
summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;

View File

@@ -94,7 +94,7 @@ bool CommonOptionsAreValid(const Solver::Options& options, string* error) {
OPTION_GT(num_linear_solver_threads, 0);
if (options.check_gradients) {
OPTION_GT(gradient_check_relative_precision, 0.0);
OPTION_GT(gradient_check_numeric_derivative_relative_step_size, 0.0);
OPTION_GT(numeric_derivative_relative_step_size, 0.0);
}
return true;
}
@@ -351,7 +351,6 @@ void PreSolveSummarize(const Solver::Options& options,
summary->dense_linear_algebra_library_type = options.dense_linear_algebra_library_type; // NOLINT
summary->dogleg_type = options.dogleg_type;
summary->inner_iteration_time_in_seconds = 0.0;
summary->num_line_search_steps = 0;
summary->line_search_cost_evaluation_time_in_seconds = 0.0;
summary->line_search_gradient_evaluation_time_in_seconds = 0.0;
summary->line_search_polynomial_minimization_time_in_seconds = 0.0;
@@ -496,28 +495,21 @@ void Solver::Solve(const Solver::Options& options,
// values provided by the user.
program->SetParameterBlockStatePtrsToUserStatePtrs();
// If gradient_checking is enabled, wrap all cost functions in a
// gradient checker and install a callback that terminates if any gradient
// error is detected.
scoped_ptr<internal::ProblemImpl> gradient_checking_problem;
internal::GradientCheckingIterationCallback gradient_checking_callback;
Solver::Options modified_options = options;
if (options.check_gradients) {
modified_options.callbacks.push_back(&gradient_checking_callback);
gradient_checking_problem.reset(
CreateGradientCheckingProblemImpl(
problem_impl,
options.gradient_check_numeric_derivative_relative_step_size,
options.gradient_check_relative_precision,
&gradient_checking_callback));
options.numeric_derivative_relative_step_size,
options.gradient_check_relative_precision));
problem_impl = gradient_checking_problem.get();
program = problem_impl->mutable_program();
}
scoped_ptr<Preprocessor> preprocessor(
Preprocessor::Create(modified_options.minimizer_type));
Preprocessor::Create(options.minimizer_type));
PreprocessedProblem pp;
const bool status = preprocessor->Preprocess(modified_options, problem_impl, &pp);
const bool status = preprocessor->Preprocess(options, problem_impl, &pp);
summary->fixed_cost = pp.fixed_cost;
summary->preprocessor_time_in_seconds = WallTimeInSeconds() - start_time;
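A usage sketch of the gradient-checking path wired up above; it assumes an existing ceres::Problem named problem, and note that the step-size option is named gradient_check_numeric_derivative_relative_step_size on one side of this diff and numeric_derivative_relative_step_size on the other.

ceres::Solver::Options options;
options.check_gradients = true;
options.gradient_check_relative_precision = 1e-8;

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
if (summary.termination_type == ceres::FAILURE) {
  LOG(ERROR) << summary.message;
}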
@@ -542,13 +534,6 @@ void Solver::Solve(const Solver::Options& options,
summary->postprocessor_time_in_seconds =
WallTimeInSeconds() - postprocessor_start_time;
// If the gradient checker reported an error, we want to report FAILURE
// instead of USER_FAILURE and provide the error log.
if (gradient_checking_callback.gradient_error_detected()) {
summary->termination_type = FAILURE;
summary->message = gradient_checking_callback.error_log();
}
summary->total_time_in_seconds = WallTimeInSeconds() - start_time;
}
@@ -571,7 +556,6 @@ Solver::Summary::Summary()
num_successful_steps(-1),
num_unsuccessful_steps(-1),
num_inner_iteration_steps(-1),
num_line_search_steps(-1),
preprocessor_time_in_seconds(-1.0),
minimizer_time_in_seconds(-1.0),
postprocessor_time_in_seconds(-1.0),
@@ -712,14 +696,16 @@ string Solver::Summary::FullReport() const {
num_linear_solver_threads_given,
num_linear_solver_threads_used);
string given;
StringifyOrdering(linear_solver_ordering_given, &given);
string used;
StringifyOrdering(linear_solver_ordering_used, &used);
StringAppendF(&report,
"Linear solver ordering %22s %24s\n",
given.c_str(),
used.c_str());
if (IsSchurType(linear_solver_type_used)) {
string given;
StringifyOrdering(linear_solver_ordering_given, &given);
string used;
StringifyOrdering(linear_solver_ordering_used, &used);
StringAppendF(&report,
"Linear solver ordering %22s %24s\n",
given.c_str(),
used.c_str());
}
if (inner_iterations_given) {
StringAppendF(&report,
@@ -798,14 +784,9 @@ string Solver::Summary::FullReport() const {
num_inner_iteration_steps);
}
const bool line_search_used =
(minimizer_type == LINE_SEARCH ||
(minimizer_type == TRUST_REGION && is_constrained));
if (line_search_used) {
StringAppendF(&report, "Line search steps % 14d\n",
num_line_search_steps);
}
const bool print_line_search_timing_information =
minimizer_type == LINE_SEARCH ||
(minimizer_type == TRUST_REGION && is_constrained);
StringAppendF(&report, "\nTime (in seconds):\n");
StringAppendF(&report, "Preprocessor %25.4f\n",
@@ -813,13 +794,13 @@ string Solver::Summary::FullReport() const {
StringAppendF(&report, "\n Residual evaluation %23.4f\n",
residual_evaluation_time_in_seconds);
if (line_search_used) {
if (print_line_search_timing_information) {
StringAppendF(&report, " Line search cost evaluation %10.4f\n",
line_search_cost_evaluation_time_in_seconds);
}
StringAppendF(&report, " Jacobian evaluation %23.4f\n",
jacobian_evaluation_time_in_seconds);
if (line_search_used) {
if (print_line_search_timing_information) {
StringAppendF(&report, " Line search gradient evaluation %6.4f\n",
line_search_gradient_evaluation_time_in_seconds);
}
@@ -834,7 +815,7 @@ string Solver::Summary::FullReport() const {
inner_iteration_time_in_seconds);
}
if (line_search_used) {
if (print_line_search_timing_information) {
StringAppendF(&report, " Line search polynomial minimization %.4f\n",
line_search_polynomial_minimization_time_in_seconds);
}

View File

@@ -33,7 +33,6 @@
#include <algorithm>
#include <cstring>
#include <ctime>
#include <sstream>
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/cxsparse.h"
@@ -72,12 +71,6 @@ LinearSolver::Summary SimplicialLDLTSolve(
if (do_symbolic_analysis) {
solver->analyzePattern(lhs);
if (VLOG_IS_ON(2)) {
std::stringstream ss;
solver->dumpMemory(ss);
VLOG(2) << "Symbolic Analysis\n"
<< ss.str();
}
event_logger->AddEvent("Analyze");
if (solver->info() != Eigen::Success) {
summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;

View File

@@ -43,27 +43,14 @@ namespace internal {
using std::string;
// va_copy() was defined in the C99 standard. However, it did not appear in the
// C++ standard until C++11. This means that if Ceres is being compiled with a
// strict pre-C++11 standard (e.g. -std=c++03), va_copy() will NOT be defined,
// as we are using the C++ compiler (it would however be defined if we were
// using the C compiler). Note however that both GCC & Clang will in fact
// define va_copy() when compiling for C++ if the C++ standard is not explicitly
// specified (i.e. no -std=c++<XX> arg), even though it should not strictly be
// defined unless -std=c++11 (or greater) was passed.
#if !defined(va_copy)
#if defined (__GNUC__)
// On GCC/Clang, if va_copy() is not defined (C++ standard < C++11 explicitly
// specified), use the internal __va_copy() version, which should be present
// in even very old GCC versions.
#define va_copy(d, s) __va_copy(d, s)
#else
// Some older versions of MSVC do not have va_copy(), in which case define it.
// Although this is required for older MSVC versions, it should also work for
// other non-GCC/Clang compilers which also do not define va_copy().
#ifdef _MSC_VER
enum { IS_COMPILER_MSVC = 1 };
#if _MSC_VER < 1800
#define va_copy(d, s) ((d) = (s))
#endif // defined (__GNUC__)
#endif // !defined(va_copy)
#endif
#else
enum { IS_COMPILER_MSVC = 0 };
#endif
void StringAppendV(string* dst, const char* format, va_list ap) {
// First try with a small fixed size buffer
@@ -84,13 +71,13 @@ void StringAppendV(string* dst, const char* format, va_list ap) {
return;
}
#if defined (_MSC_VER)
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
result = vsnprintf(NULL, 0, format, backup_ap);
va_end(backup_ap);
#endif
if (IS_COMPILER_MSVC) {
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
result = vsnprintf(NULL, 0, format, backup_ap);
va_end(backup_ap);
}
if (result < 0) {
// Just an error.
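For reference, a hypothetical helper (not the Ceres code itself) showing the two-pass vsnprintf pattern StringAppendV relies on, assuming va_copy() is available and a C99-conforming vsnprintf that reports the required length when given a NULL buffer:
#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>
static void AppendV(std::string* dst, const char* format, va_list ap) {
  va_list backup_ap;
  va_copy(backup_ap, ap);
  // Pass 1: measure how many characters would be written.
  const int needed = vsnprintf(NULL, 0, format, backup_ap);
  va_end(backup_ap);
  if (needed < 0) {
    return;  // formatting error
  }
  // Pass 2: format into a buffer of exactly the right size.
  std::vector<char> buf(needed + 1);
  vsnprintf(&buf[0], buf.size(), format, ap);
  dst->append(&buf[0], needed);
}
static void Append(std::string* dst, const char* format, ...) {
  va_list ap;
  va_start(ap, format);
  AppendV(dst, format, ap);
  va_end(ap);
}
int main() {
  std::string s;
  Append(&s, "x = %d, y = %.2f", 42, 1.5);
  std::printf("%s\n", s.c_str());  // x = 42, y = 1.50
  return 0;
}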

File diff suppressed because it is too large.


@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// Copyright 2015 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,136 +31,35 @@
#ifndef CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
#define CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
#include "ceres/internal/eigen.h"
#include "ceres/internal/scoped_ptr.h"
#include "ceres/minimizer.h"
#include "ceres/solver.h"
#include "ceres/sparse_matrix.h"
#include "ceres/trust_region_step_evaluator.h"
#include "ceres/trust_region_strategy.h"
#include "ceres/types.h"
namespace ceres {
namespace internal {
// Generic trust region minimization algorithm.
// Generic trust region minimization algorithm. The heavy lifting is
// done by a TrustRegionStrategy object passed in as part of options.
//
// For example usage, see SolverImpl::Minimize.
class TrustRegionMinimizer : public Minimizer {
public:
~TrustRegionMinimizer();
// This method is not thread safe.
~TrustRegionMinimizer() {}
virtual void Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* solver_summary);
Solver::Summary* summary);
private:
void Init(const Minimizer::Options& options,
double* parameters,
Solver::Summary* solver_summary);
bool IterationZero();
bool FinalizeIterationAndCheckIfMinimizerCanContinue();
bool ComputeTrustRegionStep();
bool EvaluateGradientAndJacobian();
void ComputeCandidatePointAndEvaluateCost();
void DoLineSearch(const Vector& x,
const Vector& gradient,
const double cost,
Vector* delta);
void DoInnerIterationsIfNeeded();
bool ParameterToleranceReached();
bool FunctionToleranceReached();
bool GradientToleranceReached();
bool MaxSolverTimeReached();
bool MaxSolverIterationsReached();
bool MinTrustRegionRadiusReached();
bool IsStepSuccessful();
void HandleUnsuccessfulStep();
bool HandleSuccessfulStep();
bool HandleInvalidStep();
void Init(const Minimizer::Options& options);
void EstimateScale(const SparseMatrix& jacobian, double* scale) const;
bool MaybeDumpLinearLeastSquaresProblem(const int iteration,
const SparseMatrix* jacobian,
const double* residuals,
const double* step) const;
Minimizer::Options options_;
// These pointers are shortcuts to objects passed to the
// TrustRegionMinimizer. The TrustRegionMinimizer does not own them.
double* parameters_;
Solver::Summary* solver_summary_;
Evaluator* evaluator_;
SparseMatrix* jacobian_;
TrustRegionStrategy* strategy_;
scoped_ptr<TrustRegionStepEvaluator> step_evaluator_;
bool is_not_silent_;
bool inner_iterations_are_enabled_;
bool inner_iterations_were_useful_;
// Summary of the current iteration.
IterationSummary iteration_summary_;
// Dimensionality of the problem in the ambient space.
int num_parameters_;
// Dimensionality of the problem in the tangent space. This is the
// number of columns in the Jacobian.
int num_effective_parameters_;
// Length of the residual vector, also the number of rows in the Jacobian.
int num_residuals_;
// Current point.
Vector x_;
// Residuals at x_;
Vector residuals_;
// Gradient at x_.
Vector gradient_;
// Solution computed by the inner iterations.
Vector inner_iteration_x_;
// model_residuals = J * trust_region_step
Vector model_residuals_;
Vector negative_gradient_;
// projected_gradient_step = Plus(x, -gradient), an intermediate
// quantity used to compute the projected gradient norm.
Vector projected_gradient_step_;
// The step computed by the trust region strategy. If Jacobi scaling
// is enabled, this is a vector in the scaled space.
Vector trust_region_step_;
// The current proposal for how far the trust region algorithm
// thinks we should move. In the most basic case, it is just the
// trust_region_step_ with the Jacobi scaling undone. If bounds
// constraints are present, then it is the result of the projected
// line search.
Vector delta_;
// candidate_x = Plus(x, delta)
Vector candidate_x_;
// Scaling vector to scale the columns of the Jacobian.
Vector jacobian_scaling_;
// Euclidean norm of x_.
double x_norm_;
// Cost at x_.
double x_cost_;
// Minimum cost encountered up till now.
double minimum_cost_;
// How much did the trust region strategy reduce the cost of the
// linearized Gauss-Newton model.
double model_cost_change_;
// Cost at candidate_x_.
double candidate_cost_;
// Time at which the minimizer was started.
double start_time_in_secs_;
// Time at which the current iteration was started.
double iteration_start_time_in_secs_;
// Number of consecutive steps where the minimizer loop computed a
// numerically invalid step.
int num_consecutive_invalid_steps_;
};
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
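To make the accept/reject flow behind the member list above concrete, a toy self-contained 1-D trust-region loop (illustrative only, constants invented; the actual control flow lives in trust_region_minimizer.cc and the TrustRegionStrategy/TrustRegionStepEvaluator objects):
#include <cmath>
#include <cstdio>
int main() {
  double x = 3.0;       // current point
  double radius = 1.0;  // trust region radius
  for (int iter = 0; iter < 50; ++iter) {
    const double cost = 0.5 * (x - 1.0) * (x - 1.0);  // f(x) = 0.5 * (x - 1)^2
    const double gradient = x - 1.0;                  // Hessian is 1
    if (std::fabs(gradient) < 1e-10) {
      break;
    }
    // Newton step clamped to the trust region.
    double step = -gradient;
    if (std::fabs(step) > radius) {
      step = (step > 0.0) ? radius : -radius;
    }
    const double candidate = x + step;
    const double candidate_cost = 0.5 * (candidate - 1.0) * (candidate - 1.0);
    // Reduction predicted by the quadratic model vs. the actual reduction.
    const double model_cost_change = -(gradient * step + 0.5 * step * step);
    const double rho = (cost - candidate_cost) / model_cost_change;
    if (rho > 1e-3) {  // successful step: accept, possibly grow the region
      x = candidate;
      if (rho > 0.75) {
        radius *= 2.0;
      }
    } else {           // unsuccessful step: keep x, shrink the region
      radius *= 0.5;
    }
  }
  std::printf("x = %f\n", x);  // converges to 1
  return 0;
}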


@@ -1,107 +0,0 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#include <algorithm>
#include "ceres/trust_region_step_evaluator.h"
#include "glog/logging.h"
namespace ceres {
namespace internal {
TrustRegionStepEvaluator::TrustRegionStepEvaluator(
const double initial_cost,
const int max_consecutive_nonmonotonic_steps)
: max_consecutive_nonmonotonic_steps_(max_consecutive_nonmonotonic_steps),
minimum_cost_(initial_cost),
current_cost_(initial_cost),
reference_cost_(initial_cost),
candidate_cost_(initial_cost),
accumulated_reference_model_cost_change_(0.0),
accumulated_candidate_model_cost_change_(0.0),
num_consecutive_nonmonotonic_steps_(0){
}
double TrustRegionStepEvaluator::StepQuality(
const double cost,
const double model_cost_change) const {
const double relative_decrease = (current_cost_ - cost) / model_cost_change;
const double historical_relative_decrease =
(reference_cost_ - cost) /
(accumulated_reference_model_cost_change_ + model_cost_change);
return std::max(relative_decrease, historical_relative_decrease);
}
void TrustRegionStepEvaluator::StepAccepted(
const double cost,
const double model_cost_change) {
// Algorithm 10.1.2 from Trust Region Methods by Conn, Gould &
// Toint.
//
// Step 3a
current_cost_ = cost;
accumulated_candidate_model_cost_change_ += model_cost_change;
accumulated_reference_model_cost_change_ += model_cost_change;
// Step 3b.
if (current_cost_ < minimum_cost_) {
minimum_cost_ = current_cost_;
num_consecutive_nonmonotonic_steps_ = 0;
candidate_cost_ = current_cost_;
accumulated_candidate_model_cost_change_ = 0.0;
} else {
// Step 3c.
++num_consecutive_nonmonotonic_steps_;
if (current_cost_ > candidate_cost_) {
candidate_cost_ = current_cost_;
accumulated_candidate_model_cost_change_ = 0.0;
}
}
// Step 3d.
//
// At this point we have made too many non-monotonic steps and
// we are going to reset the value of the reference iterate so
// as to force the algorithm to descend.
//
// Note: In the original algorithm by Toint, this step was only
// executed if the step was non-monotonic, but that would not handle
// the case of max_consecutive_nonmonotonic_steps = 0. The small
// modification of doing this always handles that corner case
// correctly.
if (num_consecutive_nonmonotonic_steps_ ==
max_consecutive_nonmonotonic_steps_) {
reference_cost_ = candidate_cost_;
accumulated_reference_model_cost_change_ =
accumulated_candidate_model_cost_change_;
}
}
} // namespace internal
} // namespace ceres
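To make StepQuality() concrete, a small invented trace: with initial_cost = 10 the evaluator starts with current_cost_ = reference_cost_ = 10 and accumulated_reference_model_cost_change_ = 0, so a candidate with cost = 11 and model_cost_change = 2 scores max((10 - 11) / 2, (10 - 11) / (0 + 2)) = -0.5 and is rejected by any positive threshold. If the reference iterate is older, say reference_cost_ = 14 with accumulated_reference_model_cost_change_ = 6, the same candidate scores max(-0.5, (14 - 11) / (6 + 2)) = 0.375 and can be accepted even though the cost rose relative to current_cost_; this is exactly the non-monotonic behaviour the evaluator exists to allow.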


@@ -1,122 +0,0 @@
// Ceres Solver - A fast non-linear least squares minimizer
// Copyright 2016 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
#ifndef CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
#define CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
namespace ceres {
namespace internal {
// The job of the TrustRegionStepEvaluator is to evaluate the quality
// of a step, i.e., how the cost of a step compares with the reduction
// in the objective of the trust region problem.
//
// Classic trust region methods are descent methods, in that they only
// accept a point if it strictly reduces the value of the objective
// function. They do this by measuring the quality of a step as
//
// cost_change / model_cost_change.
//
// Relaxing the monotonic descent requirement allows the algorithm to
// be more efficient in the long term at the cost of some local
// increase in the value of the objective function.
//
// This is because allowing for non-decreasing objective function
// values in a principled manner allows the algorithm to "jump over
// boulders" as the method is not restricted to move into narrow
// valleys while preserving its convergence properties.
//
// The parameter max_consecutive_nonmonotonic_steps controls the
// window size used by the step selection algorithm to accept
// non-monotonic steps. Setting this parameter to zero recovers the
// classic monotonic descent algorithm.
//
// Based on algorithm 10.1.2 (page 357) of "Trust Region
// Methods" by Conn Gould & Toint, or equations 33-40 of
// "Non-monotone trust-region algorithms for nonlinear
// optimization subject to convex constraints" by Phil Toint,
// Mathematical Programming, 77, 1997.
//
// Example usage:
//
// TrustRegionStepEvaluator* step_evaluator = ...
//
// cost = ... // Compute the non-linear objective function value.
// model_cost_change = ... // Change in the value of the trust region objective.
// if (step_evaluator->StepQuality(cost, model_cost_change) > threshold) {
// x = x + delta;
// step_evaluator->StepAccepted(cost, model_cost_change);
// }
class TrustRegionStepEvaluator {
public:
// initial_cost is as the name implies the cost of the starting
// state of the trust region minimizer.
//
// max_consecutive_nonmonotonic_steps controls the window size used
// by the step selection algorithm to accept non-monotonic
// steps. Setting this parameter to zero recovers the classic
// monotonic descent algorithm.
TrustRegionStepEvaluator(double initial_cost,
int max_consecutive_nonmonotonic_steps);
// Return the quality of the step given its cost and the decrease in
// the cost of the model. model_cost_change has to be positive.
double StepQuality(double cost, double model_cost_change) const;
// Inform the step evaluator that a step with the given cost and
// model_cost_change has been accepted by the trust region
// minimizer.
void StepAccepted(double cost, double model_cost_change);
private:
const int max_consecutive_nonmonotonic_steps_;
// The minimum cost encountered up till now.
double minimum_cost_;
// The current cost of the trust region minimizer as informed by the
// last call to StepAccepted.
double current_cost_;
double reference_cost_;
double candidate_cost_;
// Accumulated model cost since the last time the reference model
// cost was updated, i.e., when a step with cost less than the
// current known minimum cost is accepted.
double accumulated_reference_model_cost_change_;
// Accumulated model cost since the last time the candidate model
// cost was updated, i.e., a non-monotonic step was taken with a
// cost that was greater than the current candidate cost.
double accumulated_candidate_model_cost_change_;
// Number of steps taken since the last time minimum_cost was updated.
int num_consecutive_nonmonotonic_steps_;
};
} // namespace internal
} // namespace ceres
#endif // CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_


@@ -86,20 +86,20 @@ class TrustRegionStrategy {
struct PerSolveOptions {
PerSolveOptions()
: eta(0),
dump_filename_base(""),
dump_format_type(TEXTFILE) {
}
// Forcing sequence for inexact solves.
double eta;
DumpFormatType dump_format_type;
// If non-empty and dump_format_type is not CONSOLE, the trust
// regions strategy will write the linear system to file(s) with
// name starting with dump_filename_base. If dump_format_type is
// CONSOLE then dump_filename_base will be ignored and the linear
// system will be written to the standard error.
std::string dump_filename_base;
DumpFormatType dump_format_type;
};
struct Summary {


@@ -369,8 +369,7 @@ typedef unsigned int cl_GLenum;
#endif
/* Define basic vector types */
/* Workaround for ppc64el platform: conflicts with bool from C++. */
#if defined( __VEC__ ) && !(defined(__PPC64__) && defined(__LITTLE_ENDIAN__))
#if defined( __VEC__ )
#include <altivec.h> /* may be omitted depending on compiler. AltiVec spec provides no way to detect whether the header is required. */
typedef vector unsigned char __cl_uchar16;
typedef vector signed char __cl_char16;

extern/cuew/README vendored

@@ -4,7 +4,7 @@ for determining which CUDA functions and extensions are supported
on the target platform.
CUDA core and extension functionality is exposed in a single header file.
CUEW has been tested on a variety of operating systems, including Windows,
GUEW has been tested on a variety of operating systems, including Windows,
Linux, Mac OS X.
LICENSE


@@ -1,5 +1,5 @@
Project: Cuda Wrangler
URL: https://github.com/CudaWrangler/cuew
License: Apache 2.0
Upstream version: 63d2a0f
Upstream version: e2e0315
Local modifications: None


@@ -255,7 +255,7 @@ static void cubic_list_clear(CubicList *clist)
/** \name Cubic Evaluation
* \{ */
static void cubic_calc_point(
static void cubic_evaluate(
const Cubic *cubic, const double t, const uint dims,
double r_v[])
{
@@ -271,6 +271,18 @@ static void cubic_calc_point(
}
}
static void cubic_calc_point(
const Cubic *cubic, const double t, const uint dims,
double r_v[])
{
CUBIC_VARS_CONST(cubic, dims, p0, p1, p2, p3);
const double s = 1.0 - t;
for (uint j = 0; j < dims; j++) {
r_v[j] = p0[j] * s * s * s +
3.0 * t * s * (s * p1[j] + t * p2[j]) + t * t * t * p3[j];
}
}
static void cubic_calc_speed(
const Cubic *cubic, const double t, const uint dims,
double r_v[])
@@ -320,7 +332,7 @@ static double cubic_calc_error(
#endif
for (uint i = 1; i < points_offset_len - 1; i++, pt_real += dims) {
cubic_calc_point(cubic, u[i], dims, pt_eval);
cubic_evaluate(cubic, u[i], dims, pt_eval);
const double err_sq = len_squared_vnvn(pt_real, pt_eval, dims);
if (err_sq >= error_max_sq) {
@@ -356,7 +368,7 @@ static double cubic_calc_error_simple(
#endif
for (uint i = 1; i < points_offset_len - 1; i++, pt_real += dims) {
cubic_calc_point(cubic, u[i], dims, pt_eval);
cubic_evaluate(cubic, u[i], dims, pt_eval);
const double err_sq = len_squared_vnvn(pt_real, pt_eval, dims);
if (err_sq >= error_threshold_sq) {
@@ -489,7 +501,7 @@ static double points_calc_circle_tangent_factor(
return (1.0 / 3.0) * 0.75;
}
else if (tan_dot < -1.0 + eps) {
/* parallel tangents (half-circle) */
/* parallele tangents (half-circle) */
return (1.0 / 2.0);
}
else {
@@ -611,8 +623,8 @@ static void cubic_from_points_offset_fallback(
}
}
double alpha_l = (dists[0] / 0.75) / fabs(dot_vnvn(tan_l, a[0], dims));
double alpha_r = (dists[1] / 0.75) / fabs(dot_vnvn(tan_r, a[1], dims));
double alpha_l = (dists[0] / 0.75) / dot_vnvn(tan_l, a[0], dims);
double alpha_r = (dists[1] / 0.75) / -dot_vnvn(tan_r, a[1], dims);
if (!(alpha_l > 0.0)) {
alpha_l = dir_dist / 3.0;
@@ -665,11 +677,13 @@ static void cubic_from_points(
double alpha_l, alpha_r;
#ifdef USE_VLA
double a[2][dims];
double tmp[dims];
#else
double *a[2] = {
alloca(sizeof(double) * dims),
alloca(sizeof(double) * dims),
};
double *tmp = alloca(sizeof(double) * dims);
#endif
{
@@ -680,22 +694,22 @@ static void cubic_from_points(
mul_vnvn_fl(a[0], tan_l, B1(u_prime[i]), dims);
mul_vnvn_fl(a[1], tan_r, B2(u_prime[i]), dims);
const double b0_plus_b1 = B0plusB1(u_prime[i]);
const double b2_plus_b3 = B2plusB3(u_prime[i]);
/* inline dot product */
for (uint j = 0; j < dims; j++) {
const double tmp = (pt[j] - (p0[j] * b0_plus_b1)) + (p3[j] * b2_plus_b3);
x[0] += a[0][j] * tmp;
x[1] += a[1][j] * tmp;
c[0][0] += a[0][j] * a[0][j];
c[0][1] += a[0][j] * a[1][j];
c[1][1] += a[1][j] * a[1][j];
}
c[0][0] += dot_vnvn(a[0], a[0], dims);
c[0][1] += dot_vnvn(a[0], a[1], dims);
c[1][1] += dot_vnvn(a[1], a[1], dims);
c[1][0] = c[0][1];
{
const double b0_plus_b1 = B0plusB1(u_prime[i]);
const double b2_plus_b3 = B2plusB3(u_prime[i]);
for (uint j = 0; j < dims; j++) {
tmp[j] = (pt[j] - (p0[j] * b0_plus_b1)) + (p3[j] * b2_plus_b3);
}
x[0] += dot_vnvn(a[0], tmp, dims);
x[1] += dot_vnvn(a[1], tmp, dims);
}
}
double det_C0_C1 = c[0][0] * c[1][1] - c[0][1] * c[1][0];


@@ -463,7 +463,7 @@ static uint curve_incremental_simplify(
rstate_pool_create(&epool, 0);
#endif
Heap *heap = HEAP_new(knots_len_remaining);
Heap *heap = HEAP_new(knots_len);
struct KnotRemove_Params params = {
.pd = pd,
@@ -698,7 +698,7 @@ static uint curve_incremental_simplify_refit(
refit_pool_create(&epool, 0);
#endif
Heap *heap = HEAP_new(knots_len_remaining);
Heap *heap = HEAP_new(knots_len);
struct KnotRefit_Params params = {
.pd = pd,
@@ -890,7 +890,7 @@ static void knot_corner_error_recalculate(
static uint curve_incremental_simplify_corners(
const struct PointData *pd,
struct Knot *knots, const uint knots_len, uint knots_len_remaining,
const double error_sq_max, const double error_sq_collapse_max,
const double error_sq_max, const double error_sq_2x_max,
const double corner_angle,
const uint dims,
uint *r_corner_index_len)
@@ -954,12 +954,12 @@ static uint curve_incremental_simplify_corners(
project_vn_vnvn_normalized(k_proj_ref, co_prev, k_prev->tan[1], dims);
project_vn_vnvn_normalized(k_proj_split, co_split, k_prev->tan[1], dims);
if (len_squared_vnvn(k_proj_ref, k_proj_split, dims) < error_sq_collapse_max) {
if (len_squared_vnvn(k_proj_ref, k_proj_split, dims) < error_sq_2x_max) {
project_vn_vnvn_normalized(k_proj_ref, co_next, k_next->tan[0], dims);
project_vn_vnvn_normalized(k_proj_split, co_split, k_next->tan[0], dims);
if (len_squared_vnvn(k_proj_ref, k_proj_split, dims) < error_sq_collapse_max) {
if (len_squared_vnvn(k_proj_ref, k_proj_split, dims) < error_sq_2x_max) {
struct Knot *k_split = &knots[split_index];
@@ -1260,12 +1260,9 @@ int curve_fit_cubic_to_points_refit_db(
#ifdef USE_CORNER_DETECT
if (use_corner) {
#ifdef DEBUG
for (uint i = 0; i < knots_len; i++) {
assert(knots[i].heap_node == NULL);
}
#endif
knots_len_remaining = curve_incremental_simplify_corners(
&pd, knots, knots_len, knots_len_remaining,


@@ -59,8 +59,7 @@
# include <unistd.h>
#endif
// Hurd does not have SYS_write.
#if (defined(HAVE_SYSCALL_H) || defined(HAVE_SYS_SYSCALL_H)) && !defined(__GNU__)
#if defined(HAVE_SYSCALL_H) || defined(HAVE_SYS_SYSCALL_H)
# define safe_write(fd, s, len) syscall(SYS_write, fd, s, len)
#else
// Not so safe, but what can you do?


@@ -111,7 +111,7 @@ int GetStackTrace(void** result, int max_depth, int skip_count) {
result[n++] = *(sp+2);
#elif defined(_CALL_SYSV)
result[n++] = *(sp+1);
#elif defined(__APPLE__) || ((defined(__linux) || defined(__linux__)) && defined(__PPC64__))
#elif defined(__APPLE__) || (defined(__linux) && defined(__PPC64__))
// This check is in case the compiler doesn't define _CALL_AIX/etc.
result[n++] = *(sp+2);
#elif defined(__linux)


@@ -21,10 +21,10 @@ set(INC
)
set(SRC
range_tree.h
intern/generic_alloc_impl.h
range_tree.hh
range_tree_c_api.h
intern/range_tree.c
range_tree_c_api.cc
)
blender_add_lib(extern_rangetree "${SRC}" "${INC}" "")


@@ -1,5 +1,5 @@
Project: RangeTree
URL: https://github.com/ideasman42/rangetree-c
License: Apache 2.0
Upstream version: 40ebed8aa209
URL: https://github.com/nicholasbishop/RangeTree
License: GPLv2+
Upstream version: c4ecf6bb7dfd
Local modifications: None

extern/rangetree/README.org vendored Normal file

@@ -0,0 +1,13 @@
* Overview
Basic class for storing non-overlapping scalar ranges. Underlying
representation is a C++ STL set for fast lookups.
* License
GPL version 2 or later (see COPYING)
* Author Note
This implementation is intended for storing free unique IDs in a new
undo system for BMesh in Blender, but could be useful elsewhere.
* Website
https://github.com/nicholasbishop/RangeTree
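A small usage sketch of the class described above, based on the range_tree.hh interface added in this branch (illustrative only):
#include "range_tree.hh"
int main()
{
  RangeTree<unsigned> ids(0, 1023);  // every ID in [0, 1023] starts out free
  unsigned a = ids.take_any();       // grabs 0, the lowest free ID
  ids.take(100);                     // reserve a specific ID
  bool taken = !ids.has(100);        // true: 100 is no longer free
  ids.release(100);                  // return it, merging adjacent ranges
  ids.release(a);
  ids.print();                       // prints the single range [0, 1023] again
  return taken ? 0 : 1;
}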


@@ -1,215 +0,0 @@
/*
* Copyright (c) 2016, Blender Foundation.
*
* Licensed under the Apache License, Version 2.0 (the "Apache License")
* with the following modification; you may not use this file except in
* compliance with the Apache License and the following modification to it:
* Section 6. Trademarks. is deleted and replaced with:
*
* 6. Trademarks. This License does not grant permission to use the trade
* names, trademarks, service marks, or product names of the Licensor
* and its affiliates, except as required to comply with Section 4(c) of
* the License and to reproduce the content of the NOTICE file.
*
* You may obtain a copy of the Apache License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Apache License with the above modification is
* distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the Apache License for the specific
* language governing permissions and limitations under the Apache License.
*/
/**
* Simple Memory Chunking Allocator
* ================================
*
* Defines need to be set:
* - #TPOOL_IMPL_PREFIX: Prefix to use for the API.
* - #TPOOL_ALLOC_TYPE: Struct type this pool handles.
* - #TPOOL_STRUCT: Name for pool struct name.
* - #TPOOL_CHUNK_SIZE: Chunk size (optional), use 64kb when not defined.
*
* \note #TPOOL_ALLOC_TYPE must be at least ``sizeof(void *)``.
*
* Defines the API, uses #TPOOL_IMPL_PREFIX to prefix each function.
*
* - *_pool_create()
* - *_pool_destroy()
* - *_pool_clear()
*
* - *_pool_elem_alloc()
* - *_pool_elem_calloc()
* - *_pool_elem_free()
*/
/* check we're not building directly */
#if !defined(TPOOL_IMPL_PREFIX) || \
!defined(TPOOL_ALLOC_TYPE) || \
!defined(TPOOL_STRUCT)
# error "This file can't be compiled directly, include in another source file"
#endif
#define _CONCAT_AUX(MACRO_ARG1, MACRO_ARG2) MACRO_ARG1 ## MACRO_ARG2
#define _CONCAT(MACRO_ARG1, MACRO_ARG2) _CONCAT_AUX(MACRO_ARG1, MACRO_ARG2)
#define _TPOOL_PREFIX(id) _CONCAT(TPOOL_IMPL_PREFIX, _##id)
/* local identifiers */
#define pool_create _TPOOL_PREFIX(pool_create)
#define pool_destroy _TPOOL_PREFIX(pool_destroy)
#define pool_clear _TPOOL_PREFIX(pool_clear)
#define pool_elem_alloc _TPOOL_PREFIX(pool_elem_alloc)
#define pool_elem_calloc _TPOOL_PREFIX(pool_elem_calloc)
#define pool_elem_free _TPOOL_PREFIX(pool_elem_free)
/* private identifiers (only for this file, undefine after) */
#define pool_alloc_chunk _TPOOL_PREFIX(pool_alloc_chunk)
#define TPoolChunk _TPOOL_PREFIX(TPoolChunk)
#define TPoolChunkElemFree _TPOOL_PREFIX(TPoolChunkElemFree)
#ifndef TPOOL_CHUNK_SIZE
#define TPOOL_CHUNK_SIZE (1 << 16) /* 64kb */
#define _TPOOL_CHUNK_SIZE_UNDEF
#endif
#ifndef UNLIKELY
# ifdef __GNUC__
# define UNLIKELY(x) __builtin_expect(!!(x), 0)
# else
# define UNLIKELY(x) (x)
# endif
#endif
#ifdef __GNUC__
# define MAYBE_UNUSED __attribute__((unused))
#else
# define MAYBE_UNUSED
#endif
struct TPoolChunk {
struct TPoolChunk *prev;
unsigned int size;
unsigned int bufsize;
TPOOL_ALLOC_TYPE buf[0];
};
struct TPoolChunkElemFree {
struct TPoolChunkElemFree *next;
};
struct TPOOL_STRUCT {
/* Always keep at least one chunk (never NULL) */
struct TPoolChunk *chunk;
/* when NULL, allocate a new chunk */
struct TPoolChunkElemFree *free;
};
/**
* Number of elems to include per #TPoolChunk when no reserved size is passed,
* or we allocate past the reserved number.
*
* \note Optimize number for 64kb allocs.
*/
#define _TPOOL_CHUNK_DEFAULT_NUM \
(((1 << 16) - sizeof(struct TPoolChunk)) / sizeof(TPOOL_ALLOC_TYPE))
/** \name Internal Memory Management
* \{ */
static struct TPoolChunk *pool_alloc_chunk(
unsigned int tot_elems, struct TPoolChunk *chunk_prev)
{
struct TPoolChunk *chunk = malloc(
sizeof(struct TPoolChunk) + (sizeof(TPOOL_ALLOC_TYPE) * tot_elems));
chunk->prev = chunk_prev;
chunk->bufsize = tot_elems;
chunk->size = 0;
return chunk;
}
static TPOOL_ALLOC_TYPE *pool_elem_alloc(struct TPOOL_STRUCT *pool)
{
TPOOL_ALLOC_TYPE *elem;
if (pool->free) {
elem = (TPOOL_ALLOC_TYPE *)pool->free;
pool->free = pool->free->next;
}
else {
struct TPoolChunk *chunk = pool->chunk;
if (UNLIKELY(chunk->size == chunk->bufsize)) {
chunk = pool->chunk = pool_alloc_chunk(_TPOOL_CHUNK_DEFAULT_NUM, chunk);
}
elem = &chunk->buf[chunk->size++];
}
return elem;
}
MAYBE_UNUSED
static TPOOL_ALLOC_TYPE *pool_elem_calloc(struct TPOOL_STRUCT *pool)
{
TPOOL_ALLOC_TYPE *elem = pool_elem_alloc(pool);
memset(elem, 0, sizeof(*elem));
return elem;
}
static void pool_elem_free(struct TPOOL_STRUCT *pool, TPOOL_ALLOC_TYPE *elem)
{
struct TPoolChunkElemFree *elem_free = (struct TPoolChunkElemFree *)elem;
elem_free->next = pool->free;
pool->free = elem_free;
}
static void pool_create(struct TPOOL_STRUCT *pool, unsigned int tot_reserve)
{
pool->chunk = pool_alloc_chunk((tot_reserve > 1) ? tot_reserve : _TPOOL_CHUNK_DEFAULT_NUM, NULL);
pool->free = NULL;
}
MAYBE_UNUSED
static void pool_clear(struct TPOOL_STRUCT *pool)
{
/* Remove all except the last chunk */
while (pool->chunk->prev) {
struct TPoolChunk *chunk_prev = pool->chunk->prev;
free(pool->chunk);
pool->chunk = chunk_prev;
}
pool->chunk->size = 0;
pool->free = NULL;
}
static void pool_destroy(struct TPOOL_STRUCT *pool)
{
struct TPoolChunk *chunk = pool->chunk;
do {
struct TPoolChunk *chunk_prev;
chunk_prev = chunk->prev;
free(chunk);
chunk = chunk_prev;
} while (chunk);
pool->chunk = NULL;
pool->free = NULL;
}
/** \} */
#undef _TPOOL_CHUNK_DEFAULT_NUM
#undef _CONCAT_AUX
#undef _CONCAT
#undef _TPOOL_PREFIX
#undef TPoolChunk
#undef TPoolChunkElemFree
#ifdef _TPOOL_CHUNK_SIZE_UNDEF
# undef TPOOL_CHUNK_SIZE
# undef _TPOOL_CHUNK_SIZE_UNDEF
#endif
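Instantiation works by defining the configuration macros and then including the header, exactly as range_tree.c below does for its Node type. A minimal sketch for a hypothetical element type (which must be at least pointer-sized), compiled as C:
#include <stdlib.h>
#include <string.h>
typedef struct MyElem { double co[3]; } MyElem;
#define TPOOL_IMPL_PREFIX  my_elem
#define TPOOL_ALLOC_TYPE   MyElem
#define TPOOL_STRUCT       ElemPool_MyElem
#include "generic_alloc_impl.h"
#undef TPOOL_IMPL_PREFIX
#undef TPOOL_ALLOC_TYPE
#undef TPOOL_STRUCT
/* The include above generates my_elem_pool_create(), my_elem_pool_clear(),
 * my_elem_pool_destroy(), my_elem_pool_elem_alloc(), my_elem_pool_elem_calloc()
 * and my_elem_pool_elem_free(). */
int main(void)
{
  struct ElemPool_MyElem pool;
  my_elem_pool_create(&pool, 128);  /* reserve room for 128 elements up front */
  MyElem *elem = my_elem_pool_elem_calloc(&pool);
  elem->co[0] = 1.0;
  my_elem_pool_elem_free(&pool, elem);
  my_elem_pool_destroy(&pool);
  return 0;
}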


@@ -1,873 +0,0 @@
/*
* Copyright (c) 2016, Campbell Barton.
*
* Licensed under the Apache License, Version 2.0 (the "Apache License")
* with the following modification; you may not use this file except in
* compliance with the Apache License and the following modification to it:
* Section 6. Trademarks. is deleted and replaced with:
*
* 6. Trademarks. This License does not grant permission to use the trade
* names, trademarks, service marks, or product names of the Licensor
* and its affiliates, except as required to comply with Section 4(c) of
* the License and to reproduce the content of the NOTICE file.
*
* You may obtain a copy of the Apache License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Apache License with the above modification is
* distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the Apache License for the specific
* language governing permissions and limitations under the Apache License.
*/
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>
#include <assert.h>
#include "range_tree.h"
typedef unsigned int uint;
/* Use binary-tree for lookups, else fallback to full search */
#define USE_BTREE
/* Use memory pool for nodes, else do individual allocations */
#define USE_TPOOL
/* Node representing a range in the RangeTreeUInt. */
typedef struct Node {
struct Node *next, *prev;
/* range (inclusive) */
uint min, max;
#ifdef USE_BTREE
/* Left leaning red-black tree, for reference implementation see:
* https://gitlab.com/ideasman42/btree-mini-py */
struct Node *left, *right;
/* RED/BLACK */
bool color;
#endif
} Node;
#ifdef USE_TPOOL
/* rt_pool_* pool allocator */
#define TPOOL_IMPL_PREFIX rt_node
#define TPOOL_ALLOC_TYPE Node
#define TPOOL_STRUCT ElemPool_Node
#include "generic_alloc_impl.h"
#undef TPOOL_IMPL_PREFIX
#undef TPOOL_ALLOC_TYPE
#undef TPOOL_STRUCT
#endif /* USE_TPOOL */
typedef struct LinkedList {
Node *first, *last;
} LinkedList;
typedef struct RangeTreeUInt {
uint range[2];
LinkedList list;
#ifdef USE_BTREE
Node *root;
#endif
#ifdef USE_TPOOL
struct ElemPool_Node epool;
#endif
} RangeTreeUInt;
/* ------------------------------------------------------------------------- */
/* List API */
static void list_push_front(LinkedList *list, Node *node)
{
if (list->first != NULL) {
node->next = list->first;
node->next->prev = node;
node->prev = NULL;
}
else {
list->last = node;
}
list->first = node;
}
static void list_push_back(LinkedList *list, Node *node)
{
if (list->first != NULL) {
node->prev = list->last;
node->prev->next = node;
node->next = NULL;
}
else {
list->first = node;
}
list->last = node;
}
static void list_push_after(LinkedList *list, Node *node_prev, Node *node_new)
{
/* node_new before node_next */
/* empty list */
if (list->first == NULL) {
list->first = node_new;
list->last = node_new;
return;
}
/* insert at head of list */
if (node_prev == NULL) {
node_new->prev = NULL;
node_new->next = list->first;
node_new->next->prev = node_new;
list->first = node_new;
return;
}
/* at end of list */
if (list->last == node_prev) {
list->last = node_new;
}
node_new->next = node_prev->next;
node_new->prev = node_prev;
node_prev->next = node_new;
if (node_new->next) {
node_new->next->prev = node_new;
}
}
static void list_push_before(LinkedList *list, Node *node_next, Node *node_new)
{
/* node_new before node_next */
/* empty list */
if (list->first == NULL) {
list->first = node_new;
list->last = node_new;
return;
}
/* insert at end of list */
if (node_next == NULL) {
node_new->prev = list->last;
node_new->next = NULL;
list->last->next = node_new;
list->last = node_new;
return;
}
/* at beginning of list */
if (list->first == node_next) {
list->first = node_new;
}
node_new->next = node_next;
node_new->prev = node_next->prev;
node_next->prev = node_new;
if (node_new->prev) {
node_new->prev->next = node_new;
}
}
static void list_remove(LinkedList *list, Node *node)
{
if (node->next != NULL) {
node->next->prev = node->prev;
}
if (node->prev != NULL) {
node->prev->next = node->next;
}
if (list->last == node) {
list->last = node->prev;
}
if (list->first == node) {
list->first = node->next;
}
}
static void list_clear(LinkedList *list)
{
list->first = NULL;
list->last = NULL;
}
/* end list API */
/* forward declarations */
static void rt_node_free(RangeTreeUInt *rt, Node *node);
#ifdef USE_BTREE
#ifdef DEBUG
static bool rb_is_balanced_root(const Node *root);
#endif
/* ------------------------------------------------------------------------- */
/* Internal BTree API
*
* Left-leaning red-black tree.
*/
/* use minimum, could use max too since nodes never overlap */
#define KEY(n) ((n)->min)
enum {
RED = 0,
BLACK = 1,
};
static bool is_red(const Node *node)
{
return (node && (node->color == RED));
}
static int key_cmp(uint key1, uint key2)
{
return (key1 == key2) ? 0 : ((key1 < key2) ? -1 : 1);
}
/* removed from the tree */
static void rb_node_invalidate(Node *node)
{
#ifdef DEBUG
node->left = NULL;
node->right = NULL;
node->color = false;
#else
(void)node;
#endif
}
static void rb_flip_color(Node *node)
{
node->color ^= 1;
node->left->color ^= 1;
node->right->color ^= 1;
}
static Node *rb_rotate_left(Node *left)
{
/* Make a right-leaning 3-node lean to the left. */
Node *right = left->right;
left->right = right->left;
right->left = left;
right->color = left->color;
left->color = RED;
return right;
}
static Node *rb_rotate_right(Node *right)
{
/* Make a left-leaning 3-node lean to the right. */
Node *left = right->left;
right->left = left->right;
left->right = right;
left->color = right->color;
right->color = RED;
return left;
}
/* Fixup colors when insert happened */
static Node *rb_fixup_insert(Node *node)
{
if (is_red(node->right) && !is_red(node->left)) {
node = rb_rotate_left(node);
}
if (is_red(node->left) && is_red(node->left->left)) {
node = rb_rotate_right(node);
}
if (is_red(node->left) && is_red(node->right)) {
rb_flip_color(node);
}
return node;
}
static Node *rb_insert_recursive(Node *node, Node *node_to_insert)
{
if (node == NULL) {
return node_to_insert;
}
const int cmp = key_cmp(KEY(node_to_insert), KEY(node));
if (cmp == 0) {
/* caller ensures no collisions */
assert(0);
}
else if (cmp == -1) {
node->left = rb_insert_recursive(node->left, node_to_insert);
}
else {
node->right = rb_insert_recursive(node->right, node_to_insert);
}
return rb_fixup_insert(node);
}
static Node *rb_insert_root(Node *root, Node *node_to_insert)
{
root = rb_insert_recursive(root, node_to_insert);
root->color = BLACK;
return root;
}
static Node *rb_move_red_to_left(Node *node)
{
/* Assuming that h is red and both h->left and h->left->left
* are black, make h->left or one of its children red.
*/
rb_flip_color(node);
if (node->right && is_red(node->right->left)) {
node->right = rb_rotate_right(node->right);
node = rb_rotate_left(node);
rb_flip_color(node);
}
return node;
}
static Node *rb_move_red_to_right(Node *node)
{
/* Assuming that h is red and both h->right and h->right->left
* are black, make h->right or one of its children red.
*/
rb_flip_color(node);
if (node->left && is_red(node->left->left)) {
node = rb_rotate_right(node);
rb_flip_color(node);
}
return node;
}
/* Fixup colors when remove happened */
static Node *rb_fixup_remove(Node *node)
{
if (is_red(node->right)) {
node = rb_rotate_left(node);
}
if (is_red(node->left) && is_red(node->left->left)) {
node = rb_rotate_right(node);
}
if (is_red(node->left) && is_red(node->right)) {
rb_flip_color(node);
}
return node;
}
static Node *rb_pop_min_recursive(Node *node, Node **r_node_pop)
{
if (node == NULL) {
return NULL;
}
if (node->left == NULL) {
rb_node_invalidate(node);
*r_node_pop = node;
return NULL;
}
if ((!is_red(node->left)) && (!is_red(node->left->left))) {
node = rb_move_red_to_left(node);
}
node->left = rb_pop_min_recursive(node->left, r_node_pop);
return rb_fixup_remove(node);
}
static Node *rb_remove_recursive(Node *node, const Node *node_to_remove)
{
if (node == NULL) {
return NULL;
}
if (key_cmp(KEY(node_to_remove), KEY(node)) == -1) {
if (node->left != NULL) {
if ((!is_red(node->left)) && (!is_red(node->left->left))) {
node = rb_move_red_to_left(node);
}
}
node->left = rb_remove_recursive(node->left, node_to_remove);
}
else {
if (is_red(node->left)) {
node = rb_rotate_right(node);
}
if ((node == node_to_remove) && (node->right == NULL)) {
rb_node_invalidate(node);
return NULL;
}
assert(node->right != NULL);
if ((!is_red(node->right)) && (!is_red(node->right->left))) {
node = rb_move_red_to_right(node);
}
if (node == node_to_remove) {
/* minor improvement over original method:
* no need to double lookup min */
Node *node_free; /* will always be set */
node->right = rb_pop_min_recursive(node->right, &node_free);
node_free->left = node->left;
node_free->right = node->right;
node_free->color = node->color;
rb_node_invalidate(node);
node = node_free;
}
else {
node->right = rb_remove_recursive(node->right, node_to_remove);
}
}
return rb_fixup_remove(node);
}
static Node *rb_btree_remove(Node *root, const Node *node_to_remove)
{
root = rb_remove_recursive(root, node_to_remove);
if (root != NULL) {
root->color = BLACK;
}
return root;
}
/*
* Returns the node closest to and including 'key',
* excluding anything below.
*/
static Node *rb_get_or_upper_recursive(Node *n, const uint key)
{
if (n == NULL) {
return NULL;
}
const int cmp_upper = key_cmp(KEY(n), key);
if (cmp_upper == 0) {
return n; // exact match
}
else if (cmp_upper == 1) {
assert(KEY(n) >= key);
Node *n_test = rb_get_or_upper_recursive(n->left, key);
return n_test ? n_test : n;
}
else { // cmp_upper == -1
return rb_get_or_upper_recursive(n->right, key);
}
}
/*
* Returns the node closest to and including 'key',
* excluding anything above.
*/
static Node *rb_get_or_lower_recursive(Node *n, const uint key)
{
if (n == NULL) {
return NULL;
}
const int cmp_lower = key_cmp(KEY(n), key);
if (cmp_lower == 0) {
return n; // exact match
}
else if (cmp_lower == -1) {
assert(KEY(n) <= key);
Node *n_test = rb_get_or_lower_recursive(n->right, key);
return n_test ? n_test : n;
}
else { // cmp_lower == 1
return rb_get_or_lower_recursive(n->left, key);
}
}
#ifdef DEBUG
static bool rb_is_balanced_recursive(const Node *node, int black)
{
// Does every path from the root to a leaf have the given number
// of black links?
if (node == NULL) {
return black == 0;
}
if (!is_red(node)) {
black--;
}
return rb_is_balanced_recursive(node->left, black) &&
rb_is_balanced_recursive(node->right, black);
}
static bool rb_is_balanced_root(const Node *root)
{
// Do all paths from root to leaf have same number of black edges?
int black = 0; // number of black links on path from root to min
const Node *node = root;
while (node != NULL) {
if (!is_red(node)) {
black++;
}
node = node->left;
}
return rb_is_balanced_recursive(root, black);
}
#endif // DEBUG
/* End BTree API */
#endif // USE_BTREE
/* ------------------------------------------------------------------------- */
/* Internal RangeTreeUInt API */
#ifdef _WIN32
#define inline __inline
#endif
static inline Node *rt_node_alloc(RangeTreeUInt *rt)
{
#ifdef USE_TPOOL
return rt_node_pool_elem_alloc(&rt->epool);
#else
(void)rt;
return malloc(sizeof(Node));
#endif
}
static Node *rt_node_new(RangeTreeUInt *rt, uint min, uint max)
{
Node *node = rt_node_alloc(rt);
assert(min <= max);
node->prev = NULL;
node->next = NULL;
node->min = min;
node->max = max;
#ifdef USE_BTREE
node->left = NULL;
node->right = NULL;
#endif
return node;
}
static void rt_node_free(RangeTreeUInt *rt, Node *node)
{
#ifdef USE_TPOOL
rt_node_pool_elem_free(&rt->epool, node);
#else
(void)rt;
free(node);
#endif
}
#ifdef USE_BTREE
static void rt_btree_insert(RangeTreeUInt *rt, Node *node)
{
node->color = RED;
node->left = NULL;
node->right = NULL;
rt->root = rb_insert_root(rt->root, node);
}
#endif
static void rt_node_add_back(RangeTreeUInt *rt, Node *node)
{
list_push_back(&rt->list, node);
#ifdef USE_BTREE
rt_btree_insert(rt, node);
#endif
}
static void rt_node_add_front(RangeTreeUInt *rt, Node *node)
{
list_push_front(&rt->list, node);
#ifdef USE_BTREE
rt_btree_insert(rt, node);
#endif
}
static void rt_node_add_before(RangeTreeUInt *rt, Node *node_next, Node *node)
{
list_push_before(&rt->list, node_next, node);
#ifdef USE_BTREE
rt_btree_insert(rt, node);
#endif
}
static void rt_node_add_after(RangeTreeUInt *rt, Node *node_prev, Node *node)
{
list_push_after(&rt->list, node_prev, node);
#ifdef USE_BTREE
rt_btree_insert(rt, node);
#endif
}
static void rt_node_remove(RangeTreeUInt *rt, Node *node)
{
list_remove(&rt->list, node);
#ifdef USE_BTREE
rt->root = rb_btree_remove(rt->root, node);
#endif
rt_node_free(rt, node);
}
static Node *rt_find_node_from_value(RangeTreeUInt *rt, const uint value)
{
#ifdef USE_BTREE
Node *node = rb_get_or_lower_recursive(rt->root, value);
if (node != NULL) {
if ((value >= node->min) && (value <= node->max)) {
return node;
}
}
return NULL;
#else
for (Node *node = rt->list.first; node; node = node->next) {
if ((value >= node->min) && (value <= node->max)) {
return node;
}
}
return NULL;
#endif // USE_BTREE
}
static void rt_find_node_pair_around_value(RangeTreeUInt *rt, const uint value,
Node **r_node_prev, Node **r_node_next)
{
if (value < rt->list.first->min) {
*r_node_prev = NULL;
*r_node_next = rt->list.first;
return;
}
else if (value > rt->list.last->max) {
*r_node_prev = rt->list.last;
*r_node_next = NULL;
return;
}
else {
#ifdef USE_BTREE
Node *node_next = rb_get_or_upper_recursive(rt->root, value);
if (node_next != NULL) {
Node *node_prev = node_next->prev;
if ((node_prev->max < value) && (value < node_next->min)) {
*r_node_prev = node_prev;
*r_node_next = node_next;
return;
}
}
#else
Node *node_prev = rt->list.first;
Node *node_next;
while ((node_next = node_prev->next)) {
if ((node_prev->max < value) && (value < node_next->min)) {
*r_node_prev = node_prev;
*r_node_next = node_next;
return;
}
node_prev = node_next;
}
#endif // USE_BTREE
}
*r_node_prev = NULL;
*r_node_next = NULL;
}
/* ------------------------------------------------------------------------- */
/* Public API */
static RangeTreeUInt *rt_create_empty(uint min, uint max)
{
RangeTreeUInt *rt = malloc(sizeof(*rt));
rt->range[0] = min;
rt->range[1] = max;
list_clear(&rt->list);
#ifdef USE_BTREE
rt->root = NULL;
#endif
#ifdef USE_TPOOL
rt_node_pool_create(&rt->epool, 512);
#endif
return rt;
}
RangeTreeUInt *range_tree_uint_alloc(uint min, uint max)
{
RangeTreeUInt *rt = rt_create_empty(min, max);
Node *node = rt_node_new(rt, min, max);
rt_node_add_front(rt, node);
return rt;
}
void range_tree_uint_free(RangeTreeUInt *rt)
{
#ifdef DEBUG
#ifdef USE_BTREE
assert(rb_is_balanced_root(rt->root));
#endif
#endif
#ifdef USE_TPOOL
rt_node_pool_destroy(&rt->epool);
#else
for (Node *node = rt->list.first, *node_next; node; node = node_next) {
node_next = node->next;
rt_node_free(rt, node);
}
#endif
free(rt);
}
#ifdef USE_BTREE
static Node *rt_copy_recursive(RangeTreeUInt *rt_dst, const Node *node_src)
{
if (node_src == NULL) {
return NULL;
}
Node *node_dst = rt_node_alloc(rt_dst);
*node_dst = *node_src;
node_dst->left = rt_copy_recursive(rt_dst, node_dst->left);
list_push_back(&rt_dst->list, node_dst);
node_dst->right = rt_copy_recursive(rt_dst, node_dst->right);
return node_dst;
}
#endif // USE_BTREE
RangeTreeUInt *range_tree_uint_copy(const RangeTreeUInt *rt_src)
{
RangeTreeUInt *rt_dst = rt_create_empty(rt_src->range[0], rt_src->range[1]);
#ifdef USE_BTREE
rt_dst->root = rt_copy_recursive(rt_dst, rt_src->root);
#else
for (Node *node_src = rt_src->list.first; node_src; node_src = node_src->next) {
Node *node_dst = rt_node_alloc(rt_dst);
*node_dst = *node_src;
list_push_back(&rt_dst->list, node_dst);
}
#endif
return rt_dst;
}
/**
* Return true if the tree has the value (not taken).
*/
bool range_tree_uint_has(RangeTreeUInt *rt, const uint value)
{
assert(value >= rt->range[0] && value <= rt->range[1]);
Node *node = rt_find_node_from_value(rt, value);
return (node != NULL);
}
static void range_tree_uint_take_impl(RangeTreeUInt *rt, const uint value, Node *node)
{
assert(node == rt_find_node_from_value(rt, value));
if (node->min == value) {
if (node->max != value) {
node->min += 1;
}
else {
assert(node->min == node->max);
rt_node_remove(rt, node);
}
}
else if (node->max == value) {
node->max -= 1;
}
else {
Node *node_next = rt_node_new(rt, value + 1, node->max);
node->max = value - 1;
rt_node_add_after(rt, node, node_next);
}
}
void range_tree_uint_take(RangeTreeUInt *rt, const uint value)
{
Node *node = rt_find_node_from_value(rt, value);
assert(node != NULL);
range_tree_uint_take_impl(rt, value, node);
}
bool range_tree_uint_retake(RangeTreeUInt *rt, const uint value)
{
Node *node = rt_find_node_from_value(rt, value);
if (node != NULL) {
range_tree_uint_take_impl(rt, value, node);
return true;
}
else {
return false;
}
}
uint range_tree_uint_take_any(RangeTreeUInt *rt)
{
Node *node = rt->list.first;
uint value = node->min;
if (value == node->max) {
rt_node_remove(rt, node);
}
else {
node->min += 1;
}
return value;
}
void range_tree_uint_release(RangeTreeUInt *rt, const uint value)
{
bool touch_prev, touch_next;
Node *node_prev, *node_next;
if (rt->list.first != NULL) {
rt_find_node_pair_around_value(rt, value, &node_prev, &node_next);
/* the value must have been already taken */
assert(node_prev || node_next);
/* Cases:
* 1) fill the gap between prev & next (two spans into one span).
* 2) touching prev, (grow node_prev->max up one).
* 3) touching next, (grow node_next->min down one).
* 4) touching neither, add a new segment. */
touch_prev = (node_prev != NULL && node_prev->max + 1 == value);
touch_next = (node_next != NULL && node_next->min - 1 == value);
}
else {
// we could handle this case (4) inline,
// since it's not a common case - use regular logic.
node_prev = node_next = NULL;
touch_prev = false;
touch_next = false;
}
if (touch_prev && touch_next) { // 1)
node_prev->max = node_next->max;
rt_node_remove(rt, node_next);
}
else if (touch_prev) { // 2)
assert(node_prev->max + 1 == value);
node_prev->max = value;
}
else if (touch_next) { // 3)
assert(node_next->min - 1 == value);
node_next->min = value;
}
else { // 4)
Node *node_new = rt_node_new(rt, value, value);
if (node_prev != NULL) {
rt_node_add_after(rt, node_prev, node_new);
}
else if (node_next != NULL) {
rt_node_add_before(rt, node_next, node_new);
}
else {
assert(rt->list.first == NULL);
rt_node_add_back(rt, node_new);
}
}
}
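A concrete illustration of the four release() cases above (values invented): with free ranges [0..4] and [8..10], release(5) only touches the previous span and grows it to [0..5] (case 2), release(7) only touches the next span and grows it to [7..10] (case 3), a further release(6) touches both and merges everything into a single [0..10] node (case 1), while releasing a value adjacent to nothing, say 20 in a wider tree, inserts a new single-value node (case 4).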


@@ -1,48 +0,0 @@
/*
* Copyright (c) 2016, Campbell Barton.
*
* Licensed under the Apache License, Version 2.0 (the "Apache License")
* with the following modification; you may not use this file except in
* compliance with the Apache License and the following modification to it:
* Section 6. Trademarks. is deleted and replaced with:
*
* 6. Trademarks. This License does not grant permission to use the trade
* names, trademarks, service marks, or product names of the Licensor
* and its affiliates, except as required to comply with Section 4(c) of
* the License and to reproduce the content of the NOTICE file.
*
* You may obtain a copy of the Apache License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Apache License with the above modification is
* distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the Apache License for the specific
* language governing permissions and limitations under the Apache License.
*/
#ifndef __RANGE_TREE_H__
#define __RANGE_TREE_H__
#ifdef __cplusplus
extern "C" {
#endif
typedef struct RangeTreeUInt RangeTreeUInt;
struct RangeTreeUInt *range_tree_uint_alloc(unsigned int min, unsigned int max);
void range_tree_uint_free(struct RangeTreeUInt *rt);
struct RangeTreeUInt *range_tree_uint_copy(const struct RangeTreeUInt *rt_src);
bool range_tree_uint_has(struct RangeTreeUInt *rt, const unsigned int value);
void range_tree_uint_take(struct RangeTreeUInt *rt, const unsigned int value);
bool range_tree_uint_retake(struct RangeTreeUInt *rt, const unsigned int value);
unsigned int range_tree_uint_take_any(struct RangeTreeUInt *rt);
void range_tree_uint_release(struct RangeTreeUInt *rt, const unsigned int value);
#ifdef __cplusplus
}
#endif
#endif /* __RANGE_TREE_H__ */
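A small sketch of the C API declared above (illustrative only):
#include <stdbool.h>
#include <stdio.h>
#include "range_tree.h"
int main(void)
{
  struct RangeTreeUInt *rt = range_tree_uint_alloc(0, 1023);
  unsigned int id = range_tree_uint_take_any(rt);         /* takes 0, the lowest free value */
  range_tree_uint_take(rt, 42);                           /* reserve a specific value */
  printf("42 free? %d\n", range_tree_uint_has(rt, 42));   /* 0: it has been taken */
  range_tree_uint_release(rt, 42);                        /* return it, merging adjacent ranges */
  range_tree_uint_release(rt, id);
  range_tree_uint_free(rt);
  return 0;
}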

extern/rangetree/range_tree.hh vendored Normal file

@@ -0,0 +1,251 @@
/* This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
*/
#include <cassert>
#include <climits>
#include <iostream>
#include <set>
#ifndef RANGE_TREE_DEBUG_PRINT_FUNCTION
# define RANGE_TREE_DEBUG_PRINT_FUNCTION 0
#endif
template <typename T>
struct RangeTree {
struct Range {
Range(T min_, T max_)
: min(min_), max(max_), single(min_ == max_) {
assert(min_ <= max_);
}
Range(T t)
: min(t), max(t), single(true)
{}
Range& operator=(const Range& v) {
*this = v;
return *this;
}
bool operator<(const Range& v) const {
return max < v.min;
}
const T min;
const T max;
const bool single;
};
typedef std::set<Range> Tree;
typedef typename Tree::iterator TreeIter;
typedef typename Tree::reverse_iterator TreeIterReverse;
typedef typename Tree::const_iterator TreeIterConst;
/* Initialize with a single range from 'min' to 'max', inclusive. */
RangeTree(T min, T max) {
tree.insert(Range(min, max));
}
/* Initialize with a single range from 0 to 'max', inclusive. */
RangeTree(T max) {
tree.insert(Range(0, max));
}
RangeTree(const RangeTree<T>& src) {
tree = src.tree;
}
/* Remove 't' from the associated range in the tree. Precondition:
a range including 't' must exist in the tree. */
void take(T t) {
#if RANGE_TREE_DEBUG_PRINT_FUNCTION
std::cout << __func__ << "(" << t << ")\n";
#endif
/* Find the range that includes 't' and its neighbors */
TreeIter iter = tree.find(Range(t));
assert(iter != tree.end());
Range cur = *iter;
/* Remove the original range (note that this does not
invalidate the prev/next iterators) */
tree.erase(iter);
/* Construct two new ranges that together cover the original
range, except for 't' */
if (t > cur.min)
tree.insert(Range(cur.min, t - 1));
if (t + 1 <= cur.max)
tree.insert(Range(t + 1, cur.max));
}
/* clone of 'take' that checks if the item exists */
bool retake(T t) {
#if RANGE_TREE_DEBUG_PRINT_FUNCTION
std::cout << __func__ << "(" << t << ")\n";
#endif
TreeIter iter = tree.find(Range(t));
if (iter == tree.end()) {
return false;
}
Range cur = *iter;
tree.erase(iter);
if (t > cur.min)
tree.insert(Range(cur.min, t - 1));
if (t + 1 <= cur.max)
tree.insert(Range(t + 1, cur.max));
return true;
}
/* Take the first element out of the first range in the
tree. Precondition: tree must not be empty. */
T take_any() {
#if RANGE_TREE_DEBUG_PRINT_FUNCTION
std::cout << __func__ << "()\n";
#endif
/* Find the first element */
TreeIter iter = tree.begin();
assert(iter != tree.end());
T first = iter->min;
/* Take the first element */
take(first);
return first;
}
/* Return 't' to the tree, either expanding/merging existing
ranges or adding a range to cover it. Precondition: 't' cannot
be in an existing range. */
void release(T t) {
#if RANGE_TREE_DEBUG_PRINT_FUNCTION
std::cout << __func__ << "(" << t << ")\n";
#endif
/* TODO: these cases should be simplified/unified */
TreeIter right = tree.upper_bound(t);
if (right != tree.end()) {
TreeIter left = right;
if (left != tree.begin())
--left;
if (left == right) {
/* 't' lies before any existing ranges */
if (t + 1 == left->min) {
/* 't' lies directly before the first range,
resize and replace that range */
const Range r(t, left->max);
tree.erase(left);
tree.insert(r);
}
else {
/* There's a gap between 't' and the first range,
add a new range */
tree.insert(Range(t));
}
}
else if ((left->max + 1 == t) &&
(t + 1 == right->min)) {
/* 't' fills a hole. Remove left and right, and insert a
new range that covers both. */
const Range r(left->min, right->max);
tree.erase(left);
tree.erase(right);
tree.insert(r);
}
else if (left->max + 1 == t) {
/* 't' lies directly after 'left' range, resize and
replace that range */
const Range r(left->min, t);
tree.erase(left);
tree.insert(r);
}
else if (t + 1 == right->min) {
/* 't' lies directly before 'right' range, resize and
replace that range */
const Range r(t, right->max);
tree.erase(right);
tree.insert(r);
}
else {
/* There's a gap between 't' and both adjacent ranges,
add a new range */
tree.insert(Range(t));
}
}
else {
/* 't' lies after any existing ranges */
right = tree.end();
right--;
if (right->max + 1 == t) {
/* 't' lies directly after last range, resize and
replace that range */
const Range r(right->min, t);
tree.erase(right);
tree.insert(r);
}
else {
/* There's a gap between the last range and 't', add a
new range */
tree.insert(Range(t));
}
}
}
bool has(T t) const {
TreeIterConst iter = tree.find(Range(t));
return (iter != tree.end()) && (t <= iter->max);
}
bool has_range(T min, T max) const {
TreeIterConst iter = tree.find(Range(min, max));
return (iter != tree.end()) && (min == iter->min && max == iter->max);
}
bool empty() const {
return tree.empty();
}
int size() const {
return tree.size();
}
void print() const {
std::cout << "RangeTree:\n";
for (TreeIterConst iter = tree.begin(); iter != tree.end(); ++iter) {
const Range& r = *iter;
if (r.single)
std::cout << " [" << r.min << "]\n";
else
std::cout << " [" << r.min << ", " << r.max << "]\n";
}
if (empty())
std::cout << " <empty>";
std::cout << "\n";
}
unsigned int allocation_lower_bound() const {
return tree.size() * sizeof(Range);
}
private:
Tree tree;
};
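One detail worth spelling out: Range::operator< orders ranges by max < v.min, so two ranges compare as equivalent for the std::set exactly when they overlap, which is what lets tree.find(Range(t)) locate the stored range containing t. A tiny illustration (values invented):
#include <cassert>
#include "range_tree.hh"
int main()
{
  RangeTree<unsigned> rt(0, 10);
  rt.take(5);
  rt.take(6);
  rt.take(7);                   // free ranges are now [0..4] and [8..10]
  assert(rt.has(3));            // find(Range(3)) hits [0..4]: neither range compares less
  assert(!rt.has(6));           // find(Range(6)) misses: 6 falls in the gap
  assert(rt.has_range(8, 10));  // exact match of a stored range
  return 0;
}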

extern/rangetree/range_tree_c_api.cc vendored Normal file

@@ -0,0 +1,92 @@
/* This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
*/
#include "range_tree.hh"
/* Give RangeTreeUInt a real type rather than the opaque struct type
defined for external use. */
#define RANGE_TREE_C_API_INTERNAL
typedef RangeTree<unsigned> RangeTreeUInt;
#include "range_tree_c_api.h"
RangeTreeUInt *range_tree_uint_alloc(unsigned min, unsigned max)
{
return new RangeTreeUInt(min, max);
}
RangeTreeUInt *range_tree_uint_copy(RangeTreeUInt *src)
{
return new RangeTreeUInt(*src);
}
void range_tree_uint_free(RangeTreeUInt *rt)
{
delete rt;
}
void range_tree_uint_take(RangeTreeUInt *rt, unsigned v)
{
rt->take(v);
}
bool range_tree_uint_retake(RangeTreeUInt *rt, unsigned v)
{
return rt->retake(v);
}
unsigned range_tree_uint_take_any(RangeTreeUInt *rt)
{
return rt->take_any();
}
void range_tree_uint_release(RangeTreeUInt *rt, unsigned v)
{
rt->release(v);
}
bool range_tree_uint_has(const RangeTreeUInt *rt, unsigned v)
{
return rt->has(v);
}
bool range_tree_uint_has_range(
const RangeTreeUInt *rt,
unsigned vmin,
unsigned vmax)
{
return rt->has_range(vmin, vmax);
}
bool range_tree_uint_empty(const RangeTreeUInt *rt)
{
return rt->empty();
}
unsigned range_tree_uint_size(const RangeTreeUInt *rt)
{
return rt->size();
}
void range_tree_uint_print(const RangeTreeUInt *rt)
{
rt->print();
}
unsigned int range_tree_uint_allocation_lower_bound(const RangeTreeUInt *rt)
{
return rt->allocation_lower_bound();
}

62
extern/rangetree/range_tree_c_api.h vendored Normal file

@@ -0,0 +1,62 @@
/* This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
*/
#ifndef __RANGE_TREE_C_API_H__
#define __RANGE_TREE_C_API_H__
#ifdef __cplusplus
extern "C" {
#endif
/* Simple C-accessible wrapper for RangeTree<unsigned> */
#ifndef RANGE_TREE_C_API_INTERNAL
typedef struct RangeTreeUInt RangeTreeUInt;
#endif
RangeTreeUInt *range_tree_uint_alloc(unsigned min, unsigned max);
RangeTreeUInt *range_tree_uint_copy(RangeTreeUInt *src);
void range_tree_uint_free(RangeTreeUInt *rt);
void range_tree_uint_take(RangeTreeUInt *rt, unsigned v);
bool range_tree_uint_retake(RangeTreeUInt *rt, unsigned v);
unsigned range_tree_uint_take_any(RangeTreeUInt *rt);
void range_tree_uint_release(RangeTreeUInt *rt, unsigned v);
bool range_tree_uint_has(const RangeTreeUInt *rt, unsigned v);
bool range_tree_uint_has_range(
const RangeTreeUInt *rt,
unsigned vmin, unsigned vmax);
bool range_tree_uint_empty(const RangeTreeUInt *rt);
unsigned range_tree_uint_size(const RangeTreeUInt *rt);
void range_tree_uint_print(const RangeTreeUInt *rt);
unsigned int range_tree_uint_allocation_lower_bound(const RangeTreeUInt *rt);
#ifdef __cplusplus
}
#endif
#endif /* __RANGE_TREE_C_API_H__ */
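
A short sketch of how this wrapper is meant to be called from C code, illustrative only; it uses nothing beyond the functions declared in this header.

#include <stdio.h>
#include "range_tree_c_api.h"

static void id_allocator_demo(void)   /* hypothetical demo function */
{
    RangeTreeUInt *rt = range_tree_uint_alloc(0, 1023);

    unsigned id = range_tree_uint_take_any(rt);  /* lowest free id */
    range_tree_uint_take(rt, 100);               /* reserve a specific id */

    if (!range_tree_uint_has(rt, 100)) {
        printf("id 100 is now in use\n");
    }

    range_tree_uint_release(rt, 100);            /* released ids re-merge into ranges */
    range_tree_uint_release(rt, id);

    printf("ranges tracked: %u\n", range_tree_uint_size(rt));
    range_tree_uint_free(rt);
}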

View File

@@ -77,40 +77,32 @@
/* Function prototypes. */
#if (LG_SIZEOF_PTR == 8 || LG_SIZEOF_INT == 8)
ATOMIC_INLINE uint64_t atomic_add_and_fetch_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_sub_and_fetch_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_fetch_and_add_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_fetch_and_sub_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x);
ATOMIC_INLINE uint64_t atomic_cas_uint64(uint64_t *v, uint64_t old, uint64_t _new);
#endif
ATOMIC_INLINE uint32_t atomic_add_and_fetch_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_sub_and_fetch_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_cas_uint32(uint32_t *v, uint32_t old, uint32_t _new);
ATOMIC_INLINE uint32_t atomic_fetch_and_add_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_fetch_and_or_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint32_t atomic_fetch_and_and_uint32(uint32_t *p, uint32_t x);
ATOMIC_INLINE uint8_t atomic_fetch_and_or_uint8(uint8_t *p, uint8_t b);
ATOMIC_INLINE uint8_t atomic_fetch_and_and_uint8(uint8_t *p, uint8_t b);
ATOMIC_INLINE size_t atomic_add_and_fetch_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_sub_and_fetch_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_fetch_and_add_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_fetch_and_sub_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_add_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_sub_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_cas_z(size_t *v, size_t old, size_t _new);
ATOMIC_INLINE unsigned atomic_add_and_fetch_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_sub_and_fetch_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_fetch_and_add_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_fetch_and_sub_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_add_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_sub_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_cas_u(unsigned *v, unsigned old, unsigned _new);
/* WARNING! Float 'atomics' are really faked ones, those are actually closer to some kind of spinlock-sync'ed operation,
* which means they are only efficient if collisions are highly unlikely (i.e. if probability of two threads
* working on the same pointer at the same time is very low). */
ATOMIC_INLINE float atomic_add_and_fetch_fl(float *p, const float x);
ATOMIC_INLINE float atomic_add_fl(float *p, const float x);
/******************************************************************************/
/* Include system-dependent implementations. */
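
The prototype changes above collapse the atomic_*_and_fetch_* and atomic_fetch_and_* families into single atomic_add_* / atomic_sub_* entry points. Judging by the MSVC implementation further down (InterlockedExchangeAdd(p, x) + x), the surviving functions return the updated value. A small sketch in terms of std::atomic, not part of the patch, shows the two return conventions involved:

#include <atomic>
#include <cstdint>

static std::atomic<uint32_t> counter{0};

/* Convention kept by the new atomic_add_uint32(): return the updated value. */
uint32_t add_and_fetch_style(uint32_t x)
{
    return counter.fetch_add(x) + x;   /* fetch_add() itself returns the old value */
}

/* Convention of the removed atomic_fetch_and_add_uint32(): return the old value. */
uint32_t fetch_and_add_style(uint32_t x)
{
    return counter.fetch_add(x);
}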

View File

@@ -56,47 +56,25 @@
/******************************************************************************/
/* size_t operations. */
ATOMIC_INLINE size_t atomic_add_and_fetch_z(size_t *p, size_t x)
ATOMIC_INLINE size_t atomic_add_z(size_t *p, size_t x)
{
assert(sizeof(size_t) == LG_SIZEOF_PTR);
#if (LG_SIZEOF_PTR == 8)
return (size_t)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)x);
return (size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 4)
return (size_t)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)x);
return (size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE size_t atomic_sub_and_fetch_z(size_t *p, size_t x)
ATOMIC_INLINE size_t atomic_sub_z(size_t *p, size_t x)
{
assert(sizeof(size_t) == LG_SIZEOF_PTR);
#if (LG_SIZEOF_PTR == 8)
return (size_t)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
return (size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_PTR == 4)
return (size_t)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
ATOMIC_INLINE size_t atomic_fetch_and_add_z(size_t *p, size_t x)
{
assert(sizeof(size_t) == LG_SIZEOF_PTR);
#if (LG_SIZEOF_PTR == 8)
return (size_t)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_PTR == 4)
return (size_t)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE size_t atomic_fetch_and_sub_z(size_t *p, size_t x)
{
assert(sizeof(size_t) == LG_SIZEOF_PTR);
#if (LG_SIZEOF_PTR == 8)
return (size_t)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_PTR == 4)
return (size_t)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
return (size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
@@ -113,47 +91,25 @@ ATOMIC_INLINE size_t atomic_cas_z(size_t *v, size_t old, size_t _new)
/******************************************************************************/
/* unsigned operations. */
ATOMIC_INLINE unsigned atomic_add_and_fetch_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned atomic_add_u(unsigned *p, unsigned x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)x);
return (unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)x);
return (unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE unsigned atomic_sub_and_fetch_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned atomic_sub_u(unsigned *p, unsigned x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
return (unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
ATOMIC_INLINE unsigned atomic_fetch_and_add_u(unsigned *p, unsigned x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE unsigned atomic_fetch_and_sub_u(unsigned *p, unsigned x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
return (unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
@@ -171,7 +127,7 @@ ATOMIC_INLINE unsigned atomic_cas_u(unsigned *v, unsigned old, unsigned _new)
/******************************************************************************/
/* float operations. */
ATOMIC_INLINE float atomic_add_and_fetch_fl(float *p, const float x)
ATOMIC_INLINE float atomic_add_fl(float *p, const float x)
{
assert(sizeof(float) == sizeof(uint32_t));
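
The body of atomic_add_fl() is cut off by this hunk. The warning in the prototypes header describes float "atomics" as faked, spinlock-like operations; the usual way to get that behaviour is a compare-and-swap loop over the float's bit pattern, which the patch presumably builds on its own atomic_cas_uint32(). The following sketch shows that common pattern with std::atomic and is an assumption, not the patch's exact code:

#include <atomic>
#include <cstdint>
#include <cstring>

/* Add 'x' to the float whose bits live in '*bits', returning the new value. */
float atomic_add_fl_sketch(std::atomic<uint32_t> *bits, const float x)
{
    uint32_t expected = bits->load();
    uint32_t desired;
    float oldval, newval;

    do {
        std::memcpy(&oldval, &expected, sizeof(oldval));
        newval = oldval + x;
        std::memcpy(&desired, &newval, sizeof(desired));
        /* compare_exchange_weak() refreshes 'expected' on failure, so the loop
           simply retries with the bit pattern another thread just wrote. */
    } while (!bits->compare_exchange_weak(expected, desired));

    return newval;
}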

View File

@@ -43,12 +43,12 @@
/******************************************************************************/
/* 64-bit operations. */
#if (LG_SIZEOF_PTR == 8 || LG_SIZEOF_INT == 8)
ATOMIC_INLINE uint64_t atomic_add_and_fetch_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x)
{
return InterlockedExchangeAdd64((int64_t *)p, (int64_t)x) + x;
}
ATOMIC_INLINE uint64_t atomic_sub_and_fetch_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return InterlockedExchangeAdd64((int64_t *)p, -((int64_t)x)) - x;
}
@@ -57,26 +57,16 @@ ATOMIC_INLINE uint64_t atomic_cas_uint64(uint64_t *v, uint64_t old, uint64_t _ne
{
return InterlockedCompareExchange64((int64_t *)v, _new, old);
}
ATOMIC_INLINE uint64_t atomic_fetch_and_add_uint64(uint64_t *p, uint64_t x)
{
return InterlockedExchangeAdd64((int64_t *)p, (int64_t)x);
}
ATOMIC_INLINE uint64_t atomic_fetch_and_sub_uint64(uint64_t *p, uint64_t x)
{
return InterlockedExchangeAdd64((int64_t *)p, -((int64_t)x));
}
#endif
/******************************************************************************/
/* 32-bit operations. */
ATOMIC_INLINE uint32_t atomic_add_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x)
{
return InterlockedExchangeAdd(p, x) + x;
}
ATOMIC_INLINE uint32_t atomic_sub_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return InterlockedExchangeAdd(p, -((int32_t)x)) - x;
}
@@ -91,16 +81,6 @@ ATOMIC_INLINE uint32_t atomic_fetch_and_add_uint32(uint32_t *p, uint32_t x)
return InterlockedExchangeAdd(p, x);
}
ATOMIC_INLINE uint32_t atomic_fetch_and_or_uint32(uint32_t *p, uint32_t x)
{
return InterlockedOr((long *)p, x);
}
ATOMIC_INLINE uint32_t atomic_fetch_and_and_uint32(uint32_t *p, uint32_t x)
{
return InterlockedAnd((long *)p, x);
}
/******************************************************************************/
/* 8-bit operations. */

View File

@@ -58,32 +58,22 @@
/* 64-bit operations. */
#if (LG_SIZEOF_PTR == 8 || LG_SIZEOF_INT == 8)
# if (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8) || defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
ATOMIC_INLINE uint64_t atomic_add_and_fetch_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x)
{
return __sync_add_and_fetch(p, x);
}
ATOMIC_INLINE uint64_t atomic_sub_and_fetch_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x)
{
return __sync_sub_and_fetch(p, x);
}
ATOMIC_INLINE uint64_t atomic_fetch_and_add_uint64(uint64_t *p, uint64_t x)
{
return __sync_fetch_and_add(p, x);
}
ATOMIC_INLINE uint64_t atomic_fetch_and_sub_uint64(uint64_t *p, uint64_t x)
{
return __sync_fetch_and_sub(p, x);
}
ATOMIC_INLINE uint64_t atomic_cas_uint64(uint64_t *v, uint64_t old, uint64_t _new)
{
return __sync_val_compare_and_swap(v, old, _new);
}
# elif (defined(__amd64__) || defined(__x86_64__))
ATOMIC_INLINE uint64_t atomic_fetch_and_add_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x)
{
asm volatile (
"lock; xaddq %0, %1;"
@@ -93,7 +83,7 @@ ATOMIC_INLINE uint64_t atomic_fetch_and_add_uint64(uint64_t *p, uint64_t x)
return x;
}
ATOMIC_INLINE uint64_t atomic_fetch_and_sub_uint64(uint64_t *p, uint64_t x)
ATOMIC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x)
{
x = (uint64_t)(-(int64_t)x);
asm volatile (
@@ -104,16 +94,6 @@ ATOMIC_INLINE uint64_t atomic_fetch_and_sub_uint64(uint64_t *p, uint64_t x)
return x;
}
ATOMIC_INLINE uint64_t atomic_add_and_fetch_uint64(uint64_t *p, uint64_t x)
{
return atomic_fetch_and_add_uint64(p, x) + x;
}
ATOMIC_INLINE uint64_t atomic_sub_and_fetch_uint64(uint64_t *p, uint64_t x)
{
return atomic_fetch_and_sub_uint64(p, x) - x;
}
ATOMIC_INLINE uint64_t atomic_cas_uint64(uint64_t *v, uint64_t old, uint64_t _new)
{
uint64_t ret;
@@ -132,12 +112,12 @@ ATOMIC_INLINE uint64_t atomic_cas_uint64(uint64_t *v, uint64_t old, uint64_t _ne
/******************************************************************************/
/* 32-bit operations. */
#if (defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4) || defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
ATOMIC_INLINE uint32_t atomic_add_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x)
{
return __sync_add_and_fetch(p, x);
}
ATOMIC_INLINE uint32_t atomic_sub_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x)
{
return __sync_sub_and_fetch(p, x);
}
@@ -147,7 +127,7 @@ ATOMIC_INLINE uint32_t atomic_cas_uint32(uint32_t *v, uint32_t old, uint32_t _ne
return __sync_val_compare_and_swap(v, old, _new);
}
#elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
ATOMIC_INLINE uint32_t atomic_add_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x)
{
uint32_t ret = x;
asm volatile (
@@ -158,7 +138,7 @@ ATOMIC_INLINE uint32_t atomic_add_and_fetch_uint32(uint32_t *p, uint32_t x)
return ret+x;
}
ATOMIC_INLINE uint32_t atomic_sub_and_fetch_uint32(uint32_t *p, uint32_t x)
ATOMIC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x)
{
uint32_t ret = (uint32_t)(-(int32_t)x);
asm volatile (
@@ -189,16 +169,6 @@ ATOMIC_INLINE uint32_t atomic_fetch_and_add_uint32(uint32_t *p, uint32_t x)
return __sync_fetch_and_add(p, x);
}
ATOMIC_INLINE uint32_t atomic_fetch_and_or_uint32(uint32_t *p, uint32_t x)
{
return __sync_fetch_and_or(p, x);
}
ATOMIC_INLINE uint32_t atomic_fetch_and_and_uint32(uint32_t *p, uint32_t x)
{
return __sync_fetch_and_and(p, x);
}
#else
# error "Missing implementation for 32-bit atomic operations"
#endif

View File

@@ -2698,7 +2698,7 @@ Device_set_doppler_factor(Device *self, PyObject *args, void* nothing)
PyDoc_STRVAR(M_aud_Device_distance_model_doc,
"The distance model of the device.\n\n"
".. seealso:: `OpenAL documentation <https://www.openal.org/documentation>`");
".. seealso:: http://connect.creativelabs.com/openal/Documentation/OpenAL%201.1%20Specification.htm#_Toc199835864");
static PyObject *
Device_get_distance_model(Device *self, void* nothing)

Some files were not shown because too many files have changed in this diff.