Compare commits


29 Commits

SHA1 Message Date
735727e2b8 Removed DNA for point caches. 2016-04-30 14:20:13 +02:00
ac30a04b27 Removed point cache blenkernel code. 2016-04-29 15:03:58 +02:00
181d095f50 Removed PointCache RNA struct definition and uses. 2016-04-29 11:07:11 +02:00
ceb452bc9d Removed point cache operators. 2016-04-29 10:44:09 +02:00
c3863650cc Removed UI for point cache users. 2016-04-28 18:38:10 +02:00
1f723603c8 Merge branch 'master' into temp_remove_particles 2016-04-28 17:33:19 +02:00
3632c4997f Merge branch 'master' into temp_remove_particles 2016-04-20 16:25:16 +02:00
773efb506a Removed particle sync code from Cycles.
Note that this only removes the actual dependencies of Cycles on the
particle code in Blender, but not the internal "particle" definition
or the curve type handling inside Cycles. These structures may be in need
of some improvement themselves, but that is out of scope here.
2016-04-20 11:59:02 +02:00
ba279efbdb Removed the ND_PARTICLE notifier and outliner particle elements. 2016-04-16 17:27:49 +02:00
9465d3decf Removed the particle context of property buttons space. 2016-04-16 17:17:31 +02:00
ecb695ccc8 Removed tool settings for particle edit mode. 2016-04-16 14:26:09 +02:00
cd0ec340c4 Removed remaining uses of the particle edit mode flag. 2016-04-16 12:39:41 +02:00
15c8d095e5 Removed the Main.particle list, used for ParticleSettings ID blocks.
There were still some type-agnostic uses as well, owing to the generic
ListBase type.
2016-04-16 12:28:29 +02:00
7c57822afa Fixed some minor errors in game engine and player. 2016-04-16 12:11:34 +02:00
c92b6f1de6 Removed the translation context for particle settings. 2016-04-16 11:32:45 +02:00
d30b942f07 Removed the ID_PA code used for ParticleSettings. 2016-04-16 11:29:28 +02:00
df2e543d44 Removed some unused declarations for boids code. 2016-04-16 11:11:39 +02:00
fbed29a246 Merge branch 'master' into temp_remove_particles 2016-04-15 17:59:54 +02:00
987bb50a74 Removed remaining use of pointers to particle types as well as boids headers. 2016-04-13 18:10:23 +02:00
d474ed9b88 Partially revert 82ec9c87a7, to add back point cache operators.
Eventually point cache will also be replaced, but it can be kept working at first even without particles.
2016-04-13 16:58:44 +02:00
664f5b8c06 Removed particle DNA. 2016-04-13 13:41:11 +02:00
d8d49befa0 Removed particle system and particle instance modifiers. 2016-04-13 11:45:15 +02:00
d47173c8ca Removed blenkernel particle code. 2016-04-13 10:49:39 +02:00
cf6cb3dcaf Removed most particle system code from RNA. 2016-04-12 18:26:19 +02:00
bcd12bf64d Removed most particle-related code from UI scripts.
There are a lot of cases here where deciding what to remove is a bit tricky.
Many features have options for "use_particles" and similar settings. Only
features which actually store a particle object reference or work on actual
particle data have been removed.
2016-04-12 16:28:00 +02:00
29a792a75b Removed all direct uses of BKE_particle.h and DNA_particle_types.h from source/blender/editors. 2016-04-12 13:04:31 +02:00
cc468c1974 Removed remnants of particle draw code. 2016-04-12 12:18:38 +02:00
82ec9c87a7 Removed particle operators API and point cache operators. 2016-04-12 11:47:08 +02:00
5a783144e2 Removed particle operators from editors/physics/. 2016-04-12 11:25:40 +02:00
1689 changed files with 46922 additions and 185103 deletions

.gitignore vendored (6 changed lines)
View File

@@ -33,9 +33,3 @@ Desktop.ini
/doc/python_api/sphinx-in-tmp/
/doc/python_api/sphinx-in/
/doc/python_api/sphinx-out/
/doc/python_api/rst/bmesh.ops.rst
/doc/python_api/rst/in_menu.png
/doc/python_api/rst/menu_id.png
/doc/python_api/rst/op_prop.png
/doc/python_api/rst/run_script.png
/doc/python_api/rst/spacebar.png

.gitmodules vendored (4 changed lines)
View File

@@ -10,7 +10,3 @@
path = release/datafiles/locale
url = ../blender-translations.git
ignore = all
[submodule "source/tools"]
path = source/tools
url = ../blender-dev-tools.git
ignore = all

File diff suppressed because it is too large

View File

@@ -120,7 +120,7 @@ endif
# -----------------------------------------------------------------------------
# Build Blender
all: .FORCE
all: FORCE
@echo
@echo Configuring Blender in \"$(BUILD_DIR)\" ...
@@ -149,13 +149,13 @@ bpy: all
# -----------------------------------------------------------------------------
# Configuration (save some cd'ing around)
config: .FORCE
config: FORCE
$(CMAKE_CONFIG_TOOL) "$(BUILD_DIR)"
# -----------------------------------------------------------------------------
# Help for build targets
help: .FORCE
help: FORCE
@echo ""
@echo "Convenience targets provided for building blender, (multiple at once can be used)"
@echo " * debug - build a debug binary"
@@ -182,20 +182,14 @@ help: .FORCE
@echo " * package_archive - build an archive package"
@echo ""
@echo "Testing Targets (not associated with building blender)"
@echo " * test - run ctest, currently tests import/export,"
@echo " operator execution and that python modules load"
@echo " * test_cmake - runs our own cmake file checker"
@echo " which detects errors in the cmake file list definitions"
@echo " * test_pep8 - checks all python script are pep8"
@echo " which are tagged to use the stricter formatting"
@echo " * test - run ctest, currently tests import/export, operator execution and that python modules load"
@echo " * test_cmake - runs our own cmake file checker which detects errors in the cmake file list definitions"
@echo " * test_pep8 - checks all python script are pep8 which are tagged to use the stricter formatting"
@echo " * test_deprecated - checks for deprecation tags in our code which may need to be removed"
@echo " * test_style_c - checks C/C++ conforms with blenders style guide:"
@echo " http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo " * test_style_c - checks C/C++ conforms with blenders style guide: http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo " * test_style_c_qtc - same as test_style but outputs QtCreator tasks format"
@echo " * test_style_osl - checks OpenShadingLanguage conforms with blenders style guide:"
@echo " http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo " * test_style_osl_qtc - checks OpenShadingLanguage conforms with blenders style guide:"
@echo " http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo " * test_style_osl - checks OpenShadingLanguage conforms with blenders style guide: http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo " * test_style_osl_qtc - checks OpenShadingLanguage conforms with blenders style guide: http://wiki.blender.org/index.php/Dev:Doc/CodeStyle"
@echo ""
@echo "Static Source Code Checking (not associated with building blender)"
@echo " * check_cppcheck - run blender source through cppcheck (C & C++)"
@@ -234,13 +228,13 @@ help: .FORCE
# -----------------------------------------------------------------------------
# Packages
#
package_debian: .FORCE
package_debian: FORCE
cd build_files/package_spec ; DEB_BUILD_OPTIONS="parallel=$(NPROCS)" sh ./build_debian.sh
package_pacman: .FORCE
package_pacman: FORCE
cd build_files/package_spec/pacman ; MAKEFLAGS="-j$(NPROCS)" makepkg
package_archive: .FORCE
package_archive: FORCE
make -C "$(BUILD_DIR)" -s package_archive
@echo archive in "$(BUILD_DIR)/release"
@@ -248,24 +242,24 @@ package_archive: .FORCE
# -----------------------------------------------------------------------------
# Tests
#
test: .FORCE
test: FORCE
cd $(BUILD_DIR) ; ctest . --output-on-failure
# run pep8 check check on scripts we distribute.
test_pep8: .FORCE
test_pep8: FORCE
$(PYTHON) tests/python/pep8.py > test_pep8.log 2>&1
@echo "written: test_pep8.log"
# run some checks on our cmakefiles.
test_cmake: .FORCE
test_cmake: FORCE
$(PYTHON) build_files/cmake/cmake_consistency_check.py > test_cmake_consistency.log 2>&1
@echo "written: test_cmake_consistency.log"
# run deprecation tests, see if we have anything to remove.
test_deprecated: .FORCE
test_deprecated: FORCE
$(PYTHON) tests/check_deprecated.py
test_style_c: .FORCE
test_style_c: FORCE
# run our own checks on C/C++ style
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_style_c.py" \
@@ -273,7 +267,7 @@ test_style_c: .FORCE
"$(BLENDER_DIR)/source/creator" \
--no-length-check
test_style_c_qtc: .FORCE
test_style_c_qtc: FORCE
# run our own checks on C/C++ style
USE_QTC_TASK=1 \
PYTHONIOENCODING=utf_8 $(PYTHON) \
@@ -286,7 +280,7 @@ test_style_c_qtc: .FORCE
@echo "written: test_style.tasks"
test_style_osl: .FORCE
test_style_osl: FORCE
# run our own checks on C/C++ style
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_style_c.py" \
@@ -294,7 +288,7 @@ test_style_osl: .FORCE
"$(BLENDER_DIR)/release/scripts/templates_osl"
test_style_osl_qtc: .FORCE
test_style_osl_qtc: FORCE
# run our own checks on C/C++ style
USE_QTC_TASK=1 \
PYTHONIOENCODING=utf_8 $(PYTHON) \
@@ -309,13 +303,13 @@ test_style_osl_qtc: .FORCE
# Project Files
#
project_qtcreator: .FORCE
project_qtcreator: FORCE
$(PYTHON) build_files/cmake/cmake_qtcreator_project.py "$(BUILD_DIR)"
project_netbeans: .FORCE
project_netbeans: FORCE
$(PYTHON) build_files/cmake/cmake_netbeans_project.py "$(BUILD_DIR)"
project_eclipse: .FORCE
project_eclipse: FORCE
cmake -G"Eclipse CDT4 - Unix Makefiles" -H"$(BLENDER_DIR)" -B"$(BUILD_DIR)"
@@ -323,40 +317,40 @@ project_eclipse: .FORCE
# Static Checking
#
check_cppcheck: .FORCE
check_cppcheck: FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_cppcheck.py" 2> \
"$(BLENDER_DIR)/check_cppcheck.txt"
@echo "written: check_cppcheck.txt"
check_clang_array: .FORCE
check_clang_array: FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_clang_array.py"
check_splint: .FORCE
check_splint: FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_splint.py"
check_sparse: .FORCE
check_sparse: FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_sparse.py"
check_smatch: .FORCE
check_smatch: FORCE
$(CMAKE_CONFIG)
cd "$(BUILD_DIR)" ; \
$(PYTHON) "$(BLENDER_DIR)/build_files/cmake/cmake_static_check_smatch.py"
check_spelling_py: .FORCE
check_spelling_py: FORCE
cd "$(BUILD_DIR)" ; \
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_spelling.py" \
"$(BLENDER_DIR)/release/scripts"
check_spelling_c: .FORCE
check_spelling_c: FORCE
cd "$(BUILD_DIR)" ; \
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_spelling.py" \
@@ -365,7 +359,7 @@ check_spelling_c: .FORCE
"$(BLENDER_DIR)/intern/guardedalloc" \
"$(BLENDER_DIR)/intern/ghost" \
check_spelling_c_qtc: .FORCE
check_spelling_c_qtc: FORCE
cd "$(BUILD_DIR)" ; USE_QTC_TASK=1 \
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_spelling.py" \
@@ -376,13 +370,13 @@ check_spelling_c_qtc: .FORCE
> \
"$(BLENDER_DIR)/check_spelling_c.tasks"
check_spelling_osl: .FORCE
check_spelling_osl: FORCE
cd "$(BUILD_DIR)" ;\
PYTHONIOENCODING=utf_8 $(PYTHON) \
"$(BLENDER_DIR)/source/tools/check_source/check_spelling.py" \
"$(BLENDER_DIR)/intern/cycles/kernel/shaders"
check_descriptions: .FORCE
check_descriptions: FORCE
"$(BUILD_DIR)/bin/blender" --background -noaudio --factory-startup --python \
"$(BLENDER_DIR)/source/tools/check_source/check_descriptions.py"
@@ -390,14 +384,14 @@ check_descriptions: .FORCE
# Utilities
#
tgz: .FORCE
tgz: FORCE
./build_files/utils/build_tgz.sh
icons: .FORCE
icons: FORCE
"$(BLENDER_DIR)/release/datafiles/blender_icons_update.py"
"$(BLENDER_DIR)/release/datafiles/prvicons_update.py"
update: .FORCE
update: FORCE
if [ -d "../lib" ]; then \
svn update ../lib/* ; \
fi
@@ -410,25 +404,23 @@ update: .FORCE
#
# Simple version of ./doc/python_api/sphinx_doc_gen.sh with no PDF generation.
doc_py: .FORCE
"$(BUILD_DIR)/bin/blender" --background -noaudio --factory-startup \
--python doc/python_api/sphinx_doc_gen.py
doc_py: FORCE
"$(BUILD_DIR)/bin/blender" --background -noaudio --factory-startup --python doc/python_api/sphinx_doc_gen.py
cd doc/python_api ; sphinx-build -b html sphinx-in sphinx-out
@echo "docs written into: '$(BLENDER_DIR)/doc/python_api/sphinx-out/contents.html'"
doc_doxy: .FORCE
doc_doxy: FORCE
cd doc/doxygen; doxygen Doxyfile
@echo "docs written into: '$(BLENDER_DIR)/doc/doxygen/html/index.html'"
doc_dna: .FORCE
"$(BUILD_DIR)/bin/blender" --background -noaudio --factory-startup \
--python doc/blender_file_format/BlendFileDnaExporter_25.py
doc_dna: FORCE
"$(BUILD_DIR)/bin/blender" --background -noaudio --factory-startup --python doc/blender_file_format/BlendFileDnaExporter_25.py
@echo "docs written into: '$(BLENDER_DIR)/doc/blender_file_format/dna.html'"
doc_man: .FORCE
doc_man: FORCE
$(PYTHON) doc/manpage/blender.1.py "$(BUILD_DIR)/bin/blender"
help_features: .FORCE
help_features: FORCE
@$(PYTHON) -c \
"import re; \
print('\n'.join([ \
@@ -439,9 +431,9 @@ help_features: .FORCE
if w.startswith('WITH_')]))" | uniq
clean: .FORCE
clean: FORCE
$(MAKE) -C "$(BUILD_DIR)" clean
.PHONY: all
.FORCE:
FORCE:

File diff suppressed because it is too large

View File

@@ -285,7 +285,7 @@ def generic_builder(id, libdir='', branch='', rsync=False):
maxsize=150 * 1024 * 1024,
workdir='install'))
f.addStep(MasterShellCommand(name='unpack',
command=['python2.7', unpack_script, filename],
command=['python', unpack_script, filename],
description='unpacking',
descriptionDone='unpacked'))
return f

View File

@@ -56,6 +56,7 @@ if 'cmake' in builder:
chroot_name = None # If not None command will be delegated to that chroot
cuda_chroot_name = None # If not None cuda compilation command will be delegated to that chroot
build_cubins = True # Whether to build Cycles CUDA kernels
remove_install_dir = False # Remove installation folder before building
bits = 64
# Config file to be used (relative to blender's sources root)
@@ -69,29 +70,19 @@ if 'cmake' in builder:
cuda_cmake_options = []
if builder.startswith('mac'):
install_dir = None
# Set up OSX architecture
if builder.endswith('x86_64_10_6_cmake'):
cmake_extra_options.append('-DCMAKE_OSX_ARCHITECTURES:STRING=x86_64')
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda-hack/bin/nvcc')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE=/usr/local/cuda8-hack/bin/nvcc')
elif builder.startswith('win'):
if builder.endswith('_vc2015'):
if builder.startswith('win64'):
cmake_options.extend(['-G', 'Visual Studio 14 2015 Win64'])
elif builder.startswith('win32'):
bits = 32
cmake_options.extend(['-G', 'Visual Studio 14 2015'])
cmake_extra_options.append('-DCUDA_NVCC_FLAGS=--cl-version;2013;' +
'--compiler-bindir;C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin')
else:
if builder.startswith('win64'):
cmake_options.extend(['-G', 'Visual Studio 12 2013 Win64'])
elif builder.startswith('win32'):
bits = 32
cmake_options.extend(['-G', 'Visual Studio 12 2013'])
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v7.5/bin/nvcc.exe')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/bin/nvcc.exe')
install_dir = None
if builder.startswith('win64'):
cmake_options.append(['-G', '"Visual Studio 12 2013 Win64"'])
elif builder.startswith('win32'):
bits = 32
cmake_options.append(['-G', '"Visual Studio 12 2013"'])
elif builder.startswith('linux'):
tokens = builder.split("_")
@@ -100,6 +91,7 @@ if 'cmake' in builder:
deb_name = "jessie"
elif glibc == 'glibc211':
deb_name = "squeeze"
remove_install_dir = True
cmake_config_file = "build_files/buildbot/config/blender_linux.cmake"
cmake_player_config_file = "build_files/buildbot/config/blender_linux_player.cmake"
if builder.endswith('x86_64_cmake'):
@@ -111,14 +103,10 @@ if 'cmake' in builder:
cuda_chroot_name = 'buildbot_' + deb_name + '_x86_64'
targets = ['player', 'blender', 'cuda']
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda-7.5/bin/nvcc')
cmake_extra_options.append('-DCUDA_NVCC8_EXECUTABLE=/usr/local/cuda-8.0/bin/nvcc')
cmake_options.append("-C" + os.path.join(blender_dir, cmake_config_file))
# Prepare CMake options needed to configure cuda binaries compilation.
cuda_cmake_options.append("-DWITH_CYCLES_CUDA_BINARIES=%s" % ('ON' if build_cubins else 'OFF'))
cuda_cmake_options.append("-DCYCLES_CUDA_BINARIES_ARCH=sm_20;sm_21;sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61")
if build_cubins or 'cuda' in targets:
if bits == 32:
cuda_cmake_options.append("-DCUDA_64_BIT_DEVICE_CODE=OFF")
@@ -129,7 +117,8 @@ if 'cmake' in builder:
if 'cuda' not in targets:
cmake_options += cuda_cmake_options
cmake_options.append("-DCMAKE_INSTALL_PREFIX=%s" % (install_dir))
if install_dir:
cmake_options.append("-DCMAKE_INSTALL_PREFIX=%s" % (install_dir))
cmake_options += cmake_extra_options
@@ -144,8 +133,10 @@ if 'cmake' in builder:
cuda_chroot_prefix = chroot_prefix[:]
# Make sure no garbage remained from the previous run
if os.path.isdir(install_dir):
shutil.rmtree(install_dir)
# (only do it if builder requested this)
if remove_install_dir:
if os.path.isdir(install_dir):
shutil.rmtree(install_dir)
for target in targets:
print("Building target %s" % (target))

View File

@@ -108,8 +108,6 @@ if builder.find('cmake') != -1:
platform += 'i386'
elif builder.endswith('ppc_10_6_cmake'):
platform += 'ppc'
if builder.endswith('vc2015'):
platform += "-vc14"
builderified_name = 'blender-{}-{}-{}'.format(blender_full_version, git_hash, platform)
if branch != '':
builderified_name = branch + "-" + builderified_name

View File

@@ -1,70 +0,0 @@
# - Find Alembic library
# Find the native Alembic includes and libraries
# This module defines
# ALEMBIC_INCLUDE_DIRS, where to find Alembic headers, Set when
# ALEMBIC_INCLUDE_DIR is found.
# ALEMBIC_LIBRARIES, libraries to link against to use Alembic.
# ALEMBIC_ROOT_DIR, The base directory to search for Alembic.
# This can also be an environment variable.
# ALEMBIC_FOUND, If false, do not try to use Alembic.
#
#=============================================================================
# Copyright 2016 Blender Foundation.
#
# Distributed under the OSI-approved BSD License (the "License");
# see accompanying file Copyright.txt for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even the
# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the License for more information.
#=============================================================================
# If ALEMBIC_ROOT_DIR was defined in the environment, use it.
IF(NOT ALEMBIC_ROOT_DIR AND NOT $ENV{ALEMBIC_ROOT_DIR} STREQUAL "")
SET(ALEMBIC_ROOT_DIR $ENV{ALEMBIC_ROOT_DIR})
ENDIF()
SET(_alembic_SEARCH_DIRS
${ALEMBIC_ROOT_DIR}
/usr/local
/sw # Fink
/opt/local # DarwinPorts
/opt/csw # Blastwave
/opt/lib/alembic
)
FIND_PATH(ALEMBIC_INCLUDE_DIR
NAMES
Alembic/Abc/All.h
HINTS
${_alembic_SEARCH_DIRS}
PATH_SUFFIXES
include
)
FIND_LIBRARY(ALEMBIC_LIBRARY
NAMES
Alembic
HINTS
${_alembic_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib lib/static
)
# handle the QUIETLY and REQUIRED arguments and set ALEMBIC_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(ALEMBIC DEFAULT_MSG ALEMBIC_LIBRARY ALEMBIC_INCLUDE_DIR)
IF(ALEMBIC_FOUND)
SET(ALEMBIC_LIBRARIES ${ALEMBIC_LIBRARY})
SET(ALEMBIC_INCLUDE_DIRS ${ALEMBIC_INCLUDE_DIR})
ENDIF(ALEMBIC_FOUND)
MARK_AS_ADVANCED(
ALEMBIC_INCLUDE_DIR
ALEMBIC_LIBRARY
)
UNSET(_alembic_SEARCH_DIRS)
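
The hunk above contains a complete CMake find module for Alembic; its header documents the variables it defines (ALEMBIC_FOUND, ALEMBIC_INCLUDE_DIRS, ALEMBIC_LIBRARIES) and the ALEMBIC_ROOT_DIR override. A minimal, hypothetical consumer sketch of how such a module is typically used follows; the project name, target, and module path are invented here, not taken from Blender's build files.

# Hypothetical consumer of the Alembic find module shown above.
cmake_minimum_required(VERSION 2.8)
project(alembic_demo CXX)
# Make the directory holding the module (conventionally named FindAlembic.cmake)
# visible to find_package(); the module's real location is not shown in this diff.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
# ALEMBIC_ROOT_DIR can be passed on the command line to point at a custom
# install, e.g.:  cmake -DALEMBIC_ROOT_DIR=/opt/alembic ..
find_package(Alembic)
add_executable(alembic_demo main.cpp)
if(ALEMBIC_FOUND)
  include_directories(${ALEMBIC_INCLUDE_DIRS})
  target_link_libraries(alembic_demo ${ALEMBIC_LIBRARIES})
endif()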

View File

@@ -1,69 +0,0 @@
# - Find HDF5 library
# Find the native HDF5 includes and libraries
# This module defines
# HDF5_INCLUDE_DIRS, where to find hdf5.h, Set when HDF5_INCLUDE_DIR is found.
# HDF5_LIBRARIES, libraries to link against to use HDF5.
# HDF5_ROOT_DIR, The base directory to search for HDF5.
# This can also be an environment variable.
# HDF5_FOUND, If false, do not try to use HDF5.
#
#=============================================================================
# Copyright 2016 Blender Foundation.
#
# Distributed under the OSI-approved BSD License (the "License");
# see accompanying file Copyright.txt for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even the
# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the License for more information.
#=============================================================================
# If HDF5_ROOT_DIR was defined in the environment, use it.
IF(NOT HDF5_ROOT_DIR AND NOT $ENV{HDF5_ROOT_DIR} STREQUAL "")
SET(HDF5_ROOT_DIR $ENV{HDF5_ROOT_DIR})
ENDIF()
SET(_hdf5_SEARCH_DIRS
${HDF5_ROOT_DIR}
/usr/local
/sw # Fink
/opt/local # DarwinPorts
/opt/csw # Blastwave
/opt/lib/hdf5
)
FIND_LIBRARY(HDF5_LIBRARY
NAMES
hdf5
HINTS
${_hdf5_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
FIND_PATH(HDF5_INCLUDE_DIR
NAMES
hdf5.h
HINTS
${_hdf5_SEARCH_DIRS}
PATH_SUFFIXES
include
)
# handle the QUIETLY and REQUIRED arguments and set HDF5_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(HDF5 DEFAULT_MSG HDF5_LIBRARY HDF5_INCLUDE_DIR)
IF(HDF5_FOUND)
SET(HDF5_LIBRARIES ${HDF5_LIBRARY})
SET(HDF5_INCLUDE_DIRS ${HDF5_INCLUDE_DIR})
ENDIF(HDF5_FOUND)
MARK_AS_ADVANCED(
HDF5_INCLUDE_DIR
HDF5_LIBRARY
)
UNSET(_hdf5_SEARCH_DIRS)

View File

@@ -23,7 +23,6 @@ macro(BLENDER_SRC_GTEST_EX NAME SRC EXTRA_LIBS DO_ADD_TEST)
${CMAKE_SOURCE_DIR}/extern/glog/src
${CMAKE_SOURCE_DIR}/extern/gflags/src
${CMAKE_SOURCE_DIR}/extern/gtest/include
${CMAKE_SOURCE_DIR}/extern/gmock/include
)
unset(_current_include_directories)
@@ -34,7 +33,6 @@ macro(BLENDER_SRC_GTEST_EX NAME SRC EXTRA_LIBS DO_ADD_TEST)
bf_testing_main
bf_intern_guardedalloc
extern_gtest
extern_gmock
# needed for glog
${PTHREADS_LIBRARIES}
extern_glog

View File

@@ -1,10 +1,5 @@
# This is called by cmake as an external process from
# This is called by cmake as an extermal process from
# ./source/creator/CMakeLists.txt to write ./source/creator/buildinfo.h
# Caller must define:
# SOURCE_DIR
# Optional overrides:
# BUILD_DATE
# BUILD_TIME
# Extract working copy information for SOURCE_DIR into MY_XXX variables
# with a default in case anything fails, for example when using git-svn
@@ -133,19 +128,12 @@ endif()
# BUILD_PLATFORM and BUILD_PLATFORM are taken from CMake
# but BUILD_DATE and BUILD_TIME are platform dependent
if(UNIX)
if(NOT BUILD_DATE)
execute_process(COMMAND date "+%Y-%m-%d" OUTPUT_VARIABLE BUILD_DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
if(NOT BUILD_TIME)
execute_process(COMMAND date "+%H:%M:%S" OUTPUT_VARIABLE BUILD_TIME OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
elseif(WIN32)
if(NOT BUILD_DATE)
execute_process(COMMAND cmd /c date /t OUTPUT_VARIABLE BUILD_DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
if(NOT BUILD_TIME)
execute_process(COMMAND cmd /c time /t OUTPUT_VARIABLE BUILD_TIME OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
execute_process(COMMAND date "+%Y-%m-%d" OUTPUT_VARIABLE BUILD_DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND date "+%H:%M:%S" OUTPUT_VARIABLE BUILD_TIME OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
if(WIN32)
execute_process(COMMAND cmd /c date /t OUTPUT_VARIABLE BUILD_DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND cmd /c time /t OUTPUT_VARIABLE BUILD_TIME OUTPUT_STRIP_TRAILING_WHITESPACE)
endif()
# Write a file with the BUILD_HASH define
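
The hunk above edits a helper script that, per its header, is called by cmake as an external process from ./source/creator/CMakeLists.txt to write ./source/creator/buildinfo.h; SOURCE_DIR is required, and one side of the diff also documents BUILD_DATE and BUILD_TIME as optional overrides. A rough, hypothetical sketch of driving such a script in CMake script mode; the script filename below is a placeholder, since its real path is not shown in this diff.

# Hypothetical stand-alone invocation of the buildinfo script described above.
# Note: -D definitions must come before -P so the script can see them.
execute_process(
  COMMAND ${CMAKE_COMMAND}
          -DSOURCE_DIR=${CMAKE_SOURCE_DIR}
          -DBUILD_DATE=2016-04-30   # optional override, same "+%Y-%m-%d" format the script uses on UNIX
          -DBUILD_TIME=12:00:00     # optional override, same "+%H:%M:%S" format the script uses on UNIX
          -P "${CMAKE_SOURCE_DIR}/buildinfo.cmake"   # placeholder path
)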

View File

@@ -239,8 +239,8 @@ def file_check_arg_sizes(tu):
if 0:
print("---",
" <~> ".join(
[" ".join([t.spelling for t in C.get_tokens()])
for C in node.get_children()]
[" ".join([t.spelling for t in C.get_tokens()])
for C in node.get_children()]
))
# print(node.location)

View File

@@ -29,11 +29,11 @@ if not sys.version.startswith("3"):
sys.exit(1)
from cmake_consistency_check_config import (
IGNORE,
UTF8_CHECK,
SOURCE_DIR,
BUILD_DIR,
)
IGNORE,
UTF8_CHECK,
SOURCE_DIR,
BUILD_DIR,
)
import os

View File

@@ -31,7 +31,7 @@ IGNORE = (
"extern/carve/patches/files/random.h",
"intern/audaspace/SRC/AUD_SRCResampleFactory.h",
"intern/audaspace/SRC/AUD_SRCResampleReader.h",
)
)
UTF8_CHECK = True

View File

@@ -37,19 +37,19 @@ if not project_info.init(sys.argv[-1]):
sys.exit(1)
from project_info import (
SIMPLE_PROJECTFILE,
SOURCE_DIR,
CMAKE_DIR,
PROJECT_DIR,
source_list,
is_project_file,
is_c_header,
# is_py,
cmake_advanced_info,
cmake_compiler_defines,
cmake_cache_var,
project_name_get,
)
SIMPLE_PROJECTFILE,
SOURCE_DIR,
CMAKE_DIR,
PROJECT_DIR,
source_list,
is_project_file,
is_c_header,
# is_py,
cmake_advanced_info,
cmake_compiler_defines,
cmake_cache_var,
project_name_get,
)
import os

View File

@@ -43,17 +43,17 @@ def quote_define(define):
def create_qtc_project_main(name):
from project_info import (
SIMPLE_PROJECTFILE,
SOURCE_DIR,
# CMAKE_DIR,
PROJECT_DIR,
source_list,
is_project_file,
is_c_header,
cmake_advanced_info,
cmake_compiler_defines,
project_name_get,
)
SIMPLE_PROJECTFILE,
SOURCE_DIR,
# CMAKE_DIR,
PROJECT_DIR,
source_list,
is_project_file,
is_c_header,
cmake_advanced_info,
cmake_compiler_defines,
project_name_get,
)
files = list(source_list(SOURCE_DIR, filename_check=is_project_file))
files_rel = [os.path.relpath(f, start=PROJECT_DIR) for f in files]
@@ -69,7 +69,7 @@ def create_qtc_project_main(name):
with open(os.path.join(PROJECT_DIR, "%s.includes" % FILE_NAME), 'w') as f:
f.write("\n".join(sorted(list(set(os.path.dirname(f)
for f in files_rel if is_c_header(f))))))
for f in files_rel if is_c_header(f))))))
qtc_prj = os.path.join(PROJECT_DIR, "%s.creator" % FILE_NAME)
with open(qtc_prj, 'w') as f:
@@ -87,7 +87,7 @@ def create_qtc_project_main(name):
# for some reason it doesnt give all internal includes
includes = list(set(includes) | set(os.path.dirname(f)
for f in files_rel if is_c_header(f)))
for f in files_rel if is_c_header(f)))
includes.sort()
# be tricky, get the project name from CMake if we can!
@@ -125,13 +125,13 @@ def create_qtc_project_main(name):
def create_qtc_project_python(name):
from project_info import (
SOURCE_DIR,
# CMAKE_DIR,
PROJECT_DIR,
source_list,
is_py,
project_name_get,
)
SOURCE_DIR,
# CMAKE_DIR,
PROJECT_DIR,
source_list,
is_py,
project_name_get,
)
files = list(source_list(SOURCE_DIR, filename_check=is_py))
files_rel = [os.path.relpath(f, start=PROJECT_DIR) for f in files]
@@ -161,24 +161,24 @@ def argparse_create():
import argparse
parser = argparse.ArgumentParser(
description="This script generates Qt Creator project files for Blender",
)
description="This script generates Qt Creator project files for Blender",
)
parser.add_argument(
"-n", "--name",
dest="name",
metavar='NAME', type=str,
help="Override default project name (\"Blender\")",
required=False,
)
"-n", "--name",
dest="name",
metavar='NAME', type=str,
help="Override default project name (\"Blender\")",
required=False,
)
parser.add_argument(
"-b", "--build-dir",
dest="build_dir",
metavar='BUILD_DIR', type=str,
help="Specify the build path (or fallback to the $PWD)",
required=False,
)
"-b", "--build-dir",
dest="build_dir",
metavar='BUILD_DIR', type=str,
help="Specify the build path (or fallback to the $PWD)",
required=False,
)
return parser

View File

@@ -32,7 +32,7 @@ USE_QUIET = (os.environ.get("QUIET", None) is not None)
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
]
CHECKER_BIN = "python2"
@@ -42,7 +42,7 @@ CHECKER_ARGS = [
"-I" + os.path.join(project_source_info.SOURCE_DIR, "extern", "glew", "include"),
# stupid but needed
"-Dbool=char"
]
]
def main():

View File

@@ -32,7 +32,7 @@ USE_QUIET = (os.environ.get("QUIET", None) is not None)
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
]
CHECKER_BIN = "cppcheck"
@@ -43,7 +43,7 @@ CHECKER_ARGS = [
"--max-configs=1", # speeds up execution
# "--check-config", # when includes are missing
"--enable=all", # if you want sixty hundred pedantic suggestions
]
]
if USE_QUIET:
CHECKER_ARGS.append("--quiet")

View File

@@ -25,13 +25,13 @@
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
]
CHECKER_BIN = "smatch"
CHECKER_ARGS = [
"--full-path",
"--two-passes",
]
]
import project_source_info
import subprocess

View File

@@ -25,11 +25,11 @@
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
]
CHECKER_BIN = "sparse"
CHECKER_ARGS = [
]
]
import project_source_info
import subprocess

View File

@@ -25,7 +25,7 @@
CHECKER_IGNORE_PREFIX = [
"extern",
"intern/moto",
]
]
CHECKER_BIN = "splint"
@@ -61,7 +61,7 @@ CHECKER_ARGS = [
# dummy, witjout this splint complains with:
# /usr/include/bits/confname.h:31:27: *** Internal Bug at cscannerHelp.c:2428: Unexpanded macro not function or constant: int _PC_MAX_CANON
"-D_PC_MAX_CANON=0",
]
]
import project_source_info

View File

@@ -4,7 +4,6 @@
# cmake -C../blender/build_files/cmake/config/blender_full.cmake ../blender
#
set(WITH_ALEMBIC ON CACHE BOOL "" FORCE)
set(WITH_BUILDINFO ON CACHE BOOL "" FORCE)
set(WITH_BULLET ON CACHE BOOL "" FORCE)
set(WITH_CODEC_AVI ON CACHE BOOL "" FORCE)
@@ -55,7 +54,7 @@ set(WITH_PLAYER ON CACHE BOOL "" FORCE)
set(WITH_MEM_JEMALLOC ON CACHE BOOL "" FORCE)
# platform dependent options
# platform dependant options
if(UNIX AND NOT APPLE)
set(WITH_JACK ON CACHE BOOL "" FORCE)
set(WITH_DOC_MANPAGE ON CACHE BOOL "" FORCE)

View File

@@ -8,7 +8,6 @@
set(WITH_INSTALL_PORTABLE ON CACHE BOOL "" FORCE)
set(WITH_SYSTEM_GLEW ON CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
set(WITH_BUILDINFO OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE)
set(WITH_CODEC_AVI OFF CACHE BOOL "" FORCE)

View File

@@ -32,4 +32,3 @@ set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
set(WITH_BULLET OFF CACHE BOOL "" FORCE)
set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)

View File

@@ -74,7 +74,7 @@ def main():
"rebuild_cache",
"depend",
"cmake_check_build_system",
])
])
targets -= set(bad)

View File

@@ -196,33 +196,8 @@ function(blender_source_group
endfunction()
# Support per-target CMake flags
# Read from: CMAKE_C_FLAGS_**** (made upper case) when set.
#
# 'name' should alway match the target name,
# use this macro before add_library or add_executable.
#
# Optionally takes an arg passed to set(), eg PARENT_SCOPE.
macro(add_cc_flags_custom_test
name
)
string(TOUPPER ${name} _name_upper)
if(DEFINED CMAKE_C_FLAGS_${_name_upper})
message(STATUS "Using custom CFLAGS: CMAKE_C_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${CMAKE_C_FLAGS_${_name_upper}}" ${ARGV1})
endif()
if(DEFINED CMAKE_CXX_FLAGS_${_name_upper})
message(STATUS "Using custom CXXFLAGS: CMAKE_CXX_FLAGS_${_name_upper} in \"${CMAKE_CURRENT_SOURCE_DIR}\"")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${_name_upper}}" ${ARGV1})
endif()
unset(_name_upper)
endmacro()
# only MSVC uses SOURCE_GROUP
function(blender_add_lib__impl
function(blender_add_lib_nolist
name
sources
includes
@@ -250,18 +225,6 @@ function(blender_add_lib__impl
endfunction()
function(blender_add_lib_nolist
name
sources
includes
includes_sys
)
add_cc_flags_custom_test(${name} PARENT_SCOPE)
blender_add_lib__impl(${name} "${sources}" "${includes}" "${includes_sys}")
endfunction()
function(blender_add_lib
name
sources
@@ -269,9 +232,7 @@ function(blender_add_lib
includes_sys
)
add_cc_flags_custom_test(${name} PARENT_SCOPE)
blender_add_lib__impl(${name} "${sources}" "${includes}" "${includes_sys}")
blender_add_lib_nolist(${name} "${sources}" "${includes}" "${includes_sys}")
set_property(GLOBAL APPEND PROPERTY BLENDER_LINK_LIBS ${name})
endfunction()
@@ -333,11 +294,6 @@ function(SETUP_LIBDIRS)
link_directories(${LLVM_LIBPATH})
endif()
if(WITH_ALEMBIC)
link_directories(${ALEMBIC_LIBPATH})
link_directories(${HDF5_LIBPATH})
endif()
if(WIN32 AND NOT UNIX)
link_directories(${PTHREADS_LIBPATH})
endif()
@@ -359,6 +315,7 @@ function(setup_liblinks
target_link_libraries(
${target}
${PNG_LIBRARIES}
${ZLIB_LIBRARIES}
${FREETYPE_LIBRARY}
)
@@ -438,9 +395,6 @@ function(setup_liblinks
endif()
endif()
target_link_libraries(${target} ${JPEG_LIBRARIES})
if(WITH_ALEMBIC)
target_link_libraries(${target} ${ALEMBIC_LIBRARIES} ${HDF5_LIBRARIES})
endif()
if(WITH_IMAGE_OPENEXR)
target_link_libraries(${target} ${OPENEXR_LIBRARIES})
endif()
@@ -481,6 +435,9 @@ function(setup_liblinks
if(WITH_MEM_JEMALLOC)
target_link_libraries(${target} ${JEMALLOC_LIBRARIES})
endif()
if(WITH_INPUT_NDOF)
target_link_libraries(${target} ${NDOF_LIBRARIES})
endif()
if(WITH_MOD_CLOTH_ELTOPO)
target_link_libraries(${target} ${LAPACK_LIBRARIES})
endif()
@@ -494,9 +451,6 @@ function(setup_liblinks
if(WITH_OPENMP_STATIC)
target_link_libraries(${target} ${OpenMP_LIBRARIES})
endif()
if(WITH_INPUT_NDOF)
target_link_libraries(${target} ${NDOF_LIBRARIES})
endif()
endif()
# We put CLEW and CUEW here because OPENSUBDIV_LIBRARIES dpeends on them..
@@ -509,11 +463,6 @@ function(setup_liblinks
endif()
endif()
target_link_libraries(
${target}
${ZLIB_LIBRARIES}
)
#system libraries with no dependencies such as platform link libs or opengl should go last
target_link_libraries(${target}
${BLENDER_GL_LIBRARIES})
@@ -538,7 +487,6 @@ function(SETUP_BLENDER_SORTED_LIBS)
if(WITH_CYCLES)
list(APPEND BLENDER_LINK_LIBS
cycles_render
cycles_graph
cycles_bvh
cycles_device
cycles_kernel
@@ -603,11 +551,11 @@ function(SETUP_BLENDER_SORTED_LIBS)
bf_modifiers
bf_bmesh
bf_gpu
bf_blenloader
bf_blenkernel
bf_physics
bf_nodes
bf_rna
bf_blenloader
bf_imbuf
bf_blenlib
bf_depsgraph
@@ -619,7 +567,6 @@ function(SETUP_BLENDER_SORTED_LIBS)
bf_imbuf_openimageio
bf_imbuf_dds
bf_collada
bf_alembic
bf_intern_elbeem
bf_intern_memutil
bf_intern_guardedalloc
@@ -653,7 +600,6 @@ function(SETUP_BLENDER_SORTED_LIBS)
bf_intern_dualcon
bf_intern_cycles
cycles_render
cycles_graph
cycles_bvh
cycles_device
cycles_kernel
@@ -713,6 +659,10 @@ function(SETUP_BLENDER_SORTED_LIBS)
list(APPEND BLENDER_SORTED_LIBS bf_quicktime)
endif()
if(WITH_INPUT_NDOF)
list(APPEND BLENDER_SORTED_LIBS bf_intern_ghostndof3dconnexion)
endif()
if(WITH_MOD_BOOLEAN)
list(APPEND BLENDER_SORTED_LIBS extern_carve)
endif()
@@ -737,14 +687,6 @@ function(SETUP_BLENDER_SORTED_LIBS)
list_insert_after(BLENDER_SORTED_LIBS "ge_logic_ngnetwork" "extern_bullet")
endif()
if(WITH_GAMEENGINE_DECKLINK)
list(APPEND BLENDER_SORTED_LIBS bf_intern_decklink)
endif()
if(WIN32)
list(APPEND BLENDER_SORTED_LIBS bf_intern_gpudirect)
endif()
if(WITH_OPENSUBDIV)
list(APPEND BLENDER_SORTED_LIBS bf_intern_opensubdiv)
endif()
@@ -861,15 +803,7 @@ macro(TEST_UNORDERED_MAP_SUPPORT)
# UNORDERED_MAP_NAMESPACE, namespace for unordered_map, if found
include(CheckIncludeFileCXX)
# Workaround for newer GCC (6.x+) where C++11 was enabled by default, which lead us
# to a situation when there is <unordered_map> include but which can't be used uless
# C++11 is enabled.
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "6.0") AND (NOT WITH_CXX11))
set(HAVE_STD_UNORDERED_MAP_HEADER False)
else()
CHECK_INCLUDE_FILE_CXX("unordered_map" HAVE_STD_UNORDERED_MAP_HEADER)
endif()
CHECK_INCLUDE_FILE_CXX("unordered_map" HAVE_STD_UNORDERED_MAP_HEADER)
if(HAVE_STD_UNORDERED_MAP_HEADER)
# Even so we've found unordered_map header file it doesn't
# mean unordered_map and unordered_set will be declared in
@@ -939,16 +873,8 @@ macro(TEST_SHARED_PTR_SUPPORT)
# otherwise it's assumed to be defined in std namespace.
include(CheckIncludeFileCXX)
include(CheckCXXSourceCompiles)
set(SHARED_PTR_FOUND FALSE)
# Workaround for newer GCC (6.x+) where C++11 was enabled by default, which lead us
# to a situation when there is <unordered_map> include but which can't be used uless
# C++11 is enabled.
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "6.0") AND (NOT WITH_CXX11))
set(HAVE_STD_MEMORY_HEADER False)
else()
CHECK_INCLUDE_FILE_CXX(memory HAVE_STD_MEMORY_HEADER)
endif()
CHECK_INCLUDE_FILE_CXX(memory HAVE_STD_MEMORY_HEADER)
if(HAVE_STD_MEMORY_HEADER)
# Finding the memory header doesn't mean that shared_ptr is in std
# namespace.
@@ -956,6 +882,7 @@ macro(TEST_SHARED_PTR_SUPPORT)
# In particular, MSVC 2008 has shared_ptr declared in std::tr1. In
# order to support this, we do an extra check to see which namespace
# should be used.
include(CheckCXXSourceCompiles)
CHECK_CXX_SOURCE_COMPILES("#include <memory>
int main() {
std::shared_ptr<int> int_ptr;
@@ -1123,19 +1050,6 @@ macro(remove_strict_flags_file
endmacro()
# External libs may need 'signed char' to be default.
macro(remove_cc_flag_unsigned_char)
if(CMAKE_C_COMPILER_ID MATCHES "^(GNU|Clang|Intel)$")
remove_cc_flag("-funsigned-char")
elseif(MSVC)
remove_cc_flag("/J")
else()
message(WARNING
"Compiler '${CMAKE_C_COMPILER_ID}' failed to disable 'unsigned char' flag."
"Build files need updating."
)
endif()
endmacro()
function(ADD_CHECK_C_COMPILER_FLAG
_CFLAGS
@@ -1560,44 +1474,3 @@ function(print_all_vars)
message("${_var}=${${_var}}")
endforeach()
endfunction()
macro(openmp_delayload
projectname
)
if(MSVC)
if(WITH_OPENMP)
if(MSVC_VERSION EQUAL 1800)
set(OPENMP_DLL_NAME "vcomp120")
else()
set(OPENMP_DLL_NAME "vcomp140")
endif()
SET_TARGET_PROPERTIES(${projectname} PROPERTIES LINK_FLAGS_RELEASE "/DELAYLOAD:${OPENMP_DLL_NAME}.dll delayimp.lib")
SET_TARGET_PROPERTIES(${projectname} PROPERTIES LINK_FLAGS_DEBUG "/DELAYLOAD:${OPENMP_DLL_NAME}d.dll delayimp.lib")
SET_TARGET_PROPERTIES(${projectname} PROPERTIES LINK_FLAGS_RELWITHDEBINFO "/DELAYLOAD:${OPENMP_DLL_NAME}.dll delayimp.lib")
SET_TARGET_PROPERTIES(${projectname} PROPERTIES LINK_FLAGS_MINSIZEREL "/DELAYLOAD:${OPENMP_DLL_NAME}.dll delayimp.lib")
endif(WITH_OPENMP)
endif(MSVC)
endmacro()
MACRO(WINDOWS_SIGN_TARGET target)
if (WITH_WINDOWS_CODESIGN)
if (!SIGNTOOL_EXE)
error("Codesigning is enabled, but signtool is not found")
else()
if (WINDOWS_CODESIGN_PFX_PASSWORD)
set(CODESIGNPASSWORD /p ${WINDOWS_CODESIGN_PFX_PASSWORD})
else()
if ($ENV{PFXPASSWORD})
set(CODESIGNPASSWORD /p $ENV{PFXPASSWORD})
else()
message( FATAL_ERROR "WITH_WINDOWS_CODESIGN is on but WINDOWS_CODESIGN_PFX_PASSWORD not set, and environment variable PFXPASSWORD not found, unable to sign code.")
endif()
endif()
add_custom_command(TARGET ${target}
POST_BUILD
COMMAND ${SIGNTOOL_EXE} sign /f ${WINDOWS_CODESIGN_PFX} ${CODESIGNPASSWORD} $<TARGET_FILE:${target}>
VERBATIM
)
endif()
endif()
ENDMACRO()

View File

@@ -38,21 +38,7 @@ unset(MY_WC_HASH)
# Force Package Name
execute_process(COMMAND date "+%Y%m%d" OUTPUT_VARIABLE CPACK_DATE OUTPUT_STRIP_TRAILING_WHITESPACE)
string(TOLOWER ${PROJECT_NAME} PROJECT_NAME_LOWER)
if (MSVC)
if ("${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
set(PACKAGE_ARCH windows64)
else()
set(PACKAGE_ARCH windows32)
endif()
else(MSVC)
set(PACKAGE_ARCH ${CMAKE_SYSTEM_PROCESSOR})
endif()
if (CPACK_OVERRIDE_PACKAGENAME)
set(CPACK_PACKAGE_FILE_NAME ${CPACK_OVERRIDE_PACKAGENAME}-${PACKAGE_ARCH})
else()
set(CPACK_PACKAGE_FILE_NAME ${PROJECT_NAME_LOWER}-${MAJOR_VERSION}.${MINOR_VERSION}.${PATCH_VERSION}-git${CPACK_DATE}.${BUILD_REV}-${PACKAGE_ARCH})
endif()
set(CPACK_PACKAGE_FILE_NAME ${PROJECT_NAME_LOWER}-${MAJOR_VERSION}.${MINOR_VERSION}.${PATCH_VERSION}-git${CPACK_DATE}.${BUILD_REV}-${CMAKE_SYSTEM_PROCESSOR})
if(CMAKE_SYSTEM_NAME MATCHES "Linux")
# RPM packages

View File

@@ -1,430 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Apple.
if(NOT DEFINED LIBDIR)
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/darwin-9.x.universal)
else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif()
if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "Mac OSX requires pre-compiled libs at: '${LIBDIR}'")
endif()
if(WITH_OPENAL)
find_package(OpenAL)
if(OPENAL_FOUND)
set(WITH_OPENAL ON)
set(OPENAL_INCLUDE_DIR "${LIBDIR}/openal/include")
else()
set(WITH_OPENAL OFF)
endif()
endif()
if(WITH_ALEMBIC)
set(ALEMBIC ${LIBDIR}/alembic)
set(ALEMBIC_INCLUDE_DIR ${ALEMBIC}/include)
set(ALEMBIC_INCLUDE_DIRS ${ALEMBIC_INCLUDE_DIR})
set(ALEMBIC_LIBPATH ${ALEMBIC}/lib)
set(ALEMBIC_LIBRARIES Alembic)
endif()
if(WITH_OPENSUBDIV)
set(OPENSUBDIV ${LIBDIR}/opensubdiv)
set(OPENSUBDIV_LIBPATH ${OPENSUBDIV}/lib)
find_library(OSL_LIB_UTIL NAMES osdutil PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSL_LIB_CPU NAMES osdCPU PATHS ${OPENSUBDIV_LIBPATH})
find_library(OSL_LIB_GPU NAMES osdGPU PATHS ${OPENSUBDIV_LIBPATH})
set(OPENSUBDIV_INCLUDE_DIR ${OPENSUBDIV}/include)
set(OPENSUBDIV_INCLUDE_DIRS ${OPENSUBDIV_INCLUDE_DIR})
list(APPEND OPENSUBDIV_LIBRARIES ${OSL_LIB_UTIL} ${OSL_LIB_CPU} ${OSL_LIB_GPU})
endif()
if(WITH_JACK)
find_library(JACK_FRAMEWORK
NAMES jackmp
)
set(JACK_INCLUDE_DIRS ${JACK_FRAMEWORK}/headers)
if(NOT JACK_FRAMEWORK)
set(WITH_JACK OFF)
endif()
endif()
if(WITH_CODEC_SNDFILE)
set(SNDFILE ${LIBDIR}/sndfile)
set(SNDFILE_INCLUDE_DIRS ${SNDFILE}/include)
set(SNDFILE_LIBRARIES sndfile FLAC ogg vorbis vorbisenc)
set(SNDFILE_LIBPATH ${SNDFILE}/lib ${FFMPEG}/lib) # TODO, deprecate
endif()
if(WITH_PYTHON)
# we use precompiled libraries for py 3.5 and up by default
set(PYTHON_VERSION 3.5)
if(NOT WITH_PYTHON_MODULE AND NOT WITH_PYTHON_FRAMEWORK)
# normally cached but not since we include them with blender
set(PYTHON_INCLUDE_DIR "${LIBDIR}/python/include/python${PYTHON_VERSION}m")
set(PYTHON_EXECUTABLE "${LIBDIR}/python/bin/python${PYTHON_VERSION}m")
set(PYTHON_LIBRARY python${PYTHON_VERSION}m)
set(PYTHON_LIBPATH "${LIBDIR}/python/lib/python${PYTHON_VERSION}")
# set(PYTHON_LINKFLAGS "-u _PyMac_Error") # won't build with this enabled
else()
# module must be compiled against Python framework
set(_py_framework "/Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}")
set(PYTHON_INCLUDE_DIR "${_py_framework}/include/python${PYTHON_VERSION}m")
set(PYTHON_EXECUTABLE "${_py_framework}/bin/python${PYTHON_VERSION}m")
set(PYTHON_LIBPATH "${_py_framework}/lib/python${PYTHON_VERSION}/config-${PYTHON_VERSION}m")
#set(PYTHON_LIBRARY python${PYTHON_VERSION})
#set(PYTHON_LINKFLAGS "-u _PyMac_Error -framework Python") # won't build with this enabled
unset(_py_framework)
endif()
# uncached vars
set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}")
if(NOT EXISTS "${PYTHON_EXECUTABLE}")
message(FATAL_ERROR "Python executable missing: ${PYTHON_EXECUTABLE}")
endif()
endif()
if(WITH_FFTW3)
set(FFTW3 ${LIBDIR}/fftw3)
set(FFTW3_INCLUDE_DIRS ${FFTW3}/include)
set(FFTW3_LIBRARIES fftw3)
set(FFTW3_LIBPATH ${FFTW3}/lib)
endif()
set(PNG_LIBRARIES png)
set(JPEG_LIBRARIES jpeg)
set(ZLIB /usr)
set(ZLIB_INCLUDE_DIRS "${ZLIB}/include")
set(ZLIB_LIBRARIES z bz2)
set(FREETYPE ${LIBDIR}/freetype)
set(FREETYPE_INCLUDE_DIRS ${FREETYPE}/include ${FREETYPE}/include/freetype2)
set(FREETYPE_LIBPATH ${FREETYPE}/lib)
set(FREETYPE_LIBRARY freetype)
if(WITH_IMAGE_OPENEXR)
set(OPENEXR ${LIBDIR}/openexr)
set(OPENEXR_INCLUDE_DIR ${OPENEXR}/include)
set(OPENEXR_INCLUDE_DIRS ${OPENEXR_INCLUDE_DIR} ${OPENEXR}/include/OpenEXR)
set(OPENEXR_LIBRARIES Iex Half IlmImf Imath IlmThread)
set(OPENEXR_LIBPATH ${OPENEXR}/lib)
endif()
if(WITH_CODEC_FFMPEG)
set(FFMPEG ${LIBDIR}/ffmpeg)
set(FFMPEG_INCLUDE_DIRS ${FFMPEG}/include)
set(FFMPEG_LIBRARIES
avcodec avdevice avformat avutil
mp3lame swscale x264 xvidcore theora theoradec theoraenc vorbis vorbisenc vorbisfile ogg
)
set(FFMPEG_LIBPATH ${FFMPEG}/lib)
endif()
find_library(SYSTEMSTUBS_LIBRARY
NAMES
SystemStubs
PATHS
)
mark_as_advanced(SYSTEMSTUBS_LIBRARY)
if(SYSTEMSTUBS_LIBRARY)
list(APPEND PLATFORM_LINKLIBS SystemStubs)
endif()
set(PLATFORM_CFLAGS "-pipe -funsigned-char")
set(PLATFORM_LINKFLAGS
"-fexceptions -framework CoreServices -framework Foundation -framework IOKit -framework AppKit -framework Cocoa -framework Carbon -framework AudioUnit -framework AudioToolbox -framework CoreAudio"
)
if(WITH_CODEC_QUICKTIME)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -framework QTKit")
if(CMAKE_OSX_ARCHITECTURES MATCHES i386)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -framework QuickTime")
# libSDL still needs 32bit carbon quicktime
endif()
endif()
if(WITH_CXX11)
list(APPEND PLATFORM_LINKLIBS c++)
else()
list(APPEND PLATFORM_LINKLIBS stdc++)
endif()
if(WITH_JACK)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -F/Library/Frameworks -weak_framework jackmp")
endif()
if(WITH_PYTHON_MODULE OR WITH_PYTHON_FRAMEWORK)
# force cmake to link right framework
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}/Python")
endif()
if(WITH_OPENCOLLADA)
set(OPENCOLLADA ${LIBDIR}/opencollada)
set(OPENCOLLADA_INCLUDE_DIRS
${LIBDIR}/opencollada/include/COLLADAStreamWriter
${LIBDIR}/opencollada/include/COLLADABaseUtils
${LIBDIR}/opencollada/include/COLLADAFramework
${LIBDIR}/opencollada/include/COLLADASaxFrameworkLoader
${LIBDIR}/opencollada/include/GeneratedSaxParser
)
set(OPENCOLLADA_LIBPATH ${OPENCOLLADA}/lib)
set(OPENCOLLADA_LIBRARIES
OpenCOLLADASaxFrameworkLoader
-lOpenCOLLADAFramework
-lOpenCOLLADABaseUtils
-lOpenCOLLADAStreamWriter
-lMathMLSolver
-lGeneratedSaxParser
-lxml2 -lbuffer -lftoa
)
# Use UTF functions from collada if LLVM is not enabled
if(NOT WITH_LLVM)
list(APPEND OPENCOLLADA_LIBRARIES -lUTF)
endif()
# pcre is bundled with openCollada
#set(PCRE ${LIBDIR}/pcre)
#set(PCRE_LIBPATH ${PCRE}/lib)
set(PCRE_LIBRARIES pcre)
#libxml2 is used
#set(EXPAT ${LIBDIR}/expat)
#set(EXPAT_LIBPATH ${EXPAT}/lib)
set(EXPAT_LIB)
endif()
if(WITH_SDL)
set(SDL ${LIBDIR}/sdl)
set(SDL_INCLUDE_DIR ${SDL}/include)
set(SDL_LIBRARY SDL2)
set(SDL_LIBPATH ${SDL}/lib)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -lazy_framework ForceFeedback")
endif()
set(PNG "${LIBDIR}/png")
set(PNG_INCLUDE_DIRS "${PNG}/include")
set(PNG_LIBPATH ${PNG}/lib)
set(JPEG "${LIBDIR}/jpeg")
set(JPEG_INCLUDE_DIR "${JPEG}/include")
set(JPEG_LIBPATH ${JPEG}/lib)
if(WITH_IMAGE_TIFF)
set(TIFF ${LIBDIR}/tiff)
set(TIFF_INCLUDE_DIR ${TIFF}/include)
set(TIFF_LIBRARY tiff)
set(TIFF_LIBPATH ${TIFF}/lib)
endif()
if(WITH_BOOST)
set(BOOST ${LIBDIR}/boost)
set(BOOST_INCLUDE_DIR ${BOOST}/include)
set(BOOST_LIBRARIES
boost_date_time-mt
boost_filesystem-mt
boost_regex-mt
boost_system-mt
boost_thread-mt
boost_wave-mt
)
if(WITH_INTERNATIONAL)
list(APPEND BOOST_LIBRARIES boost_locale-mt)
endif()
if(WITH_CYCLES_NETWORK)
list(APPEND BOOST_LIBRARIES boost_serialization-mt)
endif()
if(WITH_OPENVDB)
list(APPEND BOOST_LIBRARIES boost_iostreams-mt)
endif()
set(BOOST_LIBPATH ${BOOST}/lib)
set(BOOST_DEFINITIONS)
endif()
if(WITH_INTERNATIONAL OR WITH_CODEC_FFMPEG)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -liconv") # boost_locale and ffmpeg needs it !
endif()
if(WITH_OPENIMAGEIO)
set(OPENIMAGEIO ${LIBDIR}/openimageio)
set(OPENIMAGEIO_INCLUDE_DIRS ${OPENIMAGEIO}/include)
set(OPENIMAGEIO_LIBRARIES
${OPENIMAGEIO}/lib/libOpenImageIO.a
${PNG_LIBRARIES}
${JPEG_LIBRARIES}
${TIFF_LIBRARY}
${OPENEXR_LIBRARIES}
${ZLIB_LIBRARIES}
)
set(OPENIMAGEIO_LIBPATH
${OPENIMAGEIO}/lib
${JPEG_LIBPATH}
${PNG_LIBPATH}
${TIFF_LIBPATH}
${OPENEXR_LIBPATH}
${ZLIB_LIBPATH}
)
set(OPENIMAGEIO_DEFINITIONS "-DOIIO_STATIC_BUILD")
set(OPENIMAGEIO_IDIFF "${LIBDIR}/openimageio/bin/idiff")
endif()
if(WITH_OPENCOLORIO)
set(OPENCOLORIO ${LIBDIR}/opencolorio)
set(OPENCOLORIO_INCLUDE_DIRS ${OPENCOLORIO}/include)
set(OPENCOLORIO_LIBRARIES OpenColorIO tinyxml yaml-cpp)
set(OPENCOLORIO_LIBPATH ${OPENCOLORIO}/lib)
endif()
if(WITH_OPENVDB)
set(OPENVDB ${LIBDIR}/openvdb)
set(OPENVDB_INCLUDE_DIRS ${OPENVDB}/include)
set(TBB_INCLUDE_DIRS ${LIBDIR}/tbb/include)
set(TBB_LIBRARIES ${LIBDIR}/tbb/lib/libtbb.a)
set(OPENVDB_LIBRARIES openvdb blosc ${TBB_LIBRARIES})
set(OPENVDB_LIBPATH ${LIBDIR}/openvdb/lib)
set(OPENVDB_DEFINITIONS)
endif()
if(WITH_LLVM)
set(LLVM_ROOT_DIR ${LIBDIR}/llvm CACHE PATH "Path to the LLVM installation")
set(LLVM_VERSION "3.4" CACHE STRING "Version of LLVM to use")
if(EXISTS "${LLVM_ROOT_DIR}/bin/llvm-config")
set(LLVM_CONFIG "${LLVM_ROOT_DIR}/bin/llvm-config")
else()
set(LLVM_CONFIG llvm-config)
endif()
execute_process(COMMAND ${LLVM_CONFIG} --version
OUTPUT_VARIABLE LLVM_VERSION
OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND ${LLVM_CONFIG} --prefix
OUTPUT_VARIABLE LLVM_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND ${LLVM_CONFIG} --libdir
OUTPUT_VARIABLE LLVM_LIBPATH
OUTPUT_STRIP_TRAILING_WHITESPACE)
find_library(LLVM_LIBRARY
NAMES LLVMAnalysis # first of a whole bunch of libs to get
PATHS ${LLVM_LIBPATH})
if(LLVM_LIBRARY AND LLVM_ROOT_DIR AND LLVM_LIBPATH)
if(LLVM_STATIC)
# if static LLVM libraries were requested, use llvm-config to generate
# the list of what libraries we need, and substitute that in the right
# way for LLVM_LIBRARY.
execute_process(COMMAND ${LLVM_CONFIG} --libfiles
OUTPUT_VARIABLE LLVM_LIBRARY
OUTPUT_STRIP_TRAILING_WHITESPACE)
string(REPLACE " " ";" LLVM_LIBRARY ${LLVM_LIBRARY})
else()
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -lLLVM-3.4")
endif()
else()
message(FATAL_ERROR "LLVM not found.")
endif()
endif()
if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl CACHE PATH "Path to OpenShadingLanguage installation")
find_library(OSL_LIB_EXEC NAMES oslexec PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_COMP NAMES oslcomp PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_QUERY NAMES oslquery PATHS ${CYCLES_OSL}/lib)
# WARNING! depends on correct order of OSL libs linking
list(APPEND OSL_LIBRARIES ${OSL_LIB_COMP} -force_load ${OSL_LIB_EXEC} ${OSL_LIB_QUERY})
find_path(OSL_INCLUDE_DIR OSL/oslclosure.h PATHS ${CYCLES_OSL}/include)
find_program(OSL_COMPILER NAMES oslc PATHS ${CYCLES_OSL}/bin)
if(OSL_INCLUDE_DIR AND OSL_LIBRARIES AND OSL_COMPILER)
set(OSL_FOUND TRUE)
else()
message(STATUS "OSL not found")
set(WITH_CYCLES_OSL OFF)
endif()
endif()
if(WITH_OPENMP)
execute_process(COMMAND ${CMAKE_C_COMPILER} --version OUTPUT_VARIABLE COMPILER_VENDOR)
string(SUBSTRING "${COMPILER_VENDOR}" 0 5 VENDOR_NAME) # truncate output
if(${VENDOR_NAME} MATCHES "Apple") # Apple does not support OpenMP reliable with gcc and not with clang
set(WITH_OPENMP OFF)
else() # vanilla gcc or clang_omp support OpenMP
message(STATUS "Using special OpenMP enabled compiler !") # letting find_package(OpenMP) module work for gcc
if(CMAKE_C_COMPILER_ID MATCHES "Clang") # clang-omp in darwin libs
set(OPENMP_FOUND ON)
set(OpenMP_C_FLAGS "-fopenmp" CACHE STRING "C compiler flags for OpenMP parallization" FORCE)
set(OpenMP_CXX_FLAGS "-fopenmp" CACHE STRING "C++ compiler flags for OpenMP parallization" FORCE)
include_directories(${LIBDIR}/openmp/include)
link_directories(${LIBDIR}/openmp/lib)
# This is a workaround for our helperbinaries ( datatoc, masgfmt, ... ),
# They are linked also to omp lib, so we need it in builddir for runtime exexcution,
# TODO: remove all unneeded dependencies from these
# for intermediate binaries, in respect to lib ID
execute_process(
COMMAND ditto -arch ${CMAKE_OSX_ARCHITECTURES}
${LIBDIR}/openmp/lib/libiomp5.dylib
${CMAKE_BINARY_DIR}/Resources/lib/libiomp5.dylib)
endif()
endif()
endif()
set(EXETYPE MACOSX_BUNDLE)
set(CMAKE_C_FLAGS_DEBUG "-fno-strict-aliasing -g")
set(CMAKE_CXX_FLAGS_DEBUG "-fno-strict-aliasing -g")
if(CMAKE_OSX_ARCHITECTURES MATCHES "x86_64" OR CMAKE_OSX_ARCHITECTURES MATCHES "i386")
set(CMAKE_CXX_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3")
set(CMAKE_C_FLAGS_RELEASE "-O2 -mdynamic-no-pic -msse -msse2 -msse3 -mssse3")
if(NOT CMAKE_C_COMPILER_ID MATCHES "Clang")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ftree-vectorize -fvariable-expansion-in-unroller")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ftree-vectorize -fvariable-expansion-in-unroller")
endif()
else()
set(CMAKE_C_FLAGS_RELEASE "-mdynamic-no-pic -fno-strict-aliasing")
set(CMAKE_CXX_FLAGS_RELEASE "-mdynamic-no-pic -fno-strict-aliasing")
endif()
if(${XCODE_VERSION} VERSION_EQUAL 5 OR ${XCODE_VERSION} VERSION_GREATER 5)
# Xcode 5 is always using CLANG, which has too low template depth of 128 for libmv
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ftemplate-depth=1024")
endif()
# Get rid of eventually clashes, we export some symbols explicite as local
set(PLATFORM_LINKFLAGS
"${PLATFORM_LINKFLAGS} -Xlinker -unexported_symbols_list -Xlinker ${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map"
)
if(WITH_CXX11)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -stdlib=libc++")
endif()
# Suppress ranlib "has no symbols" warnings (workaround for T48250)
set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> Scr <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_C_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")
set(CMAKE_CXX_ARCHIVE_FINISH "<CMAKE_RANLIB> -no_warning_for_no_symbols -c <TARGET>")

View File

@@ -1,426 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for any *nix system including Linux and Unix.
macro(find_package_wrapper)
if(WITH_STATIC_LIBS)
find_package_static(${ARGV})
else()
find_package(${ARGV})
endif()
endmacro()
find_package_wrapper(JPEG REQUIRED)
find_package_wrapper(PNG REQUIRED)
find_package_wrapper(ZLIB REQUIRED)
find_package_wrapper(Freetype REQUIRED)
if(WITH_LZO AND WITH_SYSTEM_LZO)
find_package_wrapper(LZO)
if(NOT LZO_FOUND)
message(FATAL_ERROR "Failed finding system LZO version!")
endif()
endif()
if(WITH_SYSTEM_EIGEN3)
find_package_wrapper(Eigen3)
if(NOT EIGEN3_FOUND)
message(FATAL_ERROR "Failed finding system Eigen3 version!")
endif()
endif()
# else values are set below for all platforms
if(WITH_PYTHON)
# No way to set py35, remove for now.
# find_package(PythonLibs)
# Use our own instead, since without py is such a rare case,
# require this package
# XXX Linking errors with debian static python :/
# find_package_wrapper(PythonLibsUnix REQUIRED)
find_package(PythonLibsUnix REQUIRED)
endif()
if(WITH_IMAGE_OPENEXR)
find_package_wrapper(OpenEXR) # our own module
if(NOT OPENEXR_FOUND)
set(WITH_IMAGE_OPENEXR OFF)
endif()
endif()
if(WITH_IMAGE_OPENJPEG)
find_package_wrapper(OpenJPEG)
if(NOT OPENJPEG_FOUND)
set(WITH_IMAGE_OPENJPEG OFF)
endif()
endif()
if(WITH_IMAGE_TIFF)
# XXX Linking errors with debian static tiff :/
# find_package_wrapper(TIFF)
find_package(TIFF)
if(NOT TIFF_FOUND)
set(WITH_IMAGE_TIFF OFF)
endif()
endif()
# Audio IO
if(WITH_SYSTEM_AUDASPACE)
find_package_wrapper(Audaspace)
if(NOT AUDASPACE_FOUND OR NOT AUDASPACE_C_FOUND)
message(FATAL_ERROR "Audaspace external library not found!")
endif()
endif()
if(WITH_OPENAL)
find_package_wrapper(OpenAL)
if(NOT OPENAL_FOUND)
set(WITH_OPENAL OFF)
endif()
endif()
if(WITH_SDL)
if(WITH_SDL_DYNLOAD)
set(SDL_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/extern/sdlew/include/SDL2")
set(SDL_LIBRARY)
else()
find_package_wrapper(SDL2)
if(SDL2_FOUND)
# Use same names for both versions of SDL until we move to 2.x.
set(SDL_INCLUDE_DIR "${SDL2_INCLUDE_DIR}")
set(SDL_LIBRARY "${SDL2_LIBRARY}")
set(SDL_FOUND "${SDL2_FOUND}")
else()
find_package_wrapper(SDL)
endif()
mark_as_advanced(
SDL_INCLUDE_DIR
SDL_LIBRARY
)
# unset(SDLMAIN_LIBRARY CACHE)
if(NOT SDL_FOUND)
set(WITH_SDL OFF)
endif()
endif()
endif()
if(WITH_JACK)
find_package_wrapper(Jack)
if(NOT JACK_FOUND)
set(WITH_JACK OFF)
endif()
endif()
# Codecs
if(WITH_CODEC_SNDFILE)
find_package_wrapper(SndFile)
if(NOT SNDFILE_FOUND)
set(WITH_CODEC_SNDFILE OFF)
endif()
endif()
if(WITH_CODEC_FFMPEG)
set(FFMPEG /usr CACHE PATH "FFMPEG Directory")
set(FFMPEG_LIBRARIES avformat avcodec avutil avdevice swscale CACHE STRING "FFMPEG Libraries")
mark_as_advanced(FFMPEG)
# lame, but until we have proper find module for ffmpeg
set(FFMPEG_INCLUDE_DIRS ${FFMPEG}/include)
if(EXISTS "${FFMPEG}/include/ffmpeg/")
list(APPEND FFMPEG_INCLUDE_DIRS "${FFMPEG}/include/ffmpeg")
endif()
# end lameness
mark_as_advanced(FFMPEG_LIBRARIES)
set(FFMPEG_LIBPATH ${FFMPEG}/lib)
endif()
if(WITH_FFTW3)
find_package_wrapper(Fftw3)
if(NOT FFTW3_FOUND)
set(WITH_FFTW3 OFF)
endif()
endif()
if(WITH_OPENCOLLADA)
find_package_wrapper(OpenCOLLADA)
if(OPENCOLLADA_FOUND)
find_package_wrapper(XML2)
find_package_wrapper(PCRE)
else()
set(WITH_OPENCOLLADA OFF)
endif()
endif()
if(WITH_MEM_JEMALLOC)
find_package_wrapper(JeMalloc)
if(NOT JEMALLOC_FOUND)
set(WITH_MEM_JEMALLOC OFF)
endif()
endif()
if(WITH_INPUT_NDOF)
find_package_wrapper(Spacenav)
if(SPACENAV_FOUND)
# use generic names within Blender's build system.
set(NDOF_INCLUDE_DIRS ${SPACENAV_INCLUDE_DIRS})
set(NDOF_LIBRARIES ${SPACENAV_LIBRARIES})
else()
set(WITH_INPUT_NDOF OFF)
endif()
endif()
if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl CACHE PATH "Path to OpenShadingLanguage installation")
if(NOT OSL_ROOT)
set(OSL_ROOT ${CYCLES_OSL})
endif()
find_package_wrapper(OpenShadingLanguage)
if(OSL_FOUND)
if(${OSL_LIBRARY_VERSION_MAJOR} EQUAL "1" AND ${OSL_LIBRARY_VERSION_MINOR} LESS "6")
# Note: --whole-archive is needed to force loading of all symbols in liboslexec,
# otherwise LLVM is missing the osl_allocate_closure_component function
set(OSL_LIBRARIES
${OSL_OSLCOMP_LIBRARY}
-Wl,--whole-archive ${OSL_OSLEXEC_LIBRARY}
-Wl,--no-whole-archive ${OSL_OSLQUERY_LIBRARY}
)
endif()
else()
message(STATUS "OSL not found, disabling it from Cycles")
set(WITH_CYCLES_OSL OFF)
endif()
endif()
if(WITH_OPENVDB)
find_package_wrapper(OpenVDB)
find_package_wrapper(TBB)
if(NOT OPENVDB_FOUND OR NOT TBB_FOUND)
set(WITH_OPENVDB OFF)
set(WITH_OPENVDB_BLOSC OFF)
message(STATUS "OpenVDB not found, disabling it")
endif()
endif()
if(WITH_ALEMBIC)
find_package_wrapper(Alembic)
if(WITH_ALEMBIC_HDF5)
set(HDF5_ROOT_DIR ${LIBDIR}/hdf5)
find_package_wrapper(HDF5)
endif()
if(NOT ALEMBIC_FOUND OR (WITH_ALEMBIC_HDF5 AND NOT HDF5_FOUND))
set(WITH_ALEMBIC OFF)
set(WITH_ALEMBIC_HDF5 OFF)
endif()
endif()
if(WITH_BOOST)
# used in build instructions to override include and library variables
if(NOT BOOST_CUSTOM)
if(WITH_STATIC_LIBS)
set(Boost_USE_STATIC_LIBS ON)
endif()
set(Boost_USE_MULTITHREADED ON)
set(__boost_packages filesystem regex thread date_time)
if(WITH_CYCLES_OSL)
if(NOT (${OSL_LIBRARY_VERSION_MAJOR} EQUAL "1" AND ${OSL_LIBRARY_VERSION_MINOR} LESS "6"))
list(APPEND __boost_packages wave)
else()
endif()
endif()
if(WITH_INTERNATIONAL)
list(APPEND __boost_packages locale)
endif()
if(WITH_CYCLES_NETWORK)
list(APPEND __boost_packages serialization)
endif()
if(WITH_OPENVDB)
list(APPEND __boost_packages iostreams)
endif()
list(APPEND __boost_packages system)
find_package(Boost 1.48 COMPONENTS ${__boost_packages})
if(NOT Boost_FOUND)
# try to find non-multithreaded if -mt not found, this flag
# doesn't matter for us, it has nothing to do with thread
# safety, but keep it to not disturb build setups
set(Boost_USE_MULTITHREADED OFF)
find_package(Boost 1.48 COMPONENTS ${__boost_packages})
endif()
unset(__boost_packages)
if(Boost_USE_STATIC_LIBS AND WITH_BOOST_ICU)
find_package(IcuLinux)
endif()
mark_as_advanced(Boost_DIR) # why doesn't boost do this?
endif()
set(BOOST_INCLUDE_DIR ${Boost_INCLUDE_DIRS})
set(BOOST_LIBRARIES ${Boost_LIBRARIES})
set(BOOST_LIBPATH ${Boost_LIBRARY_DIRS})
set(BOOST_DEFINITIONS "-DBOOST_ALL_NO_LIB")
endif()
if(WITH_OPENIMAGEIO)
find_package_wrapper(OpenImageIO)
if(NOT OPENIMAGEIO_PUGIXML_FOUND AND WITH_CYCLES_STANDALONE)
find_package_wrapper(PugiXML)
else()
set(PUGIXML_INCLUDE_DIR "${OPENIMAGEIO_INCLUDE_DIR}/OpenImageIO")
set(PUGIXML_LIBRARIES "")
endif()
set(OPENIMAGEIO_LIBRARIES
${OPENIMAGEIO_LIBRARIES}
${PNG_LIBRARIES}
${JPEG_LIBRARIES}
${ZLIB_LIBRARIES}
${BOOST_LIBRARIES}
)
set(OPENIMAGEIO_LIBPATH) # TODO, remove and reference the absolute path everywhere
set(OPENIMAGEIO_DEFINITIONS "")
if(WITH_IMAGE_TIFF)
list(APPEND OPENIMAGEIO_LIBRARIES "${TIFF_LIBRARY}")
endif()
if(WITH_IMAGE_OPENEXR)
list(APPEND OPENIMAGEIO_LIBRARIES "${OPENEXR_LIBRARIES}")
endif()
if(NOT OPENIMAGEIO_FOUND)
set(WITH_OPENIMAGEIO OFF)
message(STATUS "OpenImageIO not found, disabling WITH_CYCLES")
endif()
endif()
if(WITH_OPENCOLORIO)
find_package_wrapper(OpenColorIO)
set(OPENCOLORIO_LIBRARIES ${OPENCOLORIO_LIBRARIES})
set(OPENCOLORIO_LIBPATH) # TODO, remove and reference the absolute path everywhere
set(OPENCOLORIO_DEFINITIONS)
if(NOT OPENCOLORIO_FOUND)
set(WITH_OPENCOLORIO OFF)
message(STATUS "OpenColorIO not found")
endif()
endif()
if(WITH_LLVM)
find_package_wrapper(LLVM)
if(NOT LLVM_FOUND)
set(WITH_LLVM OFF)
message(STATUS "LLVM not found")
endif()
endif()
if(WITH_LLVM OR WITH_SDL_DYNLOAD)
# Fix for conflict with Mesa llvmpipe
set(PLATFORM_LINKFLAGS
"${PLATFORM_LINKFLAGS} -Wl,--version-script='${CMAKE_SOURCE_DIR}/source/creator/blender.map'"
)
endif()
if(WITH_OPENSUBDIV)
find_package_wrapper(OpenSubdiv)
set(OPENSUBDIV_LIBRARIES ${OPENSUBDIV_LIBRARIES})
set(OPENSUBDIV_LIBPATH) # TODO, remove and reference the absolute path everywhere
if(NOT OPENSUBDIV_FOUND)
set(WITH_OPENSUBDIV OFF)
message(STATUS "OpenSubdiv not found")
endif()
endif()
# OpenSUSE needs lutil, ArchLinux does not; keep it for now, can be avoided by using --as-needed
list(APPEND PLATFORM_LINKLIBS -lutil -lc -lm)
find_package(Threads REQUIRED)
list(APPEND PLATFORM_LINKLIBS ${CMAKE_THREAD_LIBS_INIT})
# used by other platforms
set(PTHREADS_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
if(CMAKE_DL_LIBS)
list(APPEND PLATFORM_LINKLIBS ${CMAKE_DL_LIBS})
endif()
if(CMAKE_SYSTEM_NAME MATCHES "Linux")
if(NOT WITH_PYTHON_MODULE)
# binreloc is linux only
set(BINRELOC_INCLUDE_DIRS ${CMAKE_SOURCE_DIR}/extern/binreloc/include)
set(WITH_BINRELOC ON)
endif()
endif()
# LFS on glibc; all compilers should use these definitions
add_definitions(-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE)
# GNU Compiler
if(CMAKE_COMPILER_IS_GNUCC)
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
# use ld.gold linker if available, could make optional
execute_process(
COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version
ERROR_QUIET OUTPUT_VARIABLE LD_VERSION)
if("${LD_VERSION}" MATCHES "GNU gold")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fuse-ld=gold")
else()
message(STATUS "GNU gold linker isn't available, using the default system linker.")
endif()
unset(LD_VERSION)
# Clang is the same as GCC for now.
elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
# Solaris CC
elseif(CMAKE_C_COMPILER_ID MATCHES "SunPro")
set(PLATFORM_CFLAGS "-pipe -features=extensions -fPIC -D__FUNCTION__=__func__")
# Intel C++ Compiler
elseif(CMAKE_C_COMPILER_ID MATCHES "Intel")
# think these next two are broken
find_program(XIAR xiar)
if(XIAR)
set(CMAKE_AR "${XIAR}")
endif()
mark_as_advanced(XIAR)
find_program(XILD xild)
if(XILD)
set(CMAKE_LINKER "${XILD}")
endif()
mark_as_advanced(XILD)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fp-model precise -prec_div -parallel")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fp-model precise -prec_div -parallel")
# set(PLATFORM_CFLAGS "${PLATFORM_CFLAGS} -diag-enable sc3")
set(PLATFORM_CFLAGS "-pipe -fPIC -funsigned-char -fno-strict-aliasing")
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -static-intel")
endif()

View File

@@ -1,87 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Windows.
add_definitions(-DWIN32)
if(MSVC)
include(platform_win32_msvc)
elseif(CMAKE_COMPILER_IS_GNUCC)
include(platform_win32_mingw)
endif()
# Things common to both mingw and MSVC should go here
set(WINTAB_INC ${LIBDIR}/wintab/include)
if(WITH_OPENAL)
set(OPENAL ${LIBDIR}/openal)
set(OPENALDIR ${LIBDIR}/openal)
set(OPENAL_INCLUDE_DIR ${OPENAL}/include)
if(MSVC)
set(OPENAL_LIBRARY openal32)
else()
set(OPENAL_LIBRARY wrap_oal)
endif()
set(OPENAL_LIBPATH ${OPENAL}/lib)
endif()
if(WITH_CODEC_SNDFILE)
set(SNDFILE ${LIBDIR}/sndfile)
set(SNDFILE_INCLUDE_DIRS ${SNDFILE}/include)
set(SNDFILE_LIBRARIES libsndfile-1)
set(SNDFILE_LIBPATH ${SNDFILE}/lib) # TODO, deprecate
endif()
if(WITH_RAYOPTIMIZATION AND SUPPORT_SSE_BUILD)
add_definitions(-D__SSE__ -D__MMX__)
endif()
if(WITH_CYCLES_OSL)
set(CYCLES_OSL ${LIBDIR}/osl CACHE PATH "Path to OpenShadingLanguage installation")
find_library(OSL_LIB_EXEC NAMES oslexec PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_COMP NAMES oslcomp PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_QUERY NAMES oslquery PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_EXEC_DEBUG NAMES oslexec_d PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_COMP_DEBUG NAMES oslcomp_d PATHS ${CYCLES_OSL}/lib)
find_library(OSL_LIB_QUERY_DEBUG NAMES oslquery_d PATHS ${CYCLES_OSL}/lib)
list(APPEND OSL_LIBRARIES
optimized ${OSL_LIB_COMP}
optimized ${OSL_LIB_EXEC}
optimized ${OSL_LIB_QUERY}
debug ${OSL_LIB_EXEC_DEBUG}
debug ${OSL_LIB_COMP_DEBUG}
debug ${OSL_LIB_QUERY_DEBUG}
)
find_path(OSL_INCLUDE_DIR OSL/oslclosure.h PATHS ${CYCLES_OSL}/include)
find_program(OSL_COMPILER NAMES oslc PATHS ${CYCLES_OSL}/bin)
if(OSL_INCLUDE_DIR AND OSL_LIBRARIES AND OSL_COMPILER)
set(OSL_FOUND TRUE)
else()
message(STATUS "OSL not found")
set(WITH_CYCLES_OSL OFF)
endif()
endif()

View File

@@ -1,302 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Windows when compiling with MinGW.
# keep GCC specific stuff here
include(CheckCSourceCompiles)
# Setup 32bit and 64bit Windows systems
CHECK_C_SOURCE_COMPILES("
#ifndef __MINGW64__
#error
#endif
int main(void) { return 0; }
"
WITH_MINGW64
)
if(NOT DEFINED LIBDIR)
if(WITH_MINGW64)
message(STATUS "Compiling for 64 bit with MinGW-w64.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/mingw64)
else()
message(STATUS "Compiling for 32 bit with MinGW-w32.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/mingw32)
if(WITH_RAYOPTIMIZATION)
message(WARNING "MinGW-w32 is known to be unstable with 'WITH_RAYOPTIMIZATION' option enabled.")
endif()
endif()
else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif()
if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "Windows requires pre-compiled libs at: '${LIBDIR}'")
endif()
list(APPEND PLATFORM_LINKLIBS
-lshell32 -lshfolder -lgdi32 -lmsvcrt -lwinmm -lmingw32 -lm -lws2_32
-lz -lstdc++ -lole32 -luuid -lwsock32 -lpsapi -ldbghelp
)
if(WITH_INPUT_IME)
list(APPEND PLATFORM_LINKLIBS -limm32)
endif()
set(PLATFORM_CFLAGS "-pipe -funsigned-char -fno-strict-aliasing")
if(WITH_MINGW64)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fpermissive")
list(APPEND PLATFORM_LINKLIBS -lpthread)
add_definitions(-DFREE_WINDOWS64 -DMS_WIN64)
endif()
add_definitions(-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE)
add_definitions(-DFREE_WINDOWS)
set(PNG "${LIBDIR}/png")
set(PNG_INCLUDE_DIRS "${PNG}/include")
set(PNG_LIBPATH ${PNG}/lib) # not cmake defined
if(WITH_MINGW64)
set(JPEG_LIBRARIES jpeg)
else()
set(JPEG_LIBRARIES libjpeg)
endif()
set(PNG_LIBRARIES png)
set(ZLIB ${LIBDIR}/zlib)
set(ZLIB_INCLUDE_DIRS ${ZLIB}/include)
set(ZLIB_LIBPATH ${ZLIB}/lib)
set(ZLIB_LIBRARIES z)
set(JPEG "${LIBDIR}/jpeg")
set(JPEG_INCLUDE_DIR "${JPEG}/include")
set(JPEG_LIBPATH ${JPEG}/lib) # not cmake defined
# MinGW-w64 comes with its own pthread library
if(NOT WITH_MINGW64)
set(PTHREADS ${LIBDIR}/pthreads)
#set(PTHREADS_INCLUDE_DIRS ${PTHREADS}/include)
set(PTHREADS_LIBPATH ${PTHREADS}/lib)
set(PTHREADS_LIBRARIES pthreadGC2)
endif()
set(FREETYPE ${LIBDIR}/freetype)
set(FREETYPE_INCLUDE_DIRS ${FREETYPE}/include ${FREETYPE}/include/freetype2)
set(FREETYPE_LIBPATH ${FREETYPE}/lib)
set(FREETYPE_LIBRARY freetype)
if(WITH_FFTW3)
set(FFTW3 ${LIBDIR}/fftw3)
set(FFTW3_LIBRARIES fftw3)
set(FFTW3_INCLUDE_DIRS ${FFTW3}/include)
set(FFTW3_LIBPATH ${FFTW3}/lib)
endif()
if(WITH_OPENCOLLADA)
set(OPENCOLLADA ${LIBDIR}/opencollada)
set(OPENCOLLADA_INCLUDE_DIRS
${OPENCOLLADA}/include/opencollada/COLLADAStreamWriter
${OPENCOLLADA}/include/opencollada/COLLADABaseUtils
${OPENCOLLADA}/include/opencollada/COLLADAFramework
${OPENCOLLADA}/include/opencollada/COLLADASaxFrameworkLoader
${OPENCOLLADA}/include/opencollada/GeneratedSaxParser
)
set(OPENCOLLADA_LIBPATH ${OPENCOLLADA}/lib/opencollada)
set(OPENCOLLADA_LIBRARIES
OpenCOLLADAStreamWriter
OpenCOLLADASaxFrameworkLoader
OpenCOLLADAFramework
OpenCOLLADABaseUtils
GeneratedSaxParser
UTF MathMLSolver buffer ftoa xml
)
set(PCRE_LIBRARIES pcre)
endif()
if(WITH_CODEC_FFMPEG)
set(FFMPEG ${LIBDIR}/ffmpeg)
set(FFMPEG_INCLUDE_DIRS ${FFMPEG}/include)
if(WITH_MINGW64)
set(FFMPEG_LIBRARIES avcodec.dll avformat.dll avdevice.dll avutil.dll swscale.dll swresample.dll)
else()
set(FFMPEG_LIBRARIES avcodec-55 avformat-55 avdevice-55 avutil-52 swscale-2)
endif()
set(FFMPEG_LIBPATH ${FFMPEG}/lib)
endif()
if(WITH_IMAGE_OPENEXR)
set(OPENEXR ${LIBDIR}/openexr)
set(OPENEXR_INCLUDE_DIR ${OPENEXR}/include)
set(OPENEXR_INCLUDE_DIRS ${OPENEXR}/include/OpenEXR)
set(OPENEXR_LIBRARIES Half IlmImf Imath IlmThread Iex)
set(OPENEXR_LIBPATH ${OPENEXR}/lib)
endif()
if(WITH_IMAGE_TIFF)
set(TIFF ${LIBDIR}/tiff)
set(TIFF_LIBRARY tiff)
set(TIFF_INCLUDE_DIR ${TIFF}/include)
set(TIFF_LIBPATH ${TIFF}/lib)
endif()
if(WITH_JACK)
set(JACK ${LIBDIR}/jack)
set(JACK_INCLUDE_DIRS ${JACK}/include/jack ${JACK}/include)
set(JACK_LIBRARIES jack)
set(JACK_LIBPATH ${JACK}/lib)
# TODO, gives linking errors, force off
set(WITH_JACK OFF)
endif()
if(WITH_PYTHON)
# normally cached, but not here since we include them with Blender
set(PYTHON_VERSION 3.5) # CACHE STRING)
string(REPLACE "." "" _PYTHON_VERSION_NO_DOTS ${PYTHON_VERSION})
set(PYTHON_INCLUDE_DIR "${LIBDIR}/python/include/python${PYTHON_VERSION}") # CACHE PATH)
set(PYTHON_LIBRARY "${LIBDIR}/python/lib/python${_PYTHON_VERSION_NO_DOTS}mw.lib") # CACHE FILEPATH)
unset(_PYTHON_VERSION_NO_DOTS)
# uncached vars
set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}")
endif()
if(WITH_BOOST)
set(BOOST ${LIBDIR}/boost)
set(BOOST_INCLUDE_DIR ${BOOST}/include)
if(WITH_MINGW64)
set(BOOST_POSTFIX "mgw47-mt-s-1_49")
set(BOOST_DEBUG_POSTFIX "mgw47-mt-sd-1_49")
else()
set(BOOST_POSTFIX "mgw46-mt-s-1_49")
set(BOOST_DEBUG_POSTFIX "mgw46-mt-sd-1_49")
endif()
set(BOOST_LIBRARIES
optimized boost_date_time-${BOOST_POSTFIX} boost_filesystem-${BOOST_POSTFIX}
boost_regex-${BOOST_POSTFIX}
boost_system-${BOOST_POSTFIX} boost_thread-${BOOST_POSTFIX}
debug boost_date_time-${BOOST_DEBUG_POSTFIX} boost_filesystem-${BOOST_DEBUG_POSTFIX}
boost_regex-${BOOST_DEBUG_POSTFIX}
boost_system-${BOOST_DEBUG_POSTFIX} boost_thread-${BOOST_DEBUG_POSTFIX})
if(WITH_INTERNATIONAL)
set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
optimized boost_locale-${BOOST_POSTFIX}
debug boost_locale-${BOOST_DEBUG_POSTFIX}
)
endif()
if(WITH_CYCLES_OSL)
set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
optimized boost_wave-${BOOST_POSTFIX}
debug boost_wave-${BOOST_DEBUG_POSTFIX}
)
endif()
set(BOOST_LIBPATH ${BOOST}/lib)
set(BOOST_DEFINITIONS "-DBOOST_ALL_NO_LIB -DBOOST_THREAD_USE_LIB ")
endif()
if(WITH_OPENIMAGEIO)
set(OPENIMAGEIO ${LIBDIR}/openimageio)
set(OPENIMAGEIO_INCLUDE_DIRS ${OPENIMAGEIO}/include)
set(OPENIMAGEIO_LIBRARIES OpenImageIO)
set(OPENIMAGEIO_LIBPATH ${OPENIMAGEIO}/lib)
set(OPENIMAGEIO_DEFINITIONS "")
set(OPENIMAGEIO_IDIFF "${OPENIMAGEIO}/bin/idiff.exe")
endif()
if(WITH_LLVM)
set(LLVM_ROOT_DIR ${LIBDIR}/llvm CACHE PATH "Path to the LLVM installation")
set(LLVM_LIBPATH ${LLVM_ROOT_DIR}/lib)
# Explicitly set llvm lib order.
#---- WARNING ON GCC ORDER OF LIBS IS IMPORTANT, DO NOT CHANGE! ---------
set(LLVM_LIBRARY LLVMSelectionDAG LLVMCodeGen LLVMScalarOpts LLVMAnalysis LLVMArchive
LLVMAsmParser LLVMAsmPrinter
LLVMBitReader LLVMBitWriter
LLVMDebugInfo LLVMExecutionEngine
LLVMInstCombine LLVMInstrumentation
LLVMInterpreter LLVMJIT
LLVMLinker LLVMMC
LLVMMCDisassembler LLVMMCJIT
LLVMMCParser LLVMObject
LLVMRuntimeDyld
LLVMSupport
LLVMTableGen LLVMTarget
LLVMTransformUtils LLVMVectorize
LLVMX86AsmParser LLVMX86AsmPrinter
LLVMX86CodeGen LLVMX86Desc
LLVMX86Disassembler LLVMX86Info
LLVMX86Utils LLVMipa
LLVMipo LLVMCore)
# imagehelp is needed by LLVM 3.1 on MinGW, check lib\Support\Windows\Signals.inc
list(APPEND PLATFORM_LINKLIBS -limagehlp)
endif()
if(WITH_OPENCOLORIO)
set(OPENCOLORIO ${LIBDIR}/opencolorio)
set(OPENCOLORIO_INCLUDE_DIRS ${OPENCOLORIO}/include)
set(OPENCOLORIO_LIBRARIES OpenColorIO)
set(OPENCOLORIO_LIBPATH ${OPENCOLORIO}/lib)
set(OPENCOLORIO_DEFINITIONS)
endif()
if(WITH_SDL)
set(SDL ${LIBDIR}/sdl)
set(SDL_INCLUDE_DIR ${SDL}/include)
set(SDL_LIBRARY SDL)
set(SDL_LIBPATH ${SDL}/lib)
endif()
if(WITH_OPENVDB)
set(OPENVDB ${LIBDIR}/openvdb)
set(OPENVDB_INCLUDE_DIRS ${OPENVDB}/include)
set(OPENVDB_LIBRARIES openvdb ${TBB_LIBRARIES})
set(OPENVDB_LIBPATH ${LIBDIR}/openvdb/lib)
set(OPENVDB_DEFINITIONS)
endif()
if(WITH_ALEMBIC)
# TODO(sergey): Until someone drops by and compiles libraries for
# MinGW, we allow users to compile their own Alembic library and use
# it via find_package().
#
# Once precompiled libraries are there we'll use hardcoded locations.
find_package_wrapper(Alembic)
if(WITH_ALEMBIC_HDF5)
set(HDF5_ROOT_DIR ${LIBDIR}/hdf5)
find_package_wrapper(HDF5)
endif()
if(NOT ALEMBIC_FOUND OR (WITH_ALEMBIC_HDF5 AND NOT HDF5_FOUND))
set(WITH_ALEMBIC OFF)
set(WITH_ALEMBIC_HDF5 OFF)
endif()
endif()
set(PLATFORM_LINKFLAGS "-Xlinker --stack=2097152")
## DISABLE - causes linking errors
## for re-distribution, so users don't need MinGW installed
# set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -static-libgcc -static-libstdc++")

View File

@@ -1,485 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Windows when compiling with MSVC.
macro(warn_hardcoded_paths package_name
)
if(WITH_WINDOWS_FIND_MODULES)
message(WARNING "Using HARDCODED ${package_name} locations")
endif(WITH_WINDOWS_FIND_MODULES)
endmacro()
macro(windows_find_package package_name
)
if(WITH_WINDOWS_FIND_MODULES)
find_package( ${package_name})
endif(WITH_WINDOWS_FIND_MODULES)
endmacro()
add_definitions(-DWIN32)
# Minimum MSVC Version
if(MSVC_VERSION EQUAL 1800)
set(_min_ver "18.0.31101")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2013 (Update 4, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
endif()
if(MSVC_VERSION EQUAL 1900)
set(_min_ver "19.0.24210")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${_min_ver})
message(FATAL_ERROR
"Visual Studio 2015 (Update 3, ${_min_ver}) required, "
"found (${CMAKE_CXX_COMPILER_VERSION})")
endif()
endif()
unset(_min_ver)
# needed for some MSVC installations
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /SAFESEH:NO")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /SAFESEH:NO")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /SAFESEH:NO")
list(APPEND PLATFORM_LINKLIBS
ws2_32 vfw32 winmm kernel32 user32 gdi32 comdlg32
advapi32 shfolder shell32 ole32 oleaut32 uuid psapi Dbghelp
)
if(WITH_INPUT_IME)
list(APPEND PLATFORM_LINKLIBS imm32)
endif()
add_definitions(
-D_CRT_NONSTDC_NO_DEPRECATE
-D_CRT_SECURE_NO_DEPRECATE
-D_SCL_SECURE_NO_DEPRECATE
-D_CONSOLE
-D_LIB
)
# MSVC11 needs _ALLOW_KEYWORD_MACROS to build
add_definitions(-D_ALLOW_KEYWORD_MACROS)
# We want to support Vista level ABI
add_definitions(-D_WIN32_WINNT=0x600)
# Make cmake find the msvc redistributables
set(CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP TRUE)
include(InstallRequiredSystemLibraries)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /nologo /J /Gd /MP /EHsc")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /nologo /J /Gd /MP")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /MTd")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MT")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /MT")
set(CMAKE_CXX_FLAGS_MINSIZEREL "${CMAKE_CXX_FLAGS_MINSIZEREL} /MT")
set(CMAKE_C_FLAGS_MINSIZEREL "${CMAKE_C_FLAGS_MINSIZEREL} /MT")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} /MT")
set(CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO} /MT")
set(PLATFORM_LINKFLAGS "/SUBSYSTEM:CONSOLE /STACK:2097152 /INCREMENTAL:NO ")
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /NODEFAULTLIB:msvcrt.lib /NODEFAULTLIB:msvcmrt.lib /NODEFAULTLIB:msvcurt.lib /NODEFAULTLIB:msvcrtd.lib ")
# Ignore linker warnings that are meaningless for us.
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} /ignore:4049 /ignore:4217 /ignore:4221")
set(CMAKE_STATIC_LINKER_FLAGS "${CMAKE_STATIC_LINKER_FLAGS} /ignore:4221")
# MSVC only, MinGW doesn't need this
if(CMAKE_CL_64)
set(PLATFORM_LINKFLAGS "/MACHINE:X64 /OPT:NOREF ${PLATFORM_LINKFLAGS}")
else()
set(PLATFORM_LINKFLAGS "/MACHINE:IX86 /LARGEADDRESSAWARE ${PLATFORM_LINKFLAGS}")
endif()
set(PLATFORM_LINKFLAGS_DEBUG "/IGNORE:4099 /NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:libc.lib")
if(NOT DEFINED LIBDIR)
# Setup 32bit and 64bit Windows systems
if(CMAKE_CL_64)
message(STATUS "64 bit compiler detected.")
set(LIBDIR_BASE "win64")
else()
message(STATUS "32 bit compiler detected.")
set(LIBDIR_BASE "windows")
endif()
if(MSVC_VERSION EQUAL 1900)
message(STATUS "Visual Studio 2015 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc14)
else()
message(STATUS "Visual Studio 2013 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc12)
endif()
else()
message(STATUS "Using pre-compiled LIBDIR: ${LIBDIR}")
endif()
if(NOT EXISTS "${LIBDIR}/")
message(FATAL_ERROR "Windows requires pre-compiled libs at: '${LIBDIR}'")
endif()
# Add each of our libraries to CMAKE_PREFIX_PATH so find_package() can work
file(GLOB children RELATIVE ${LIBDIR} ${LIBDIR}/*)
foreach(child ${children})
if(IS_DIRECTORY ${LIBDIR}/${child})
list(APPEND CMAKE_PREFIX_PATH ${LIBDIR}/${child})
endif()
endforeach()
set(ZLIB_INCLUDE_DIRS ${LIBDIR}/zlib/include)
set(ZLIB_LIBRARIES ${LIBDIR}/zlib/lib/libz_st.lib)
set(ZLIB_INCLUDE_DIR ${LIBDIR}/zlib/include)
set(ZLIB_LIBRARY ${LIBDIR}/zlib/lib/libz_st.lib)
set(ZLIB_DIR ${LIBDIR}/zlib)
windows_find_package(zlib) # we want to find zlib before finding things that depend on it, like png
windows_find_package(png)
if(NOT PNG_FOUND)
warn_hardcoded_paths(libpng)
set(PNG_PNG_INCLUDE_DIR ${LIBDIR}/png/include)
set(PNG_LIBRARIES libpng)
set(PNG "${LIBDIR}/png")
set(PNG_INCLUDE_DIRS "${PNG}/include")
set(PNG_LIBPATH ${PNG}/lib) # not cmake defined
endif()
set(JPEG_NAMES ${JPEG_NAMES} libjpeg)
windows_find_package(jpeg REQUIRED)
if(NOT JPEG_FOUND)
warn_hardcoded_paths(jpeg)
set(JPEG_INCLUDE_DIR ${LIBDIR}/jpeg/include)
set(JPEG_LIBRARIES ${LIBDIR}/jpeg/lib/libjpeg.lib)
endif()
set(PTHREADS_INCLUDE_DIRS ${LIBDIR}/pthreads/include)
set(PTHREADS_LIBRARIES ${LIBDIR}/pthreads/lib/pthreadVC2.lib)
set(FREETYPE ${LIBDIR}/freetype)
set(FREETYPE_INCLUDE_DIRS
${LIBDIR}/freetype/include
${LIBDIR}/freetype/include/freetype2
)
set(FREETYPE_LIBRARY ${LIBDIR}/freetype/lib/freetype2ST.lib)
windows_find_package(freetype REQUIRED)
if(WITH_FFTW3)
set(FFTW3 ${LIBDIR}/fftw3)
set(FFTW3_LIBRARIES libfftw)
set(FFTW3_INCLUDE_DIRS ${FFTW3}/include)
set(FFTW3_LIBPATH ${FFTW3}/lib)
endif()
if(WITH_OPENCOLLADA)
set(OPENCOLLADA ${LIBDIR}/opencollada)
set(OPENCOLLADA_INCLUDE_DIRS
${OPENCOLLADA}/include/opencollada/COLLADAStreamWriter
${OPENCOLLADA}/include/opencollada/COLLADABaseUtils
${OPENCOLLADA}/include/opencollada/COLLADAFramework
${OPENCOLLADA}/include/opencollada/COLLADASaxFrameworkLoader
${OPENCOLLADA}/include/opencollada/GeneratedSaxParser
)
set(OPENCOLLADA_LIBRARIES
${OPENCOLLADA}/lib/opencollada/OpenCOLLADASaxFrameworkLoader.lib
${OPENCOLLADA}/lib/opencollada/OpenCOLLADAFramework.lib
${OPENCOLLADA}/lib/opencollada/OpenCOLLADABaseUtils.lib
${OPENCOLLADA}/lib/opencollada/OpenCOLLADAStreamWriter.lib
${OPENCOLLADA}/lib/opencollada/MathMLSolver.lib
${OPENCOLLADA}/lib/opencollada/GeneratedSaxParser.lib
${OPENCOLLADA}/lib/opencollada/xml.lib
${OPENCOLLADA}/lib/opencollada/buffer.lib
${OPENCOLLADA}/lib/opencollada/ftoa.lib
)
if(NOT WITH_LLVM)
list(APPEND OPENCOLLADA_LIBRARIES ${OPENCOLLADA}/lib/opencollada/UTF.lib)
endif()
set(PCRE_LIBRARIES
${OPENCOLLADA}/lib/opencollada/pcre.lib
)
endif()
if(WITH_CODEC_FFMPEG)
set(FFMPEG_INCLUDE_DIRS
${LIBDIR}/ffmpeg/include
${LIBDIR}/ffmpeg/include/msvc
)
windows_find_package(FFMPEG)
if(NOT FFMPEG_FOUND)
warn_hardcoded_paths(ffmpeg)
set(FFMPEG_LIBRARY_VERSION 55)
set(FFMPEG_LIBRARY_VERSION_AVU 52)
set(FFMPEG_LIBRARIES
${LIBDIR}/ffmpeg/lib/avcodec-${FFMPEG_LIBRARY_VERSION}.lib
${LIBDIR}/ffmpeg/lib/avformat-${FFMPEG_LIBRARY_VERSION}.lib
${LIBDIR}/ffmpeg/lib/avdevice-${FFMPEG_LIBRARY_VERSION}.lib
${LIBDIR}/ffmpeg/lib/avutil-${FFMPEG_LIBRARY_VERSION_AVU}.lib
${LIBDIR}/ffmpeg/lib/swscale-2.lib
)
endif()
endif()
if(WITH_IMAGE_OPENEXR)
set(OPENEXR_ROOT_DIR ${LIBDIR}/openexr)
set(OPENEXR_VERSION "2.1")
windows_find_package(OPENEXR REQUIRED)
if(NOT OPENEXR_FOUND)
warn_hardcoded_paths(OpenEXR)
set(OPENEXR ${LIBDIR}/openexr)
set(OPENEXR_INCLUDE_DIR ${OPENEXR}/include)
set(OPENEXR_INCLUDE_DIRS ${OPENEXR_INCLUDE_DIR} ${OPENEXR}/include/OpenEXR)
set(OPENEXR_LIBPATH ${OPENEXR}/lib)
set(OPENEXR_LIBRARIES
optimized ${OPENEXR_LIBPATH}/Iex-2_2.lib
optimized ${OPENEXR_LIBPATH}/Half.lib
optimized ${OPENEXR_LIBPATH}/IlmImf-2_2.lib
optimized ${OPENEXR_LIBPATH}/Imath-2_2.lib
optimized ${OPENEXR_LIBPATH}/IlmThread-2_2.lib
debug ${OPENEXR_LIBPATH}/Iex-2_2_d.lib
debug ${OPENEXR_LIBPATH}/Half_d.lib
debug ${OPENEXR_LIBPATH}/IlmImf-2_2_d.lib
debug ${OPENEXR_LIBPATH}/Imath-2_2_d.lib
debug ${OPENEXR_LIBPATH}/IlmThread-2_2_d.lib
)
endif()
endif()
if(WITH_IMAGE_TIFF)
# Try to find TIFF first, then complain and fall back to static (and possibly wrong) paths
windows_find_package(TIFF)
if(NOT TIFF_FOUND)
warn_hardcoded_paths(libtiff)
set(TIFF_LIBRARY ${LIBDIR}/tiff/lib/libtiff.lib)
set(TIFF_INCLUDE_DIR ${LIBDIR}/tiff/include)
endif()
endif()
if(WITH_JACK)
set(JACK_INCLUDE_DIRS
${LIBDIR}/jack/include/jack
${LIBDIR}/jack/include
)
set(JACK_LIBRARIES optimized ${LIBDIR}/jack/lib/libjack.lib debug ${LIBDIR}/jack/lib/libjack_d.lib)
endif()
if(WITH_PYTHON)
set(PYTHON_VERSION 3.5) # CACHE STRING)
string(REPLACE "." "" _PYTHON_VERSION_NO_DOTS ${PYTHON_VERSION})
# Use shared libs for vc2008 and vc2010 until we actually have vc2010 libs
set(PYTHON_LIBRARY ${LIBDIR}/python/lib/python${_PYTHON_VERSION_NO_DOTS}.lib)
unset(_PYTHON_VERSION_NO_DOTS)
# Shared includes for both vc2008 and vc2010
set(PYTHON_INCLUDE_DIR ${LIBDIR}/python/include/python${PYTHON_VERSION})
# uncached vars
set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}")
endif()
if(WITH_BOOST)
if(WITH_CYCLES_OSL)
set(boost_extra_libs wave)
endif()
if(WITH_INTERNATIONAL)
list(APPEND boost_extra_libs locale)
endif()
if(WITH_OPENVDB)
list(APPEND boost_extra_libs iostreams)
endif()
set(Boost_USE_STATIC_RUNTIME ON) # prefix lib
set(Boost_USE_MULTITHREADED ON) # suffix -mt
set(Boost_USE_STATIC_LIBS ON) # suffix -s
if (WITH_WINDOWS_FIND_MODULES)
find_package(Boost COMPONENTS date_time filesystem thread regex system ${boost_extra_libs})
endif (WITH_WINDOWS_FIND_MODULES)
if(NOT Boost_FOUND)
warn_hardcoded_paths(BOOST)
set(BOOST ${LIBDIR}/boost)
set(BOOST_INCLUDE_DIR ${BOOST}/include)
if(MSVC12)
set(BOOST_LIBPATH ${BOOST}/lib)
set(BOOST_POSTFIX "vc120-mt-s-1_60.lib")
set(BOOST_DEBUG_POSTFIX "vc120-mt-sgd-1_60.lib")
else()
set(BOOST_LIBPATH ${BOOST}/lib)
set(BOOST_POSTFIX "vc140-mt-s-1_60.lib")
set(BOOST_DEBUG_POSTFIX "vc140-mt-sgd-1_60.lib")
endif()
set(BOOST_LIBRARIES
optimized libboost_date_time-${BOOST_POSTFIX}
optimized libboost_filesystem-${BOOST_POSTFIX}
optimized libboost_regex-${BOOST_POSTFIX}
optimized libboost_system-${BOOST_POSTFIX}
optimized libboost_thread-${BOOST_POSTFIX}
debug libboost_date_time-${BOOST_DEBUG_POSTFIX}
debug libboost_filesystem-${BOOST_DEBUG_POSTFIX}
debug libboost_regex-${BOOST_DEBUG_POSTFIX}
debug libboost_system-${BOOST_DEBUG_POSTFIX}
debug libboost_thread-${BOOST_DEBUG_POSTFIX}
)
if(WITH_CYCLES_OSL)
set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
optimized libboost_wave-${BOOST_POSTFIX}
debug libboost_wave-${BOOST_DEBUG_POSTFIX})
endif()
if(WITH_INTERNATIONAL)
set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
optimized libboost_locale-${BOOST_POSTFIX}
debug libboost_locale-${BOOST_DEBUG_POSTFIX})
endif()
else() # we found boost using find_package
set(BOOST_INCLUDE_DIR ${Boost_INCLUDE_DIRS})
set(BOOST_LIBRARIES ${Boost_LIBRARIES})
set(BOOST_LIBPATH ${Boost_LIBRARY_DIRS})
endif()
set(BOOST_DEFINITIONS "-DBOOST_ALL_NO_LIB")
endif()
if(WITH_OPENIMAGEIO)
windows_find_package(OpenImageIO)
set(OPENIMAGEIO ${LIBDIR}/openimageio)
set(OPENIMAGEIO_INCLUDE_DIRS ${OPENIMAGEIO}/include)
set(OIIO_OPTIMIZED optimized OpenImageIO optimized OpenImageIO_Util)
set(OIIO_DEBUG debug OpenImageIO_d debug OpenImageIO_Util_d)
set(OPENIMAGEIO_LIBRARIES ${OIIO_OPTIMIZED} ${OIIO_DEBUG})
set(OPENIMAGEIO_LIBPATH ${OPENIMAGEIO}/lib)
set(OPENIMAGEIO_DEFINITIONS "-DUSE_TBB=0")
set(OPENCOLORIO_DEFINITIONS "-DOCIO_STATIC_BUILD")
set(OPENIMAGEIO_IDIFF "${OPENIMAGEIO}/bin/idiff.exe")
add_definitions(-DOIIO_STATIC_BUILD)
endif()
if(WITH_LLVM)
set(LLVM_ROOT_DIR ${LIBDIR}/llvm CACHE PATH "Path to the LLVM installation")
file(GLOB LLVM_LIBRARY_OPTIMIZED ${LLVM_ROOT_DIR}/lib/*.lib)
if(EXISTS ${LLVM_ROOT_DIR}/debug/lib)
foreach(LLVM_OPTIMIZED_LIB ${LLVM_LIBRARY_OPTIMIZED})
get_filename_component(LIBNAME ${LLVM_OPTIMIZED_LIB} ABSOLUTE)
list(APPEND LLVM_LIBS optimized ${LIBNAME})
endforeach(LLVM_OPTIMIZED_LIB)
file(GLOB LLVM_LIBRARY_DEBUG ${LLVM_ROOT_DIR}/debug/lib/*.lib)
foreach(LLVM_DEBUG_LIB ${LLVM_LIBRARY_DEBUG})
get_filename_component(LIBNAME ${LLVM_DEBUG_LIB} ABSOLUTE)
list(APPEND LLVM_LIBS debug ${LIBNAME})
endforeach(LLVM_DEBUG_LIB)
set(LLVM_LIBRARY ${LLVM_LIBS})
else()
message(WARNING "LLVM debug libs not present on this system. Using release libs for debug builds.")
set(LLVM_LIBRARY ${LLVM_LIBRARY_OPTIMIZED})
endif()
endif()
if(WITH_OPENCOLORIO)
set(OPENCOLORIO ${LIBDIR}/opencolorio)
set(OPENCOLORIO_INCLUDE_DIRS ${OPENCOLORIO}/include)
set(OPENCOLORIO_LIBRARIES OpenColorIO)
set(OPENCOLORIO_LIBPATH ${LIBDIR}/opencolorio/lib)
set(OPENCOLORIO_DEFINITIONS)
endif()
if(WITH_OPENVDB)
set(BLOSC_LIBRARIES optimized ${LIBDIR}/blosc/lib/libblosc.lib debug ${LIBDIR}/blosc/lib/libblosc_d.lib)
set(TBB_LIBRARIES optimized ${LIBDIR}/tbb/lib/tbb.lib debug ${LIBDIR}/tbb/lib/tbb_debug.lib)
set(TBB_INCLUDE_DIR ${LIBDIR}/tbb/include)
set(OPENVDB ${LIBDIR}/openvdb)
set(OPENVDB_INCLUDE_DIRS ${OPENVDB}/include ${TBB_INCLUDE_DIR})
set(OPENVDB_LIBRARIES optimized openvdb debug openvdb_d ${TBB_LIBRARIES} ${BLOSC_LIBRARIES})
set(OPENVDB_LIBPATH ${LIBDIR}/openvdb/lib)
endif()
if(WITH_ALEMBIC)
set(ALEMBIC ${LIBDIR}/alembic)
set(ALEMBIC_INCLUDE_DIR ${ALEMBIC}/include)
set(ALEMBIC_INCLUDE_DIRS ${ALEMBIC_INCLUDE_DIR})
set(ALEMBIC_LIBPATH ${ALEMBIC}/lib)
set(ALEMBIC_LIBRARIES optimized alembic debug alembic_d)
endif()
if(WITH_MOD_CLOTH_ELTOPO)
set(LAPACK ${LIBDIR}/lapack)
# set(LAPACK_INCLUDE_DIR ${LAPACK}/include)
set(LAPACK_LIBPATH ${LAPACK}/lib)
set(LAPACK_LIBRARIES
${LIBDIR}/lapack/lib/libf2c.lib
${LIBDIR}/lapack/lib/clapack_nowrap.lib
${LIBDIR}/lapack/lib/BLAS_nowrap.lib
)
endif()
if(WITH_OPENSUBDIV)
set(OPENSUBDIV_INCLUDE_DIR ${LIBDIR}/opensubdiv/include)
set(OPENSUBDIV_LIBPATH ${LIBDIR}/opensubdiv/lib)
set(OPENSUBDIV_LIBRARIES ${OPENSUBDIV_LIBPATH}/osdCPU.lib ${OPENSUBDIV_LIBPATH}/osdGPU.lib)
find_package(OpenSubdiv)
endif()
if(WITH_SDL)
set(SDL ${LIBDIR}/sdl)
set(SDL_INCLUDE_DIR ${SDL}/include)
set(SDL_LIBPATH ${SDL}/lib)
# MinGW TODO: Update MinGW to SDL2
if(NOT CMAKE_COMPILER_IS_GNUCC)
set(SDL_LIBRARY SDL2)
else()
set(SDL_LIBRARY SDL)
endif()
endif()
# Audio IO
if(WITH_SYSTEM_AUDASPACE)
set(AUDASPACE_INCLUDE_DIRS ${LIBDIR}/audaspace/include/audaspace)
set(AUDASPACE_LIBRARIES ${LIBDIR}/audaspace/lib/audaspace.lib)
set(AUDASPACE_C_INCLUDE_DIRS ${LIBDIR}/audaspace/include/audaspace)
set(AUDASPACE_C_LIBRARIES ${LIBDIR}/audaspace/lib/audaspace-c.lib)
set(AUDASPACE_PY_INCLUDE_DIRS ${LIBDIR}/audaspace/include/audaspace)
set(AUDASPACE_PY_LIBRARIES ${LIBDIR}/audaspace/lib/audaspace-py.lib)
endif()
# used in many places so include globally, like OpenGL
blender_include_dirs_sys("${PTHREADS_INCLUDE_DIRS}")
#find signtool
SET(ProgramFilesX86_NAME "ProgramFiles(x86)") #env dislikes the ( )
find_program(SIGNTOOL_EXE signtool
HINTS
"$ENV{${ProgramFilesX86_NAME}}/Windows Kits/10/bin/x86/"
"$ENV{ProgramFiles}/Windows Kits/10/bin/x86/"
"$ENV{${ProgramFilesX86_NAME}}/Windows Kits/8.1/bin/x86/"
"$ENV{ProgramFiles}/Windows Kits/8.1/bin/x86/"
"$ENV{${ProgramFilesX86_NAME}}/Windows Kits/8.0/bin/x86/"
"$ENV{ProgramFiles}/Windows Kits/8.0/bin/x86/"
)

View File

@@ -39,7 +39,7 @@ __all__ = (
"is_py",
"cmake_advanced_info",
"cmake_compiler_defines",
"project_name_get",
"project_name_get"
"init",
)
@@ -214,12 +214,7 @@ def cmake_advanced_info():
def cmake_cache_var(var):
cache_file = open(join(CMAKE_DIR, "CMakeCache.txt"), encoding='utf-8')
lines = [
l_strip for l in cache_file
for l_strip in (l.strip(),)
if l_strip
if not l_strip.startswith(("//", "#"))
]
lines = [l_strip for l in cache_file for l_strip in (l.strip(),) if l_strip if not l_strip.startswith("//") if not l_strip.startswith("#")]
cache_file.close()
for l in lines:

View File

@@ -23,7 +23,7 @@
__all__ = (
"build_info",
"SOURCE_DIR",
)
)
import sys

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# <pep8 compliant>

View File

@@ -2,9 +2,11 @@
# custom blender vars
blender_srcdir=$(dirname $startdir)"/../.."
blender_version=$(grep "BLENDER_VERSION\s" $blender_srcdir/source/blender/blenkernel/BKE_blender_version.h | awk '{print $3}')
# value may be formatted: 35042:35051M
blender_revision=$(svnversion $blender_srcdir | cut -d: -f2 | awk '{print $3}')
blender_version=$(grep "BLENDER_VERSION\s" $blender_srcdir/source/blender/blenkernel/BKE_blender.h | awk '{print $3}')
blender_version=$(expr $blender_version / 100).$(expr $blender_version % 100) # 256 -> 2.56
blender_version_char=$(sed -ne 's/.*BLENDER_VERSION_CHAR.*\([a-z]\)$/\1/p' $blender_srcdir/source/blender/blenkernel/BKE_blender_version.h)
blender_version_char=$(sed -ne 's/.*BLENDER_VERSION_CHAR.*\([a-z]\)$/\1/p' $blender_srcdir/source/blender/blenkernel/BKE_blender.h)
# blender_subversion=$(grep BLENDER_SUBVERSION $blender_srcdir/source/blender/blenkernel/BKE_blender.h | awk '{print $3}')
# map the version a -> 1
@@ -25,9 +27,7 @@ arch=('i686' 'x86_64')
url="www.blender.org"
license=('GPL')
groups=()
depends=('libjpeg' 'libpng' 'openjpeg' 'libtiff' 'openexr' 'python>=3.5'
'gettext' 'libxi' 'libxmu' 'mesa' 'freetype2' 'openal' 'sdl'
'libsndfile' 'ffmpeg')
depends=('libjpeg' 'libpng' 'openjpeg' 'libtiff' 'openexr' 'python>=3.4' 'gettext' 'libxi' 'libxmu' 'mesa' 'freetype2' 'openal' 'sdl' 'libsndfile' 'ffmpeg')
makedepends=('cmake' 'git')
optdepends=()
provides=()

View File

@@ -38,7 +38,7 @@ PROJECT_NAME = Blender
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = "V2.8x"
PROJECT_NUMBER = "V2.7x"
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
# ##### BEGIN GPL LICENSE BLOCK #####
#

View File

@@ -0,0 +1,132 @@
class IDGroup:
"""
The IDGroup Type
================
This type supports both iteration and the []
operator to get child ID properties.
You can also add new properties using the [] operator.
For example::
group['a float!'] = 0.0
group['an int!'] = 0
group['a string!'] = "hi!"
group['an array!'] = [0, 0, 1.0, 0]
group['a subgroup!'] = {"float": 0.0, "an int": 1.0, "an array": [1, 2],
"another subgroup": {"a": 0.0, "str": "bleh"}}
Note that for arrays, the array type defaults to int unless a float is found
while scanning the template list; if any floats are found, then the whole
array is float. Note that double-precision floating point numbers are used for
python-created float ID properties and arrays (though the internal C api does
support single-precision floats, and the python code will read them).
You can also delete properties with the del operator. For example::
del group['property']
To get the type of a property, use the type() operator, for example::
if type(group['bleh']) == str: pass
To tell if the property is a group or array type, import the Blender.Types module and test
against IDGroupType and IDArrayType, like so::
from Blender.Types import IDGroupType, IDArrayType.
if type(group['bleghr']) == IDGroupType:
(do something)
@ivar name: The name of the property
@type name: string
"""
def pop(item):
"""
Pop an item from the group property.
@type item: string
@param item: The item name.
@rtype: can be dict, list, int, float or string.
@return: The removed property.
"""
def update(updatedict):
"""
Updates items in the dict, similar to normal python
dictionary method .update().
@type updatedict: dict
@param updatedict: A dict of simple types to derive updated/new IDProperties from.
@rtype: None
@return: None
"""
def keys():
"""
Returns a list of the keys in this property group.
@rtype: list of strings.
@return: a list of the keys in this property group.
"""
def values():
"""
Returns a list of the values in this property group.
Note that unless a value is itself a property group or an array, you
cannot change it by changing the values in this list, you must change them
in the parent property group.
For example,
group['some_property'] = new_value
. . .is correct, while,
values = group.values()
values[0] = new_value
. . .is wrong.
@rtype: list.
@return: a list of the values in this property group.
"""
def iteritems():
"""
Implements the python dictionary iteritems method.
For example::
for k, v in group.iteritems():
print "Property name: " + k
print "Property value: " + str(v)
@rtype: an iterator that spits out items of the form [key, value]
@return: an iterator.
"""
def convert_to_pyobject():
"""
Converts the entire property group to a purely python form.
@rtype: dict
@return: A python dictionary representing the property group
"""
class IDArray:
"""
The IDArray Type
================
@ivar type: returns the type of the array, can be either IDP_Int or IDP_Float
"""
def __getitem__(index):
pass
def __setitem__(index, value):
pass
def __len__():
pass

View File

@@ -1,237 +0,0 @@
"""
Video Capture with DeckLink
+++++++++++++++++++++++++++
Video frames captured with DeckLink cards have pixel formats that are generally not directly
usable by OpenGL, they must be processed by a shader. The three shaders presented here should
cover all common video capture cases.
This file reflects the current video transfer method implemented in the Decklink module:
whenever possible the video images are transferred as float texture because this is more
compatible with GPUs. Of course, only the pixel formats that have a corresponding GL format
can be transferred as float. Look for fg_shaders in this file for an exhaustive list.
Other pixel formats will be transferred as 32 bits integer red-channel texture but this
won't work with certain GPUs (Intel GMA); the corresponding shaders are not shown here.
However, it should not be necessary to use any of them as the list below covers all practical
cases of video capture with all types of Decklink product.
In other words, only use one of the pixel formats below and you will be fine. Note that depending
on the video stream, only certain pixel formats will be allowed (others will throw an exception).
For example, to capture a PAL video stream, you must use one of the YUV formats.
To find which pixel format is suitable for a particular video stream, use the 'Media Express'
utility that comes with the Decklink software: if you see the video in the 'Log and Capture'
Window, you have selected the right pixel format and you can use the same in Blender.
Notes: * these shaders only decode the RGB channel and set the alpha channel to a fixed
value (look for color.a = ). It's up to you to add postprocessing to the color.
* these shaders are compatible with 2D and 3D video streams
"""
import bge
from bge import logic
from bge import texture as vt
# The default vertex shader, because we need one
#
VertexShader = """
#version 130
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
"""
# For use with RGB video stream: the pixel is directly usable
#
FragmentShader_R10l = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color;
float tx, ty;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
color = texture(tex, vec2(tx,ty));
color.a = 0.7;
gl_FragColor = color;
}
"""
# For use with YUV video stream
#
FragmentShader_2vuy = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color;
float tx, ty, width, Y, Cb, Cr;
int px;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
width = float(textureSize(tex, 0).x);
color = texture(tex, vec2(tx, ty));
px = int(floor(fract(tx*width)*2.0));
switch (px) {
case 0:
Y = color.g;
break;
case 1:
Y = color.a;
break;
}
Y = (Y - 0.0625) * 1.168949772;
Cb = (color.b - 0.0625) * 1.142857143 - 0.5;
Cr = (color.r - 0.0625) * 1.142857143 - 0.5;
color.r = Y + 1.5748 * Cr;
color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
color.b = Y + 1.8556 * Cb;
color.a = 0.7;
gl_FragColor = color;
}
"""
# For use with high resolution YUV
#
FragmentShader_v210 = """
#version 130
uniform sampler2D tex;
// stereo = 1.0 if 2D image, =0.5 if 3D (left eye below, right eye above)
uniform float stereo;
// eye = 0.0 for the left eye, 0.5 for the right eye
uniform float eye;
void main(void)
{
vec4 color, color1, color2, color3;
int px;
float tx, ty, width, sx, dx, bx, Y, Cb, Cr;
tx = gl_TexCoord[0].x;
ty = eye+gl_TexCoord[0].y*stereo;
width = float(textureSize(tex, 0).x);
// to sample macro pixels (6 pixels in 4 words)
sx = tx*width*0.25+0.01;
// index of display pixel in the macro pixel 0..5
px = int(floor(fract(sx)*6.0));
// increment as we sample the macro pixel
dx = 1.0/width;
// base x coord of macro pixel
bx = (floor(sx)+0.01)*dx*4.0;
color = texture(tex, vec2(bx, ty));
color1 = texture(tex, vec2(bx+dx, ty));
color2 = texture(tex, vec2(bx+dx*2.0, ty));
color3 = texture(tex, vec2(bx+dx*3.0, ty));
switch (px) {
case 0:
case 1:
Cb = color.b;
Cr = color.r;
break;
case 2:
case 3:
Cb = color1.g;
Cr = color2.b;
break;
default:
Cb = color2.r;
Cr = color3.g;
break;
}
switch (px) {
case 0:
Y = color.g;
break;
case 1:
Y = color1.b;
break;
case 2:
Y = color1.r;
break;
case 3:
Y = color2.g;
break;
case 4:
Y = color3.b;
break;
default:
Y = color3.r;
break;
}
Y = (Y - 0.0625) * 1.168949772;
Cb = (Cb - 0.0625) * 1.142857143 - 0.5;
Cr = (Cr - 0.0625) * 1.142857143 - 0.5;
color.r = Y + 1.5748 * Cr;
color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
color.b = Y + 1.8556 * Cb;
color.a = 0.7;
gl_FragColor = color;
}
"""
# The exhaustive list of pixel formats that are transferred as float texture
# Only use those for greater efficiency and compatibility.
#
fg_shaders = {
'2vuy' :FragmentShader_2vuy,
'8BitYUV' :FragmentShader_2vuy,
'v210' :FragmentShader_v210,
'10BitYUV' :FragmentShader_v210,
'8BitBGRA' :FragmentShader_R10l,
'BGRA' :FragmentShader_R10l,
'8BitARGB' :FragmentShader_R10l,
'10BitRGBXLE':FragmentShader_R10l,
'R10l' :FragmentShader_R10l
}
#
# Helper function to attach a pixel shader to the material that receives the video frame.
#
def config_video(obj, format, pixel, is3D=False, mat=0, card=0):
if pixel not in fg_shaders:
raise ValueError('Unsupported shader')
shader = obj.meshes[0].materials[mat].getShader()
if shader is not None and not shader.isValid():
shader.setSource(VertexShader, fg_shaders[pixel], True)
shader.setSampler('tex', 0)
shader.setUniformEyef("eye")
shader.setUniform1f("stereo", 0.5 if is3D else 1.0)
tex = vt.Texture(obj, mat)
tex.source = vt.VideoDeckLink(format + "/" + pixel + ("/3D" if is3D else ""), card)
print("frame rate: ", tex.source.framerate)
tex.source.play()
obj["video"] = tex
#
# Attach this function to an object that has a material with texture
# and call it once to initialize the object
#
def init(cont):
# config_video(cont.owner, 'HD720p5994', '8BitBGRA')
# config_video(cont.owner, 'HD720p5994', '8BitYUV')
# config_video(cont.owner, 'pal ', '10BitYUV')
config_video(cont.owner, 'pal ', '8BitYUV')
#
# To be called on every frame
#
def play(cont):
obj = cont.owner
video = obj.get("video")
if video is not None:
video.refresh(True)

View File

@@ -8,11 +8,11 @@ Physics Constraints (bge.constraints)
Examples
--------
.. include:: __/examples/bge.constraints.py
.. include:: ../examples/bge.constraints.py
:start-line: 1
:end-line: 4
.. literalinclude:: __/examples/bge.constraints.py
.. literalinclude:: ../examples/bge.constraints.py
:lines: 6-

View File

@@ -12,53 +12,53 @@ This module holds key constants for the SCA_KeyboardSensor.
.. code-block:: python
# Set a connected keyboard sensor to accept F1
import bge
# Set a connected keyboard sensor to accept F1
import bge
co = bge.logic.getCurrentController()
# 'Keyboard' is a keyboard sensor
sensor = co.sensors["Keyboard"]
sensor.key = bge.events.F1KEY
co = bge.logic.getCurrentController()
# 'Keyboard' is a keyboard sensor
sensor = co.sensors["Keyboard"]
sensor.key = bge.events.F1KEY
code-block:: python
.. code-block:: python
# Do the all keys thing
import bge
# Do the all keys thing
import bge
co = bge.logic.getCurrentController()
# 'Keyboard' is a keyboard sensor
sensor = co.sensors["Keyboard"]
co = bge.logic.getCurrentController()
# 'Keyboard' is a keyboard sensor
sensor = co.sensors["Keyboard"]
for key,status in sensor.events:
# key[0] == bge.events.keycode, key[1] = status
if status == bge.logic.KX_INPUT_JUST_ACTIVATED:
if key == bge.events.WKEY:
# Activate Forward!
if key == bge.events.SKEY:
# Activate Backward!
if key == bge.events.AKEY:
# Activate Left!
if key == bge.events.DKEY:
# Activate Right!
for key,status in sensor.events:
# key[0] == bge.events.keycode, key[1] = status
if status == bge.logic.KX_INPUT_JUST_ACTIVATED:
if key == bge.events.WKEY:
# Activate Forward!
if key == bge.events.SKEY:
# Activate Backward!
if key == bge.events.AKEY:
# Activate Left!
if key == bge.events.DKEY:
# Activate Right!
code-block:: python
.. code-block:: python
# The all keys thing without a keyboard sensor (but you will
# need an always sensor with pulse mode on)
import bge
# The all keys thing without a keyboard sensor (but you will
# need an always sensor with pulse mode on)
import bge
# Just shortening names here
keyboard = bge.logic.keyboard
JUST_ACTIVATED = bge.logic.KX_INPUT_JUST_ACTIVATED
# Just shortening names here
keyboard = bge.logic.keyboard
JUST_ACTIVATED = bge.logic.KX_INPUT_JUST_ACTIVATED
if keyboard.events[bge.events.WKEY] == JUST_ACTIVATED:
print("Activate Forward!")
if keyboard.events[bge.events.SKEY] == JUST_ACTIVATED:
print("Activate Backward!")
if keyboard.events[bge.events.AKEY] == JUST_ACTIVATED:
print("Activate Left!")
if keyboard.events[bge.events.DKEY] == JUST_ACTIVATED:
print("Activate Right!")
if keyboard.events[bge.events.WKEY] == JUST_ACTIVATED:
print("Activate Forward!")
if keyboard.events[bge.events.SKEY] == JUST_ACTIVATED:
print("Activate Backward!")
if keyboard.events[bge.events.AKEY] == JUST_ACTIVATED:
print("Activate Left!")
if keyboard.events[bge.events.DKEY] == JUST_ACTIVATED:
print("Activate Right!")
*********

View File

@@ -378,28 +378,6 @@ General functions
Render next frame (if Python has control)
.. function:: setRender(render)
Sets the global flag that controls the render of the scene.
If True, the render is done after the logic frame.
If False, the render is skipped and another logic frame starts immediately.
.. note::
GPU VSync no longer limits the number of frames per second when render is off,
but the *Use Frame Rate* option still regulates the fps. To run as many frames
as possible, untick this option (Render Properties, System panel).
:arg render: the render flag
:type render: bool
.. function:: getRender()
Get the current value of the global render flag
:return: The flag value
:rtype: bool
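A minimal usage sketch of the two functions documented above, written as a controller script run inside the game engine (the "loading" game property is an assumption made for this example):

import bge

def update(cont):
    loading = cont.owner.get("loading", False)
    # Skip the render step after this logic frame while heavy loading runs.
    bge.logic.setRender(not loading)
    print("render enabled:", bge.logic.getRender())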
**********************
Time related functions
**********************

View File

@@ -90,48 +90,6 @@ Constants
Right eye being used during stereoscopic rendering.
.. data:: RAS_OFS_RENDER_BUFFER
The pixel buffer for offscreen render is a RenderBuffer. Argument to :func:`offScreenCreate`
.. data:: RAS_OFS_RENDER_TEXTURE
The pixel buffer for offscreen render is a Texture. Argument to :func:`offScreenCreate`
*****
Types
*****
.. class:: RASOffScreen
An off-screen render buffer object.
Use :func:`offScreenCreate` to create it.
Currently it can only be used in the :class:`bge.texture.ImageRender`
constructor to render on a FBO rather than the default viewport.
.. attribute:: width
The width in pixels of the FBO
:type: integer
.. attribute:: height
The height in pixels of the FBO
:type: integer
.. attribute:: color
The underlying OpenGL bind code of the texture object that holds
the rendered image, 0 if the FBO is using RenderBuffer.
The choice between RenderBuffer and Texture is determined
by the target argument of :func:`offScreenCreate`.
:type: integer
*********
Functions
@@ -404,22 +362,3 @@ Functions
Get the current vsync value
:rtype: One of VSYNC_OFF, VSYNC_ON, VSYNC_ADAPTIVE
.. function:: offScreenCreate(width,height[,samples=0][,target=bge.render.RAS_OFS_RENDER_BUFFER])
Create an off-screen render buffer object.
:arg width: the width of the buffer in pixels
:type width: integer
:arg height: the height of the buffer in pixels
:type height: integer
:arg samples: the number of multisamples for anti-aliasing (MSAA), 0 to disable MSAA
:type samples: integer
:arg target: the pixel storage: :data:`RAS_OFS_RENDER_BUFFER` to render on RenderBuffers (the default),
:data:`RAS_OFS_RENDER_TEXTURE` to render on texture.
The latter is interesting if you want to access the texture directly (see :attr:`RASOffScreen.color`).
Otherwise the default is preferable as it's more widely supported by GPUs and more efficient.
If the GPU does not support MSAA+Texture (e.g. Intel HD GPU), MSAA will be disabled.
:type target: integer
:rtype: :class:`RASOffScreen`
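A minimal sketch of creating such a buffer from a game engine script (the sizes are illustrative; passing the result to :class:`bge.texture.ImageRender` is not shown here):

import bge

# Texture-backed FBO so the OpenGL bind code is exposed through .color.
offscreen = bge.render.offScreenCreate(512, 512, 0, bge.render.RAS_OFS_RENDER_TEXTURE)
print(offscreen.width, offscreen.height, offscreen.color)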

File diff suppressed because it is too large

View File

@@ -214,16 +214,6 @@ base class --- :class:`PyObjectPlus`
:arg iList: a list (2, 3 or 4 elements) of integer values
:type iList: list[integer]
.. method:: setUniformEyef(name)
Set a uniform with a float value that reflects the eye being rendered in stereo mode:
0.0 for the left eye, 0.5 for the right eye. In non-stereo mode, the value of the uniform
is fixed to 0.0. The typical use of this uniform is in stereo mode to sample stereo textures
containing the left and right eye images in a top-bottom order.
:arg name: the uniform name
:type name: string
.. method:: validate()
Validate the shader object.
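A short sketch of how setUniformEyef() is typically wired up on a material shader inside the game engine; VertexShader and FragmentShader stand in for GLSL sources defined elsewhere (compare the DeckLink capture example earlier in this changeset):

import bge

VertexShader = "..."    # placeholder GLSL vertex shader source
FragmentShader = "..."  # placeholder GLSL fragment shader source

obj = bge.logic.getCurrentController().owner
shader = obj.meshes[0].materials[0].getShader()
if shader is not None and not shader.isValid():
    shader.setSource(VertexShader, FragmentShader, True)
    shader.setUniformEyef("eye")  # 0.0 for the left eye, 0.5 for the right eye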

View File

@@ -60,37 +60,37 @@ base class --- :class:`KX_GameObject`
:type: float (read only)
.. attribute:: shadowFrustumSize
..attribute:: shadowFrustumSize
Size of the frustum used for creating the shadowmap.
:type: float (read only)
.. attribute:: shadowBindId
..attribute:: shadowBindId
The OpenGL shadow texture bind number/id.
:type: int (read only)
.. attribute:: shadowMapType
..attribute:: shadowMapType
The shadow map type (0 -> Simple; 1 -> Variance)
:type: int (read only)
.. attribute:: shadowBias
..attribute:: shadowBias
The shadow buffer sampling bias.
:type: float (read only)
.. attribute:: shadowBleedBias
..attribute:: shadowBleedBias
The bias for reducing light-bleed on variance shadow maps.
:type: float (read only)
.. attribute:: useShadow
..attribute:: useShadow
Returns True if the light has Shadow option activated, else returns False.

View File

@@ -12,13 +12,13 @@ base class --- :class:`PyObjectPlus`
.. attribute:: name
The name assigned to the joystick by the operating system. (read-only)
:type: string
.. attribute:: activeButtons
A list of active button values. (read-only)
:type: list
.. attribute:: axisValues
@@ -27,10 +27,8 @@ base class --- :class:`PyObjectPlus`
:type: list of ints.
Each specifying the value of an axis between -1.0 and 1.0
depending on how far the axis is pushed, 0 for nothing.
The first 2 values are used by most joysticks and gamepads for directional control.
3rd and 4th values are only on some joysticks and can be used for arbitrary controls.
Each specifying the value of an axis between -1.0 and 1.0 depending on how far the axis is pushed, 0 for nothing.
The first 2 values are used by most joysticks and gamepads for directional control. 3rd and 4th values are only on some joysticks and can be used for arbitrary controls.
* left:[-1.0, 0.0, ...]
* right:[1.0, 0.0, ...]

View File

@@ -1,8 +1,7 @@
..
This document is appended to the auto generated bmesh api doc to avoid clogging up the C files with details.
to test this run:
./blender.bin -b -noaudio -P doc/python_api/sphinx_doc_gen.py -- \
--partial bmesh* ; cd doc/python_api ; sphinx-build sphinx-in sphinx-out ; cd ../../
./blender.bin -b -noaudio -P doc/python_api/sphinx_doc_gen.py -- --partial bmesh* ; cd doc/python_api ; sphinx-build sphinx-in sphinx-out ; cd ../../
Submodules:
@@ -41,7 +40,7 @@ For an overview of BMesh data types and how they reference each other see:
Example Script
--------------
.. literalinclude:: __/__/__/release/scripts/templates_py/bmesh_simple.py
.. literalinclude:: ../../../release/scripts/templates_py/bmesh_simple.py
Stand-Alone Module
@@ -60,9 +59,9 @@ There are 2 ways to access BMesh data, you can create a new BMesh by converting
:class:`bpy.types.BlendData.meshes` or by accessing the current edit mode mesh.
see: :class:`bmesh.types.BMesh.from_mesh` and :mod:`bmesh.from_edit_mesh` respectively.
When explicitly converting from mesh data python **owns** the data, that is to say -
that the mesh only exists while python holds a reference to it,
and the script is responsible for putting it back into a mesh data-block when the edits are done.
When explicitly converting from mesh data python **owns** the data, that is to say - that the mesh only exists while
python holds a reference to it, and the script is responsible for putting it back into a mesh data-block when the edits
are done.
Note that unlike :mod:`bpy`, a BMesh does not necessarily correspond to data in the currently open blend file,
a BMesh can be created, edited and freed without the user ever seeing or having access to it.
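A minimal sketch of the explicit conversion path described above, assuming the active object is a mesh:

import bpy
import bmesh

me = bpy.context.object.data

bm = bmesh.new()     # Python owns this BMesh
bm.from_mesh(me)     # fill it in from the mesh data-block
for v in bm.verts:
    v.co.z += 1.0
bm.to_mesh(me)       # write the edits back to the data-block
bm.free()            # free it explicitly once done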


@@ -151,7 +151,7 @@ Data Creation/Removal
^^^^^^^^^^^^^^^^^^^^^
Those of you familiar with other Python API's may be surprised that
new data-blocks in the bpy API can't be created by calling the class:
new datablocks in the bpy API can't be created by calling the class:
>>> bpy.types.Mesh()
Traceback (most recent call last):
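The supported path is to create data-blocks through the collections on :class:`bpy.types.BlendData`; a minimal sketch (the names are illustrative, and the scene-linking call is the 2.7x API):

import bpy

mesh = bpy.data.meshes.new(name="ExampleMesh")      # new data-block via its collection
obj = bpy.data.objects.new("ExampleObject", mesh)   # object data-block wrapping the mesh
bpy.context.scene.objects.link(obj)                 # link it into the current scene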
@@ -305,7 +305,7 @@ In Python, this is done by defining a class, which is a subclass of an existing
Example Operator
----------------
.. literalinclude:: __/__/__/release/scripts/templates_py/operator_simple.py
.. literalinclude:: ../../../release/scripts/templates_py/operator_simple.py
Once this script runs, ``SimpleOperator`` is registered with Blender
and can be called from the operator search popup or added to the toolbar.
@@ -336,7 +336,7 @@ Example Panel
Panels register themselves as a class, like an operator.
Notice the extra ``bl_`` variables used to set the context they display in.
.. literalinclude:: __/__/__/release/scripts/templates_py/ui_panel_simple.py
.. literalinclude:: ../../../release/scripts/templates_py/ui_panel_simple.py
To run the script:
@@ -393,11 +393,11 @@ so these are accessed as normal Python types.
Internal Types
--------------
Used for Blender data-blocks and collections: :class:`bpy.types.bpy_struct`
Used for Blender datablocks and collections: :class:`bpy.types.bpy_struct`
For data that contains its own attributes groups/meshes/bones/scenes... etc.
There are 2 main types that wrap Blenders data, one for data-blocks
There are 2 main types that wrap Blenders data, one for datablocks
(known internally as ``bpy_struct``), another for properties.
>>> bpy.context.object


@@ -57,7 +57,7 @@ Operator Example
++++++++++++++++
This script shows how operators can be used to model a link of a chain.
.. literalinclude:: __/examples/bmesh.ops.1.py
.. literalinclude:: ../examples/bmesh.ops.1.py
"""


@@ -48,8 +48,7 @@ python doc/python_api/sphinx_changelog_gen.py -- \
'''
{"module.name":
{"parent.class":
{"basic_type", "member_name":
("Name", type, range, length, default, descr, f_args, f_arg_types, f_ret_types)}, ...
{"basic_type", "member_name": ("Name", type, range, length, default, descr, f_args, f_arg_types, f_ret_types)}, ...
}, ...
}
'''
@@ -100,34 +99,34 @@ def api_dump():
prop_range = None
dump_class[prop_id] = (
"prop_rna", # basic_type
prop.name, # name
prop_type, # type
prop_range, # range
prop_length, # length
prop.default, # default
prop.description, # descr
Ellipsis, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
"prop_rna", # basic_type
prop.name, # name
prop_type, # type
prop_range, # range
prop_length, # length
prop.default, # default
prop.description, # descr
Ellipsis, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
del props
# python props, tricky since we dont know much about them.
for prop_id, attr in struct_info.get_py_properties():
dump_class[prop_id] = (
"prop_py", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
attr.__doc__, # descr
Ellipsis, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
"prop_py", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
attr.__doc__, # descr
Ellipsis, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
# kludge func -> props
funcs = [(func.identifier, func) for func in struct_info.functions]
@@ -138,17 +137,17 @@ def api_dump():
func_args_type = tuple([prop.type for prop in func.args])
dump_class[func_id] = (
"func_rna", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
func.description, # descr
func_args_ids, # f_args
func_args_type, # f_arg_types
func_ret_types, # f_ret_types
)
"func_rna", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
func.description, # descr
func_args_ids, # f_args
func_args_type, # f_arg_types
func_ret_types, # f_ret_types
)
del funcs
# kludge func -> props
@@ -159,17 +158,17 @@ def api_dump():
func_args_ids = tuple(inspect.getargspec(attr).args)
dump_class[func_id] = (
"func_py", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
attr.__doc__, # descr
func_args_ids, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
"func_py", # basic_type
Ellipsis, # name
Ellipsis, # type
Ellipsis, # range
Ellipsis, # length
Ellipsis, # default
attr.__doc__, # descr
func_args_ids, # f_args
Ellipsis, # f_arg_types
Ellipsis, # f_ret_types
)
del funcs
import pprint
@@ -337,19 +336,15 @@ def main():
parser = argparse.ArgumentParser(description=usage_text, epilog=epilog)
parser.add_argument(
"--dump", dest="dump", action='store_true',
help="When set the api will be dumped into blender_version.py")
parser.add_argument("--dump", dest="dump", action='store_true',
help="When set the api will be dumped into blender_version.py")
parser.add_argument(
"--api_from", dest="api_from", metavar='FILE',
help="File to compare from (previous version)")
parser.add_argument(
"--api_to", dest="api_to", metavar='FILE',
help="File to compare from (current)")
parser.add_argument(
"--api_out", dest="api_out", metavar='FILE',
help="Output sphinx changelog")
parser.add_argument("--api_from", dest="api_from", metavar='FILE',
help="File to compare from (previous version)")
parser.add_argument("--api_to", dest="api_to", metavar='FILE',
help="File to compare from (current)")
parser.add_argument("--api_out", dest="api_out", metavar='FILE',
help="Output sphinx changelog")
args = parser.parse_args(argv) # In this example we wont use the args


@@ -24,7 +24,7 @@ SCRIPT_HELP_MSG = """
API dump in RST files
---------------------
Run this script from Blender's root path once you have compiled Blender
Run this script from blenders root path once you have compiled blender
./blender.bin --background -noaudio --python doc/python_api/sphinx_doc_gen.py
@@ -61,14 +61,14 @@ Sphinx: PDF generation
"""
try:
import bpy # Blender module
import bpy # blender module
except ImportError:
print("\nERROR: this script must run from inside Blender")
print(SCRIPT_HELP_MSG)
import sys
sys.exit()
import rna_info # Blender module
import rna_info # blender module
def rna_info_BuildRNAInfo_cache():
@@ -181,13 +181,15 @@ def handle_args():
dest="log",
default=False,
action='store_true',
help="Log the output of the api dump and sphinx|latex "
"warnings and errors (default=False).\n"
"If given, save logs in:\n"
"* OUTPUT_DIR/.bpy.log\n"
"* OUTPUT_DIR/.sphinx-build.log\n"
"* OUTPUT_DIR/.sphinx-build_pdf.log\n"
"* OUTPUT_DIR/.latex_make.log",
help=(
"Log the output of the api dump and sphinx|latex "
"warnings and errors (default=False).\n"
"If given, save logs in:\n"
"* OUTPUT_DIR/.bpy.log\n"
"* OUTPUT_DIR/.sphinx-build.log\n"
"* OUTPUT_DIR/.sphinx-build_pdf.log\n"
"* OUTPUT_DIR/.latex_make.log",
),
required=False)
# parse only the args passed after '--'
@@ -222,12 +224,12 @@ if not ARGS.partial:
FILTER_BPY_OPS = None
FILTER_BPY_TYPES = None
EXCLUDE_INFO_DOCS = False
EXCLUDE_MODULES = []
EXCLUDE_MODULES = ()
else:
# can manually edit this too:
# FILTER_BPY_OPS = ("import.scene", ) # allow
# FILTER_BPY_TYPES = ("bpy_struct", "Operator", "ID") # allow
#FILTER_BPY_OPS = ("import.scene", ) # allow
#FILTER_BPY_TYPES = ("bpy_struct", "Operator", "ID") # allow
EXCLUDE_INFO_DOCS = True
EXCLUDE_MODULES = [
"aud",
@@ -260,7 +262,6 @@ else:
"bpy_extras",
"gpu",
"gpu.offscreen",
"idprop.types",
"mathutils",
"mathutils.bvhtree",
"mathutils.geometry",
@@ -274,7 +275,7 @@ else:
"freestyle.shaders",
"freestyle.types",
"freestyle.utils",
]
]
# ------
# Filter
@@ -300,9 +301,7 @@ else:
del m
del fnmatch
BPY_LOGGER.debug(
"Partial Doc Build, Skipping: %s\n" %
"\n ".join(sorted(EXCLUDE_MODULES)))
BPY_LOGGER.debug("Partial Doc Build, Skipping: %s\n" % "\n ".join(sorted(EXCLUDE_MODULES)))
#
# done filtering
@@ -312,39 +311,19 @@ try:
__import__("aud")
except ImportError:
BPY_LOGGER.debug("Warning: Built without 'aud' module, docs incomplete...")
EXCLUDE_MODULES.append("aud")
EXCLUDE_MODULES = list(EXCLUDE_MODULES) + ["aud"]
try:
__import__("freestyle")
except ImportError:
BPY_LOGGER.debug("Warning: Built without 'freestyle' module, docs incomplete...")
EXCLUDE_MODULES.extend([
"freestyle",
"freestyle.chainingiterators",
"freestyle.functions",
"freestyle.predicates",
"freestyle.shaders",
"freestyle.types",
"freestyle.utils",
])
# Source files we use, and need to copy to the OUTPUT_DIR
# to have working out-of-source builds.
# Note that ".." is replaced by "__" in the RST files,
# to avoid having to match Blender's source tree.
EXTRA_SOURCE_FILES = (
"../../../release/scripts/templates_py/bmesh_simple.py",
"../../../release/scripts/templates_py/operator_simple.py",
"../../../release/scripts/templates_py/ui_panel_simple.py",
"../../../release/scripts/templates_py/ui_previews_custom_icon.py",
"../examples/bge.constraints.py",
"../examples/bge.texture.1.py",
"../examples/bge.texture.2.py",
"../examples/bge.texture.py",
"../examples/bmesh.ops.1.py",
"../examples/bpy.app.translations.py",
)
EXCLUDE_MODULES = list(EXCLUDE_MODULES) + ["freestyle",
"freestyle.chainingiterators",
"freestyle.functions",
"freestyle.predicates",
"freestyle.shaders",
"freestyle.types",
"freestyle.utils"]
# examples
EXAMPLES_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, "examples"))
@@ -360,59 +339,52 @@ RST_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, "rst"))
# extra info, not api reference docs
# stored in ./rst/info_*
INFO_DOCS = (
("info_quickstart.rst",
"Blender/Python Quickstart: new to Blender/scripting and want to get your feet wet?"),
("info_overview.rst",
"Blender/Python API Overview: a more complete explanation of Python integration"),
("info_tutorial_addon.rst",
"Blender/Python Addon Tutorial: a step by step guide on how to write an addon from scratch"),
("info_api_reference.rst",
"Blender/Python API Reference Usage: examples of how to use the API reference docs"),
("info_best_practice.rst",
"Best Practice: Conventions to follow for writing good scripts"),
("info_tips_and_tricks.rst",
"Tips and Tricks: Hints to help you while writing scripts for Blender"),
("info_gotcha.rst",
"Gotcha's: some of the problems you may come up against when writing scripts"),
)
("info_quickstart.rst", "Blender/Python Quickstart: new to blender/scripting and want to get your feet wet?"),
("info_overview.rst", "Blender/Python API Overview: a more complete explanation of python integration"),
("info_tutorial_addon.rst", "Blender/Python Addon Tutorial: a step by step guide on how to write an addon from scratch"),
("info_api_reference.rst", "Blender/Python API Reference Usage: examples of how to use the API reference docs"),
("info_best_practice.rst", "Best Practice: Conventions to follow for writing good scripts"),
("info_tips_and_tricks.rst", "Tips and Tricks: Hints to help you while writing scripts for blender"),
("info_gotcha.rst", "Gotcha's: some of the problems you may come up against when writing scripts"),
)
# only support for properties atm.
RNA_BLACKLIST = {
# XXX messes up PDF!, really a bug but for now just workaround.
"UserPreferencesSystem": {"language", }
}
}
MODULE_GROUPING = {
"bmesh.types": (
("Base Mesh Type", '-'),
"BMesh",
("Mesh Elements", '-'),
"BMVert",
"BMEdge",
"BMFace",
"BMLoop",
("Sequence Accessors", '-'),
"BMElemSeq",
"BMVertSeq",
"BMEdgeSeq",
"BMFaceSeq",
"BMLoopSeq",
"BMIter",
("Selection History", '-'),
"BMEditSelSeq",
"BMEditSelIter",
("Custom-Data Layer Access", '-'),
"BMLayerAccessVert",
"BMLayerAccessEdge",
"BMLayerAccessFace",
"BMLayerAccessLoop",
"BMLayerCollection",
"BMLayerItem",
("Custom-Data Layer Types", '-'),
"BMLoopUV",
"BMDeformVert"
)
}
("Base Mesh Type", '-'),
"BMesh",
("Mesh Elements", '-'),
"BMVert",
"BMEdge",
"BMFace",
"BMLoop",
("Sequence Accessors", '-'),
"BMElemSeq",
"BMVertSeq",
"BMEdgeSeq",
"BMFaceSeq",
"BMLoopSeq",
"BMIter",
("Selection History", '-'),
"BMEditSelSeq",
"BMEditSelIter",
("Custom-Data Layer Access", '-'),
"BMLayerAccessVert",
"BMLayerAccessEdge",
"BMLayerAccessFace",
"BMLayerAccessLoop",
"BMLayerCollection",
"BMLayerItem",
("Custom-Data Layer Types", '-'),
"BMLoopUV",
"BMDeformVert"
)
}
# --------------------configure compile time options----------------------------
@@ -488,10 +460,10 @@ MethodDescriptorType = type(dict.get)
GetSetDescriptorType = type(int.real)
StaticMethodType = type(staticmethod(lambda: None))
from types import (
MemberDescriptorType,
MethodType,
FunctionType,
)
MemberDescriptorType,
MethodType,
FunctionType,
)
_BPY_STRUCT_FAKE = "bpy_struct"
_BPY_PROP_COLLECTION_FAKE = "bpy_prop_collection"
@@ -511,7 +483,7 @@ escape_rst.trans = str.maketrans({
"|": "\\|",
"*": "\\*",
"\\": "\\\\",
})
})
def is_struct_seq(value):
@@ -756,7 +728,7 @@ def py_c_func2sphinx(ident, fw, module_name, type_name, identifier, py_func, is_
def pyprop2sphinx(ident, fw, identifier, py_prop):
'''
Python property to sphinx
python property to sphinx
'''
# readonly properties use "data" directive, variables use "attribute" directive
if py_prop.fset is None:
@@ -856,8 +828,7 @@ def pymodule2sphinx(basepath, module_name, module, title):
# naughty, we also add getset's into PyStructs, this is not typical py but also not incorrect.
# type_name is only used for examples and messages
# "<class 'bpy.app.handlers'>" --> bpy.app.handlers
type_name = str(type(module)).strip("<>").split(" ", 1)[-1][1:-1]
type_name = str(type(module)).strip("<>").split(" ", 1)[-1][1:-1] # "<class 'bpy.app.handlers'>" --> bpy.app.handlers
if type(descr) == types.GetSetDescriptorType:
py_descr2sphinx("", fw, descr, module_name, type_name, key)
attribute_set.add(key)
@@ -918,8 +889,7 @@ def pymodule2sphinx(basepath, module_name, module, title):
for attribute, value, value_type in module_dir_value_type:
if value_type == FunctionType:
pyfunc2sphinx("", fw, module_name, None, attribute, value, is_class=False)
# both the same at the moment but to be future proof
elif value_type in {types.BuiltinMethodType, types.BuiltinFunctionType}:
elif value_type in {types.BuiltinMethodType, types.BuiltinFunctionType}: # both the same at the moment but to be future proof
# note: can't get args from these, so dump the string as is
# this means any module used like this must have fully formatted docstrings.
py_c_func2sphinx("", fw, module_name, None, attribute, value, is_class=False)
@@ -985,7 +955,7 @@ def pymodule2sphinx(basepath, module_name, module, title):
if type(descr) == ClassMethodDescriptorType:
py_descr2sphinx(" ", fw, descr, module_name, type_name, key)
# needed for pure Python classes
# needed for pure python classes
for key, descr in descr_items:
if type(descr) == FunctionType:
pyfunc2sphinx(" ", fw, module_name, type_name, key, descr, is_class=True)
@@ -1008,15 +978,12 @@ def pymodule2sphinx(basepath, module_name, module, title):
file.close()
# Changes in Blender will force errors here
# Changes in blender will force errors here
context_type_map = {
"active_base": ("ObjectBase", False),
"active_bone": ("EditBone", False),
"active_gpencil_frame": ("GreasePencilLayer", True),
"active_gpencil_layer": ("GPencilLayer", True),
"active_gpencil_brush": ("GPencilSculptBrush", False),
"active_gpencil_palette": ("GPencilPalette", True),
"active_gpencil_palettecolor": ("GPencilPaletteColor", True),
"active_node": ("Node", False),
"active_object": ("Object", False),
"active_operator": ("Operator", False),
@@ -1099,10 +1066,9 @@ def pycontext2sphinx(basepath):
fw(title_string("Context Access (bpy.context)", "="))
fw(".. module:: bpy.context\n")
fw("\n")
fw("The context members available depend on the area of Blender which is currently being accessed.\n")
fw("The context members available depend on the area of blender which is currently being accessed.\n")
fw("\n")
fw("Note that all context values are readonly,\n")
fw("but may be modified through the data api or by running operators\n\n")
fw("Note that all context values are readonly, but may be modified through the data api or by running operators\n\n")
def write_contex_cls():
@@ -1125,8 +1091,7 @@ def pycontext2sphinx(basepath):
if prop.identifier in struct_blacklist:
continue
type_descr = prop.get_type_description(
class_fmt=":class:`bpy.types.%s`", collection_id=_BPY_PROP_COLLECTION_ID)
type_descr = prop.get_type_description(class_fmt=":class:`bpy.types.%s`", collection_id=_BPY_PROP_COLLECTION_ID)
fw(".. data:: %s\n\n" % prop.identifier)
if prop.description:
fw(" %s\n\n" % prop.description)
@@ -1146,7 +1111,7 @@ def pycontext2sphinx(basepath):
del write_contex_cls
# end
# nasty, get strings directly from Blender because there is no other way to get it
# nasty, get strings directly from blender because there is no other way to get it
import ctypes
context_strings = (
@@ -1182,9 +1147,7 @@ def pycontext2sphinx(basepath):
# for member in sorted(unique):
# print(' "%s": ("", False),' % member)
if len(context_type_map) > len(unique):
raise Exception(
"Some types are not used: %s" %
str([member for member in context_type_map if member not in unique]))
raise Exception("Some types are not used: %s" % str([member for member in context_type_map if member not in unique]))
else:
pass # will have raised an error above
@@ -1263,11 +1226,11 @@ def pyrna2sphinx(basepath):
fw(ident + ":%s%s: %s\n" % (id_type, identifier, type_descr))
def write_struct(struct):
# if not struct.identifier.startswith("Sc") and not struct.identifier.startswith("I"):
# return
#if not struct.identifier.startswith("Sc") and not struct.identifier.startswith("I"):
# return
# if not struct.identifier == "Object":
# return
#if not struct.identifier == "Object":
# return
filepath = os.path.join(basepath, "bpy.types.%s.rst" % struct.identifier)
file = open(filepath, "w", encoding="utf-8")
@@ -1308,11 +1271,7 @@ def pyrna2sphinx(basepath):
fw(", ".join((":class:`%s`" % base_id) for base_id in base_ids))
fw("\n\n")
subclass_ids = [
s.identifier for s in structs.values()
if s.base is struct
if not rna_info.rna_id_ignore(s.identifier)
]
subclass_ids = [s.identifier for s in structs.values() if s.base is struct if not rna_info.rna_id_ignore(s.identifier)]
subclass_ids.sort()
if subclass_ids:
fw("subclasses --- \n" + ", ".join((":class:`%s`" % s) for s in subclass_ids) + "\n\n")
@@ -1363,7 +1322,7 @@ def pyrna2sphinx(basepath):
fw(" :type: %s\n\n" % type_descr)
# Python attributes
# python attributes
py_properties = struct.get_py_properties()
py_prop = None
for identifier, py_prop in py_properties:
@@ -1373,8 +1332,7 @@ def pyrna2sphinx(basepath):
for func in struct.functions:
args_str = ", ".join(prop.get_arg_default(force=False) for prop in func.args)
fw(" .. %s:: %s(%s)\n\n" %
("classmethod" if func.is_classmethod else "method", func.identifier, args_str))
fw(" .. %s:: %s(%s)\n\n" % ("classmethod" if func.is_classmethod else "method", func.identifier, args_str))
fw(" %s\n\n" % func.description)
for prop in func.args:
@@ -1385,10 +1343,8 @@ def pyrna2sphinx(basepath):
elif func.return_values: # multiple return values
fw(" :return (%s):\n" % ", ".join(prop.identifier for prop in func.return_values))
for prop in func.return_values:
# TODO, pyrna_enum2sphinx for multiple return values... actually dont
# think we even use this but still!!!
type_descr = prop.get_type_description(
as_ret=True, class_fmt=":class:`%s`", collection_id=_BPY_PROP_COLLECTION_ID)
# TODO, pyrna_enum2sphinx for multiple return values... actually dont think we even use this but still!!!
type_descr = prop.get_type_description(as_ret=True, class_fmt=":class:`%s`", collection_id=_BPY_PROP_COLLECTION_ID)
descr = prop.description
if not descr:
descr = prop.name
@@ -1401,7 +1357,7 @@ def pyrna2sphinx(basepath):
fw("\n")
# Python methods
# python methods
py_funcs = struct.get_py_functions()
py_func = None
@@ -1424,10 +1380,7 @@ def pyrna2sphinx(basepath):
del lines[:]
if _BPY_STRUCT_FAKE:
descr_items = [
(key, descr) for key, descr in sorted(bpy.types.Struct.__bases__[0].__dict__.items())
if not key.startswith("__")
]
descr_items = [(key, descr) for key, descr in sorted(bpy.types.Struct.__bases__[0].__dict__.items()) if not key.startswith("__")]
if _BPY_STRUCT_FAKE:
for key, descr in descr_items:
@@ -1523,28 +1476,19 @@ def pyrna2sphinx(basepath):
fw("\n")
if use_subclasses:
subclass_ids = [
s.identifier for s in structs.values()
if s.base is None
if not rna_info.rna_id_ignore(s.identifier)
]
subclass_ids = [s.identifier for s in structs.values() if s.base is None if not rna_info.rna_id_ignore(s.identifier)]
if subclass_ids:
fw("subclasses --- \n" + ", ".join((":class:`%s`" % s) for s in sorted(subclass_ids)) + "\n\n")
fw(".. class:: %s\n\n" % class_name)
fw(" %s\n\n" % descr_str)
fw(" .. note::\n\n")
fw(" Note that bpy.types.%s is not actually available from within Blender,\n"
" it only exists for the purpose of documentation.\n\n" % class_name)
fw(" Note that bpy.types.%s is not actually available from within blender, it only exists for the purpose of documentation.\n\n" % class_name)
descr_items = [
(key, descr) for key, descr in sorted(class_value.__dict__.items())
if not key.startswith("__")
]
descr_items = [(key, descr) for key, descr in sorted(class_value.__dict__.items()) if not key.startswith("__")]
for key, descr in descr_items:
# GetSetDescriptorType, GetSetDescriptorType's are not documented yet
if type(descr) == MethodDescriptorType:
if type(descr) == MethodDescriptorType: # GetSetDescriptorType, GetSetDescriptorType's are not documented yet
py_descr2sphinx(" ", fw, descr, "bpy.types", class_name, key)
for key, descr in descr_items:
@@ -1555,15 +1499,11 @@ def pyrna2sphinx(basepath):
# write fake classes
if _BPY_STRUCT_FAKE:
class_value = bpy.types.Struct.__bases__[0]
fake_bpy_type(
class_value, _BPY_STRUCT_FAKE,
"built-in base class for all classes in bpy.types.", use_subclasses=True)
fake_bpy_type(class_value, _BPY_STRUCT_FAKE, "built-in base class for all classes in bpy.types.", use_subclasses=True)
if _BPY_PROP_COLLECTION_FAKE:
class_value = bpy.data.objects.__class__
fake_bpy_type(
class_value, _BPY_PROP_COLLECTION_FAKE,
"built-in class used for all collections.", use_subclasses=False)
fake_bpy_type(class_value, _BPY_PROP_COLLECTION_FAKE, "built-in class used for all collections.", use_subclasses=False)
# operators
def write_ops():
@@ -1675,13 +1615,11 @@ def write_rst_contents(basepath):
fw(title_string("Blender Documentation Contents", "%", double=True))
fw("\n")
fw("Welcome, this document is an API reference for Blender %s, built %s.\n" %
(BLENDER_VERSION_DOTS, BLENDER_DATE))
fw("Welcome, this document is an API reference for Blender %s, built %s.\n" % (BLENDER_VERSION_DOTS, BLENDER_DATE))
fw("\n")
# fw("`A PDF version of this document is also available <%s>`_\n" % BLENDER_PDF_FILENAME)
fw("This site can be downloaded for offline use `Download the full Documentation (zipped HTML files) <%s>`_\n" %
BLENDER_ZIP_FILENAME)
fw("This site can be downloaded for offline use `Download the full Documentation (zipped HTML files) <%s>`_\n" % BLENDER_ZIP_FILENAME)
fw("\n")
@@ -1714,7 +1652,7 @@ def write_rst_contents(basepath):
# C modules
"bpy.props",
)
)
for mod in app_modules:
if mod not in EXCLUDE_MODULES:
@@ -1735,10 +1673,9 @@ def write_rst_contents(basepath):
"freestyle", "bgl", "blf",
"gpu", "gpu.offscreen",
"aud", "bpy_extras",
"idprop.types",
# bmesh, submodules are in own page
"bmesh",
)
)
for mod in standalone_modules:
if mod not in EXCLUDE_MODULES:
@@ -1776,8 +1713,7 @@ def write_rst_contents(basepath):
fw(" * mesh creation and editing functions\n")
fw(" \n")
fw(" These parts of the API are relatively stable and are unlikely to change significantly\n")
fw(" * data API, access to attributes of Blender data such as mesh verts, material color,\n")
fw(" timeline frames and scene objects\n")
fw(" * data API, access to attributes of blender data such as mesh verts, material color, timeline frames and scene objects\n")
fw(" * user interface functions for defining buttons, creation of menus, headers, panels\n")
fw(" * render engine integration\n")
fw(" * modules: bgl, mathutils & game engine.\n")
@@ -1849,11 +1785,11 @@ def write_rst_data(basepath):
fw(title_string("Data Access (bpy.data)", "="))
fw(".. module:: bpy\n")
fw("\n")
fw("This module is used for all Blender/Python access.\n")
fw("This module is used for all blender/python access.\n")
fw("\n")
fw(".. data:: data\n")
fw("\n")
fw(" Access to Blender's internal data\n")
fw(" Access to blenders internal data\n")
fw("\n")
fw(" :type: :class:`bpy.types.BlendData`\n")
fw("\n")
@@ -1868,38 +1804,37 @@ def write_rst_importable_modules(basepath):
Write the rst files of importable modules
'''
importable_modules = {
# Python_modules
"bpy.path": "Path Utilities",
"bpy.utils": "Utilities",
"bpy_extras": "Extra Utilities",
# python_modules
"bpy.path" : "Path Utilities",
"bpy.utils" : "Utilities",
"bpy_extras" : "Extra Utilities",
# C_modules
"aud": "Audio System",
"blf": "Font Drawing",
"gpu.offscreen": "GPU Off-Screen Buffer",
"bmesh": "BMesh Module",
"bmesh.types": "BMesh Types",
"bmesh.utils": "BMesh Utilities",
"bmesh.geometry": "BMesh Geometry Utilities",
"bpy.app": "Application Data",
"bpy.app.handlers": "Application Handlers",
"bpy.app.translations": "Application Translations",
"bpy.props": "Property Definitions",
"idprop.types": "ID Property Access",
"mathutils": "Math Types & Utilities",
"mathutils.geometry": "Geometry Utilities",
"mathutils.bvhtree": "BVHTree Utilities",
"mathutils.kdtree": "KDTree Utilities",
"aud" : "Audio System",
"blf" : "Font Drawing",
"gpu.offscreen" : "GPU Off-Screen Buffer",
"bmesh" : "BMesh Module",
"bmesh.types" : "BMesh Types",
"bmesh.utils" : "BMesh Utilities",
"bmesh.geometry" : "BMesh Geometry Utilities",
"bpy.app" : "Application Data",
"bpy.app.handlers" : "Application Handlers",
"bpy.app.translations" : "Application Translations",
"bpy.props" : "Property Definitions",
"mathutils" : "Math Types & Utilities",
"mathutils.geometry" : "Geometry Utilities",
"mathutils.bvhtree" : "BVHTree Utilities",
"mathutils.kdtree" : "KDTree Utilities",
"mathutils.interpolate": "Interpolation Utilities",
"mathutils.noise": "Noise Utilities",
"freestyle": "Freestyle Module",
"freestyle.types": "Freestyle Types",
"freestyle.predicates": "Freestyle Predicates",
"freestyle.functions": "Freestyle Functions",
"freestyle.chainingiterators": "Freestyle Chaining Iterators",
"freestyle.shaders": "Freestyle Shaders",
"freestyle.utils": "Freestyle Utilities",
}
"mathutils.noise" : "Noise Utilities",
"freestyle" : "Freestyle Module",
"freestyle.types" : "Freestyle Types",
"freestyle.predicates" : "Freestyle Predicates",
"freestyle.functions" : "Freestyle Functions",
"freestyle.chainingiterators" : "Freestyle Chaining Iterators",
"freestyle.shaders" : "Freestyle Shaders",
"freestyle.utils" : "Freestyle Utilities",
}
for mod_name, mod_descr in importable_modules.items():
if mod_name not in EXCLUDE_MODULES:
module = __import__(mod_name,
@@ -1914,7 +1849,7 @@ def copy_handwritten_rsts(basepath):
for info, info_desc in INFO_DOCS:
shutil.copy2(os.path.join(RST_DIR, info), basepath)
# TODO put this docs in Blender's code and use import as per modules above
# TODO put this docs in blender's code and use import as per modules above
handwritten_modules = [
"bge.logic",
"bge.render",
@@ -1955,21 +1890,6 @@ def copy_handwritten_rsts(basepath):
shutil.copy2(os.path.join(RST_DIR, f), basepath)
def copy_handwritten_extra(basepath):
for f_src in EXTRA_SOURCE_FILES:
if os.sep != "/":
f_src = os.sep.join(f_src.split("/"))
f_dst = f_src.replace("..", "__")
f_src = os.path.join(RST_DIR, f_src)
f_dst = os.path.join(basepath, f_dst)
os.makedirs(os.path.dirname(f_dst), exist_ok=True)
shutil.copy2(f_src, f_dst)
def rna2sphinx(basepath):
try:
@@ -2001,48 +1921,35 @@ def rna2sphinx(basepath):
# copy the other rsts
copy_handwritten_rsts(basepath)
# copy source files referenced
copy_handwritten_extra(basepath)
def align_sphinx_in_to_sphinx_in_tmp(dir_src, dir_dst):
def align_sphinx_in_to_sphinx_in_tmp():
'''
Move changed files from SPHINX_IN_TMP to SPHINX_IN
'''
import filecmp
# possible the dir doesn't exist when running recursively
os.makedirs(dir_dst, exist_ok=True)
sphinx_dst_files = set(os.listdir(dir_dst))
sphinx_src_files = set(os.listdir(dir_src))
sphinx_in_files = set(os.listdir(SPHINX_IN))
sphinx_in_tmp_files = set(os.listdir(SPHINX_IN_TMP))
# remove deprecated files that have been removed
for f in sorted(sphinx_dst_files):
if f not in sphinx_src_files:
for f in sorted(sphinx_in_files):
if f not in sphinx_in_tmp_files:
BPY_LOGGER.debug("\tdeprecated: %s" % f)
f_dst = os.path.join(dir_dst, f)
if os.path.isdir(f_dst):
shutil.rmtree(f_dst, True)
else:
os.remove(f_dst)
os.remove(os.path.join(SPHINX_IN, f))
# freshen with new files.
for f in sorted(sphinx_src_files):
f_src = os.path.join(dir_src, f)
f_dst = os.path.join(dir_dst, f)
for f in sorted(sphinx_in_tmp_files):
f_from = os.path.join(SPHINX_IN_TMP, f)
f_to = os.path.join(SPHINX_IN, f)
if os.path.isdir(f_src):
align_sphinx_in_to_sphinx_in_tmp(f_src, f_dst)
else:
do_copy = True
if f in sphinx_dst_files:
if filecmp.cmp(f_src, f_dst):
do_copy = False
do_copy = True
if f in sphinx_in_files:
if filecmp.cmp(f_from, f_to):
do_copy = False
if do_copy:
BPY_LOGGER.debug("\tupdating: %s" % f)
shutil.copy(f_src, f_dst)
if do_copy:
BPY_LOGGER.debug("\tupdating: %s" % f)
shutil.copy(f_from, f_to)
def refactor_sphinx_log(sphinx_logfile):
@@ -2129,7 +2036,7 @@ def main():
shutil.rmtree(SPHINX_OUT_PDF, True)
else:
# move changed files in SPHINX_IN
align_sphinx_in_to_sphinx_in_tmp(SPHINX_IN_TMP, SPHINX_IN)
align_sphinx_in_to_sphinx_in_tmp()
# report which example files weren't used
EXAMPLE_SET_UNUSED = EXAMPLE_SET - EXAMPLE_SET_USED


@@ -3,6 +3,11 @@
# bash doc/python_api/sphinx_doc_gen.sh
# ssh upload means you need an account on the server
if [ "$1" == "" ] ; then
echo "Expected a single argument for the username on blender.org, aborting"
exit 1
fi
# ----------------------------------------------------------------------------
# Upload vars
@@ -17,15 +22,9 @@ if [ -z $BLENDER_BIN ] ; then
BLENDER_BIN="./blender.bin"
fi
if [ "$1" == "" ] ; then
echo "Expected a single argument for the username on blender.org, skipping upload step!"
DO_UPLOAD=false
else
SSH_USER=$1
SSH_HOST=$SSH_USER"@blender.org"
SSH_UPLOAD="/data/www/vhosts/www.blender.org/api" # blender_python_api_VERSION, added after
fi
SSH_USER=$1
SSH_HOST=$SSH_USER"@blender.org"
SSH_UPLOAD="/data/www/vhosts/www.blender.org/api" # blender_python_api_VERSION, added after
# ----------------------------------------------------------------------------
# Blender Version & Info
@@ -34,12 +33,10 @@ fi
# "_".join(str(v) for v in bpy.app.version)
# custom blender vars
blender_srcdir=$(dirname -- $0)/../..
blender_version_header="$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h"
blender_version=$(grep "BLENDER_VERSION\s" "$blender_version_header" | awk '{print $3}')
blender_version_char=$(grep "BLENDER_VERSION_CHAR\s" "$blender_version_header" | awk '{print $3}')
blender_version_cycle=$(grep "BLENDER_VERSION_CYCLE\s" "$blender_version_header" | awk '{print $3}')
blender_subversion=$(grep "BLENDER_SUBVERSION\s" "$blender_version_header" | awk '{print $3}')
unset blender_version_header
blender_version=$(grep "BLENDER_VERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_char=$(grep "BLENDER_VERSION_CHAR\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_version_cycle=$(grep "BLENDER_VERSION_CYCLE\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
blender_subversion=$(grep "BLENDER_SUBVERSION\s" "$blender_srcdir/source/blender/blenkernel/BKE_blender_version.h" | awk '{print $3}')
if [ "$blender_version_cycle" = "release" ] ; then
BLENDER_VERSION=$(expr $blender_version / 100)_$(expr $blender_version % 100)$blender_version_char"_release"
@@ -51,8 +48,6 @@ SSH_UPLOAD_FULL=$SSH_UPLOAD/"blender_python_api_"$BLENDER_VERSION
SPHINXBASE=doc/python_api
SPHINX_WORKDIR="$(mktemp --directory --suffix=.sphinx)"
# ----------------------------------------------------------------------------
# Generate reStructuredText (blender/python only)
@@ -64,25 +59,23 @@ if $DO_EXE_BLENDER ; then
-noaudio \
--factory-startup \
--python-exit-code 1 \
--python $SPHINXBASE/sphinx_doc_gen.py \
-- \
--output=$SPHINX_WORKDIR
--python $SPHINXBASE/sphinx_doc_gen.py
if (($? != 0)) ; then
if (($? == 1)) ; then
echo "Generating documentation failed, aborting"
exit 1
fi
fi
# ----------------------------------------------------------------------------
# Generate HTML (sphinx)
if $DO_OUT_HTML ; then
# sphinx-build -n -b html $SPHINX_WORKDIR/sphinx-in $SPHINX_WORKDIR/sphinx-out
# sphinx-build -n -b html $SPHINXBASE/sphinx-in $SPHINXBASE/sphinx-out
# annoying bug in sphinx makes it very slow unless we do this. should report.
cd $SPHINX_WORKDIR
cd $SPHINXBASE
sphinx-build -b html sphinx-in sphinx-out
# XXX, saves space on upload and zip, should move HTML outside
@@ -110,21 +103,20 @@ fi
# Generate PDF (sphinx/laytex)
if $DO_OUT_PDF ; then
cd $SPHINX_WORKDIR
sphinx-build -n -b latex $SPHINX_WORKDIR/sphinx-in $SPHINX_WORKDIR/sphinx-out
make -C $SPHINX_WORKDIR/sphinx-out
mv $SPHINX_WORKDIR/sphinx-out/contents.pdf \
$SPHINX_WORKDIR/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf
sphinx-build -n -b latex $SPHINXBASE/sphinx-in $SPHINXBASE/sphinx-out
make -C $SPHINXBASE/sphinx-out
mv $SPHINXBASE/sphinx-out/contents.pdf $SPHINXBASE/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf
fi
# ----------------------------------------------------------------------------
# Upload to blender servers, comment this section for testing
if $DO_UPLOAD ; then
cp $SPHINX_WORKDIR/sphinx-out/contents.html $SPHINX_WORKDIR/sphinx-out/index.html
cp $SPHINXBASE/sphinx-out/contents.html $SPHINXBASE/sphinx-out/index.html
ssh $SSH_USER@blender.org 'rm -rf '$SSH_UPLOAD_FULL'/*'
rsync --progress -ave "ssh -p 22" $SPHINX_WORKDIR/sphinx-out/* $SSH_HOST:$SSH_UPLOAD_FULL/
rsync --progress -ave "ssh -p 22" $SPHINXBASE/sphinx-out/* $SSH_HOST:$SSH_UPLOAD_FULL/
## symlink the dir to a static URL
#ssh $SSH_USER@blender.org 'rm '$SSH_UPLOAD'/250PythonDoc && ln -s '$SSH_UPLOAD_FULL' '$SSH_UPLOAD'/250PythonDoc'
@@ -142,15 +134,11 @@ if $DO_UPLOAD ; then
if $DO_OUT_PDF ; then
# rename so local PDF has matching name.
rsync --progress -ave "ssh -p 22" \
$SPHINX_WORKDIR/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf \
$SSH_HOST:$SSH_UPLOAD_FULL/blender_python_reference_$BLENDER_VERSION.pdf
rsync --progress -ave "ssh -p 22" $SPHINXBASE/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf $SSH_HOST:$SSH_UPLOAD_FULL/blender_python_reference_$BLENDER_VERSION.pdf
fi
if $DO_OUT_HTML_ZIP ; then
rsync --progress -ave "ssh -p 22" \
$SPHINX_WORKDIR/blender_python_reference_$BLENDER_VERSION.zip \
$SSH_HOST:$SSH_UPLOAD_FULL/blender_python_reference_$BLENDER_VERSION.zip
rsync --progress -ave "ssh -p 22" $SPHINXBASE/blender_python_reference_$BLENDER_VERSION.zip $SSH_HOST:$SSH_UPLOAD_FULL/blender_python_reference_$BLENDER_VERSION.zip
fi
fi
@@ -161,5 +149,5 @@ fi
echo ""
echo "Finished! view the docs from: "
if $DO_OUT_HTML ; then echo " html:" $SPHINX_WORKDIR/sphinx-out/contents.html ; fi
if $DO_OUT_PDF ; then echo " pdf:" $SPHINX_WORKDIR/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf ; fi
if $DO_OUT_HTML ; then echo " html:" $SPHINXBASE/sphinx-out/contents.html ; fi
if $DO_OUT_PDF ; then echo " pdf:" $SPHINXBASE/sphinx-out/blender_python_reference_$BLENDER_VERSION.pdf ; fi


@@ -1,182 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Bastien Montagne
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
"""
This is a helper script to generate Blender Python API documentation (using Sphinx), and update server data using rsync.
You'll need to specify your user login and password, obviously.
Example usage:
./sphinx_doc_update.py --mirror ../../../docs/remote_api_backup/ --source ../.. --blender ../../../build_cmake/bin/blender --user foobar --password barfoo
"""
import os
import shutil
import subprocess
import sys
import tempfile
import zipfile
DEFAULT_RSYNC_SERVER = "www.blender.org"
DEFAULT_RSYNC_ROOT = "/api/"
DEFAULT_SYMLINK_ROOT = "/data/www/vhosts/www.blender.org/api"
def argparse_create():
import argparse
global __doc__
# When --help or no args are given, print this help
usage_text = __doc__
parser = argparse.ArgumentParser(description=usage_text,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument(
"--mirror", dest="mirror_dir",
metavar='PATH', required=True,
help="Path to local rsync mirror of api doc server")
parser.add_argument(
"--source", dest="source_dir",
metavar='PATH', required=True,
help="Path to Blender git repository")
parser.add_argument(
"--blender", dest="blender",
metavar='PATH', required=True,
help="Path to Blender executable")
parser.add_argument(
"--rsync-server", dest="rsync_server", default=DEFAULT_RSYNC_SERVER,
metavar='RSYNCSERVER', type=str, required=False,
help=("rsync server address"))
parser.add_argument(
"--rsync-root", dest="rsync_root", default=DEFAULT_RSYNC_ROOT,
metavar='RSYNCROOT', type=str, required=False,
help=("Root path of API doc on rsync server"))
parser.add_argument(
"--user", dest="user",
metavar='USER', type=str, required=True,
help=("User to login on rsync server"))
parser.add_argument(
"--password", dest="password",
metavar='PASSWORD', type=str, required=True,
help=("Password to login on rsync server"))
return parser
def main():
# ----------
# Parse Args
args = argparse_create().parse_args()
rsync_base = "rsync://%s@%s:%s" % (args.user, args.rsync_server, args.rsync_root)
# I) Update local mirror using rsync.
rsync_mirror_cmd = ("rsync", "--delete-after", "-avzz", rsync_base, args.mirror_dir)
subprocess.run(rsync_mirror_cmd, env=dict(os.environ, RSYNC_PASSWORD=args.password))
with tempfile.TemporaryDirectory() as tmp_dir:
# II) Generate doc source in temp dir.
doc_gen_cmd = (args.blender, "--background", "-noaudio", "--factory-startup", "--python-exit-code", "1",
"--python", "%s/doc/python_api/sphinx_doc_gen.py" % args.source_dir, "--",
"--output", tmp_dir)
subprocess.run(doc_gen_cmd)
# III) Get Blender version info.
blenver = ""
getver_file = os.path.join(tmp_dir, "blendver.txt")
getver_script = (""
"import sys, bpy\n"
"with open(sys.argv[-1], 'w') as f:\n"
" f.write('%d_%d%s_release' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if bpy.app.version_cycle in {'rc', 'release'} else '%d_%d_%d' % bpy.app.version)\n")
get_ver_cmd = (args.blender, "--background", "-noaudio", "--factory-startup", "--python-exit-code", "1",
"--python-expr", getver_script, "--", getver_file)
subprocess.run(get_ver_cmd)
with open(getver_file) as f:
blenver = f.read()
os.remove(getver_file)
# IV) Build doc.
curr_dir = os.getcwd()
os.chdir(tmp_dir)
sphinx_cmd = ("sphinx-build", "-b", "html", "sphinx-in", "sphinx-out")
subprocess.run(sphinx_cmd)
shutil.rmtree(os.path.join("sphinx-out", ".doctrees"))
os.chdir(curr_dir)
# V) Cleanup existing matching dir in server mirror (if any), and copy new doc.
api_name = "blender_python_api_%s" % blenver
api_dir = os.path.join(args.mirror_dir, api_name)
if os.path.exists(api_dir):
shutil.rmtree(api_dir)
os.rename(os.path.join(tmp_dir, "sphinx-out"), api_dir)
# VI) Create zip archive.
zip_name = "blender_python_reference_%s" % blenver
zip_path = os.path.join(args.mirror_dir, zip_name)
with zipfile.ZipFile(zip_path, 'w') as zf:
for de in os.scandir(api_dir):
zf.write(de.path, arcname=os.path.join(zip_name, de.name))
os.rename(zip_path, os.path.join(api_dir, "%s.zip" % zip_name))
# VII) Create symlinks and html redirects.
#~ os.symlink(os.path.join(DEFAULT_SYMLINK_ROOT, api_name, "contents.html"), os.path.join(api_dir, "index.html"))
os.symlink("./contents.html", os.path.join(api_dir, "index.html"))
if blenver.endswith("release"):
symlink = os.path.join(args.mirror_dir, "blender_python_api_current")
os.remove(symlink)
os.symlink("./%s" % api_name, symlink)
with open(os.path.join(args.mirror_dir, "250PythonDoc/index.html"), 'w') as f:
f.write("<html><head><title>Redirecting...</title><meta http-equiv=\"REFRESH\""
"content=\"0;url=../%s/\"></head><body>Redirecting...</body></html>" % api_name)
else:
symlink = os.path.join(args.mirror_dir, "blender_python_api_master")
os.remove(symlink)
os.symlink("./%s" % api_name, symlink)
with open(os.path.join(args.mirror_dir, "blender_python_api/index.html"), 'w') as f:
f.write("<html><head><title>Redirecting...</title><meta http-equiv=\"REFRESH\""
"content=\"0;url=../%s/\"></head><body>Redirecting...</body></html>" % api_name)
# VIII) Upload (first do a dry-run so user can ensure everything is OK).
print("Doc generated in local mirror %s, please check it before uploading "
"(hit [Enter] to continue, [Ctrl-C] to exit):" % api_dir)
sys.stdin.read(1)
rsync_mirror_cmd = ("rsync", "--dry-run", "--delete-after", "-avzz", args.mirror_dir, rsync_base)
subprocess.run(rsync_mirror_cmd, env=dict(os.environ, RSYNC_PASSWORD=args.password))
print("Rsync upload simulated, please check every thing is OK (hit [Enter] to continue, [Ctrl-C] to exit):")
sys.stdin.read(1)
rsync_mirror_cmd = ("rsync", "--delete-after", "-avzz", args.mirror_dir, rsync_base)
subprocess.run(rsync_mirror_cmd, env=dict(os.environ, RSYNC_PASSWORD=args.password))
if __name__ == "__main__":
main()


@@ -29,14 +29,6 @@ add_subdirectory(curve_fit_nd)
# Otherwise we get warnings here that we cant fix in external projects
remove_strict_flags()
# Not a strict flag, but noisy for code we don't maintain
if(CMAKE_COMPILER_IS_GNUCC)
remove_cc_flag(
"-Wmisleading-indentation"
)
endif()
add_subdirectory(rangetree)
add_subdirectory(wcwidth)
@@ -105,7 +97,6 @@ endif()
if(WITH_GTESTS)
add_subdirectory(gtest)
add_subdirectory(gmock)
endif()
if(WITH_SDL AND WITH_SDL_DYNLOAD)


@@ -1,6 +0,0 @@
Project: Eigen, template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms
URL: http://eigen.tuxfamily.org/index.php?title=Main_Page
License: GPLv3+
Upstream version: 3.2.7
Local modifications:
- OpenMP fix for MSVC2015, see http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1131


@@ -1,6 +0,0 @@
Project: AutoPackage
URL: http://autopackage.org/docs/binreloc (original, defunct)
http://alien.cern.ch/cache/autopackage-1.0/site/docs/binreloc/ (cache)
License: Public Domain
Upstream version: Unknown (Last Release)
Local modifications: None


@@ -151,8 +151,8 @@ static btScalar EdgeSeparation(const btBox2dShape* poly1, const btTransform& xf1
int index = 0;
btScalar minDot = BT_LARGE_FLOAT;
if( count2 > 0 )
index = (int) normal1.minDot( vertices2, count2, minDot);
if( count2 > 0 )
index = (int) normal1.minDot( vertices2, count2, minDot);
btVector3 v1 = b2Mul(xf1, vertices1[edge1]);
btVector3 v2 = b2Mul(xf2, vertices2[index]);
@@ -174,9 +174,9 @@ static btScalar FindMaxSeparation(int* edgeIndex,
// Find edge normal on poly1 that has the largest projection onto d.
int edge = 0;
btScalar maxDot;
if( count1 > 0 )
edge = (int) dLocal1.maxDot( normals1, count1, maxDot);
btScalar maxDot;
if( count1 > 0 )
edge = (int) dLocal1.maxDot( normals1, count1, maxDot);
// Get the separation for the edge normal.
btScalar s = EdgeSeparation(poly1, xf1, edge, poly2, xf2);


@@ -232,8 +232,8 @@ void btCompoundCollisionAlgorithm::processCollision (const btCollisionObjectWrap
m_compoundShapeRevision = compoundShape->getUpdateRevision();
}
if (m_childCollisionAlgorithms.size()==0)
return;
if (m_childCollisionAlgorithms.size()==0)
return;
const btDbvt* tree = compoundShape->getDynamicAabbTree();
//use a dynamic aabb tree to cull potential child-overlaps


@@ -1,4 +0,0 @@
Project: Carve, CSG library
URL: https://code.google.com/archive/p/carve/
Upstream version 9a85d733a43d
Local modifications: See patches/ folder


@@ -1,4 +0,0 @@
Project: Ceres Solver
URL: http://ceres-solver.org/
Upstream version 1.11 (aef9c9563b08d5f39eee1576af133a84749d1b48)
Local modifications: None


@@ -1,5 +0,0 @@
Project: OpenCL Wrangler
URL: https://github.com/OpenCLWrangler/clew
License: Apache 2.0
Upstream version: 309a653
Local modifications: None


@@ -137,17 +137,6 @@ PFNCLCREATEFROMGLTEXTURE3D __clewCreateFromGLTexture3D = NULL;
#endif
PFNCLGETGLCONTEXTINFOKHR __clewGetGLContextInfoKHR = NULL;
static CLEW_DYNLIB_HANDLE dynamic_library_open_find(const char **paths) {
int i = 0;
while (paths[i] != NULL) {
CLEW_DYNLIB_HANDLE lib = CLEW_DYNLIB_OPEN(paths[i]);
if (lib != NULL) {
return lib;
}
++i;
}
return NULL;
}
static void clewExit(void)
{
@@ -162,15 +151,11 @@ static void clewExit(void)
int clewInit()
{
#ifdef _WIN32
const char *paths[] = {"OpenCL.dll", NULL};
const char *path = "OpenCL.dll";
#elif defined(__APPLE__)
const char *paths[] = {"/Library/Frameworks/OpenCL.framework/OpenCL", NULL};
const char *path = "/Library/Frameworks/OpenCL.framework/OpenCL";
#else
const char *paths[] = {"libOpenCL.so",
"libOpenCL.so.0",
"libOpenCL.so.1",
"libOpenCL.so.2",
NULL};
const char *path = "libOpenCL.so";
#endif
int error = 0;
@@ -182,7 +167,7 @@ int clewInit()
}
// Load library
module = dynamic_library_open_find(paths);
module = CLEW_DYNLIB_OPEN(path);
// Check for errors
if (module == NULL)


@@ -1,5 +0,0 @@
Project: Cuda Wrangler
URL: https://github.com/CudaWrangler/cuew
License: Apache 2.0
Upstream version: e2e0315
Local modifications: None


@@ -131,8 +131,8 @@ typedef struct CUsurfref_st* CUsurfref;
typedef struct CUevent_st* CUevent;
typedef struct CUstream_st* CUstream;
typedef struct CUgraphicsResource_st* CUgraphicsResource;
typedef unsigned long long CUtexObject;
typedef unsigned long long CUsurfObject;
typedef unsigned CUtexObject;
typedef unsigned CUsurfObject;
typedef struct CUuuid_st {
char bytes[16];
@@ -603,7 +603,7 @@ typedef struct CUDA_ARRAY_DESCRIPTOR_st {
size_t Width;
size_t Height;
CUarray_format Format;
unsigned int NumChannels;
unsigned NumChannels;
} CUDA_ARRAY_DESCRIPTOR;
typedef struct CUDA_ARRAY3D_DESCRIPTOR_st {
@@ -611,8 +611,8 @@ typedef struct CUDA_ARRAY3D_DESCRIPTOR_st {
size_t Height;
size_t Depth;
CUarray_format Format;
unsigned int NumChannels;
unsigned int Flags;
unsigned NumChannels;
unsigned Flags;
} CUDA_ARRAY3D_DESCRIPTOR;
typedef struct CUDA_RESOURCE_DESC_st {
@@ -627,13 +627,13 @@ typedef struct CUDA_RESOURCE_DESC_st {
struct {
CUdeviceptr devPtr;
CUarray_format format;
unsigned int numChannels;
unsigned numChannels;
size_t sizeInBytes;
} linear;
struct {
CUdeviceptr devPtr;
CUarray_format format;
unsigned int numChannels;
unsigned numChannels;
size_t width;
size_t height;
size_t pitchInBytes;
@@ -642,14 +642,14 @@ typedef struct CUDA_RESOURCE_DESC_st {
int reserved[32];
} reserved;
} res;
unsigned int flags;
unsigned flags;
} CUDA_RESOURCE_DESC;
typedef struct CUDA_TEXTURE_DESC_st {
CUaddress_mode addressMode[3];
CUfilter_mode filterMode;
unsigned int flags;
unsigned int maxAnisotropy;
unsigned flags;
unsigned maxAnisotropy;
CUfilter_mode mipmapFilterMode;
float mipmapLevelBias;
float minMipmapLevelClamp;
@@ -700,19 +700,19 @@ typedef struct CUDA_RESOURCE_VIEW_DESC_st {
size_t width;
size_t height;
size_t depth;
unsigned int firstMipmapLevel;
unsigned int lastMipmapLevel;
unsigned int firstLayer;
unsigned int lastLayer;
unsigned int reserved[16];
unsigned firstMipmapLevel;
unsigned lastMipmapLevel;
unsigned firstLayer;
unsigned lastLayer;
unsigned reserved[16];
} CUDA_RESOURCE_VIEW_DESC;
typedef struct CUDA_POINTER_ATTRIBUTE_P2P_TOKENS_st {
unsigned long long p2pToken;
unsigned int vaSpaceToken;
unsigned p2pToken;
unsigned vaSpaceToken;
} CUDA_POINTER_ATTRIBUTE_P2P_TOKENS;
typedef unsigned int GLenum;
typedef unsigned int GLuint;
typedef unsigned GLenum;
typedef unsigned GLuint;
typedef int GLint;
typedef enum CUGLDeviceList_enum {
@@ -751,7 +751,7 @@ typedef struct _nvrtcProgram* nvrtcProgram;
/* Function types. */
typedef CUresult CUDAAPI tcuGetErrorString(CUresult error, const char* pStr);
typedef CUresult CUDAAPI tcuGetErrorName(CUresult error, const char* pStr);
typedef CUresult CUDAAPI tcuInit(unsigned int Flags);
typedef CUresult CUDAAPI tcuInit(unsigned Flags);
typedef CUresult CUDAAPI tcuDriverGetVersion(int* driverVersion);
typedef CUresult CUDAAPI tcuDeviceGet(CUdevice* device, int ordinal);
typedef CUresult CUDAAPI tcuDeviceGetCount(int* count);
@@ -762,17 +762,17 @@ typedef CUresult CUDAAPI tcuDeviceGetProperties(CUdevprop* prop, CUdevice dev);
typedef CUresult CUDAAPI tcuDeviceComputeCapability(int* major, int* minor, CUdevice dev);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxRetain(CUcontext* pctx, CUdevice dev);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxRelease(CUdevice dev);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxSetFlags(CUdevice dev, unsigned int flags);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxGetState(CUdevice dev, unsigned int* flags, int* active);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxSetFlags(CUdevice dev, unsigned flags);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxGetState(CUdevice dev, unsigned* flags, int* active);
typedef CUresult CUDAAPI tcuDevicePrimaryCtxReset(CUdevice dev);
typedef CUresult CUDAAPI tcuCtxCreate_v2(CUcontext* pctx, unsigned int flags, CUdevice dev);
typedef CUresult CUDAAPI tcuCtxCreate_v2(CUcontext* pctx, unsigned flags, CUdevice dev);
typedef CUresult CUDAAPI tcuCtxDestroy_v2(CUcontext ctx);
typedef CUresult CUDAAPI tcuCtxPushCurrent_v2(CUcontext ctx);
typedef CUresult CUDAAPI tcuCtxPopCurrent_v2(CUcontext* pctx);
typedef CUresult CUDAAPI tcuCtxSetCurrent(CUcontext ctx);
typedef CUresult CUDAAPI tcuCtxGetCurrent(CUcontext* pctx);
typedef CUresult CUDAAPI tcuCtxGetDevice(CUdevice* device);
typedef CUresult CUDAAPI tcuCtxGetFlags(unsigned int* flags);
typedef CUresult CUDAAPI tcuCtxGetFlags(unsigned* flags);
typedef CUresult CUDAAPI tcuCtxSynchronize(void);
typedef CUresult CUDAAPI tcuCtxSetLimit(CUlimit limit, size_t value);
typedef CUresult CUDAAPI tcuCtxGetLimit(size_t* pvalue, CUlimit limit);
@@ -780,43 +780,43 @@ typedef CUresult CUDAAPI tcuCtxGetCacheConfig(CUfunc_cache* pconfig);
typedef CUresult CUDAAPI tcuCtxSetCacheConfig(CUfunc_cache config);
typedef CUresult CUDAAPI tcuCtxGetSharedMemConfig(CUsharedconfig* pConfig);
typedef CUresult CUDAAPI tcuCtxSetSharedMemConfig(CUsharedconfig config);
typedef CUresult CUDAAPI tcuCtxGetApiVersion(CUcontext ctx, unsigned int* version);
typedef CUresult CUDAAPI tcuCtxGetApiVersion(CUcontext ctx, unsigned* version);
typedef CUresult CUDAAPI tcuCtxGetStreamPriorityRange(int* leastPriority, int* greatestPriority);
typedef CUresult CUDAAPI tcuCtxAttach(CUcontext* pctx, unsigned int flags);
typedef CUresult CUDAAPI tcuCtxAttach(CUcontext* pctx, unsigned flags);
typedef CUresult CUDAAPI tcuCtxDetach(CUcontext ctx);
typedef CUresult CUDAAPI tcuModuleLoad(CUmodule* module, const char* fname);
typedef CUresult CUDAAPI tcuModuleLoadData(CUmodule* module, const void* image);
typedef CUresult CUDAAPI tcuModuleLoadDataEx(CUmodule* module, const void* image, unsigned int numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuModuleLoadDataEx(CUmodule* module, const void* image, unsigned numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuModuleLoadFatBinary(CUmodule* module, const void* fatCubin);
typedef CUresult CUDAAPI tcuModuleUnload(CUmodule hmod);
typedef CUresult CUDAAPI tcuModuleGetFunction(CUfunction* hfunc, CUmodule hmod, const char* name);
typedef CUresult CUDAAPI tcuModuleGetGlobal_v2(CUdeviceptr* dptr, size_t* bytes, CUmodule hmod, const char* name);
typedef CUresult CUDAAPI tcuModuleGetTexRef(CUtexref* pTexRef, CUmodule hmod, const char* name);
typedef CUresult CUDAAPI tcuModuleGetSurfRef(CUsurfref* pSurfRef, CUmodule hmod, const char* name);
typedef CUresult CUDAAPI tcuLinkCreate_v2(unsigned int numOptions, CUjit_option* options, void* optionValues, CUlinkState* stateOut);
typedef CUresult CUDAAPI tcuLinkAddData_v2(CUlinkState state, CUjitInputType type, void* data, size_t size, const char* name, unsigned int numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuLinkAddFile_v2(CUlinkState state, CUjitInputType type, const char* path, unsigned int numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuLinkCreate_v2(unsigned numOptions, CUjit_option* options, void* optionValues, CUlinkState* stateOut);
typedef CUresult CUDAAPI tcuLinkAddData_v2(CUlinkState state, CUjitInputType type, void* data, size_t size, const char* name, unsigned numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuLinkAddFile_v2(CUlinkState state, CUjitInputType type, const char* path, unsigned numOptions, CUjit_option* options, void* optionValues);
typedef CUresult CUDAAPI tcuLinkComplete(CUlinkState state, void* cubinOut, size_t* sizeOut);
typedef CUresult CUDAAPI tcuLinkDestroy(CUlinkState state);
typedef CUresult CUDAAPI tcuMemGetInfo_v2(size_t* free, size_t* total);
typedef CUresult CUDAAPI tcuMemAlloc_v2(CUdeviceptr* dptr, size_t bytesize);
typedef CUresult CUDAAPI tcuMemAllocPitch_v2(CUdeviceptr* dptr, size_t* pPitch, size_t WidthInBytes, size_t Height, unsigned int ElementSizeBytes);
typedef CUresult CUDAAPI tcuMemAllocPitch_v2(CUdeviceptr* dptr, size_t* pPitch, size_t WidthInBytes, size_t Height, unsigned ElementSizeBytes);
typedef CUresult CUDAAPI tcuMemFree_v2(CUdeviceptr dptr);
typedef CUresult CUDAAPI tcuMemGetAddressRange_v2(CUdeviceptr* pbase, size_t* psize, CUdeviceptr dptr);
typedef CUresult CUDAAPI tcuMemAllocHost_v2(void* pp, size_t bytesize);
typedef CUresult CUDAAPI tcuMemFreeHost(void* p);
typedef CUresult CUDAAPI tcuMemHostAlloc(void* pp, size_t bytesize, unsigned int Flags);
typedef CUresult CUDAAPI tcuMemHostGetDevicePointer_v2(CUdeviceptr* pdptr, void* p, unsigned int Flags);
typedef CUresult CUDAAPI tcuMemHostGetFlags(unsigned int* pFlags, void* p);
typedef CUresult CUDAAPI tcuMemAllocManaged(CUdeviceptr* dptr, size_t bytesize, unsigned int flags);
typedef CUresult CUDAAPI tcuMemHostAlloc(void* pp, size_t bytesize, unsigned Flags);
typedef CUresult CUDAAPI tcuMemHostGetDevicePointer_v2(CUdeviceptr* pdptr, void* p, unsigned Flags);
typedef CUresult CUDAAPI tcuMemHostGetFlags(unsigned* pFlags, void* p);
typedef CUresult CUDAAPI tcuMemAllocManaged(CUdeviceptr* dptr, size_t bytesize, unsigned flags);
typedef CUresult CUDAAPI tcuDeviceGetByPCIBusId(CUdevice* dev, const char* pciBusId);
typedef CUresult CUDAAPI tcuDeviceGetPCIBusId(char* pciBusId, int len, CUdevice dev);
typedef CUresult CUDAAPI tcuIpcGetEventHandle(CUipcEventHandle* pHandle, CUevent event);
typedef CUresult CUDAAPI tcuIpcOpenEventHandle(CUevent* phEvent, CUipcEventHandle handle);
typedef CUresult CUDAAPI tcuIpcGetMemHandle(CUipcMemHandle* pHandle, CUdeviceptr dptr);
typedef CUresult CUDAAPI tcuIpcOpenMemHandle(CUdeviceptr* pdptr, CUipcMemHandle handle, unsigned int Flags);
typedef CUresult CUDAAPI tcuIpcOpenMemHandle(CUdeviceptr* pdptr, CUipcMemHandle handle, unsigned Flags);
typedef CUresult CUDAAPI tcuIpcCloseMemHandle(CUdeviceptr dptr);
typedef CUresult CUDAAPI tcuMemHostRegister_v2(void* p, size_t bytesize, unsigned int Flags);
typedef CUresult CUDAAPI tcuMemHostRegister_v2(void* p, size_t bytesize, unsigned Flags);
typedef CUresult CUDAAPI tcuMemHostUnregister(void* p);
typedef CUresult CUDAAPI tcuMemcpy(CUdeviceptr dst, CUdeviceptr src, size_t ByteCount);
typedef CUresult CUDAAPI tcuMemcpyPeer(CUdeviceptr dstDevice, CUcontext dstContext, CUdeviceptr srcDevice, CUcontext srcContext, size_t ByteCount);
@@ -842,40 +842,40 @@ typedef CUresult CUDAAPI tcuMemcpyAtoHAsync_v2(void* dstHost, CUarray srcArray,
typedef CUresult CUDAAPI tcuMemcpy2DAsync_v2(const CUDA_MEMCPY2D* pCopy, CUstream hStream);
typedef CUresult CUDAAPI tcuMemcpy3DAsync_v2(const CUDA_MEMCPY3D* pCopy, CUstream hStream);
typedef CUresult CUDAAPI tcuMemcpy3DPeerAsync(const CUDA_MEMCPY3D_PEER* pCopy, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD8_v2(CUdeviceptr dstDevice, unsigned char uc, size_t N);
typedef CUresult CUDAAPI tcuMemsetD16_v2(CUdeviceptr dstDevice, unsigned short us, size_t N);
typedef CUresult CUDAAPI tcuMemsetD32_v2(CUdeviceptr dstDevice, unsigned int ui, size_t N);
typedef CUresult CUDAAPI tcuMemsetD2D8_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned char uc, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD2D16_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned short us, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD2D32_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned int ui, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD8Async(CUdeviceptr dstDevice, unsigned char uc, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD16Async(CUdeviceptr dstDevice, unsigned short us, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD32Async(CUdeviceptr dstDevice, unsigned int ui, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D8Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned char uc, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D16Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned short us, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D32Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned int ui, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD8_v2(CUdeviceptr dstDevice, unsigned uc, size_t N);
typedef CUresult CUDAAPI tcuMemsetD16_v2(CUdeviceptr dstDevice, unsigned us, size_t N);
typedef CUresult CUDAAPI tcuMemsetD32_v2(CUdeviceptr dstDevice, unsigned ui, size_t N);
typedef CUresult CUDAAPI tcuMemsetD2D8_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned uc, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD2D16_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned us, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD2D32_v2(CUdeviceptr dstDevice, size_t dstPitch, unsigned ui, size_t Width, size_t Height);
typedef CUresult CUDAAPI tcuMemsetD8Async(CUdeviceptr dstDevice, unsigned uc, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD16Async(CUdeviceptr dstDevice, unsigned us, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD32Async(CUdeviceptr dstDevice, unsigned ui, size_t N, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D8Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned uc, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D16Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned us, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuMemsetD2D32Async(CUdeviceptr dstDevice, size_t dstPitch, unsigned ui, size_t Width, size_t Height, CUstream hStream);
typedef CUresult CUDAAPI tcuArrayCreate_v2(CUarray* pHandle, const CUDA_ARRAY_DESCRIPTOR* pAllocateArray);
typedef CUresult CUDAAPI tcuArrayGetDescriptor_v2(CUDA_ARRAY_DESCRIPTOR* pArrayDescriptor, CUarray hArray);
typedef CUresult CUDAAPI tcuArrayDestroy(CUarray hArray);
typedef CUresult CUDAAPI tcuArray3DCreate_v2(CUarray* pHandle, const CUDA_ARRAY3D_DESCRIPTOR* pAllocateArray);
typedef CUresult CUDAAPI tcuArray3DGetDescriptor_v2(CUDA_ARRAY3D_DESCRIPTOR* pArrayDescriptor, CUarray hArray);
typedef CUresult CUDAAPI tcuMipmappedArrayCreate(CUmipmappedArray* pHandle, const CUDA_ARRAY3D_DESCRIPTOR* pMipmappedArrayDesc, unsigned int numMipmapLevels);
typedef CUresult CUDAAPI tcuMipmappedArrayGetLevel(CUarray* pLevelArray, CUmipmappedArray hMipmappedArray, unsigned int level);
typedef CUresult CUDAAPI tcuMipmappedArrayCreate(CUmipmappedArray* pHandle, const CUDA_ARRAY3D_DESCRIPTOR* pMipmappedArrayDesc, unsigned numMipmapLevels);
typedef CUresult CUDAAPI tcuMipmappedArrayGetLevel(CUarray* pLevelArray, CUmipmappedArray hMipmappedArray, unsigned level);
typedef CUresult CUDAAPI tcuMipmappedArrayDestroy(CUmipmappedArray hMipmappedArray);
typedef CUresult CUDAAPI tcuPointerGetAttribute(void* data, CUpointer_attribute attribute, CUdeviceptr ptr);
typedef CUresult CUDAAPI tcuPointerSetAttribute(const void* value, CUpointer_attribute attribute, CUdeviceptr ptr);
typedef CUresult CUDAAPI tcuPointerGetAttributes(unsigned int numAttributes, CUpointer_attribute* attributes, void* data, CUdeviceptr ptr);
typedef CUresult CUDAAPI tcuStreamCreate(CUstream* phStream, unsigned int Flags);
typedef CUresult CUDAAPI tcuStreamCreateWithPriority(CUstream* phStream, unsigned int flags, int priority);
typedef CUresult CUDAAPI tcuPointerGetAttributes(unsigned numAttributes, CUpointer_attribute* attributes, void* data, CUdeviceptr ptr);
typedef CUresult CUDAAPI tcuStreamCreate(CUstream* phStream, unsigned Flags);
typedef CUresult CUDAAPI tcuStreamCreateWithPriority(CUstream* phStream, unsigned flags, int priority);
typedef CUresult CUDAAPI tcuStreamGetPriority(CUstream hStream, int* priority);
typedef CUresult CUDAAPI tcuStreamGetFlags(CUstream hStream, unsigned int* flags);
typedef CUresult CUDAAPI tcuStreamWaitEvent(CUstream hStream, CUevent hEvent, unsigned int Flags);
typedef CUresult CUDAAPI tcuStreamAddCallback(CUstream hStream, CUstreamCallback callback, void* userData, unsigned int flags);
typedef CUresult CUDAAPI tcuStreamAttachMemAsync(CUstream hStream, CUdeviceptr dptr, size_t length, unsigned int flags);
typedef CUresult CUDAAPI tcuStreamGetFlags(CUstream hStream, unsigned* flags);
typedef CUresult CUDAAPI tcuStreamWaitEvent(CUstream hStream, CUevent hEvent, unsigned Flags);
typedef CUresult CUDAAPI tcuStreamAddCallback(CUstream hStream, CUstreamCallback callback, void* userData, unsigned flags);
typedef CUresult CUDAAPI tcuStreamAttachMemAsync(CUstream hStream, CUdeviceptr dptr, size_t length, unsigned flags);
typedef CUresult CUDAAPI tcuStreamQuery(CUstream hStream);
typedef CUresult CUDAAPI tcuStreamSynchronize(CUstream hStream);
typedef CUresult CUDAAPI tcuStreamDestroy_v2(CUstream hStream);
typedef CUresult CUDAAPI tcuEventCreate(CUevent* phEvent, unsigned int Flags);
typedef CUresult CUDAAPI tcuEventCreate(CUevent* phEvent, unsigned Flags);
typedef CUresult CUDAAPI tcuEventRecord(CUevent hEvent, CUstream hStream);
typedef CUresult CUDAAPI tcuEventQuery(CUevent hEvent);
typedef CUresult CUDAAPI tcuEventSynchronize(CUevent hEvent);
@@ -884,23 +884,23 @@ typedef CUresult CUDAAPI tcuEventElapsedTime(float* pMilliseconds, CUevent hStar
typedef CUresult CUDAAPI tcuFuncGetAttribute(int* pi, CUfunction_attribute attrib, CUfunction hfunc);
typedef CUresult CUDAAPI tcuFuncSetCacheConfig(CUfunction hfunc, CUfunc_cache config);
typedef CUresult CUDAAPI tcuFuncSetSharedMemConfig(CUfunction hfunc, CUsharedconfig config);
typedef CUresult CUDAAPI tcuLaunchKernel(CUfunction f, unsigned int gridDimX, unsigned int gridDimY, unsigned int gridDimZ, unsigned int blockDimX, unsigned int blockDimY, unsigned int blockDimZ, unsigned int sharedMemBytes, CUstream hStream, void* kernelParams, void* extra);
typedef CUresult CUDAAPI tcuLaunchKernel(CUfunction f, unsigned gridDimX, unsigned gridDimY, unsigned gridDimZ, unsigned blockDimX, unsigned blockDimY, unsigned blockDimZ, unsigned sharedMemBytes, CUstream hStream, void* kernelParams, void* extra);
typedef CUresult CUDAAPI tcuFuncSetBlockShape(CUfunction hfunc, int x, int y, int z);
typedef CUresult CUDAAPI tcuFuncSetSharedSize(CUfunction hfunc, unsigned int bytes);
typedef CUresult CUDAAPI tcuParamSetSize(CUfunction hfunc, unsigned int numbytes);
typedef CUresult CUDAAPI tcuParamSeti(CUfunction hfunc, int offset, unsigned int value);
typedef CUresult CUDAAPI tcuFuncSetSharedSize(CUfunction hfunc, unsigned bytes);
typedef CUresult CUDAAPI tcuParamSetSize(CUfunction hfunc, unsigned numbytes);
typedef CUresult CUDAAPI tcuParamSeti(CUfunction hfunc, int offset, unsigned value);
typedef CUresult CUDAAPI tcuParamSetf(CUfunction hfunc, int offset, float value);
typedef CUresult CUDAAPI tcuParamSetv(CUfunction hfunc, int offset, void* ptr, unsigned int numbytes);
typedef CUresult CUDAAPI tcuParamSetv(CUfunction hfunc, int offset, void* ptr, unsigned numbytes);
typedef CUresult CUDAAPI tcuLaunch(CUfunction f);
typedef CUresult CUDAAPI tcuLaunchGrid(CUfunction f, int grid_width, int grid_height);
typedef CUresult CUDAAPI tcuLaunchGridAsync(CUfunction f, int grid_width, int grid_height, CUstream hStream);
typedef CUresult CUDAAPI tcuParamSetTexRef(CUfunction hfunc, int texunit, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuOccupancyMaxActiveBlocksPerMultiprocessor(int* numBlocks, CUfunction func, int blockSize, size_t dynamicSMemSize);
typedef CUresult CUDAAPI tcuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags(int* numBlocks, CUfunction func, int blockSize, size_t dynamicSMemSize, unsigned int flags);
typedef CUresult CUDAAPI tcuOccupancyMaxActiveBlocksPerMultiprocessorWithFlags(int* numBlocks, CUfunction func, int blockSize, size_t dynamicSMemSize, unsigned flags);
typedef CUresult CUDAAPI tcuOccupancyMaxPotentialBlockSize(int* minGridSize, int* blockSize, CUfunction func, CUoccupancyB2DSize blockSizeToDynamicSMemSize, size_t dynamicSMemSize, int blockSizeLimit);
typedef CUresult CUDAAPI tcuOccupancyMaxPotentialBlockSizeWithFlags(int* minGridSize, int* blockSize, CUfunction func, CUoccupancyB2DSize blockSizeToDynamicSMemSize, size_t dynamicSMemSize, int blockSizeLimit, unsigned int flags);
typedef CUresult CUDAAPI tcuTexRefSetArray(CUtexref hTexRef, CUarray hArray, unsigned int Flags);
typedef CUresult CUDAAPI tcuTexRefSetMipmappedArray(CUtexref hTexRef, CUmipmappedArray hMipmappedArray, unsigned int Flags);
typedef CUresult CUDAAPI tcuOccupancyMaxPotentialBlockSizeWithFlags(int* minGridSize, int* blockSize, CUfunction func, CUoccupancyB2DSize blockSizeToDynamicSMemSize, size_t dynamicSMemSize, int blockSizeLimit, unsigned flags);
typedef CUresult CUDAAPI tcuTexRefSetArray(CUtexref hTexRef, CUarray hArray, unsigned Flags);
typedef CUresult CUDAAPI tcuTexRefSetMipmappedArray(CUtexref hTexRef, CUmipmappedArray hMipmappedArray, unsigned Flags);
typedef CUresult CUDAAPI tcuTexRefSetAddress_v2(size_t* ByteOffset, CUtexref hTexRef, CUdeviceptr dptr, size_t bytes);
typedef CUresult CUDAAPI tcuTexRefSetAddress2D_v3(CUtexref hTexRef, const CUDA_ARRAY_DESCRIPTOR* desc, CUdeviceptr dptr, size_t Pitch);
typedef CUresult CUDAAPI tcuTexRefSetFormat(CUtexref hTexRef, CUarray_format fmt, int NumPackedComponents);
@@ -909,8 +909,8 @@ typedef CUresult CUDAAPI tcuTexRefSetFilterMode(CUtexref hTexRef, CUfilter_mode
typedef CUresult CUDAAPI tcuTexRefSetMipmapFilterMode(CUtexref hTexRef, CUfilter_mode fm);
typedef CUresult CUDAAPI tcuTexRefSetMipmapLevelBias(CUtexref hTexRef, float bias);
typedef CUresult CUDAAPI tcuTexRefSetMipmapLevelClamp(CUtexref hTexRef, float minMipmapLevelClamp, float maxMipmapLevelClamp);
typedef CUresult CUDAAPI tcuTexRefSetMaxAnisotropy(CUtexref hTexRef, unsigned int maxAniso);
typedef CUresult CUDAAPI tcuTexRefSetFlags(CUtexref hTexRef, unsigned int Flags);
typedef CUresult CUDAAPI tcuTexRefSetMaxAnisotropy(CUtexref hTexRef, unsigned maxAniso);
typedef CUresult CUDAAPI tcuTexRefSetFlags(CUtexref hTexRef, unsigned Flags);
typedef CUresult CUDAAPI tcuTexRefGetAddress_v2(CUdeviceptr* pdptr, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetArray(CUarray* phArray, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetMipmappedArray(CUmipmappedArray* phMipmappedArray, CUtexref hTexRef);
@@ -921,10 +921,10 @@ typedef CUresult CUDAAPI tcuTexRefGetMipmapFilterMode(CUfilter_mode* pfm, CUtexr
typedef CUresult CUDAAPI tcuTexRefGetMipmapLevelBias(float* pbias, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetMipmapLevelClamp(float* pminMipmapLevelClamp, float* pmaxMipmapLevelClamp, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetMaxAnisotropy(int* pmaxAniso, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetFlags(unsigned int* pFlags, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefGetFlags(unsigned* pFlags, CUtexref hTexRef);
typedef CUresult CUDAAPI tcuTexRefCreate(CUtexref* pTexRef);
typedef CUresult CUDAAPI tcuTexRefDestroy(CUtexref hTexRef);
typedef CUresult CUDAAPI tcuSurfRefSetArray(CUsurfref hSurfRef, CUarray hArray, unsigned int Flags);
typedef CUresult CUDAAPI tcuSurfRefSetArray(CUsurfref hSurfRef, CUarray hArray, unsigned Flags);
typedef CUresult CUDAAPI tcuSurfRefGetArray(CUarray* phArray, CUsurfref hSurfRef);
typedef CUresult CUDAAPI tcuTexObjectCreate(CUtexObject* pTexObject, const CUDA_RESOURCE_DESC* pResDesc, const CUDA_TEXTURE_DESC* pTexDesc, const CUDA_RESOURCE_VIEW_DESC* pResViewDesc);
typedef CUresult CUDAAPI tcuTexObjectDestroy(CUtexObject texObject);
@@ -935,27 +935,27 @@ typedef CUresult CUDAAPI tcuSurfObjectCreate(CUsurfObject* pSurfObject, const CU
typedef CUresult CUDAAPI tcuSurfObjectDestroy(CUsurfObject surfObject);
typedef CUresult CUDAAPI tcuSurfObjectGetResourceDesc(CUDA_RESOURCE_DESC* pResDesc, CUsurfObject surfObject);
typedef CUresult CUDAAPI tcuDeviceCanAccessPeer(int* canAccessPeer, CUdevice dev, CUdevice peerDev);
typedef CUresult CUDAAPI tcuCtxEnablePeerAccess(CUcontext peerContext, unsigned int Flags);
typedef CUresult CUDAAPI tcuCtxEnablePeerAccess(CUcontext peerContext, unsigned Flags);
typedef CUresult CUDAAPI tcuCtxDisablePeerAccess(CUcontext peerContext);
typedef CUresult CUDAAPI tcuGraphicsUnregisterResource(CUgraphicsResource resource);
typedef CUresult CUDAAPI tcuGraphicsSubResourceGetMappedArray(CUarray* pArray, CUgraphicsResource resource, unsigned int arrayIndex, unsigned int mipLevel);
typedef CUresult CUDAAPI tcuGraphicsSubResourceGetMappedArray(CUarray* pArray, CUgraphicsResource resource, unsigned arrayIndex, unsigned mipLevel);
typedef CUresult CUDAAPI tcuGraphicsResourceGetMappedMipmappedArray(CUmipmappedArray* pMipmappedArray, CUgraphicsResource resource);
typedef CUresult CUDAAPI tcuGraphicsResourceGetMappedPointer_v2(CUdeviceptr* pDevPtr, size_t* pSize, CUgraphicsResource resource);
typedef CUresult CUDAAPI tcuGraphicsResourceSetMapFlags_v2(CUgraphicsResource resource, unsigned int flags);
typedef CUresult CUDAAPI tcuGraphicsMapResources(unsigned int count, CUgraphicsResource* resources, CUstream hStream);
typedef CUresult CUDAAPI tcuGraphicsUnmapResources(unsigned int count, CUgraphicsResource* resources, CUstream hStream);
typedef CUresult CUDAAPI tcuGraphicsResourceSetMapFlags_v2(CUgraphicsResource resource, unsigned flags);
typedef CUresult CUDAAPI tcuGraphicsMapResources(unsigned count, CUgraphicsResource* resources, CUstream hStream);
typedef CUresult CUDAAPI tcuGraphicsUnmapResources(unsigned count, CUgraphicsResource* resources, CUstream hStream);
typedef CUresult CUDAAPI tcuGetExportTable(const void* ppExportTable, const CUuuid* pExportTableId);
typedef CUresult CUDAAPI tcuGraphicsGLRegisterBuffer(CUgraphicsResource* pCudaResource, GLuint buffer, unsigned int Flags);
typedef CUresult CUDAAPI tcuGraphicsGLRegisterImage(CUgraphicsResource* pCudaResource, GLuint image, GLenum target, unsigned int Flags);
typedef CUresult CUDAAPI tcuGLGetDevices_v2(unsigned int* pCudaDeviceCount, CUdevice* pCudaDevices, unsigned int cudaDeviceCount, CUGLDeviceList deviceList);
typedef CUresult CUDAAPI tcuGLCtxCreate_v2(CUcontext* pCtx, unsigned int Flags, CUdevice device);
typedef CUresult CUDAAPI tcuGraphicsGLRegisterBuffer(CUgraphicsResource* pCudaResource, GLuint buffer, unsigned Flags);
typedef CUresult CUDAAPI tcuGraphicsGLRegisterImage(CUgraphicsResource* pCudaResource, GLuint image, GLenum target, unsigned Flags);
typedef CUresult CUDAAPI tcuGLGetDevices_v2(unsigned* pCudaDeviceCount, CUdevice* pCudaDevices, unsigned cudaDeviceCount, CUGLDeviceList deviceList);
typedef CUresult CUDAAPI tcuGLCtxCreate_v2(CUcontext* pCtx, unsigned Flags, CUdevice device);
typedef CUresult CUDAAPI tcuGLInit(void);
typedef CUresult CUDAAPI tcuGLRegisterBufferObject(GLuint buffer);
typedef CUresult CUDAAPI tcuGLMapBufferObject_v2(CUdeviceptr* dptr, size_t* size, GLuint buffer);
typedef CUresult CUDAAPI tcuGLUnmapBufferObject(GLuint buffer);
typedef CUresult CUDAAPI tcuGLUnregisterBufferObject(GLuint buffer);
typedef CUresult CUDAAPI tcuGLSetBufferObjectMapFlags(GLuint buffer, unsigned int Flags);
typedef CUresult CUDAAPI tcuGLSetBufferObjectMapFlags(GLuint buffer, unsigned Flags);
typedef CUresult CUDAAPI tcuGLMapBufferObjectAsync_v2(CUdeviceptr* dptr, size_t* size, GLuint buffer, CUstream hStream);
typedef CUresult CUDAAPI tcuGLUnmapBufferObjectAsync(GLuint buffer, CUstream hStream);
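For context, the tcu* typedefs above mirror the signatures of the CUDA driver (and CUDA/GL interop) entry points so that libcuda can be resolved at runtime instead of being linked at build time. Below is a minimal sketch, not part of this patch, of how one such typedef is typically wired up with dlopen()/dlsym(); the CUresult/CUdeviceptr stand-ins and the cuda_load() helper are illustrative only.
/* Hedged sketch: resolve cuMemAlloc_v2 through the function typedef above. */
#include <dlfcn.h>
#include <stddef.h>
typedef int CUresult;                  /* stand-in, the real definition comes from the header */
typedef unsigned long long CUdeviceptr;
typedef CUresult tcuMemAlloc_v2(CUdeviceptr *dptr, size_t bytesize);
static tcuMemAlloc_v2 *cuMemAlloc;     /* filled in once the driver library is loaded */
static int cuda_load(void)
{
	void *lib = dlopen("libcuda.so", RTLD_NOW);
	if (lib == NULL) {
		return -1;
	}
	cuMemAlloc = (tcuMemAlloc_v2 *)dlsym(lib, "cuMemAlloc_v2");
	return (cuMemAlloc != NULL) ? 0 : -1;
}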

View File

@@ -26,14 +26,10 @@ set(INC_SYS
set(SRC
intern/curve_fit_cubic.c
intern/curve_fit_cubic_refit.c
intern/curve_fit_corners_detect.c
curve_fit_nd.h
intern/curve_fit_inline.h
intern/generic_alloc_impl.h
intern/generic_heap.c
intern/generic_heap.h
curve_fit_nd.h
)
blender_add_lib(extern_curve_fit_nd "${SRC}" "${INC}" "${INC_SYS}")

View File

@@ -1,5 +0,0 @@
Project: Curve-Fit-nD
URL: https://github.com/ideasman42/curve-fit-nd
License: BSD 3-Clause
Upstream version: Unknown (Last Release)
Local modifications: None

View File

@@ -25,8 +25,8 @@
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __CURVE_FIT_ND_H__
#define __CURVE_FIT_ND_H__
#ifndef __SPLINE_FIT__
#define __SPLINE_FIT__
/** \file curve_fit_nd.h
* \ingroup curve_fit
@@ -60,7 +60,6 @@ int curve_fit_cubic_to_points_db(
const unsigned int points_len,
const unsigned int dims,
const double error_threshold,
const unsigned int calc_flag,
const unsigned int *corners,
unsigned int corners_len,
@@ -73,7 +72,6 @@ int curve_fit_cubic_to_points_fl(
const unsigned int points_len,
const unsigned int dims,
const float error_threshold,
const unsigned int calc_flag,
const unsigned int *corners,
const unsigned int corners_len,
@@ -81,82 +79,6 @@ int curve_fit_cubic_to_points_fl(
unsigned int **r_cubic_orig_index,
unsigned int **r_corners_index_array, unsigned int *r_corners_index_len);
/**
* Takes a flat array of points and evaluates that to calculate handle lengths.
*
* \param points, points_len: The array of points to calculate the cubics from.
* \param dims: The number of dimensions for each element in \a points.
* \param points_length_cache: Optional pre-calculated lengths between points.
* \param error_threshold: the error threshold to allow for,
* \param tan_l, tan_r: Normalized tangents the handles will be aligned to.
* Note that tangents must both point along the direction of the \a points,
* so \a tan_l points in the same direction of the resulting handle,
* where \a tan_r will point the opposite direction of its handle.
*
* \param r_handle_l, r_handle_r: Resulting calculated handles.
* \param r_error_sq: The maximum distance (squared) this curve diverges from \a points.
*/
int curve_fit_cubic_to_points_single_db(
const double *points,
const unsigned int points_len,
const double *points_length_cache,
const unsigned int dims,
const double error_threshold,
const double tan_l[],
const double tan_r[],
double r_handle_l[],
double r_handle_r[],
double *r_error_sq);
int curve_fit_cubic_to_points_single_fl(
const float *points,
const unsigned int points_len,
const float *points_length_cache,
const unsigned int dims,
const float error_threshold,
const float tan_l[],
const float tan_r[],
float r_handle_l[],
float r_handle_r[],
float *r_error_sq);
enum {
CURVE_FIT_CALC_HIGH_QUALIY = (1 << 0),
CURVE_FIT_CALC_CYCLIC = (1 << 1),
};
/* curve_fit_cubic_refit.c */
int curve_fit_cubic_to_points_refit_db(
const double *points,
const unsigned int points_len,
const unsigned int dims,
const double error_threshold,
const unsigned int calc_flag,
const unsigned int *corners,
unsigned int corners_len,
const double corner_angle,
double **r_cubic_array, unsigned int *r_cubic_array_len,
unsigned int **r_cubic_orig_index,
unsigned int **r_corner_index_array, unsigned int *r_corner_index_len);
int curve_fit_cubic_to_points_refit_fl(
const float *points,
const unsigned int points_len,
const unsigned int dims,
const float error_threshold,
const unsigned int calc_flag,
const unsigned int *corners,
unsigned int corners_len,
const float corner_angle,
float **r_cubic_array, unsigned int *r_cubic_array_len,
unsigned int **r_cubic_orig_index,
unsigned int **r_corner_index_array, unsigned int *r_corner_index_len);
/* curve_fit_corners_detect.c */
@@ -200,4 +122,4 @@ int curve_fit_corners_detect_fl(
unsigned int **r_corners,
unsigned int *r_corners_len);
#endif /* __CURVE_FIT_ND_H__ */
#endif /* __SPLINE_FIT__ */
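For orientation, here is a minimal usage sketch, not part of this diff, for the curve_fit_cubic_to_points_single_fl() entry point documented above. All values are illustrative; the tangent orientation follows the p1 = p0 - (tan_l * alpha) / p2 = p3 + (tan_r * alpha) convention visible in the implementation diff further down, so flip the tangent signs if that assumption does not hold for your data.
#include <stdio.h>
#include "curve_fit_nd.h"
int main(void)
{
	/* Four 2D samples along the +X axis; a real caller would pass a sampled stroke. */
	const float points[] = {
		0.0f, 0.0f,
		1.0f, 0.0f,
		2.0f, 0.0f,
		3.0f, 0.0f,
	};
	/* Normalized tangents, here simply the first and last segment directions reversed. */
	const float tan_l[2] = {-1.0f, 0.0f};
	const float tan_r[2] = {-1.0f, 0.0f};
	float handle_l[2], handle_r[2], error_sq;
	curve_fit_cubic_to_points_single_fl(
	        points, 4,
	        NULL,   /* no pre-calculated length cache */
	        2,      /* dims */
	        0.01f,  /* error threshold */
	        tan_l, tan_r,
	        handle_l, handle_r, &error_sq);
	printf("handle_l (%g %g), handle_r (%g %g), max squared error %g\n",
	       handle_l[0], handle_l[1], handle_r[0], handle_r[1], error_sq);
	return 0;
}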

View File

@@ -382,9 +382,9 @@ int curve_fit_corners_detect_db(
uint i_best = i_span_start;
while (i_next < points_len) {
if ((points_angle[i_next] == 0.0) ||
(len_squared_vnvn(
&points[(i_next - 1) * dims],
&points[i_next * dims], dims) > radius_min_sq))
(len_squared_vnvn(
&points[(i_next - 1) * dims],
&points[i_next * dims], dims) > radius_min_sq))
{
break;
}

View File

@@ -29,10 +29,6 @@
* \ingroup curve_fit
*/
#ifdef _MSC_VER
# define _USE_MATH_DEFINES
#endif
#include <math.h>
#include <float.h>
#include <stdbool.h>
@@ -43,20 +39,11 @@
#include "../curve_fit_nd.h"
/* Take curvature into account when calculating the least square solution isn't usable. */
#define USE_CIRCULAR_FALLBACK
/* Use the maximum distance of any points from the direct line between 2 points
* to calculate how long the handles need to be.
* Can do a 'perfect' reversal of subdivision when the curve has symmetrical handles and doesn't change direction
* (as with an 'S' shape). */
#define USE_OFFSET_FALLBACK
/* avoid re-calculating lengths multiple times */
#define USE_LENGTH_CACHE
/* store the indices in the cubic data so we can return the original indices,
* useful when the caller has data associated with the curve. */
* useful when the caller has data assosiated with the curve. */
#define USE_ORIG_INDEX_DATA
typedef unsigned int uint;
@@ -122,19 +109,9 @@ typedef struct Cubic {
*_p3 = _p2 + (dims); ((void)0)
static size_t cubic_alloc_size(const uint dims)
{
return sizeof(Cubic) + (sizeof(double) * 4 * dims);
}
static Cubic *cubic_alloc(const uint dims)
{
return malloc(cubic_alloc_size(dims));
}
static void cubic_copy(Cubic *cubic_dst, const Cubic *cubic_src, const uint dims)
{
memcpy(cubic_dst, cubic_src, cubic_alloc_size(dims));
return malloc(sizeof(Cubic) + (sizeof(double) * 4 * dims));
}
static void cubic_init(
@@ -301,7 +278,7 @@ static void cubic_calc_acceleration(
double r_v[])
{
CUBIC_VARS_CONST(cubic, dims, p0, p1, p2, p3);
const double s = 1.0 - t;
const double s = 1.0 - t;
for (uint j = 0; j < dims; j++) {
r_v[j] = 6.0 * ((p2[j] - 2.0 * p1[j] + p0[j]) * s +
(p3[j] - 2.0 * p2[j] + p1[j]) * t);
@@ -309,19 +286,20 @@ static void cubic_calc_acceleration(
}
/**
* Returns a 'measure' of the maximum distance (squared) of the points specified
* Returns a 'measure' of the maximal discrepancy of the points specified
* by points_offset from the corresponding cubic(u[]) points.
*/
static double cubic_calc_error(
static void cubic_calc_error(
const Cubic *cubic,
const double *points_offset,
const uint points_offset_len,
const double *u,
const uint dims,
double *r_error_sq_max,
uint *r_error_index)
{
double error_max_sq = 0.0;
double error_sq_max = 0.0;
uint error_index = 0;
const double *pt_real = points_offset + dims;
@@ -335,54 +313,16 @@ static double cubic_calc_error(
cubic_evaluate(cubic, u[i], dims, pt_eval);
const double err_sq = len_squared_vnvn(pt_real, pt_eval, dims);
if (err_sq >= error_max_sq) {
error_max_sq = err_sq;
if (err_sq >= error_sq_max) {
error_sq_max = err_sq;
error_index = i;
}
}
*r_error_sq_max = error_sq_max;
*r_error_index = error_index;
return error_max_sq;
}
#ifdef USE_OFFSET_FALLBACK
/**
* A version #cubic_calc_error where we don't need the split-index and can exit early when over the limit.
*/
static double cubic_calc_error_simple(
const Cubic *cubic,
const double *points_offset,
const uint points_offset_len,
const double *u,
const double error_threshold_sq,
const uint dims)
{
double error_max_sq = 0.0;
const double *pt_real = points_offset + dims;
#ifdef USE_VLA
double pt_eval[dims];
#else
double *pt_eval = alloca(sizeof(double) * dims);
#endif
for (uint i = 1; i < points_offset_len - 1; i++, pt_real += dims) {
cubic_evaluate(cubic, u[i], dims, pt_eval);
const double err_sq = len_squared_vnvn(pt_real, pt_eval, dims);
if (err_sq >= error_threshold_sq) {
return error_threshold_sq;
}
else if (err_sq >= error_max_sq) {
error_max_sq = err_sq;
}
}
return error_max_sq;
}
#endif
/**
* Bezier multipliers
*/
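For reference, the "Bezier multipliers" named above are the cubic Bernstein basis functions used when evaluating the curve and when building the least-squares fit: B0(u) = (1 - u)^3, B1(u) = 3u(1 - u)^2, B2(u) = 3u^2(1 - u) and B3(u) = u^3, so a segment is evaluated as C(u) = B0(u)*p0 + B1(u)*p1 + B2(u)*p2 + B3(u)*p3 for u in [0, 1].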
@@ -448,220 +388,12 @@ static void points_calc_center_weighted(
}
}
#ifdef USE_CIRCULAR_FALLBACK
/**
* Return a scale value, used to calculate how much the curve handles should be increased,
*
* This works by placing each end-point on an imaginary circle;
* the placement on the circle is based on the tangent vectors,
* where larger differences in tangent angle cover a larger part of the circle.
*
* Return the scale representing how much larger the distance around the circle is.
*/
static double points_calc_circumference_factor(
const double tan_l[],
const double tan_r[],
const uint dims)
{
const double dot = dot_vnvn(tan_l, tan_r, dims);
const double len_tangent = dot < 0.0 ? len_vnvn(tan_l, tan_r, dims) : len_negated_vnvn(tan_l, tan_r, dims);
if (len_tangent > DBL_EPSILON) {
/* only clamp to avoid precision error */
double angle = acos(max(-fabs(dot), -1.0));
/* Angle may be less than the length when the tangents define >180 degrees of the circle
* (tangents that point away from each other).
* We could try to support this, but it would likely cause extreme >1 scales which could cause other issues. */
// assert(angle >= len_tangent);
double factor = (angle / len_tangent);
assert(factor < (M_PI / 2) + (DBL_EPSILON * 10));
return factor;
}
else {
/* tangents are exactly aligned (think two opposite sides of a circle). */
return (M_PI / 2);
}
}
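A worked example for the factor above, not part of the diff: for unit tangents 90 degrees apart, dot = 0, so angle = acos(0) = pi/2 and len_tangent = sqrt(2), giving a factor of (pi/2) / sqrt(2), roughly 1.11. That matches the description: a quarter-circle arc is about 11% longer than the straight chord between its end-points.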
/**
* Return the value which the distance between points will need to be scaled by,
* to define a handle, given both points are on a perfect circle.
*
* \note the return value will need to be multiplied by 1.3... for correct results.
*/
static double points_calc_circle_tangent_factor(
const double tan_l[],
const double tan_r[],
const uint dims)
{
const double eps = 1e-8;
const double tan_dot = dot_vnvn(tan_l, tan_r, dims);
if (tan_dot > 1.0 - eps) {
/* no angle difference (use fallback, length won't make any difference) */
return (1.0 / 3.0) * 0.75;
}
else if (tan_dot < -1.0 + eps) {
/* parallel tangents (half-circle) */
return (1.0 / 2.0);
}
else {
/* non-aligned tangents, calculate handle length */
const double angle = acos(tan_dot) / 2.0;
/* could also use 'angle_sin = len_vnvn(tan_l, tan_r, dims) / 2.0' */
const double angle_sin = sin(angle);
const double angle_cos = cos(angle);
return ((1.0 - angle_cos) / (angle_sin * 2.0)) / angle_sin;
}
}
/**
* Calculate the scale of the handles, which serves as a best-guess
* used as a fallback when the least-square solution fails.
*/
static double points_calc_cubic_scale(
const double v_l[], const double v_r[],
const double tan_l[],
const double tan_r[],
const double coords_length, uint dims)
{
const double len_direct = len_vnvn(v_l, v_r, dims);
const double len_circle_factor = points_calc_circle_tangent_factor(tan_l, tan_r, dims);
/* if this curve is a circle, this value doesn't need modification */
const double len_circle_handle = (len_direct * (len_circle_factor / 0.75));
/* scale by the difference from the circumference distance */
const double len_circle = len_direct * points_calc_circumference_factor(tan_l, tan_r, dims);
double scale_handle = (coords_length / len_circle);
/* Could investigate an accurate calculation here,
* though this gives close results */
scale_handle = ((scale_handle - 1.0) * 1.75) + 1.0;
return len_circle_handle * scale_handle;
}
static void cubic_from_points_fallback(
const double *points_offset,
const uint points_offset_len,
const double tan_l[],
const double tan_r[],
const uint dims,
Cubic *r_cubic)
{
const double *p0 = &points_offset[0];
const double *p3 = &points_offset[(points_offset_len - 1) * dims];
double alpha = len_vnvn(p0, p3, dims) / 3.0;
double *p1 = CUBIC_PT(r_cubic, 1, dims);
double *p2 = CUBIC_PT(r_cubic, 2, dims);
copy_vnvn(CUBIC_PT(r_cubic, 0, dims), p0, dims);
copy_vnvn(CUBIC_PT(r_cubic, 3, dims), p3, dims);
#ifdef USE_ORIG_INDEX_DATA
r_cubic->orig_span = (points_offset_len - 1);
#endif
/* p1 = p0 - (tan_l * alpha_l);
* p2 = p3 + (tan_r * alpha_r);
*/
msub_vn_vnvn_fl(p1, p0, tan_l, alpha, dims);
madd_vn_vnvn_fl(p2, p3, tan_r, alpha, dims);
}
#endif /* USE_CIRCULAR_FALLBACK */
#ifdef USE_OFFSET_FALLBACK
static void cubic_from_points_offset_fallback(
const double *points_offset,
const uint points_offset_len,
const double tan_l[],
const double tan_r[],
const uint dims,
Cubic *r_cubic)
{
const double *p0 = &points_offset[0];
const double *p3 = &points_offset[(points_offset_len - 1) * dims];
#ifdef USE_VLA
double dir_unit[dims];
double a[2][dims];
double tmp[dims];
#else
double *dir_unit = alloca(sizeof(double) * dims);
double *a[2] = {
alloca(sizeof(double) * dims),
alloca(sizeof(double) * dims),
};
double *tmp = alloca(sizeof(double) * dims);
#endif
const double dir_dist = normalize_vn_vnvn(dir_unit, p3, p0, dims);
project_plane_vn_vnvn_normalized(a[0], tan_l, dir_unit, dims);
project_plane_vn_vnvn_normalized(a[1], tan_r, dir_unit, dims);
/* only for better accuracy, not essential */
normalize_vn(a[0], dims);
normalize_vn(a[1], dims);
mul_vnvn_fl(a[1], a[1], -1, dims);
double dists[2] = {0, 0};
const double *pt = &points_offset[dims];
for (uint i = 1; i < points_offset_len - 1; i++, pt += dims) {
for (uint k = 0; k < 2; k++) {
sub_vn_vnvn(tmp, p0, pt, dims);
project_vn_vnvn_normalized(tmp, tmp, a[k], dims);
dists[k] = max(dists[k], dot_vnvn(tmp, a[k], dims));
}
}
double alpha_l = (dists[0] / 0.75) / dot_vnvn(tan_l, a[0], dims);
double alpha_r = (dists[1] / 0.75) / -dot_vnvn(tan_r, a[1], dims);
if (!(alpha_l > 0.0)) {
alpha_l = dir_dist / 3.0;
}
if (!(alpha_r > 0.0)) {
alpha_r = dir_dist / 3.0;
}
double *p1 = CUBIC_PT(r_cubic, 1, dims);
double *p2 = CUBIC_PT(r_cubic, 2, dims);
copy_vnvn(CUBIC_PT(r_cubic, 0, dims), p0, dims);
copy_vnvn(CUBIC_PT(r_cubic, 3, dims), p3, dims);
#ifdef USE_ORIG_INDEX_DATA
r_cubic->orig_span = (points_offset_len - 1);
#endif
/* p1 = p0 - (tan_l * alpha_l);
* p2 = p3 + (tan_r * alpha_r);
*/
msub_vn_vnvn_fl(p1, p0, tan_l, alpha_l, dims);
madd_vn_vnvn_fl(p2, p3, tan_r, alpha_r, dims);
}
#endif /* USE_OFFSET_FALLBACK */
/**
* Use least-squares method to find Bezier control points for region.
*/
static void cubic_from_points(
const double *points_offset,
const uint points_offset_len,
#ifdef USE_CIRCULAR_FALLBACK
const double points_offset_coords_length,
#endif
const double *u_prime,
const double tan_l[],
const double tan_r[],
@@ -735,24 +467,11 @@ static void cubic_from_points(
* so only problems absurd of approximation and not for bugs in the code.
*/
bool use_clamp = true;
/* flip check to catch nan values */
if (!(alpha_l >= 0.0) ||
!(alpha_r >= 0.0))
{
#ifdef USE_CIRCULAR_FALLBACK
double alpha_test = points_calc_cubic_scale(p0, p3, tan_l, tan_r, points_offset_coords_length, dims);
if (!isfinite(alpha_test)) {
alpha_test = len_vnvn(p0, p3, dims) / 3.0;
}
alpha_l = alpha_r = alpha_test;
#else
alpha_l = alpha_r = len_vnvn(p0, p3, dims) / 3.0;
#endif
/* skip clamping when we're using default handles */
use_clamp = false;
}
double *p1 = CUBIC_PT(r_cubic, 1, dims);
@@ -774,73 +493,64 @@ static void cubic_from_points(
/* ------------------------------------
* Clamping (we could make it optional)
*/
if (use_clamp) {
#ifdef USE_VLA
double center[dims];
double center[dims];
#else
double *center = alloca(sizeof(double) * dims);
double *center = alloca(sizeof(double) * dims);
#endif
points_calc_center_weighted(points_offset, points_offset_len, dims, center);
points_calc_center_weighted(points_offset, points_offset_len, dims, center);
const double clamp_scale = 3.0; /* clamp to 3x */
double dist_sq_max = 0.0;
const double clamp_scale = 3.0; /* clamp to 3x */
double dist_sq_max = 0.0;
{
const double *pt = points_offset;
for (uint i = 0; i < points_offset_len; i++, pt += dims) {
{
const double *pt = points_offset;
for (uint i = 0; i < points_offset_len; i++, pt += dims) {
#if 0
double dist_sq_test = sq(len_vnvn(center, pt, dims) * clamp_scale);
double dist_sq_test = sq(len_vnvn(center, pt, dims) * clamp_scale);
#else
/* do inline */
double dist_sq_test = 0.0;
for (uint j = 0; j < dims; j++) {
dist_sq_test += sq((pt[j] - center[j]) * clamp_scale);
}
#endif
dist_sq_max = max(dist_sq_max, dist_sq_test);
}
}
double p1_dist_sq = len_squared_vnvn(center, p1, dims);
double p2_dist_sq = len_squared_vnvn(center, p2, dims);
if (p1_dist_sq > dist_sq_max ||
p2_dist_sq > dist_sq_max)
{
#ifdef USE_CIRCULAR_FALLBACK
double alpha_test = points_calc_cubic_scale(p0, p3, tan_l, tan_r, points_offset_coords_length, dims);
if (!isfinite(alpha_test)) {
alpha_test = len_vnvn(p0, p3, dims) / 3.0;
}
alpha_l = alpha_r = alpha_test;
#else
alpha_l = alpha_r = len_vnvn(p0, p3, dims) / 3.0;
#endif
/*
* p1 = p0 - (tan_l * alpha_l);
* p2 = p3 + (tan_r * alpha_r);
*/
/* do inline */
double dist_sq_test = 0.0;
for (uint j = 0; j < dims; j++) {
p1[j] = p0[j] - (tan_l[j] * alpha_l);
p2[j] = p3[j] + (tan_r[j] * alpha_r);
dist_sq_test += sq((pt[j] - center[j]) * clamp_scale);
}
#endif
dist_sq_max = max(dist_sq_max, dist_sq_test);
}
}
p1_dist_sq = len_squared_vnvn(center, p1, dims);
p2_dist_sq = len_squared_vnvn(center, p2, dims);
double p1_dist_sq = len_squared_vnvn(center, p1, dims);
double p2_dist_sq = len_squared_vnvn(center, p2, dims);
if (p1_dist_sq > dist_sq_max ||
p2_dist_sq > dist_sq_max)
{
alpha_l = alpha_r = len_vnvn(p0, p3, dims) / 3.0;
/*
* p1 = p0 - (tan_l * alpha_l);
* p2 = p3 + (tan_r * alpha_r);
*/
for (uint j = 0; j < dims; j++) {
p1[j] = p0[j] - (tan_l[j] * alpha_l);
p2[j] = p3[j] + (tan_r[j] * alpha_r);
}
/* clamp within the 3x radius */
if (p1_dist_sq > dist_sq_max) {
isub_vnvn(p1, center, dims);
imul_vn_fl(p1, sqrt(dist_sq_max) / sqrt(p1_dist_sq), dims);
iadd_vnvn(p1, center, dims);
}
if (p2_dist_sq > dist_sq_max) {
isub_vnvn(p2, center, dims);
imul_vn_fl(p2, sqrt(dist_sq_max) / sqrt(p2_dist_sq), dims);
iadd_vnvn(p2, center, dims);
}
p1_dist_sq = len_squared_vnvn(center, p1, dims);
p2_dist_sq = len_squared_vnvn(center, p2, dims);
}
/* clamp within the 3x radius */
if (p1_dist_sq > dist_sq_max) {
isub_vnvn(p1, center, dims);
imul_vn_fl(p1, sqrt(dist_sq_max) / sqrt(p1_dist_sq), dims);
iadd_vnvn(p1, center, dims);
}
if (p2_dist_sq > dist_sq_max) {
isub_vnvn(p2, center, dims);
imul_vn_fl(p2, sqrt(dist_sq_max) / sqrt(p2_dist_sq), dims);
iadd_vnvn(p2, center, dims);
}
/* end clamping */
}
@@ -864,10 +574,8 @@ static void points_calc_coord_length_cache(
}
#endif /* USE_LENGTH_CACHE */
/**
* \return the accumulated length of \a points_offset.
*/
static double points_calc_coord_length(
static void points_calc_coord_length(
const double *points_offset,
const uint points_offset_len,
const uint dims,
@@ -884,6 +592,7 @@ static double points_calc_coord_length(
#ifdef USE_LENGTH_CACHE
length = points_length_cache[i];
assert(len_vnvn(pt, pt_prev, dims) == points_length_cache[i]);
#else
length = len_vnvn(pt, pt_prev, dims);
@@ -896,10 +605,9 @@ static double points_calc_coord_length(
}
assert(!is_almost_zero(r_u[points_offset_len - 1]));
const double w = r_u[points_offset_len - 1];
for (uint i = 1; i < points_offset_len; i++) {
for (uint i = 0; i < points_offset_len; i++) {
r_u[i] /= w;
}
return w;
}
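For orientation, not part of the diff: this is a plain chord-length parameterization. The distances between consecutive points are accumulated into r_u[] and then divided by the total length w so the parameters span [0, 1]; for example, four points with consecutive gaps of 1, 2 and 1 give cumulative values 0, 1, 3, 4 and, after dividing by w = 4, parameters 0, 0.25, 0.75 and 1.0.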
/**
@@ -912,10 +620,10 @@ static double points_calc_coord_length(
* \note Return value may be `nan`; the caller must check for this.
*/
static double cubic_find_root(
const Cubic *cubic,
const double p[],
const double u,
const uint dims)
const Cubic *cubic,
const double p[],
const double u,
const uint dims)
{
/* Newton-Raphson Method. */
/* all vectors */
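For reference, the Newton-Raphson step used here treats the closest-point condition as a root-finding problem: for a sample point p it seeks a root of f(u) = (C(u) - p) . C'(u) and updates the parameter as u_next = u - f(u) / f'(u), with f'(u) = C'(u) . C'(u) + (C(u) - p) . C''(u); the second derivative comes from cubic_calc_acceleration() shown earlier. A degenerate denominator can make the result non-finite, which is why the note above tells the caller to check for nan.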
@@ -987,7 +695,7 @@ static bool cubic_reparameterize(
}
static bool fit_cubic_to_points(
static void fit_cubic_to_points(
const double *points_offset,
const uint points_offset_len,
#ifdef USE_LENGTH_CACHE
@@ -995,15 +703,19 @@ static bool fit_cubic_to_points(
#endif
const double tan_l[],
const double tan_r[],
const double error_threshold_sq,
const double error_threshold,
const uint dims,
Cubic *r_cubic, double *r_error_max_sq, uint *r_split_index)
/* fill in the list */
CubicList *clist)
{
const uint iteration_max = 4;
const double error_sq = sq(error_threshold);
Cubic *cubic;
if (points_offset_len == 2) {
CUBIC_VARS(r_cubic, dims, p0, p1, p2, p3);
cubic = cubic_alloc(dims);
CUBIC_VARS(cubic, dims, p0, p1, p2, p3);
copy_vnvn(p0, &points_offset[0 * dims], dims);
copy_vnvn(p3, &points_offset[1 * dims], dims);
@@ -1013,16 +725,14 @@ static bool fit_cubic_to_points(
madd_vn_vnvn_fl(p2, p3, tan_r, dist, dims);
#ifdef USE_ORIG_INDEX_DATA
r_cubic->orig_span = 1;
cubic->orig_span = 1;
#endif
return true;
cubic_list_prepend(clist, cubic);
return;
}
double *u = malloc(sizeof(double) * points_offset_len);
#ifdef USE_CIRCULAR_FALLBACK
const double points_offset_coords_length =
#endif
points_calc_coord_length(
points_offset, points_offset_len, dims,
#ifdef USE_LENGTH_CACHE
@@ -1030,154 +740,55 @@ static bool fit_cubic_to_points(
#endif
u);
double error_max_sq;
cubic = cubic_alloc(dims);
double error_sq_max;
uint split_index;
/* Parameterize points, and attempt to fit curve */
cubic_from_points(
points_offset, points_offset_len,
#ifdef USE_CIRCULAR_FALLBACK
points_offset_coords_length,
#endif
u, tan_l, tan_r, dims, r_cubic);
points_offset, points_offset_len, u, tan_l, tan_r, dims, cubic);
/* Find max deviation of points to fitted curve */
error_max_sq = cubic_calc_error(
r_cubic, points_offset, points_offset_len, u, dims,
&split_index);
cubic_calc_error(
cubic, points_offset, points_offset_len, u, dims,
&error_sq_max, &split_index);
Cubic *cubic_test = alloca(cubic_alloc_size(dims));
/* Run this so we use the non-circular calculation when the circular-fallback
* in 'cubic_from_points' failed to give a close enough result. */
#ifdef USE_CIRCULAR_FALLBACK
if (!(error_max_sq < error_threshold_sq)) {
/* Don't use the cubic calculated above; instead calculate a new fallback cubic,
* since this tends to give a more balanced split_index along the curve.
* This is because the attempt to calculate the cubic may contain spikes
* along the curve which may give a lop-sided maximum distance. */
cubic_from_points_fallback(
points_offset, points_offset_len,
tan_l, tan_r, dims, cubic_test);
const double error_max_sq_test = cubic_calc_error(
cubic_test, points_offset, points_offset_len, u, dims,
&split_index);
/* intentionally use the newly calculated 'split_index',
* even if the 'error_max_sq_test' is worse. */
if (error_max_sq > error_max_sq_test) {
error_max_sq = error_max_sq_test;
cubic_copy(r_cubic, cubic_test, dims);
}
if (error_sq_max < error_sq) {
free(u);
cubic_list_prepend(clist, cubic);
return;
}
#endif
/* Test the offset fallback */
#ifdef USE_OFFSET_FALLBACK
if (!(error_max_sq < error_threshold_sq)) {
/* Using the offset from the curve to calculate cubic handle length may give better results;
* try this as a second fallback. */
cubic_from_points_offset_fallback(
points_offset, points_offset_len,
tan_l, tan_r, dims, cubic_test);
const double error_max_sq_test = cubic_calc_error_simple(
cubic_test, points_offset, points_offset_len, u, error_max_sq, dims);
if (error_max_sq > error_max_sq_test) {
error_max_sq = error_max_sq_test;
cubic_copy(r_cubic, cubic_test, dims);
}
}
#endif
*r_error_max_sq = error_max_sq;
*r_split_index = split_index;
if (!(error_max_sq < error_threshold_sq)) {
cubic_copy(cubic_test, r_cubic, dims);
else {
/* If error not too large, try some reparameterization and iteration */
double *u_prime = malloc(sizeof(double) * points_offset_len);
for (uint iter = 0; iter < iteration_max; iter++) {
if (!cubic_reparameterize(
cubic_test, points_offset, points_offset_len, u, dims, u_prime))
cubic, points_offset, points_offset_len, u, dims, u_prime))
{
break;
}
cubic_from_points(
points_offset, points_offset_len,
#ifdef USE_CIRCULAR_FALLBACK
points_offset_coords_length,
#endif
u_prime, tan_l, tan_r, dims, cubic_test);
points_offset, points_offset_len, u_prime,
tan_l, tan_r, dims, cubic);
cubic_calc_error(
cubic, points_offset, points_offset_len, u_prime, dims,
&error_sq_max, &split_index);
const double error_max_sq_test = cubic_calc_error(
cubic_test, points_offset, points_offset_len, u_prime, dims,
&split_index);
if (error_max_sq > error_max_sq_test) {
error_max_sq = error_max_sq_test;
cubic_copy(r_cubic, cubic_test, dims);
*r_error_max_sq = error_max_sq;
*r_split_index = split_index;
}
if (!(error_max_sq < error_threshold_sq)) {
/* continue */
}
else {
assert((error_max_sq < error_threshold_sq));
if (error_sq_max < error_sq) {
free(u_prime);
free(u);
return true;
cubic_list_prepend(clist, cubic);
return;
}
SWAP(double *, u, u_prime);
}
free(u_prime);
free(u);
return false;
}
else {
free(u);
return true;
}
}
static void fit_cubic_to_points_recursive(
const double *points_offset,
const uint points_offset_len,
#ifdef USE_LENGTH_CACHE
const double *points_length_cache,
#endif
const double tan_l[],
const double tan_r[],
const double error_threshold_sq,
const uint calc_flag,
const uint dims,
/* fill in the list */
CubicList *clist)
{
Cubic *cubic = cubic_alloc(dims);
uint split_index;
double error_max_sq;
if (fit_cubic_to_points(
points_offset, points_offset_len,
#ifdef USE_LENGTH_CACHE
points_length_cache,
#endif
tan_l, tan_r,
(calc_flag & CURVE_FIT_CALC_HIGH_QUALIY) ? DBL_EPSILON : error_threshold_sq,
dims,
cubic, &error_max_sq, &split_index) ||
(error_max_sq < error_threshold_sq))
{
cubic_list_prepend(clist, cubic);
return;
}
free(u);
cubic_free(cubic);
@@ -1203,35 +814,21 @@ static void fit_cubic_to_points_recursive(
pt_a += dims;
}
{
#ifdef USE_VLA
double tan_center_a[dims];
double tan_center_b[dims];
#else
double *tan_center_a = alloca(sizeof(double) * dims);
double *tan_center_b = alloca(sizeof(double) * dims);
#endif
const double *pt = &points_offset[split_index * dims];
/* tan_center = (pt_a - pt_b).normalized() */
normalize_vn_vnvn(tan_center, pt_a, pt_b, dims);
/* tan_center = ((pt_a - pt).normalized() + (pt - pt_b).normalized()).normalized() */
normalize_vn_vnvn(tan_center_a, pt_a, pt, dims);
normalize_vn_vnvn(tan_center_b, pt, pt_b, dims);
add_vn_vnvn(tan_center, tan_center_a, tan_center_b, dims);
normalize_vn(tan_center, dims);
}
fit_cubic_to_points_recursive(
fit_cubic_to_points(
points_offset, split_index + 1,
#ifdef USE_LENGTH_CACHE
points_length_cache,
#endif
tan_l, tan_center, error_threshold_sq, calc_flag, dims, clist);
fit_cubic_to_points_recursive(
tan_l, tan_center, error_threshold, dims, clist);
fit_cubic_to_points(
&points_offset[split_index * dims], points_offset_len - split_index,
#ifdef USE_LENGTH_CACHE
points_length_cache + split_index,
#endif
tan_center, tan_r, error_threshold_sq, calc_flag, dims, clist);
tan_center, tan_r, error_threshold, dims, clist);
}
@@ -1254,7 +851,6 @@ int curve_fit_cubic_to_points_db(
const uint points_len,
const uint dims,
const double error_threshold,
const uint calc_flag,
const uint *corners,
uint corners_len,
@@ -1294,8 +890,6 @@ int curve_fit_cubic_to_points_db(
corner_index_array[corner_index++] = corners[0];
}
const double error_threshold_sq = sq(error_threshold);
for (uint i = 1; i < corners_len; i++) {
const uint points_offset_len = corners[i] - corners[i - 1] + 1;
const uint first_point = corners[i - 1];
@@ -1325,12 +919,12 @@ int curve_fit_cubic_to_points_db(
points_length_cache);
#endif
fit_cubic_to_points_recursive(
fit_cubic_to_points(
&points[first_point * dims], points_offset_len,
#ifdef USE_LENGTH_CACHE
points_length_cache,
#endif
tan_l, tan_r, error_threshold_sq, calc_flag, dims, &clist);
tan_l, tan_r, error_threshold, dims, &clist);
}
else if (points_len == 1) {
assert(points_offset_len == 1);
@@ -1397,7 +991,6 @@ int curve_fit_cubic_to_points_fl(
const uint points_len,
const uint dims,
const float error_threshold,
const uint calc_flag,
const uint *corners,
const uint corners_len,
@@ -1408,14 +1001,16 @@ int curve_fit_cubic_to_points_fl(
const uint points_flat_len = points_len * dims;
double *points_db = malloc(sizeof(double) * points_flat_len);
copy_vndb_vnfl(points_db, points, points_flat_len);
for (uint i = 0; i < points_flat_len; i++) {
points_db[i] = (double)points[i];
}
double *cubic_array_db = NULL;
float *cubic_array_fl = NULL;
uint cubic_array_len = 0;
int result = curve_fit_cubic_to_points_db(
points_db, points_len, dims, error_threshold, calc_flag, corners, corners_len,
points_db, points_len, dims, error_threshold, corners, corners_len,
&cubic_array_db, &cubic_array_len,
r_cubic_orig_index,
r_corner_index_array, r_corner_index_len);
@@ -1436,118 +1031,4 @@ int curve_fit_cubic_to_points_fl(
return result;
}
/**
* Fit a single cubic to points.
*/
int curve_fit_cubic_to_points_single_db(
const double *points,
const uint points_len,
const double *points_length_cache,
const uint dims,
const double error_threshold,
const double tan_l[],
const double tan_r[],
double r_handle_l[],
double r_handle_r[],
double *r_error_max_sq)
{
Cubic *cubic = alloca(cubic_alloc_size(dims));
uint split_index;
/* in this instance there's no advantage in using the length cache,
* since we're not recursively calculating values. */
#ifdef USE_LENGTH_CACHE
double *points_length_cache_alloc = NULL;
if (points_length_cache == NULL) {
points_length_cache_alloc = malloc(sizeof(double) * points_len);
points_calc_coord_length_cache(
points, points_len, dims,
points_length_cache_alloc);
points_length_cache = points_length_cache_alloc;
}
#endif
fit_cubic_to_points(
points, points_len,
#ifdef USE_LENGTH_CACHE
points_length_cache,
#endif
tan_l, tan_r, error_threshold, dims,
cubic, r_error_max_sq, &split_index);
#ifdef USE_LENGTH_CACHE
if (points_length_cache_alloc) {
free(points_length_cache_alloc);
}
#endif
copy_vnvn(r_handle_l, CUBIC_PT(cubic, 1, dims), dims);
copy_vnvn(r_handle_r, CUBIC_PT(cubic, 2, dims), dims);
return 0;
}
int curve_fit_cubic_to_points_single_fl(
const float *points,
const uint points_len,
const float *points_length_cache,
const uint dims,
const float error_threshold,
const float tan_l[],
const float tan_r[],
float r_handle_l[],
float r_handle_r[],
float *r_error_sq)
{
const uint points_flat_len = points_len * dims;
double *points_db = malloc(sizeof(double) * points_flat_len);
double *points_length_cache_db = NULL;
copy_vndb_vnfl(points_db, points, points_flat_len);
if (points_length_cache) {
points_length_cache_db = malloc(sizeof(double) * points_len);
copy_vndb_vnfl(points_length_cache_db, points_length_cache, points_len);
}
#ifdef USE_VLA
double tan_l_db[dims];
double tan_r_db[dims];
double r_handle_l_db[dims];
double r_handle_r_db[dims];
#else
double *tan_l_db = alloca(sizeof(double) * dims);
double *tan_r_db = alloca(sizeof(double) * dims);
double *r_handle_l_db = alloca(sizeof(double) * dims);
double *r_handle_r_db = alloca(sizeof(double) * dims);
#endif
double r_error_sq_db;
copy_vndb_vnfl(tan_l_db, tan_l, dims);
copy_vndb_vnfl(tan_r_db, tan_r, dims);
int result = curve_fit_cubic_to_points_single_db(
points_db, points_len, points_length_cache_db, dims,
(double)error_threshold,
tan_l_db, tan_r_db,
r_handle_l_db, r_handle_r_db,
&r_error_sq_db);
free(points_db);
if (points_length_cache_db) {
free(points_length_cache_db);
}
copy_vnfl_vndb(r_handle_l, r_handle_l_db, dims);
copy_vnfl_vndb(r_handle_r, r_handle_r_db, dims);
*r_error_sq = (float)r_error_sq_db;
return result;
}
/** \} */

File diff suppressed because it is too large

View File

@@ -57,7 +57,7 @@ MINLINE double max(const double a, const double b)
#endif
MINLINE void zero_vn(
double v0[], const uint dims)
double v0[], const uint dims)
{
for (uint j = 0; j < dims; j++) {
v0[j] = 0.0;
@@ -65,7 +65,7 @@ MINLINE void zero_vn(
}
MINLINE void flip_vn_vnvn(
double v_out[], const double v0[], const double v1[], const uint dims)
double v_out[], const double v0[], const double v1[], const uint dims)
{
for (uint j = 0; j < dims; j++) {
v_out[j] = v0[j] + (v0[j] - v1[j]);
@@ -80,22 +80,6 @@ MINLINE void copy_vnvn(
}
}
MINLINE void copy_vnfl_vndb(
float v0[], const double v1[], const uint dims)
{
for (uint j = 0; j < dims; j++) {
v0[j] = (float)v1[j];
}
}
MINLINE void copy_vndb_vnfl(
double v0[], const float v1[], const uint dims)
{
for (uint j = 0; j < dims; j++) {
v0[j] = (double)v1[j];
}
}
MINLINE double dot_vnvn(
const double v0[], const double v1[], const uint dims)
{
@@ -194,7 +178,7 @@ MINLINE void imul_vn_fl(double v0[], const double f, const uint dims)
MINLINE double len_squared_vnvn(
const double v0[], const double v1[], const uint dims)
const double v0[], const double v1[], const uint dims)
{
double d = 0.0;
for (uint j = 0; j < dims; j++) {
@@ -219,29 +203,13 @@ MINLINE double len_vnvn(
return sqrt(len_squared_vnvn(v0, v1, dims));
}
MINLINE double len_vn(
const double v0[], const uint dims)
#if 0
static double len_vn(
const double v0[], const uint dims)
{
return sqrt(len_squared_vn(v0, dims));
}
/* special case, save us negating a copy, then getting the length */
MINLINE double len_squared_negated_vnvn(
const double v0[], const double v1[], const uint dims)
{
double d = 0.0;
for (uint j = 0; j < dims; j++) {
d += sq(v0[j] + v1[j]);
}
return d;
}
MINLINE double len_negated_vnvn(
const double v0[], const double v1[], const uint dims)
{
return sqrt(len_squared_negated_vnvn(v0, v1, dims));
}
MINLINE double normalize_vn(
double v0[], const uint dims)
{
@@ -251,6 +219,7 @@ MINLINE double normalize_vn(
}
return d;
}
#endif
/* v_out = (v0 - v1).normalized() */
MINLINE double normalize_vn_vnvn(
@@ -290,26 +259,4 @@ MINLINE bool equals_vnvn(
return true;
}
MINLINE void project_vn_vnvn(
double v_out[], const double p[], const double v_proj[], const uint dims)
{
const double mul = dot_vnvn(p, v_proj, dims) / dot_vnvn(v_proj, v_proj, dims);
mul_vnvn_fl(v_out, v_proj, mul, dims);
}
MINLINE void project_vn_vnvn_normalized(
double v_out[], const double p[], const double v_proj[], const uint dims)
{
const double mul = dot_vnvn(p, v_proj, dims);
mul_vnvn_fl(v_out, v_proj, mul, dims);
}
MINLINE void project_plane_vn_vnvn_normalized(
double v_out[], const double v[], const double v_plane[], const uint dims)
{
assert(v != v_out);
project_vn_vnvn_normalized(v_out, v, v_plane, dims);
sub_vn_vnvn(v_out, v, v_out, dims);
}
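Put differently, as a restatement rather than new behaviour: project_vn_vnvn() scales v_proj by (p . v_proj) / (v_proj . v_proj), the *_normalized() variant drops the denominator because v_proj is assumed to be unit length, and project_plane_vn_vnvn_normalized() subtracts that projection again, leaving v_out = v - v_plane * (v . v_plane), i.e. the component of v perpendicular to v_plane.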
/** \} */

View File

@@ -1,220 +0,0 @@
/*
* Copyright (c) 2016, Blender Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the <organization> nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* \file generic_alloc_impl.c
* \ingroup curve_fit
*
* Simple Memory Chunking Allocator
* ================================
*
* Defines need to be set:
* - #TPOOL_IMPL_PREFIX: Prefix to use for the API.
* - #TPOOL_ALLOC_TYPE: Struct type this pool handles.
* - #TPOOL_STRUCT: Name for pool struct name.
* - #TPOOL_CHUNK_SIZE: Chunk size (optional), use 64kb when not defined.
*
* \note #TPOOL_ALLOC_TYPE must be at least ``sizeof(void *)``.
*
* Defines the API, uses #TPOOL_IMPL_PREFIX to prefix each function.
*
* - *_pool_create()
* - *_pool_destroy()
* - *_pool_clear()
*
* - *_pool_elem_alloc()
* - *_pool_elem_calloc()
* - *_pool_elem_free()
*/
/* check we're not building directly */
#if !defined(TPOOL_IMPL_PREFIX) || \
!defined(TPOOL_ALLOC_TYPE) || \
!defined(TPOOL_STRUCT)
# error "This file can't be compiled directly, include in another source file"
#endif
#define _CONCAT_AUX(MACRO_ARG1, MACRO_ARG2) MACRO_ARG1 ## MACRO_ARG2
#define _CONCAT(MACRO_ARG1, MACRO_ARG2) _CONCAT_AUX(MACRO_ARG1, MACRO_ARG2)
#define _TPOOL_PREFIX(id) _CONCAT(TPOOL_IMPL_PREFIX, _##id)
/* local identifiers */
#define pool_create _TPOOL_PREFIX(pool_create)
#define pool_destroy _TPOOL_PREFIX(pool_destroy)
#define pool_clear _TPOOL_PREFIX(pool_clear)
#define pool_elem_alloc _TPOOL_PREFIX(pool_elem_alloc)
#define pool_elem_calloc _TPOOL_PREFIX(pool_elem_calloc)
#define pool_elem_free _TPOOL_PREFIX(pool_elem_free)
/* private identifiers (only for this file, undefine after) */
#define pool_alloc_chunk _TPOOL_PREFIX(pool_alloc_chunk)
#define TPoolChunk _TPOOL_PREFIX(TPoolChunk)
#define TPoolChunkElemFree _TPOOL_PREFIX(TPoolChunkElemFree)
#ifndef TPOOL_CHUNK_SIZE
#define TPOOL_CHUNK_SIZE (1 << 16) /* 64kb */
#define _TPOOL_CHUNK_SIZE_UNDEF
#endif
#ifndef UNLIKELY
# ifdef __GNUC__
# define UNLIKELY(x) __builtin_expect(!!(x), 0)
# else
# define UNLIKELY(x) (x)
# endif
#endif
#ifdef __GNUC__
# define MAYBE_UNUSED __attribute__((unused))
#else
# define MAYBE_UNUSED
#endif
struct TPoolChunk {
struct TPoolChunk *prev;
unsigned int size;
unsigned int bufsize;
TPOOL_ALLOC_TYPE buf[0];
};
struct TPoolChunkElemFree {
struct TPoolChunkElemFree *next;
};
struct TPOOL_STRUCT {
/* Always keep at least one chunk (never NULL) */
struct TPoolChunk *chunk;
/* when NULL, allocate a new chunk */
struct TPoolChunkElemFree *free;
};
/**
* Number of elems to include per #TPoolChunk when no reserved size is passed,
* or we allocate past the reserved number.
*
* \note Optimize number for 64kb allocs.
*/
#define _TPOOL_CHUNK_DEFAULT_NUM \
(((1 << 16) - sizeof(struct TPoolChunk)) / sizeof(TPOOL_ALLOC_TYPE))
/** \name Internal Memory Management
* \{ */
static struct TPoolChunk *pool_alloc_chunk(
unsigned int tot_elems, struct TPoolChunk *chunk_prev)
{
struct TPoolChunk *chunk = malloc(
sizeof(struct TPoolChunk) + (sizeof(TPOOL_ALLOC_TYPE) * tot_elems));
chunk->prev = chunk_prev;
chunk->bufsize = tot_elems;
chunk->size = 0;
return chunk;
}
static TPOOL_ALLOC_TYPE *pool_elem_alloc(struct TPOOL_STRUCT *pool)
{
TPOOL_ALLOC_TYPE *elem;
if (pool->free) {
elem = (TPOOL_ALLOC_TYPE *)pool->free;
pool->free = pool->free->next;
}
else {
struct TPoolChunk *chunk = pool->chunk;
if (UNLIKELY(chunk->size == chunk->bufsize)) {
chunk = pool->chunk = pool_alloc_chunk(_TPOOL_CHUNK_DEFAULT_NUM, chunk);
}
elem = &chunk->buf[chunk->size++];
}
return elem;
}
MAYBE_UNUSED
static TPOOL_ALLOC_TYPE *pool_elem_calloc(struct TPOOL_STRUCT *pool)
{
TPOOL_ALLOC_TYPE *elem = pool_elem_alloc(pool);
memset(elem, 0, sizeof(*elem));
return elem;
}
static void pool_elem_free(struct TPOOL_STRUCT *pool, TPOOL_ALLOC_TYPE *elem)
{
struct TPoolChunkElemFree *elem_free = (struct TPoolChunkElemFree *)elem;
elem_free->next = pool->free;
pool->free = elem_free;
}
static void pool_create(struct TPOOL_STRUCT *pool, unsigned int tot_reserve)
{
pool->chunk = pool_alloc_chunk((tot_reserve > 1) ? tot_reserve : _TPOOL_CHUNK_DEFAULT_NUM, NULL);
pool->free = NULL;
}
MAYBE_UNUSED
static void pool_clear(struct TPOOL_STRUCT *pool)
{
/* Remove all except the last chunk */
while (pool->chunk->prev) {
struct TPoolChunk *chunk_prev = pool->chunk->prev;
free(pool->chunk);
pool->chunk = chunk_prev;
}
pool->chunk->size = 0;
pool->free = NULL;
}
static void pool_destroy(struct TPOOL_STRUCT *pool)
{
struct TPoolChunk *chunk = pool->chunk;
do {
struct TPoolChunk *chunk_prev;
chunk_prev = chunk->prev;
free(chunk);
chunk = chunk_prev;
} while (chunk);
pool->chunk = NULL;
pool->free = NULL;
}
/** \} */
#undef _TPOOL_CHUNK_DEFAULT_NUM
#undef _CONCAT_AUX
#undef _CONCAT
#undef _TPOOL_PREFIX
#undef TPoolChunk
#undef TPoolChunkElemFree
#ifdef _TPOOL_CHUNK_SIZE_UNDEF
# undef TPOOL_CHUNK_SIZE
# undef _TPOOL_CHUNK_SIZE_UNDEF
#endif
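A hedged usage sketch, not part of the file above: instantiating the chunked allocator for a hypothetical element type. The MyElem/MyElemPool names, the "my" prefix and the reserve count are illustrative; the defines must be set before including the implementation header, exactly as generic_heap.c does below.
#include <stdlib.h>
#include <string.h>
typedef struct MyElem { struct MyElem *next; double value; } MyElem;
/* Instantiate the pool API with a "my_" prefix. */
#define TPOOL_IMPL_PREFIX  my
#define TPOOL_ALLOC_TYPE   MyElem
#define TPOOL_STRUCT       MyElemPool
#include "generic_alloc_impl.h"
#undef TPOOL_IMPL_PREFIX
#undef TPOOL_ALLOC_TYPE
#undef TPOOL_STRUCT
void my_pool_example(void)
{
	struct MyElemPool pool;
	my_pool_create(&pool, 64);       /* reserve room for 64 elements up front */
	MyElem *elem = my_pool_elem_alloc(&pool);
	elem->value = 1.0;
	my_pool_elem_free(&pool, elem);  /* returns the element to the pool's free list */
	my_pool_destroy(&pool);          /* frees all chunks at once */
}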

View File

@@ -1,278 +0,0 @@
/*
* Copyright (c) 2016, Blender Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the <organization> nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** \file generic_heap.c
* \ingroup curve_fit
*/
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include <assert.h>
#include "generic_heap.h"
/* swap with a temp value */
#define SWAP_TVAL(tval, a, b) { \
(tval) = (a); \
(a) = (b); \
(b) = (tval); \
} (void)0
#ifdef __GNUC__
# define UNLIKELY(x) __builtin_expect(!!(x), 0)
#else
# define UNLIKELY(x) (x)
#endif
/***/
struct HeapNode {
void *ptr;
double value;
unsigned int index;
};
/* heap_* pool allocator */
#define TPOOL_IMPL_PREFIX heap
#define TPOOL_ALLOC_TYPE HeapNode
#define TPOOL_STRUCT HeapMemPool
#include "generic_alloc_impl.h"
#undef TPOOL_IMPL_PREFIX
#undef TPOOL_ALLOC_TYPE
#undef TPOOL_STRUCT
struct Heap {
unsigned int size;
unsigned int bufsize;
HeapNode **tree;
struct HeapMemPool pool;
};
/** \name Internal Functions
* \{ */
#define HEAP_PARENT(i) (((i) - 1) >> 1)
#define HEAP_LEFT(i) (((i) << 1) + 1)
#define HEAP_RIGHT(i) (((i) << 1) + 2)
#define HEAP_COMPARE(a, b) ((a)->value < (b)->value)
#if 0 /* UNUSED */
#define HEAP_EQUALS(a, b) ((a)->value == (b)->value)
#endif
static void heap_swap(Heap *heap, const unsigned int i, const unsigned int j)
{
#if 0
SWAP(unsigned int, heap->tree[i]->index, heap->tree[j]->index);
SWAP(HeapNode *, heap->tree[i], heap->tree[j]);
#else
HeapNode **tree = heap->tree;
union {
unsigned int index;
HeapNode *node;
} tmp;
SWAP_TVAL(tmp.index, tree[i]->index, tree[j]->index);
SWAP_TVAL(tmp.node, tree[i], tree[j]);
#endif
}
static void heap_down(Heap *heap, unsigned int i)
{
/* size won't change in the loop */
const unsigned int size = heap->size;
while (1) {
const unsigned int l = HEAP_LEFT(i);
const unsigned int r = HEAP_RIGHT(i);
unsigned int smallest;
smallest = ((l < size) && HEAP_COMPARE(heap->tree[l], heap->tree[i])) ? l : i;
if ((r < size) && HEAP_COMPARE(heap->tree[r], heap->tree[smallest])) {
smallest = r;
}
if (smallest == i) {
break;
}
heap_swap(heap, i, smallest);
i = smallest;
}
}
static void heap_up(Heap *heap, unsigned int i)
{
while (i > 0) {
const unsigned int p = HEAP_PARENT(i);
if (HEAP_COMPARE(heap->tree[p], heap->tree[i])) {
break;
}
heap_swap(heap, p, i);
i = p;
}
}
/** \} */
/** \name Public Heap API
* \{ */
/* use when the size of the heap is known in advance */
Heap *HEAP_new(unsigned int tot_reserve)
{
Heap *heap = malloc(sizeof(Heap));
/* ensure we have at least one so we can keep doubling it */
heap->size = 0;
heap->bufsize = tot_reserve ? tot_reserve : 1;
heap->tree = malloc(heap->bufsize * sizeof(HeapNode *));
heap_pool_create(&heap->pool, tot_reserve);
return heap;
}
void HEAP_free(Heap *heap, HeapFreeFP ptrfreefp)
{
if (ptrfreefp) {
unsigned int i;
for (i = 0; i < heap->size; i++) {
ptrfreefp(heap->tree[i]->ptr);
}
}
heap_pool_destroy(&heap->pool);
free(heap->tree);
free(heap);
}
void HEAP_clear(Heap *heap, HeapFreeFP ptrfreefp)
{
if (ptrfreefp) {
unsigned int i;
for (i = 0; i < heap->size; i++) {
ptrfreefp(heap->tree[i]->ptr);
}
}
heap->size = 0;
heap_pool_clear(&heap->pool);
}
HeapNode *HEAP_insert(Heap *heap, double value, void *ptr)
{
HeapNode *node;
if (UNLIKELY(heap->size >= heap->bufsize)) {
heap->bufsize *= 2;
heap->tree = realloc(heap->tree, heap->bufsize * sizeof(*heap->tree));
}
node = heap_pool_elem_alloc(&heap->pool);
node->ptr = ptr;
node->value = value;
node->index = heap->size;
heap->tree[node->index] = node;
heap->size++;
heap_up(heap, node->index);
return node;
}
bool HEAP_is_empty(Heap *heap)
{
return (heap->size == 0);
}
unsigned int HEAP_size(Heap *heap)
{
return heap->size;
}
HeapNode *HEAP_top(Heap *heap)
{
return heap->tree[0];
}
double HEAP_top_value(Heap *heap)
{
return heap->tree[0]->value;
}
void *HEAP_popmin(Heap *heap)
{
void *ptr = heap->tree[0]->ptr;
assert(heap->size != 0);
heap_pool_elem_free(&heap->pool, heap->tree[0]);
if (--heap->size) {
heap_swap(heap, 0, heap->size);
heap_down(heap, 0);
}
return ptr;
}
void HEAP_remove(Heap *heap, HeapNode *node)
{
unsigned int i = node->index;
assert(heap->size != 0);
while (i > 0) {
unsigned int p = HEAP_PARENT(i);
heap_swap(heap, p, i);
i = p;
}
HEAP_popmin(heap);
}
double HEAP_node_value(HeapNode *node)
{
return node->value;
}
void *HEAP_node_ptr(HeapNode *node)
{
return node->ptr;
}
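To illustrate the public API above, a minimal usage sketch (not part of the patch; the keys and payload strings are made up):
/* Sketch: build a min-heap keyed by double, pop elements in ascending order. */
Heap *heap = HEAP_new(8);                    /* reserve 8 nodes up front */
HEAP_insert(heap, 2.0, (void *)"two");
HEAP_insert(heap, 0.5, (void *)"half");
HeapNode *node = HEAP_insert(heap, 9.0, (void *)"nine");
HEAP_remove(heap, node);                     /* remove an arbitrary node by handle */
while (!HEAP_is_empty(heap)) {
	const char *s = (const char *)HEAP_popmin(heap);  /* smallest value first */
	(void)s;
}
HEAP_free(heap, NULL);                       /* no per-node free callback needed here */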


@@ -1,54 +0,0 @@
/*
* Copyright (c) 2016, Blender Foundation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the <organization> nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __GENERIC_HEAP_H__
#define __GENERIC_HEAP_H__
/** \file generic_heap.h
* \ingroup curve_fit
*/
struct Heap;
struct HeapNode;
typedef struct Heap Heap;
typedef struct HeapNode HeapNode;
typedef void (*HeapFreeFP)(void *ptr);
Heap *HEAP_new(unsigned int tot_reserve);
bool HEAP_is_empty(Heap *heap);
void HEAP_free(Heap *heap, HeapFreeFP ptrfreefp);
void *HEAP_node_ptr(HeapNode *node);
void HEAP_remove(Heap *heap, HeapNode *node);
HeapNode *HEAP_insert(Heap *heap, double value, void *ptr);
void *HEAP_popmin(Heap *heap);
void HEAP_clear(Heap *heap, HeapFreeFP ptrfreefp);
unsigned int HEAP_size(Heap *heap);
HeapNode *HEAP_top(Heap *heap);
double HEAP_top_value(Heap *heap);
double HEAP_node_value(HeapNode *node);
#endif /* __GENERIC_HEAP_H__ */


@@ -1,5 +1,5 @@
Project: Google Flags
URL: https://github.com/gflags/gflags
URL: http://code.google.com/p/google-gflags/
License: New BSD
Upstream version: 2.2.0 (9db82895)
Local modifications:
@@ -17,7 +17,3 @@ Local modifications:
- Applied some modifications from fork https://github.com/Nazg-Gul/gflags.git
(see https://github.com/gflags/gflags/pull/129)
- Avoid attempting to acquire mutex lock in FlagRegistry::GlobalRegistry when
doing static flags initialization. See d81dd2d in Blender repository.


@@ -1,5 +1,5 @@
Project: Google Logging
URL: https://github.com/google/glog
URL: http://code.google.com/p/google-glog/
License: New BSD
Upstream version: 0.3.4, 4d391fe
Local modifications:


@@ -1,66 +0,0 @@
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2014, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
set(INC
.
include
)
set(INC_SYS
../gtest/include
)
set(SRC
# src/gmock-all.cc
src/gmock-cardinalities.cc
src/gmock.cc
src/gmock-internal-utils.cc
src/gmock_main.cc
src/gmock-matchers.cc
src/gmock-spec-builders.cc
)
set(SRC_HEADERS
include/gmock/gmock-actions.h
include/gmock/gmock-cardinalities.h
include/gmock/gmock-generated-actions.h
include/gmock/gmock-generated-function-mockers.h
include/gmock/gmock-generated-matchers.h
include/gmock/gmock-generated-nice-strict.h
include/gmock/gmock.h
include/gmock/gmock-matchers.h
include/gmock/gmock-more-actions.h
include/gmock/gmock-more-matchers.h
include/gmock/gmock-spec-builders.h
include/gmock/internal/custom/gmock-generated-actions.h
include/gmock/internal/custom/gmock-matchers.h
include/gmock/internal/custom/gmock-port.h
include/gmock/internal/gmock-generated-internal-utils.h
include/gmock/internal/gmock-internal-utils.h
include/gmock/internal/gmock-port.h
)
include_directories(${INC})
include_directories(SYSTEM ${INC_SYS})
add_library(extern_gmock ${SRC} ${SRC_HEADERS})

extern/gmock/LICENSE vendored

@@ -1,28 +0,0 @@
Copyright 2008, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

extern/gmock/README.md vendored

@@ -1,333 +0,0 @@
## Google Mock ##
The Google C++ mocking framework.
### Overview ###
Google's framework for writing and using C++ mock classes.
It can help you derive better designs of your system and write better tests.
It is inspired by:
* [jMock](http://www.jmock.org/),
* [EasyMock](http://www.easymock.org/), and
* [Hamcrest](http://code.google.com/p/hamcrest/),
and designed with C++'s specifics in mind.
Google mock:
* lets you create mock classes trivially using simple macros.
* supports a rich set of matchers and actions.
* handles unordered, partially ordered, or completely ordered expectations.
* is extensible by users.
We hope you find it useful!
### Features ###
* Provides a declarative syntax for defining mocks.
* Can easily define partial (hybrid) mocks, which are a cross of real
and mock objects.
* Handles functions of arbitrary types and overloaded functions.
* Comes with a rich set of matchers for validating function arguments.
* Uses an intuitive syntax for controlling the behavior of a mock.
* Does automatic verification of expectations (no record-and-replay needed).
* Allows arbitrary (partial) ordering constraints on
function calls to be expressed.
* Lets a user extend it by defining new matchers and actions.
* Does not use exceptions.
* Is easy to learn and use.
Please see the project page above for more information as well as the
mailing list for questions, discussions, and development. There is
also an IRC channel on OFTC (irc.oftc.net) #gtest available. Please
join us!
Please note that code under [scripts/generator](scripts/generator/) is
from [cppclean](http://code.google.com/p/cppclean/) and released under
the Apache License, which is different from Google Mock's license.
## Getting Started ##
If you are new to the project, we suggest that you read the user
documentation in the following order:
* Learn the [basics](../googletest/docs/Primer.md) of
Google Test, if you choose to use Google Mock with it (recommended).
* Read [Google Mock for Dummies](docs/ForDummies.md).
* Read the instructions below on how to build Google Mock.
You can also watch Zhanyong's [talk](http://www.youtube.com/watch?v=sYpCyLI47rM) on Google Mock's usage and implementation.
Once you understand the basics, check out the rest of the docs:
* [CheatSheet](docs/CheatSheet.md) - all the commonly used stuff
at a glance.
* [CookBook](docs/CookBook.md) - recipes for getting things done,
including advanced techniques.
If you need help, please check the
[KnownIssues](docs/KnownIssues.md) and
[FrequentlyAskedQuestions](docs/FrequentlyAskedQuestions.md) before
posting a question on the
[discussion group](http://groups.google.com/group/googlemock).
### Using Google Mock Without Google Test ###
Google Mock is not a testing framework itself. Instead, it needs a
testing framework for writing tests. Google Mock works seamlessly
with [Google Test](http://code.google.com/p/googletest/), but
you can also use it with [any C++ testing framework](googlemock/ForDummies.md#Using_Google_Mock_with_Any_Testing_Framework).
### Requirements for End Users ###
Google Mock is implemented on top of [Google Test](
http://github.com/google/googletest/), and depends on it.
You must use the bundled version of Google Test when using Google Mock.
You can also easily configure Google Mock to work with another testing
framework, although it will still need Google Test. Please read
["Using_Google_Mock_with_Any_Testing_Framework"](
docs/ForDummies.md#Using_Google_Mock_with_Any_Testing_Framework)
for instructions.
Google Mock depends on advanced C++ features and thus requires a more
modern compiler. The following are needed to use Google Mock:
#### Linux Requirements ####
* GNU-compatible Make or "gmake"
* POSIX-standard shell
* POSIX(-2) Regular Expressions (regex.h)
* C++98-standard-compliant compiler (e.g. GCC 3.4 or newer)
#### Windows Requirements ####
* Microsoft Visual C++ 8.0 SP1 or newer
#### Mac OS X Requirements ####
* Mac OS X 10.4 Tiger or newer
* Developer Tools Installed
### Requirements for Contributors ###
We welcome patches. If you plan to contribute a patch, you need to
build Google Mock and its tests, which has further requirements:
* Automake version 1.9 or newer
* Autoconf version 2.59 or newer
* Libtool / Libtoolize
* Python version 2.3 or newer (for running some of the tests and
re-generating certain source files from templates)
### Building Google Mock ###
#### Preparing to Build (Unix only) ####
If you are using a Unix system and plan to use the GNU Autotools build
system to build Google Mock (described below), you'll need to
configure it now.
To prepare the Autotools build system:
cd googlemock
autoreconf -fvi
To build Google Mock and your tests that use it, you need to tell your
build system where to find its headers and source files. The exact
way to do it depends on which build system you use, and is usually
straightforward.
This section shows how you can integrate Google Mock into your
existing build system.
Suppose you put Google Mock in directory `${GMOCK_DIR}` and Google Test
in `${GTEST_DIR}` (the latter is `${GMOCK_DIR}/gtest` by default). To
build Google Mock, create a library build target (or a project as
called by Visual Studio and Xcode) to compile
${GTEST_DIR}/src/gtest-all.cc and ${GMOCK_DIR}/src/gmock-all.cc
with
${GTEST_DIR}/include and ${GMOCK_DIR}/include
in the system header search path, and
${GTEST_DIR} and ${GMOCK_DIR}
in the normal header search path. Assuming a Linux-like system and gcc,
something like the following will do:
g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} \
-isystem ${GMOCK_DIR}/include -I${GMOCK_DIR} \
-pthread -c ${GTEST_DIR}/src/gtest-all.cc
g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} \
-isystem ${GMOCK_DIR}/include -I${GMOCK_DIR} \
-pthread -c ${GMOCK_DIR}/src/gmock-all.cc
ar -rv libgmock.a gtest-all.o gmock-all.o
(We need -pthread as Google Test and Google Mock use threads.)
Next, you should compile your test source file with
${GTEST\_DIR}/include and ${GMOCK\_DIR}/include in the header search
path, and link it with gmock and any other necessary libraries:
g++ -isystem ${GTEST_DIR}/include -isystem ${GMOCK_DIR}/include \
-pthread path/to/your_test.cc libgmock.a -o your_test
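For orientation, a minimal sketch of what such a your_test.cc could contain. The Turtle interface and MockTurtle class are illustrative names in the style of the Google Mock documentation, not anything defined by this repository:
// Illustrative test file (sketch).
#include "gmock/gmock.h"
#include "gtest/gtest.h"
class Turtle {                        // interface under test (hypothetical)
 public:
  virtual ~Turtle() {}
  virtual void PenDown() = 0;
};
class MockTurtle : public Turtle {    // mock built with the MOCK_METHOD* macros
 public:
  MOCK_METHOD0(PenDown, void());
};
TEST(PainterTest, PutsPenDown) {
  MockTurtle turtle;
  EXPECT_CALL(turtle, PenDown()).Times(1);
  turtle.PenDown();                   // satisfies the expectation
}
int main(int argc, char **argv) {
  ::testing::InitGoogleMock(&argc, argv);
  return RUN_ALL_TESTS();
}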
As an example, the make/ directory contains a Makefile that you can
use to build Google Mock on systems where GNU make is available
(e.g. Linux, Mac OS X, and Cygwin). It doesn't try to build Google
Mock's own tests. Instead, it just builds the Google Mock library and
a sample test. You can use it as a starting point for your own build
script.
If the default settings are correct for your environment, the
following commands should succeed:
cd ${GMOCK_DIR}/make
make
./gmock_test
If you see errors, try to tweak the contents of
[make/Makefile](make/Makefile) to make them go away.
### Windows ###
The msvc/2005 directory contains VC++ 2005 projects and the msvc/2010
directory contains VC++ 2010 projects for building Google Mock and
selected tests.
Change to the appropriate directory and run "msbuild gmock.sln" to
build the library and tests (or open the gmock.sln in the MSVC IDE).
If you want to create your own project to use with Google Mock, you'll
have to configure it to use the `gmock_config` property sheet. For that:
* Open the Property Manager window (View | Other Windows | Property Manager)
* Right-click on your project and select "Add Existing Property Sheet..."
* Navigate to `gmock_config.vsprops` or `gmock_config.props` and select it.
* In Project Properties | Configuration Properties | General | Additional
Include Directories, type <path to Google Mock>/include.
### Tweaking Google Mock ###
Google Mock can be used in diverse environments. The default
configuration may not work (or may not work well) out of the box in
some environments. However, you can easily tweak Google Mock by
defining control macros on the compiler command line. Generally,
these macros are named like `GTEST_XYZ` and you define them to either 1
or 0 to enable or disable a certain feature.
We list the most frequently used macros below. For a complete list,
see file [${GTEST\_DIR}/include/gtest/internal/gtest-port.h](
../googletest/include/gtest/internal/gtest-port.h).
### Choosing a TR1 Tuple Library ###
Google Mock uses the C++ Technical Report 1 (TR1) tuple library
heavily. Unfortunately TR1 tuple is not yet widely available with all
compilers. The good news is that Google Test 1.4.0+ implements a
subset of TR1 tuple that's enough for Google Mock's need. Google Mock
will automatically use that implementation when the compiler doesn't
provide TR1 tuple.
Usually you don't need to care about which tuple library Google Test
and Google Mock use. However, if your project already uses TR1 tuple,
you need to tell Google Test and Google Mock to use the same TR1 tuple
library the rest of your project uses, or the two tuple
implementations will clash. To do that, add
-DGTEST_USE_OWN_TR1_TUPLE=0
to the compiler flags while compiling Google Test, Google Mock, and
your tests. If you want to force Google Test and Google Mock to use
their own tuple library, just add
-DGTEST_USE_OWN_TR1_TUPLE=1
to the compiler flags instead.
If you want to use Boost's TR1 tuple library with Google Mock, please
refer to the Boost website (http://www.boost.org/) for how to obtain
it and set it up.
### As a Shared Library (DLL) ###
Google Mock is compact, so most users can build and link it as a static
library for the simplicity. Google Mock can be used as a DLL, but the
same DLL must contain Google Test as well. See
[Google Test's README][gtest_readme]
for instructions on how to set up necessary compiler settings.
### Tweaking Google Mock ###
Most of Google Test's control macros apply to Google Mock as well.
Please see [Google Test's README][gtest_readme] for how to tweak them.
### Upgrading from an Earlier Version ###
We strive to keep Google Mock releases backward compatible.
Sometimes, though, we have to make some breaking changes for the
users' long-term benefits. This section describes what you'll need to
do if you are upgrading from an earlier version of Google Mock.
#### Upgrading from 1.1.0 or Earlier ####
You may need to explicitly enable or disable Google Test's own TR1
tuple library. See the instructions in section "[Choosing a TR1 Tuple
Library](../googletest/#choosing-a-tr1-tuple-library)".
#### Upgrading from 1.4.0 or Earlier ####
On platforms where the pthread library is available, Google Test and
Google Mock use it in order to be thread-safe. For this to work, you
may need to tweak your compiler and/or linker flags. Please see the
"[Multi-threaded Tests](../googletest#multi-threaded-tests
)" section in file Google Test's README for what you may need to do.
If you have custom matchers defined using `MatcherInterface` or
`MakePolymorphicMatcher()`, you'll need to update their definitions to
use the new matcher API (
[monomorphic](http://code.google.com/p/googlemock/wiki/CookBook#Writing_New_Monomorphic_Matchers),
[polymorphic](http://code.google.com/p/googlemock/wiki/CookBook#Writing_New_Polymorphic_Matchers)).
Matchers defined using `MATCHER()` or `MATCHER_P*()` aren't affected.
### Developing Google Mock ###
This section discusses how to make your own changes to Google Mock.
#### Testing Google Mock Itself ####
To make sure your changes work as intended and don't break existing
functionality, you'll want to compile and run Google Test's own tests.
For that you'll need Autotools. First, make sure you have followed
the instructions above to configure Google Mock.
Then, create a build output directory and enter it. Next,
${GMOCK_DIR}/configure # try --help for more info
Once you have successfully configured Google Mock, the build steps are
standard for GNU-style OSS packages.
make # Standard makefile following GNU conventions
make check # Builds and runs all tests - all should pass.
Note that when building your project against Google Mock, you are building
against Google Test as well. There is no need to configure Google Test
separately.
#### Contributing a Patch ####
We welcome patches.
Please read the [Developer's Guide](docs/DevGuide.md)
for how you can contribute. In particular, make sure you have signed
the Contributor License Agreement, or we won't be able to accept the
patch.
Happy testing!
[gtest_readme]: ../googletest/README.md "googletest"

File diff suppressed because it is too large


@@ -1,147 +0,0 @@
// Copyright 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Google Mock - a framework for writing C++ mock classes.
//
// This file implements some commonly used cardinalities. More
// cardinalities can be defined by the user implementing the
// CardinalityInterface interface if necessary.
#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
#define GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
#include <limits.h>
#include <ostream> // NOLINT
#include "gmock/internal/gmock-port.h"
#include "gtest/gtest.h"
namespace testing {
// To implement a cardinality Foo, define:
// 1. a class FooCardinality that implements the
// CardinalityInterface interface, and
// 2. a factory function that creates a Cardinality object from a
// const FooCardinality*.
//
// The two-level delegation design follows that of Matcher, providing
// consistency for extension developers. It also eases ownership
// management as Cardinality objects can now be copied like plain values.
// The implementation of a cardinality.
class CardinalityInterface {
public:
virtual ~CardinalityInterface() {}
// Conservative estimate on the lower/upper bound of the number of
// calls allowed.
virtual int ConservativeLowerBound() const { return 0; }
virtual int ConservativeUpperBound() const { return INT_MAX; }
// Returns true iff call_count calls will satisfy this cardinality.
virtual bool IsSatisfiedByCallCount(int call_count) const = 0;
// Returns true iff call_count calls will saturate this cardinality.
virtual bool IsSaturatedByCallCount(int call_count) const = 0;
// Describes self to an ostream.
virtual void DescribeTo(::std::ostream* os) const = 0;
};
// A Cardinality is a copyable and IMMUTABLE (except by assignment)
// object that specifies how many times a mock function is expected to
// be called. The implementation of Cardinality is just a linked_ptr
// to const CardinalityInterface, so copying is fairly cheap.
// Don't inherit from Cardinality!
class GTEST_API_ Cardinality {
public:
// Constructs a null cardinality. Needed for storing Cardinality
// objects in STL containers.
Cardinality() {}
// Constructs a Cardinality from its implementation.
explicit Cardinality(const CardinalityInterface* impl) : impl_(impl) {}
// Conservative estimate on the lower/upper bound of the number of
// calls allowed.
int ConservativeLowerBound() const { return impl_->ConservativeLowerBound(); }
int ConservativeUpperBound() const { return impl_->ConservativeUpperBound(); }
// Returns true iff call_count calls will satisfy this cardinality.
bool IsSatisfiedByCallCount(int call_count) const {
return impl_->IsSatisfiedByCallCount(call_count);
}
// Returns true iff call_count calls will saturate this cardinality.
bool IsSaturatedByCallCount(int call_count) const {
return impl_->IsSaturatedByCallCount(call_count);
}
// Returns true iff call_count calls will over-saturate this
// cardinality, i.e. exceed the maximum number of allowed calls.
bool IsOverSaturatedByCallCount(int call_count) const {
return impl_->IsSaturatedByCallCount(call_count) &&
!impl_->IsSatisfiedByCallCount(call_count);
}
// Describes self to an ostream
void DescribeTo(::std::ostream* os) const { impl_->DescribeTo(os); }
// Describes the given actual call count to an ostream.
static void DescribeActualCallCountTo(int actual_call_count,
::std::ostream* os);
private:
internal::linked_ptr<const CardinalityInterface> impl_;
};
// Creates a cardinality that allows at least n calls.
GTEST_API_ Cardinality AtLeast(int n);
// Creates a cardinality that allows at most n calls.
GTEST_API_ Cardinality AtMost(int n);
// Creates a cardinality that allows any number of calls.
GTEST_API_ Cardinality AnyNumber();
// Creates a cardinality that allows between min and max calls.
GTEST_API_ Cardinality Between(int min, int max);
// Creates a cardinality that allows exactly n calls.
GTEST_API_ Cardinality Exactly(int n);
// Creates a cardinality from its implementation.
inline Cardinality MakeCardinality(const CardinalityInterface* c) {
return Cardinality(c);
}
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
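As a sketch of the extension pattern described at the top of this header: implement CardinalityInterface, then wrap it with MakeCardinality(). The EvenCardinality name and Even() factory are made up for illustration:
// Hypothetical cardinality that is satisfied by any even call count.
class EvenCardinality : public ::testing::CardinalityInterface {
 public:
  virtual bool IsSatisfiedByCallCount(int call_count) const {
    return (call_count % 2) == 0;
  }
  virtual bool IsSaturatedByCallCount(int /*call_count*/) const {
    return false;                       // never refuses further calls
  }
  virtual void DescribeTo(::std::ostream* os) const {
    *os << "called an even number of times";
  }
};
// Factory following the two-level delegation design described above.
inline ::testing::Cardinality Even() {
  return ::testing::MakeCardinality(new EvenCardinality);
}
// Usage: EXPECT_CALL(mock, Method()).Times(Even());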

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,397 +0,0 @@
// This file was GENERATED by command:
// pump.py gmock-generated-nice-strict.h.pump
// DO NOT EDIT BY HAND!!!
// Copyright 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Implements class templates NiceMock, NaggyMock, and StrictMock.
//
// Given a mock class MockFoo that is created using Google Mock,
// NiceMock<MockFoo> is a subclass of MockFoo that allows
// uninteresting calls (i.e. calls to mock methods that have no
// EXPECT_CALL specs), NaggyMock<MockFoo> is a subclass of MockFoo
// that prints a warning when an uninteresting call occurs, and
// StrictMock<MockFoo> is a subclass of MockFoo that treats all
// uninteresting calls as errors.
//
// Currently a mock is naggy by default, so MockFoo and
// NaggyMock<MockFoo> behave the same. However, we will soon
// switch the default behavior of mocks to be nice, as that in general
// leads to more maintainable tests. When that happens, MockFoo will
// stop behaving like NaggyMock<MockFoo> and start behaving like
// NiceMock<MockFoo>.
//
// NiceMock, NaggyMock, and StrictMock "inherit" the constructors of
// their respective base class, with up to 10 arguments. Therefore
// you can write NiceMock<MockFoo>(5, "a") to construct a nice mock
// where MockFoo has a constructor that accepts (int, const char*),
// for example.
//
// A known limitation is that NiceMock<MockFoo>, NaggyMock<MockFoo>,
// and StrictMock<MockFoo> only work for mock methods defined using
// the MOCK_METHOD* family of macros DIRECTLY in the MockFoo class.
// If a mock method is defined in a base class of MockFoo, the "nice"
// or "strict" modifier may not affect it, depending on the compiler.
// In particular, nesting NiceMock, NaggyMock, and StrictMock is NOT
// supported.
//
// Another known limitation is that the constructors of the base mock
// cannot have arguments passed by non-const reference, which are
// banned by the Google C++ style guide anyway.
#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_NICE_STRICT_H_
#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_NICE_STRICT_H_
#include "gmock/gmock-spec-builders.h"
#include "gmock/internal/gmock-port.h"
namespace testing {
template <class MockClass>
class NiceMock : public MockClass {
public:
// We don't factor out the constructor body to a common method, as
// we have to avoid a possible clash with members of MockClass.
NiceMock() {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
// C++ doesn't (yet) allow inheritance of constructors, so we have
// to define it for each arity.
template <typename A1>
explicit NiceMock(const A1& a1) : MockClass(a1) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2>
NiceMock(const A1& a1, const A2& a2) : MockClass(a1, a2) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3>
NiceMock(const A1& a1, const A2& a2, const A3& a3) : MockClass(a1, a2, a3) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4>
NiceMock(const A1& a1, const A2& a2, const A3& a3,
const A4& a4) : MockClass(a1, a2, a3, a4) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5) : MockClass(a1, a2, a3, a4, a5) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6) : MockClass(a1, a2, a3, a4, a5, a6) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7) : MockClass(a1, a2, a3, a4, a5,
a6, a7) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8) : MockClass(a1,
a2, a3, a4, a5, a6, a7, a8) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8,
const A9& a9) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9, typename A10>
NiceMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8, const A9& a9,
const A10& a10) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) {
::testing::Mock::AllowUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
virtual ~NiceMock() {
::testing::Mock::UnregisterCallReaction(
internal::ImplicitCast_<MockClass*>(this));
}
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(NiceMock);
};
template <class MockClass>
class NaggyMock : public MockClass {
public:
// We don't factor out the constructor body to a common method, as
// we have to avoid a possible clash with members of MockClass.
NaggyMock() {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
// C++ doesn't (yet) allow inheritance of constructors, so we have
// to define it for each arity.
template <typename A1>
explicit NaggyMock(const A1& a1) : MockClass(a1) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2>
NaggyMock(const A1& a1, const A2& a2) : MockClass(a1, a2) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3>
NaggyMock(const A1& a1, const A2& a2, const A3& a3) : MockClass(a1, a2, a3) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4>
NaggyMock(const A1& a1, const A2& a2, const A3& a3,
const A4& a4) : MockClass(a1, a2, a3, a4) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5) : MockClass(a1, a2, a3, a4, a5) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6) : MockClass(a1, a2, a3, a4, a5, a6) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7) : MockClass(a1, a2, a3, a4, a5,
a6, a7) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8) : MockClass(a1,
a2, a3, a4, a5, a6, a7, a8) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8,
const A9& a9) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9, typename A10>
NaggyMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8, const A9& a9,
const A10& a10) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) {
::testing::Mock::WarnUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
virtual ~NaggyMock() {
::testing::Mock::UnregisterCallReaction(
internal::ImplicitCast_<MockClass*>(this));
}
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(NaggyMock);
};
template <class MockClass>
class StrictMock : public MockClass {
public:
// We don't factor out the constructor body to a common method, as
// we have to avoid a possible clash with members of MockClass.
StrictMock() {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
// C++ doesn't (yet) allow inheritance of constructors, so we have
// to define it for each arity.
template <typename A1>
explicit StrictMock(const A1& a1) : MockClass(a1) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2>
StrictMock(const A1& a1, const A2& a2) : MockClass(a1, a2) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3>
StrictMock(const A1& a1, const A2& a2, const A3& a3) : MockClass(a1, a2, a3) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4>
StrictMock(const A1& a1, const A2& a2, const A3& a3,
const A4& a4) : MockClass(a1, a2, a3, a4) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5) : MockClass(a1, a2, a3, a4, a5) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6) : MockClass(a1, a2, a3, a4, a5, a6) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7) : MockClass(a1, a2, a3, a4, a5,
a6, a7) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8) : MockClass(a1,
a2, a3, a4, a5, a6, a7, a8) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8,
const A9& a9) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9, typename A10>
StrictMock(const A1& a1, const A2& a2, const A3& a3, const A4& a4,
const A5& a5, const A6& a6, const A7& a7, const A8& a8, const A9& a9,
const A10& a10) : MockClass(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) {
::testing::Mock::FailUninterestingCalls(
internal::ImplicitCast_<MockClass*>(this));
}
virtual ~StrictMock() {
::testing::Mock::UnregisterCallReaction(
internal::ImplicitCast_<MockClass*>(this));
}
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(StrictMock);
};
// The following specializations catch some (relatively more common)
// user errors of nesting nice and strict mocks. They do NOT catch
// all possible errors.
// These specializations are declared but not defined, as NiceMock,
// NaggyMock, and StrictMock cannot be nested.
template <typename MockClass>
class NiceMock<NiceMock<MockClass> >;
template <typename MockClass>
class NiceMock<NaggyMock<MockClass> >;
template <typename MockClass>
class NiceMock<StrictMock<MockClass> >;
template <typename MockClass>
class NaggyMock<NiceMock<MockClass> >;
template <typename MockClass>
class NaggyMock<NaggyMock<MockClass> >;
template <typename MockClass>
class NaggyMock<StrictMock<MockClass> >;
template <typename MockClass>
class StrictMock<NiceMock<MockClass> >;
template <typename MockClass>
class StrictMock<NaggyMock<MockClass> >;
template <typename MockClass>
class StrictMock<StrictMock<MockClass> >;
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_NICE_STRICT_H_
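A short usage sketch of the wrappers described above; the Foo interface and MockFoo class are illustrative placeholders, not part of this header:
// Sketch: NiceMock allows uninteresting calls, StrictMock turns them into failures.
#include "gmock/gmock.h"
#include "gtest/gtest.h"
class Foo {
 public:
  virtual ~Foo() {}
  virtual int GetValue() = 0;
};
class MockFoo : public Foo {
 public:
  MOCK_METHOD0(GetValue, int());
};
TEST(NiceStrictDemo, Wrappers) {
  ::testing::NiceMock<MockFoo> nice_foo;      // uninteresting calls are allowed
  ::testing::StrictMock<MockFoo> strict_foo;  // uninteresting calls fail the test
  ON_CALL(nice_foo, GetValue()).WillByDefault(::testing::Return(42));
  EXPECT_EQ(42, nice_foo.GetValue());         // default action, no warning
  EXPECT_CALL(strict_foo, GetValue()).WillOnce(::testing::Return(7));
  EXPECT_EQ(7, strict_foo.GetValue());
}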

File diff suppressed because it is too large


@@ -1,246 +0,0 @@
// Copyright 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Google Mock - a framework for writing C++ mock classes.
//
// This file implements some actions that depend on gmock-generated-actions.h.
#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
#define GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
#include <algorithm>
#include "gmock/gmock-generated-actions.h"
namespace testing {
namespace internal {
// Implements the Invoke(f) action. The template argument
// FunctionImpl is the implementation type of f, which can be either a
// function pointer or a functor. Invoke(f) can be used as an
// Action<F> as long as f's type is compatible with F (i.e. f can be
// assigned to a tr1::function<F>).
template <typename FunctionImpl>
class InvokeAction {
public:
// The c'tor makes a copy of function_impl (either a function
// pointer or a functor).
explicit InvokeAction(FunctionImpl function_impl)
: function_impl_(function_impl) {}
template <typename Result, typename ArgumentTuple>
Result Perform(const ArgumentTuple& args) {
return InvokeHelper<Result, ArgumentTuple>::Invoke(function_impl_, args);
}
private:
FunctionImpl function_impl_;
GTEST_DISALLOW_ASSIGN_(InvokeAction);
};
// Implements the Invoke(object_ptr, &Class::Method) action.
template <class Class, typename MethodPtr>
class InvokeMethodAction {
public:
InvokeMethodAction(Class* obj_ptr, MethodPtr method_ptr)
: method_ptr_(method_ptr), obj_ptr_(obj_ptr) {}
template <typename Result, typename ArgumentTuple>
Result Perform(const ArgumentTuple& args) const {
return InvokeHelper<Result, ArgumentTuple>::InvokeMethod(
obj_ptr_, method_ptr_, args);
}
private:
// The order of these members matters. Reversing the order can trigger
// warning C4121 in MSVC (see
// http://computer-programming-forum.com/7-vc.net/6fbc30265f860ad1.htm ).
const MethodPtr method_ptr_;
Class* const obj_ptr_;
GTEST_DISALLOW_ASSIGN_(InvokeMethodAction);
};
// An internal replacement for std::copy which mimics its behavior. This is
// necessary because Visual Studio deprecates ::std::copy, issuing warning 4996.
// However Visual Studio 2010 and later do not honor #pragmas which disable that
// warning.
template<typename InputIterator, typename OutputIterator>
inline OutputIterator CopyElements(InputIterator first,
InputIterator last,
OutputIterator output) {
for (; first != last; ++first, ++output) {
*output = *first;
}
return output;
}
} // namespace internal
// Various overloads for Invoke().
// Creates an action that invokes 'function_impl' with the mock
// function's arguments.
template <typename FunctionImpl>
PolymorphicAction<internal::InvokeAction<FunctionImpl> > Invoke(
FunctionImpl function_impl) {
return MakePolymorphicAction(
internal::InvokeAction<FunctionImpl>(function_impl));
}
// Creates an action that invokes the given method on the given object
// with the mock function's arguments.
template <class Class, typename MethodPtr>
PolymorphicAction<internal::InvokeMethodAction<Class, MethodPtr> > Invoke(
Class* obj_ptr, MethodPtr method_ptr) {
return MakePolymorphicAction(
internal::InvokeMethodAction<Class, MethodPtr>(obj_ptr, method_ptr));
}
// WithoutArgs(inner_action) can be used in a mock function with a
// non-empty argument list to perform inner_action, which takes no
// argument. In other words, it adapts an action accepting no
// argument to one that accepts (and ignores) arguments.
template <typename InnerAction>
inline internal::WithArgsAction<InnerAction>
WithoutArgs(const InnerAction& action) {
return internal::WithArgsAction<InnerAction>(action);
}
// WithArg<k>(an_action) creates an action that passes the k-th
// (0-based) argument of the mock function to an_action and performs
// it. It adapts an action accepting one argument to one that accepts
// multiple arguments. For convenience, we also provide
// WithArgs<k>(an_action) (defined below) as a synonym.
template <int k, typename InnerAction>
inline internal::WithArgsAction<InnerAction, k>
WithArg(const InnerAction& action) {
return internal::WithArgsAction<InnerAction, k>(action);
}
// The ACTION*() macros trigger warning C4100 (unreferenced formal
// parameter) in MSVC with -W4. Unfortunately they cannot be fixed in
// the macro definition, as the warnings are generated when the macro
// is expanded and macro expansion cannot contain #pragma. Therefore
// we suppress them here.
#ifdef _MSC_VER
# pragma warning(push)
# pragma warning(disable:4100)
#endif
// Action ReturnArg<k>() returns the k-th argument of the mock function.
ACTION_TEMPLATE(ReturnArg,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_0_VALUE_PARAMS()) {
return ::testing::get<k>(args);
}
// Action SaveArg<k>(pointer) saves the k-th (0-based) argument of the
// mock function to *pointer.
ACTION_TEMPLATE(SaveArg,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_1_VALUE_PARAMS(pointer)) {
*pointer = ::testing::get<k>(args);
}
// Action SaveArgPointee<k>(pointer) saves the value pointed to
// by the k-th (0-based) argument of the mock function to *pointer.
ACTION_TEMPLATE(SaveArgPointee,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_1_VALUE_PARAMS(pointer)) {
*pointer = *::testing::get<k>(args);
}
// Action SetArgReferee<k>(value) assigns 'value' to the variable
// referenced by the k-th (0-based) argument of the mock function.
ACTION_TEMPLATE(SetArgReferee,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_1_VALUE_PARAMS(value)) {
typedef typename ::testing::tuple_element<k, args_type>::type argk_type;
// Ensures that argument #k is a reference. If you get a compiler
// error on the next line, you are using SetArgReferee<k>(value) in
// a mock function whose k-th (0-based) argument is not a reference.
GTEST_COMPILE_ASSERT_(internal::is_reference<argk_type>::value,
SetArgReferee_must_be_used_with_a_reference_argument);
::testing::get<k>(args) = value;
}
// Action SetArrayArgument<k>(first, last) copies the elements in
// source range [first, last) to the array pointed to by the k-th
// (0-based) argument, which can be either a pointer or an
// iterator. The action does not take ownership of the elements in the
// source range.
ACTION_TEMPLATE(SetArrayArgument,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_2_VALUE_PARAMS(first, last)) {
// Visual Studio deprecates ::std::copy, so we use our own copy in that case.
#ifdef _MSC_VER
internal::CopyElements(first, last, ::testing::get<k>(args));
#else
::std::copy(first, last, ::testing::get<k>(args));
#endif
}
// Action DeleteArg<k>() deletes the k-th (0-based) argument of the mock
// function.
ACTION_TEMPLATE(DeleteArg,
HAS_1_TEMPLATE_PARAMS(int, k),
AND_0_VALUE_PARAMS()) {
delete ::testing::get<k>(args);
}
// This action returns the value pointed to by 'pointer'.
ACTION_P(ReturnPointee, pointer) { return *pointer; }
// Action Throw(exception) can be used in a mock function of any type
// to throw the given exception. Any copyable value can be thrown.
#if GTEST_HAS_EXCEPTIONS
// Suppresses the 'unreachable code' warning that VC generates in opt modes.
# ifdef _MSC_VER
# pragma warning(push) // Saves the current warning state.
# pragma warning(disable:4702) // Temporarily disables warning 4702.
# endif
ACTION_P(Throw, exception) { throw exception; }
# ifdef _MSC_VER
# pragma warning(pop) // Restores the warning state.
# endif
#endif // GTEST_HAS_EXCEPTIONS
#ifdef _MSC_VER
# pragma warning(pop)
#endif
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
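A small usage sketch combining the actions defined above with DoAll and Return from gmock-actions.h; the Sink interface and MockSink class are illustrative names:
// Sketch: capture an argument with SaveArg<0> while returning a canned result.
#include "gmock/gmock.h"
#include "gtest/gtest.h"
class Sink {
 public:
  virtual ~Sink() {}
  virtual bool Consume(int value) = 0;
};
class MockSink : public Sink {
 public:
  MOCK_METHOD1(Consume, bool(int));
};
TEST(MoreActionsDemo, SaveArgAndReturn) {
  MockSink sink;
  int seen = 0;
  EXPECT_CALL(sink, Consume(::testing::_))
      .WillOnce(::testing::DoAll(::testing::SaveArg<0>(&seen),
                                 ::testing::Return(true)));
  EXPECT_TRUE(sink.Consume(11));
  EXPECT_EQ(11, seen);                 // the argument was stored by SaveArg<0>
}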


@@ -1,58 +0,0 @@
// Copyright 2013, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: marcus.boerger@google.com (Marcus Boerger)
// Google Mock - a framework for writing C++ mock classes.
//
// This file implements some matchers that depend on gmock-generated-matchers.h.
//
// Note that tests are implemented in gmock-matchers_test.cc rather than
// gmock-more-matchers-test.cc.
#ifndef GMOCK_GMOCK_MORE_MATCHERS_H_
#define GMOCK_GMOCK_MORE_MATCHERS_H_
#include "gmock/gmock-generated-matchers.h"
namespace testing {
// Defines a matcher that matches an empty container. The container must
// support both size() and empty(), which all STL-like containers provide.
MATCHER(IsEmpty, negation ? "isn't empty" : "is empty") {
if (arg.empty()) {
return true;
}
*result_listener << "whose size is " << arg.size();
return false;
}
} // namespace testing
#endif // GMOCK_GMOCK_MORE_MATCHERS_H_
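A brief usage sketch of the IsEmpty matcher defined above (the test name is arbitrary):
// Sketch: IsEmpty works with any container exposing size() and empty().
#include <vector>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
TEST(IsEmptyDemo, Basic) {
  std::vector<int> v;
  EXPECT_THAT(v, ::testing::IsEmpty());
  v.push_back(1);
  // Not(IsEmpty()) passes once an element has been added; a failing IsEmpty
  // check would report "whose size is 1", as implemented above.
  EXPECT_THAT(v, ::testing::Not(::testing::IsEmpty()));
}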

File diff suppressed because it is too large


@@ -1,94 +0,0 @@
// Copyright 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Google Mock - a framework for writing C++ mock classes.
//
// This is the main header file a user should include.
#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_H_
#define GMOCK_INCLUDE_GMOCK_GMOCK_H_
// This file implements the following syntax:
//
// ON_CALL(mock_object.Method(...))
// .With(...) ?
// .WillByDefault(...);
//
// where With() is optional and WillByDefault() must appear exactly
// once.
//
// EXPECT_CALL(mock_object.Method(...))
// .With(...) ?
// .Times(...) ?
// .InSequence(...) *
// .WillOnce(...) *
// .WillRepeatedly(...) ?
// .RetiresOnSaturation() ? ;
//
// where all clauses are optional and WillOnce() can be repeated.
#include "gmock/gmock-actions.h"
#include "gmock/gmock-cardinalities.h"
#include "gmock/gmock-generated-actions.h"
#include "gmock/gmock-generated-function-mockers.h"
#include "gmock/gmock-generated-nice-strict.h"
#include "gmock/gmock-generated-matchers.h"
#include "gmock/gmock-matchers.h"
#include "gmock/gmock-more-actions.h"
#include "gmock/gmock-more-matchers.h"
#include "gmock/internal/gmock-internal-utils.h"
namespace testing {
// Declares Google Mock flags that we want a user to use programmatically.
GMOCK_DECLARE_bool_(catch_leaked_mocks);
GMOCK_DECLARE_string_(verbose);
// Initializes Google Mock. This must be called before running the
// tests. In particular, it parses the command line for the flags
// that Google Mock recognizes. Whenever a Google Mock flag is seen,
// it is removed from argv, and *argc is decremented.
//
// No value is returned. Instead, the Google Mock flag variables are
// updated.
//
// Since Google Test is needed for Google Mock to work, this function
// also initializes Google Test and parses its flags, if that hasn't
// been done.
GTEST_API_ void InitGoogleMock(int* argc, char** argv);
// This overloaded version can be used in Windows programs compiled in
// UNICODE mode.
GTEST_API_ void InitGoogleMock(int* argc, wchar_t** argv);
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_GMOCK_H_
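To illustrate the initialization entry point and the EXPECT_CALL syntax described above, a minimal sketch of a test program; the Painter interface and mock are hypothetical:

#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical interface and mock, for illustration only.
class Painter {
 public:
  virtual ~Painter() {}
  virtual int Paint(int strokes) = 0;
};

class MockPainter : public Painter {
 public:
  MOCK_METHOD1(Paint, int(int strokes));
};

TEST(InitGoogleMockSketch, ExpectCallSyntax) {
  MockPainter painter;
  EXPECT_CALL(painter, Paint(::testing::Gt(0)))
      .Times(1)
      .WillOnce(::testing::Return(7));
  EXPECT_EQ(7, painter.Paint(3));
}

int main(int argc, char** argv) {
  // Parses Google Mock (and Google Test) flags and removes them from argv.
  ::testing::InitGoogleMock(&argc, argv);
  return RUN_ALL_TESTS();
}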


@@ -1,8 +0,0 @@
// This file was GENERATED by command:
// pump.py gmock-generated-actions.h.pump
// DO NOT EDIT BY HAND!!!
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_


@@ -1,39 +0,0 @@
// Copyright 2015, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ============================================================
// An installation-specific extension point for gmock-matchers.h.
// ============================================================
//
// Adds google3 callback support to CallableTraits.
//
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_CALLBACK_MATCHERS_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_CALLBACK_MATCHERS_H_
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_CALLBACK_MATCHERS_H_


@@ -1,46 +0,0 @@
// Copyright 2015, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Injection point for custom user configurations.
// The following macros can be defined:
//
// Flag related macros:
// GMOCK_DECLARE_bool_(name)
// GMOCK_DECLARE_int32_(name)
// GMOCK_DECLARE_string_(name)
// GMOCK_DEFINE_bool_(name, default_val, doc)
// GMOCK_DEFINE_int32_(name, default_val, doc)
// GMOCK_DEFINE_string_(name, default_val, doc)
//
// ** Custom implementation starts here **
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
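As a purely illustrative sketch of this extension point: a custom header can predefine the flag macros so that the defaults in gmock/internal/gmock-port.h (guarded by #if !defined(GMOCK_DECLARE_bool_)) are skipped. The backend below is an assumption (plain globals without the GTEST_API_ export decoration), not something Google Mock ships:

// Hypothetical custom flag backend, for illustration only.
// Keeping the FLAGS_gmock_ prefix stays consistent with GMOCK_FLAG(name).
#define GMOCK_DECLARE_bool_(name) extern bool FLAGS_gmock_##name
#define GMOCK_DECLARE_int32_(name) \
    extern ::testing::internal::Int32 FLAGS_gmock_##name
#define GMOCK_DECLARE_string_(name) extern ::std::string FLAGS_gmock_##name
#define GMOCK_DEFINE_bool_(name, default_val, doc) \
    bool FLAGS_gmock_##name = (default_val)
#define GMOCK_DEFINE_int32_(name, default_val, doc) \
    ::testing::internal::Int32 FLAGS_gmock_##name = (default_val)
#define GMOCK_DEFINE_string_(name, default_val, doc) \
    ::std::string FLAGS_gmock_##name = (default_val)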


@@ -1,279 +0,0 @@
// This file was GENERATED by command:
// pump.py gmock-generated-internal-utils.h.pump
// DO NOT EDIT BY HAND!!!
// Copyright 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Google Mock - a framework for writing C++ mock classes.
//
// This file contains template meta-programming utility classes needed
// for implementing Google Mock.
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_GENERATED_INTERNAL_UTILS_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_GENERATED_INTERNAL_UTILS_H_
#include "gmock/internal/gmock-port.h"
namespace testing {
template <typename T>
class Matcher;
namespace internal {
// An IgnoredValue object can be implicitly constructed from ANY value.
// This is used in implementing the IgnoreResult(a) action.
class IgnoredValue {
public:
// This constructor template allows any value to be implicitly
// converted to IgnoredValue. The object has no data member and
// doesn't try to remember anything about the argument. We
// deliberately omit the 'explicit' keyword in order to allow the
// conversion to be implicit.
template <typename T>
IgnoredValue(const T& /* ignored */) {} // NOLINT(runtime/explicit)
};
// MatcherTuple<T>::type is a tuple type where each field is a Matcher
// for the corresponding field in tuple type T.
template <typename Tuple>
struct MatcherTuple;
template <>
struct MatcherTuple< ::testing::tuple<> > {
typedef ::testing::tuple< > type;
};
template <typename A1>
struct MatcherTuple< ::testing::tuple<A1> > {
typedef ::testing::tuple<Matcher<A1> > type;
};
template <typename A1, typename A2>
struct MatcherTuple< ::testing::tuple<A1, A2> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2> > type;
};
template <typename A1, typename A2, typename A3>
struct MatcherTuple< ::testing::tuple<A1, A2, A3> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3> > type;
};
template <typename A1, typename A2, typename A3, typename A4>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>,
Matcher<A4> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5, A6> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5>, Matcher<A6> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5, A6, A7> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5>, Matcher<A6>, Matcher<A7> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8>, Matcher<A9> > type;
};
template <typename A1, typename A2, typename A3, typename A4, typename A5,
typename A6, typename A7, typename A8, typename A9, typename A10>
struct MatcherTuple< ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9,
A10> > {
typedef ::testing::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8>, Matcher<A9>,
Matcher<A10> > type;
};
// Template struct Function<F>, where F must be a function type, contains
// the following typedefs:
//
// Result: the function's return type.
// ArgumentN: the type of the N-th argument, where N starts with 1.
// ArgumentTuple: the tuple type consisting of all parameters of F.
// ArgumentMatcherTuple: the tuple type consisting of Matchers for all
// parameters of F.
// MakeResultVoid: the function type obtained by substituting void
// for the return type of F.
// MakeResultIgnoredValue:
// the function type obtained by substituting IgnoredValue
// for the return type of F.
template <typename F>
struct Function;
template <typename R>
struct Function<R()> {
typedef R Result;
typedef ::testing::tuple<> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid();
typedef IgnoredValue MakeResultIgnoredValue();
};
template <typename R, typename A1>
struct Function<R(A1)>
: Function<R()> {
typedef A1 Argument1;
typedef ::testing::tuple<A1> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1);
typedef IgnoredValue MakeResultIgnoredValue(A1);
};
template <typename R, typename A1, typename A2>
struct Function<R(A1, A2)>
: Function<R(A1)> {
typedef A2 Argument2;
typedef ::testing::tuple<A1, A2> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2);
};
template <typename R, typename A1, typename A2, typename A3>
struct Function<R(A1, A2, A3)>
: Function<R(A1, A2)> {
typedef A3 Argument3;
typedef ::testing::tuple<A1, A2, A3> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3);
};
template <typename R, typename A1, typename A2, typename A3, typename A4>
struct Function<R(A1, A2, A3, A4)>
: Function<R(A1, A2, A3)> {
typedef A4 Argument4;
typedef ::testing::tuple<A1, A2, A3, A4> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5>
struct Function<R(A1, A2, A3, A4, A5)>
: Function<R(A1, A2, A3, A4)> {
typedef A5 Argument5;
typedef ::testing::tuple<A1, A2, A3, A4, A5> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5, typename A6>
struct Function<R(A1, A2, A3, A4, A5, A6)>
: Function<R(A1, A2, A3, A4, A5)> {
typedef A6 Argument6;
typedef ::testing::tuple<A1, A2, A3, A4, A5, A6> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5, A6);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5, A6);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5, typename A6, typename A7>
struct Function<R(A1, A2, A3, A4, A5, A6, A7)>
: Function<R(A1, A2, A3, A4, A5, A6)> {
typedef A7 Argument7;
typedef ::testing::tuple<A1, A2, A3, A4, A5, A6, A7> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5, A6, A7);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5, A6, A7);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5, typename A6, typename A7, typename A8>
struct Function<R(A1, A2, A3, A4, A5, A6, A7, A8)>
: Function<R(A1, A2, A3, A4, A5, A6, A7)> {
typedef A8 Argument8;
typedef ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5, A6, A7, A8);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5, A6, A7, A8);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5, typename A6, typename A7, typename A8, typename A9>
struct Function<R(A1, A2, A3, A4, A5, A6, A7, A8, A9)>
: Function<R(A1, A2, A3, A4, A5, A6, A7, A8)> {
typedef A9 Argument9;
typedef ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5, A6, A7, A8, A9);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5, A6, A7, A8,
A9);
};
template <typename R, typename A1, typename A2, typename A3, typename A4,
typename A5, typename A6, typename A7, typename A8, typename A9,
typename A10>
struct Function<R(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)>
: Function<R(A1, A2, A3, A4, A5, A6, A7, A8, A9)> {
typedef A10 Argument10;
typedef ::testing::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9,
A10> ArgumentTuple;
typedef typename MatcherTuple<ArgumentTuple>::type ArgumentMatcherTuple;
typedef void MakeResultVoid(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10);
typedef IgnoredValue MakeResultIgnoredValue(A1, A2, A3, A4, A5, A6, A7, A8,
A9, A10);
};
} // namespace internal
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_GENERATED_INTERNAL_UTILS_H_
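For orientation, a small compile-time sketch of what the Function<F> traits above expand to for one concrete signature. These are internal utilities, so this is purely illustrative rather than a recommended use in user code:

#include "gmock/internal/gmock-generated-internal-utils.h"
#include "gtest/gtest.h"

void FunctionTraitsSketch() {
  typedef ::testing::internal::Function<int(bool, double)> F;
  // Result is the return type; ArgumentN is the N-th parameter type.
  ::testing::StaticAssertTypeEq<int, F::Result>();
  ::testing::StaticAssertTypeEq<bool, F::Argument1>();
  ::testing::StaticAssertTypeEq<double, F::Argument2>();
  // ArgumentTuple packs all parameters into a ::testing::tuple.
  ::testing::StaticAssertTypeEq< ::testing::tuple<bool, double>,
                                 F::ArgumentTuple>();
}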


@@ -1,511 +0,0 @@
// Copyright 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
// Google Mock - a framework for writing C++ mock classes.
//
// This file defines some utilities useful for implementing Google
// Mock. They are subject to change without notice, so please DO NOT
// USE THEM IN USER CODE.
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
#include <stdio.h>
#include <ostream> // NOLINT
#include <string>
#include "gmock/internal/gmock-generated-internal-utils.h"
#include "gmock/internal/gmock-port.h"
#include "gtest/gtest.h"
namespace testing {
namespace internal {
// Converts an identifier name to a space-separated list of lower-case
// words. Each maximum substring of the form [A-Za-z][a-z]*|\d+ is
// treated as one word. For example, both "FooBar123" and
// "foo_bar_123" are converted to "foo bar 123".
GTEST_API_ string ConvertIdentifierNameToWords(const char* id_name);
// PointeeOf<Pointer>::type is the type of a value pointed to by a
// Pointer, which can be either a smart pointer or a raw pointer. The
// following default implementation is for the case where Pointer is a
// smart pointer.
template <typename Pointer>
struct PointeeOf {
// Smart pointer classes define type element_type as the type of
// their pointees.
typedef typename Pointer::element_type type;
};
// This specialization is for the raw pointer case.
template <typename T>
struct PointeeOf<T*> { typedef T type; }; // NOLINT
// GetRawPointer(p) returns the raw pointer underlying p when p is a
// smart pointer, or returns p itself when p is already a raw pointer.
// The following default implementation is for the smart pointer case.
template <typename Pointer>
inline const typename Pointer::element_type* GetRawPointer(const Pointer& p) {
return p.get();
}
// This overloaded version is for the raw pointer case.
template <typename Element>
inline Element* GetRawPointer(Element* p) { return p; }
// This comparator allows linked_ptr to be stored in sets.
template <typename T>
struct LinkedPtrLessThan {
bool operator()(const ::testing::internal::linked_ptr<T>& lhs,
const ::testing::internal::linked_ptr<T>& rhs) const {
return lhs.get() < rhs.get();
}
};
// Symbian compilation can be done with wchar_t being either a native
// type or a typedef. Using Google Mock with OpenC without wchar_t
// should require the definition of _STLP_NO_WCHAR_T.
//
// MSVC treats wchar_t as a native type usually, but treats it as the
// same as unsigned short when the compiler option /Zc:wchar_t- is
// specified. It defines _NATIVE_WCHAR_T_DEFINED symbol when wchar_t
// is a native type.
#if (GTEST_OS_SYMBIAN && defined(_STLP_NO_WCHAR_T)) || \
(defined(_MSC_VER) && !defined(_NATIVE_WCHAR_T_DEFINED))
// wchar_t is a typedef.
#else
# define GMOCK_WCHAR_T_IS_NATIVE_ 1
#endif
// signed wchar_t and unsigned wchar_t are NOT in the C++ standard.
// Using them is a bad practice and not portable. So DON'T use them.
//
// Still, Google Mock is designed to work even if the user uses signed
// wchar_t or unsigned wchar_t (obviously, assuming the compiler
// supports them).
//
// To gcc,
// wchar_t == signed wchar_t != unsigned wchar_t == unsigned int
#ifdef __GNUC__
// signed/unsigned wchar_t are valid types.
# define GMOCK_HAS_SIGNED_WCHAR_T_ 1
#endif
// In what follows, we use the term "kind" to indicate whether a type
// is bool, an integer type (excluding bool), a floating-point type,
// or none of them. This categorization is useful for determining
// when a matcher argument type can be safely converted to another
// type in the implementation of SafeMatcherCast.
enum TypeKind {
kBool, kInteger, kFloatingPoint, kOther
};
// KindOf<T>::value is the kind of type T.
template <typename T> struct KindOf {
enum { value = kOther }; // The default kind.
};
// This macro declares that the kind of 'type' is 'kind'.
#define GMOCK_DECLARE_KIND_(type, kind) \
template <> struct KindOf<type> { enum { value = kind }; }
GMOCK_DECLARE_KIND_(bool, kBool);
// All standard integer types.
GMOCK_DECLARE_KIND_(char, kInteger);
GMOCK_DECLARE_KIND_(signed char, kInteger);
GMOCK_DECLARE_KIND_(unsigned char, kInteger);
GMOCK_DECLARE_KIND_(short, kInteger); // NOLINT
GMOCK_DECLARE_KIND_(unsigned short, kInteger); // NOLINT
GMOCK_DECLARE_KIND_(int, kInteger);
GMOCK_DECLARE_KIND_(unsigned int, kInteger);
GMOCK_DECLARE_KIND_(long, kInteger); // NOLINT
GMOCK_DECLARE_KIND_(unsigned long, kInteger); // NOLINT
#if GMOCK_WCHAR_T_IS_NATIVE_
GMOCK_DECLARE_KIND_(wchar_t, kInteger);
#endif
// Non-standard integer types.
GMOCK_DECLARE_KIND_(Int64, kInteger);
GMOCK_DECLARE_KIND_(UInt64, kInteger);
// All standard floating-point types.
GMOCK_DECLARE_KIND_(float, kFloatingPoint);
GMOCK_DECLARE_KIND_(double, kFloatingPoint);
GMOCK_DECLARE_KIND_(long double, kFloatingPoint);
#undef GMOCK_DECLARE_KIND_
// Evaluates to the kind of 'type'.
#define GMOCK_KIND_OF_(type) \
static_cast< ::testing::internal::TypeKind>( \
::testing::internal::KindOf<type>::value)
// Evaluates to true iff integer type T is signed.
#define GMOCK_IS_SIGNED_(T) (static_cast<T>(-1) < 0)
// LosslessArithmeticConvertibleImpl<kFromKind, From, kToKind, To>::value
// is true iff arithmetic type From can be losslessly converted to
// arithmetic type To.
//
// It's the user's responsibility to ensure that both From and To are
// raw (i.e. has no CV modifier, is not a pointer, and is not a
// reference) built-in arithmetic types, kFromKind is the kind of
// From, and kToKind is the kind of To; the value is
// implementation-defined when the above pre-condition is violated.
template <TypeKind kFromKind, typename From, TypeKind kToKind, typename To>
struct LosslessArithmeticConvertibleImpl : public false_type {};
// Converting bool to bool is lossless.
template <>
struct LosslessArithmeticConvertibleImpl<kBool, bool, kBool, bool>
: public true_type {}; // NOLINT
// Converting bool to any integer type is lossless.
template <typename To>
struct LosslessArithmeticConvertibleImpl<kBool, bool, kInteger, To>
: public true_type {}; // NOLINT
// Converting bool to any floating-point type is lossless.
template <typename To>
struct LosslessArithmeticConvertibleImpl<kBool, bool, kFloatingPoint, To>
: public true_type {}; // NOLINT
// Converting an integer to bool is lossy.
template <typename From>
struct LosslessArithmeticConvertibleImpl<kInteger, From, kBool, bool>
: public false_type {}; // NOLINT
// Converting an integer to another non-bool integer is lossless iff
// the target type's range encloses the source type's range.
template <typename From, typename To>
struct LosslessArithmeticConvertibleImpl<kInteger, From, kInteger, To>
: public bool_constant<
// When converting from a smaller size to a larger size, we are
// fine as long as we are not converting from signed to unsigned.
((sizeof(From) < sizeof(To)) &&
(!GMOCK_IS_SIGNED_(From) || GMOCK_IS_SIGNED_(To))) ||
// When converting between the same size, the signedness must match.
((sizeof(From) == sizeof(To)) &&
(GMOCK_IS_SIGNED_(From) == GMOCK_IS_SIGNED_(To)))> {}; // NOLINT
#undef GMOCK_IS_SIGNED_
// Converting an integer to a floating-point type may be lossy, since
// the format of a floating-point number is implementation-defined.
template <typename From, typename To>
struct LosslessArithmeticConvertibleImpl<kInteger, From, kFloatingPoint, To>
: public false_type {}; // NOLINT
// Converting a floating-point to bool is lossy.
template <typename From>
struct LosslessArithmeticConvertibleImpl<kFloatingPoint, From, kBool, bool>
: public false_type {}; // NOLINT
// Converting a floating-point to an integer is lossy.
template <typename From, typename To>
struct LosslessArithmeticConvertibleImpl<kFloatingPoint, From, kInteger, To>
: public false_type {}; // NOLINT
// Converting a floating-point to another floating-point is lossless
// iff the target type is at least as big as the source type.
template <typename From, typename To>
struct LosslessArithmeticConvertibleImpl<
kFloatingPoint, From, kFloatingPoint, To>
: public bool_constant<sizeof(From) <= sizeof(To)> {}; // NOLINT
// LosslessArithmeticConvertible<From, To>::value is true iff arithmetic
// type From can be losslessly converted to arithmetic type To.
//
// It's the user's responsibility to ensure that both From and To are
// raw (i.e. has no CV modifier, is not a pointer, and is not a
// reference) built-in arithmetic types; the value is
// implementation-defined when the above pre-condition is violated.
template <typename From, typename To>
struct LosslessArithmeticConvertible
: public LosslessArithmeticConvertibleImpl<
GMOCK_KIND_OF_(From), From, GMOCK_KIND_OF_(To), To> {}; // NOLINT
// This interface knows how to report a Google Mock failure (either
// non-fatal or fatal).
class FailureReporterInterface {
public:
// The type of a failure (either non-fatal or fatal).
enum FailureType {
kNonfatal, kFatal
};
virtual ~FailureReporterInterface() {}
// Reports a failure that occurred at the given source file location.
virtual void ReportFailure(FailureType type, const char* file, int line,
const string& message) = 0;
};
// Returns the failure reporter used by Google Mock.
GTEST_API_ FailureReporterInterface* GetFailureReporter();
// Asserts that condition is true; aborts the process with the given
// message if condition is false. We cannot use LOG(FATAL) or CHECK()
// as Google Mock might be used to mock the log sink itself. We
// inline this function to prevent it from showing up in the stack
// trace.
inline void Assert(bool condition, const char* file, int line,
const string& msg) {
if (!condition) {
GetFailureReporter()->ReportFailure(FailureReporterInterface::kFatal,
file, line, msg);
}
}
inline void Assert(bool condition, const char* file, int line) {
Assert(condition, file, line, "Assertion failed.");
}
// Verifies that condition is true; generates a non-fatal failure if
// condition is false.
inline void Expect(bool condition, const char* file, int line,
const string& msg) {
if (!condition) {
GetFailureReporter()->ReportFailure(FailureReporterInterface::kNonfatal,
file, line, msg);
}
}
inline void Expect(bool condition, const char* file, int line) {
Expect(condition, file, line, "Expectation failed.");
}
// Severity level of a log.
enum LogSeverity {
kInfo = 0,
kWarning = 1
};
// Valid values for the --gmock_verbose flag.
// All logs (informational and warnings) are printed.
const char kInfoVerbosity[] = "info";
// Only warnings are printed.
const char kWarningVerbosity[] = "warning";
// No logs are printed.
const char kErrorVerbosity[] = "error";
// Returns true iff a log with the given severity is visible according
// to the --gmock_verbose flag.
GTEST_API_ bool LogIsVisible(LogSeverity severity);
// Prints the given message to stdout iff 'severity' >= the level
// specified by the --gmock_verbose flag. If stack_frames_to_skip >=
// 0, also prints the stack trace excluding the top
// stack_frames_to_skip frames. In opt mode, any positive
// stack_frames_to_skip is treated as 0, since we don't know which
// function calls will be inlined by the compiler and need to be
// conservative.
GTEST_API_ void Log(LogSeverity severity,
const string& message,
int stack_frames_to_skip);
// TODO(wan@google.com): group all type utilities together.
// Type traits.
// is_reference<T>::value is non-zero iff T is a reference type.
template <typename T> struct is_reference : public false_type {};
template <typename T> struct is_reference<T&> : public true_type {};
// type_equals<T1, T2>::value is non-zero iff T1 and T2 are the same type.
template <typename T1, typename T2> struct type_equals : public false_type {};
template <typename T> struct type_equals<T, T> : public true_type {};
// remove_reference<T>::type removes the reference from type T, if any.
template <typename T> struct remove_reference { typedef T type; }; // NOLINT
template <typename T> struct remove_reference<T&> { typedef T type; }; // NOLINT
// DecayArray<T>::type turns an array type U[N] to const U* and preserves
// other types. Useful for saving a copy of a function argument.
template <typename T> struct DecayArray { typedef T type; }; // NOLINT
template <typename T, size_t N> struct DecayArray<T[N]> {
typedef const T* type;
};
// Sometimes people use arrays whose size is not available at the use site
// (e.g. extern const char kNamePrefix[]). This specialization covers that
// case.
template <typename T> struct DecayArray<T[]> {
typedef const T* type;
};
// Disable MSVC warnings for infinite recursion, since in this case the
// recursion is unreachable.
#ifdef _MSC_VER
# pragma warning(push)
# pragma warning(disable:4717)
#endif
// Invalid<T>() is usable as an expression of type T, but will terminate
// the program with an assertion failure if actually run. This is useful
// when a value of type T is needed for compilation, but the statement
// will not really be executed (or we don't care if the statement
// crashes).
template <typename T>
inline T Invalid() {
Assert(false, "", -1, "Internal error: attempt to return invalid value");
// This statement is unreachable, and would never terminate even if it
// could be reached. It is provided only to placate compiler warnings
// about missing return statements.
return Invalid<T>();
}
#ifdef _MSC_VER
# pragma warning(pop)
#endif
// Given a raw type (i.e. having no top-level reference or const
// modifier) RawContainer that's either an STL-style container or a
// native array, class StlContainerView<RawContainer> has the
// following members:
//
// - type is a type that provides an STL-style container view to
// (i.e. implements the STL container concept for) RawContainer;
// - const_reference is a type that provides a reference to a const
// RawContainer;
// - ConstReference(raw_container) returns a const reference to an STL-style
// container view to raw_container, which is a RawContainer.
// - Copy(raw_container) returns an STL-style container view of a
// copy of raw_container, which is a RawContainer.
//
// This generic version is used when RawContainer itself is already an
// STL-style container.
template <class RawContainer>
class StlContainerView {
public:
typedef RawContainer type;
typedef const type& const_reference;
static const_reference ConstReference(const RawContainer& container) {
// Ensures that RawContainer is not a const type.
testing::StaticAssertTypeEq<RawContainer,
GTEST_REMOVE_CONST_(RawContainer)>();
return container;
}
static type Copy(const RawContainer& container) { return container; }
};
// This specialization is used when RawContainer is a native array type.
template <typename Element, size_t N>
class StlContainerView<Element[N]> {
public:
typedef GTEST_REMOVE_CONST_(Element) RawElement;
typedef internal::NativeArray<RawElement> type;
// NativeArray<T> can represent a native array either by value or by
// reference (selected by a constructor argument), so 'const type'
// can be used to reference a const native array. We cannot
// 'typedef const type& const_reference' here, as that would mean
// ConstReference() has to return a reference to a local variable.
typedef const type const_reference;
static const_reference ConstReference(const Element (&array)[N]) {
// Ensures that Element is not a const type.
testing::StaticAssertTypeEq<Element, RawElement>();
#if GTEST_OS_SYMBIAN
// The Nokia Symbian compiler confuses itself in template instantiation
// for this call without the cast to Element*:
// function call '[testing::internal::NativeArray<char *>].NativeArray(
// {lval} const char *[4], long, testing::internal::RelationToSource)'
// does not match
// 'testing::internal::NativeArray<char *>::NativeArray(
// char *const *, unsigned int, testing::internal::RelationToSource)'
// (instantiating: 'testing::internal::ContainsMatcherImpl
// <const char * (&)[4]>::Matches(const char * (&)[4]) const')
// (instantiating: 'testing::internal::StlContainerView<char *[4]>::
// ConstReference(const char * (&)[4])')
// (and though the N parameter type is mismatched in the above, explicit
// conversion of it doesn't help - only the conversion of the array).
return type(const_cast<Element*>(&array[0]), N,
RelationToSourceReference());
#else
return type(array, N, RelationToSourceReference());
#endif // GTEST_OS_SYMBIAN
}
static type Copy(const Element (&array)[N]) {
#if GTEST_OS_SYMBIAN
return type(const_cast<Element*>(&array[0]), N, RelationToSourceCopy());
#else
return type(array, N, RelationToSourceCopy());
#endif // GTEST_OS_SYMBIAN
}
};
// This specialization is used when RawContainer is a native array
// represented as a (pointer, size) tuple.
template <typename ElementPointer, typename Size>
class StlContainerView< ::testing::tuple<ElementPointer, Size> > {
public:
typedef GTEST_REMOVE_CONST_(
typename internal::PointeeOf<ElementPointer>::type) RawElement;
typedef internal::NativeArray<RawElement> type;
typedef const type const_reference;
static const_reference ConstReference(
const ::testing::tuple<ElementPointer, Size>& array) {
return type(get<0>(array), get<1>(array), RelationToSourceReference());
}
static type Copy(const ::testing::tuple<ElementPointer, Size>& array) {
return type(get<0>(array), get<1>(array), RelationToSourceCopy());
}
};
// The following specialization prevents the user from instantiating
// StlContainer with a reference type.
template <typename T> class StlContainerView<T&>;
// A type transform to remove constness from the first part of a pair.
// Pairs like that are used as the value_type of associative containers,
// and this transform produces a similar but assignable pair.
template <typename T>
struct RemoveConstFromKey {
typedef T type;
};
// Partially specialized to remove constness from std::pair<const K, V>.
template <typename K, typename V>
struct RemoveConstFromKey<std::pair<const K, V> > {
typedef std::pair<K, V> type;
};
// Mapping from booleans to types. Similar to boost::bool_<kValue> and
// std::integral_constant<bool, kValue>.
template <bool kValue>
struct BooleanConstant {};
} // namespace internal
} // namespace testing
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
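A short sketch exercising two of the internal utilities above, GetRawPointer() and LosslessArithmeticConvertible; again illustrative only, since this header is not meant for user code:

#include "gmock/internal/gmock-internal-utils.h"
#include "gtest/gtest.h"

TEST(InternalUtilsSketch, RawPointersAndLosslessConversions) {
  // GetRawPointer() unwraps smart pointers and passes raw pointers through.
  int value = 5;
  int* raw = &value;
  EXPECT_EQ(raw, ::testing::internal::GetRawPointer(raw));

  // bool -> int is always lossless and int -> bool never is (per the
  // explicit specializations above), independent of platform type sizes.
  EXPECT_TRUE((::testing::internal::LosslessArithmeticConvertible<
                  bool, int>::value));
  EXPECT_FALSE((::testing::internal::LosslessArithmeticConvertible<
                  int, bool>::value));
}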


@@ -1,91 +0,0 @@
// Copyright 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: vadimb@google.com (Vadim Berman)
//
// Low-level types and utilities for porting Google Mock to various
// platforms. All macros ending with _ and symbols defined in an
// internal namespace are subject to change without notice. Code
// outside Google Mock MUST NOT USE THEM DIRECTLY. Macros that don't
// end with _ are part of Google Mock's public API and can be used by
// code outside Google Mock.
#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
#include <assert.h>
#include <stdlib.h>
#include <iostream>
// Most of the utilities needed for porting Google Mock are also
// required for Google Test and are defined in gtest-port.h.
//
// Note to maintainers: to reduce code duplication, prefer adding
// portability utilities to Google Test's gtest-port.h instead of
// here, as Google Mock depends on Google Test. Only add a utility
// here if it's truly specific to Google Mock.
#include "gtest/internal/gtest-linked_ptr.h"
#include "gtest/internal/gtest-port.h"
#include "gmock/internal/custom/gmock-port.h"
// To avoid conditional compilation everywhere, we make it
// gmock-port.h's responsibility to #include the header implementing
// tr1/tuple. gmock-port.h does this via gtest-port.h, which is
// guaranteed to pull in the tuple header.
// For MS Visual C++, check the compiler version. At least VS 2003 is
// required to compile Google Mock.
#if defined(_MSC_VER) && _MSC_VER < 1310
# error "At least Visual C++ 2003 (7.1) is required to compile Google Mock."
#endif
// Macro for referencing flags. This is public as we want the user to
// use this syntax to reference Google Mock flags.
#define GMOCK_FLAG(name) FLAGS_gmock_##name
#if !defined(GMOCK_DECLARE_bool_)
// Macros for declaring flags.
#define GMOCK_DECLARE_bool_(name) extern GTEST_API_ bool GMOCK_FLAG(name)
#define GMOCK_DECLARE_int32_(name) \
extern GTEST_API_ ::testing::internal::Int32 GMOCK_FLAG(name)
#define GMOCK_DECLARE_string_(name) \
extern GTEST_API_ ::std::string GMOCK_FLAG(name)
// Macros for defining flags.
#define GMOCK_DEFINE_bool_(name, default_val, doc) \
GTEST_API_ bool GMOCK_FLAG(name) = (default_val)
#define GMOCK_DEFINE_int32_(name, default_val, doc) \
GTEST_API_ ::testing::internal::Int32 GMOCK_FLAG(name) = (default_val)
#define GMOCK_DEFINE_string_(name, default_val, doc) \
GTEST_API_ ::std::string GMOCK_FLAG(name) = (default_val)
#endif // !defined(GMOCK_DECLARE_bool_)
#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
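To show the GMOCK_FLAG() accessor above in use, a tiny sketch that temporarily raises the logging verbosity; the surrounding function is illustrative:

#include <string>
#include "gmock/gmock.h"

void VerboseFlagSketch() {
  // GMOCK_FLAG(verbose) expands to FLAGS_gmock_verbose, the flag variable
  // declared in namespace testing via GMOCK_DECLARE_string_(verbose).
  std::string old_verbosity = ::testing::GMOCK_FLAG(verbose);
  ::testing::GMOCK_FLAG(verbose) = "info";  // print all Google Mock logs
  // ... exercise mocks whose calls should be logged ...
  ::testing::GMOCK_FLAG(verbose) = old_verbosity;
}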


@@ -1,47 +0,0 @@
// Copyright 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
//
// Google C++ Mocking Framework (Google Mock)
//
// This file #includes all Google Mock implementation .cc files. The
// purpose is to allow a user to build Google Mock by compiling this
// file alone.
// This line ensures that gmock.h can be compiled on its own, even
// when it's fused.
#include "gmock/gmock.h"
// The following lines pull in the real gmock *.cc files.
#include "src/gmock-cardinalities.cc"
#include "src/gmock-internal-utils.cc"
#include "src/gmock-matchers.cc"
#include "src/gmock-spec-builders.cc"
#include "src/gmock.cc"

Some files were not shown because too many files have changed in this diff.