Compare commits

..

70 Commits

Author SHA1 Message Date
Severin
000009f4d6 Use full region width for file path panels
The horizontal margin would shrink the text button, reducing the space
available to show file paths. For file paths though, larger buttons make sense,
so remove the margin.
2019-01-04 21:42:51 +01:00
3e159e7a9b Move text editor load/save options to filepath RNA 2019-01-04 21:13:58 +01:00
Severin
8dc369da4a Cleanup: PEP 8 2019-01-04 20:18:09 +01:00
Severin
42ff84eca6 Merge branch 'master' into userpref_redesign 2019-01-04 19:53:16 +01:00
Severin
ca9c55d0ee If anim player is not custom, gray out path option 2019-01-03 22:16:04 +01:00
62392f76c9 Revert changes to RNA path of undo settings
Too many Add-ons use these settings, we shouldn't break them all after
beta release. RNA and UI structuring are a bit out-of-sync here now, but
that's an acceptable annoyance.

Also adds some slightly nicer grouping in the RNA file, and moves the
Weight Paint Range property to the View category to match the UI.
2019-01-02 22:52:51 +01:00
Severin
52a7ecac35 Merge branch 'master' into userpref_redesign 2019-01-02 20:28:44 +01:00
Severin
204ded594d Avoid empty panel bl_label
And remove unused operator execution context change.
2018-12-30 17:46:55 +01:00
5b868a8b26 Tiny layout improvements
* Moved Anisotropic Filter setting to the Textures panel (it’s a
  texture setting after all)
* Very minor change to Textures panel ordering
* Moved Text panel higher in the list - rather than being at the bottom,
  it’s now below the Display panel, which makes more sense
2018-12-30 16:39:59 +01:00
Severin
a421e57490 Merge branch 'master' into userpref_redesign 2018-12-30 15:54:25 +01:00
Severin
2822a74958 Fix invisible scrollbars in Preferences
Need to force correct updates of action-zone alphas for dynamically
sized regions.
2018-12-30 12:48:33 +01:00
Severin
0093f5b1ec Fix responsive Preference layout ignoring hiDPI 2018-12-29 19:18:39 +01:00
Severin
71b98a8769 Merge branch 'master' into userpref_redesign 2018-12-29 19:12:37 +01:00
Severin
f461ba611f Fix merge errors and whitespace cleanup 2018-12-29 18:41:10 +01:00
Severin
5420be42d3 Fix "Show Preferences" shortcut not working
Was broken in 2.7 keymap.

Also whitespace cleanup.
2018-12-29 16:51:53 +01:00
Severin
2e3fdeb54c Avoid panel content margins in small regions
I don't see a need for changes in the layout system here; a simple trick
like the one I've used is sufficient.
Also: Some refactoring of the `PreferencePanel` class.
2018-12-29 16:38:55 +01:00
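For context, a minimal sketch of what a shared `PreferencePanel`-style mixin could look like. This is not the branch's actual class; the space/region type strings and the example properties are assumptions for illustration.

import bpy

class PreferencePanelSketch:
    # Hypothetical shared mixin; the real PreferencePanel in the branch may differ.
    bl_space_type = 'PREFERENCES'   # 'USER_PREFERENCES' in older 2.8 builds
    bl_region_type = 'WINDOW'

    def draw_props(self, context, layout):
        raise NotImplementedError

    def draw(self, context):
        layout = self.layout
        layout.use_property_split = True
        layout.use_property_decorate = False  # no animation decorators in Preferences
        self.draw_props(context, layout)

class USERPREF_PT_interface_display(PreferencePanelSketch, bpy.types.Panel):
    bl_label = "Display"

    def draw_props(self, context, layout):
        view = context.preferences.view  # context.user_preferences in older builds
        layout.prop(view, "ui_scale")
        layout.prop(view, "show_tooltips")

bpy.utils.register_class(USERPREF_PT_interface_display)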
d75c410bf5 Another round of tweaks to new Preferences
- Removed doubled Cycles Compute Device text
- Moved header placement into Menus panel
- Moved Text panel & RNA to Interface
- Moved OpenGL Texture panel inside OpenGL panel
- Moved Color Picker Type from System > Misc to Interface > Menus (eventually it would be nice to include this option inside the color pickers themselves to make it more useful and discoverable)
- Combined all memory-related preferences (Undo + Console Scrollback + Sequencer Cache) into a Memory panel in the System section (avoids needing the previous Misc panel)
- Added correct greying out to the Files > Auto Run Python Scripts panel
- Added flow layout to Save & Load checkboxes
- Rejiggered some of the Texture preferences to make more logical sense together
- A few other minor adjustments
2018-12-29 14:18:17 +01:00
Julian Eisel
822aae0ff2 Merge branch 'master' into userpref_redesign 2018-12-29 13:36:10 +01:00
Julian Eisel
b789e5a82a Own left-bottom aligned region for Save Preference button
* Own region allows scrolling the navigation bar independently (Save
  Preference button stays visible).
* We may want to change background color and size, but I'll leave that
  for others to do.
* Region can be hidden but not scaled or scrolled.
* Bumps subversion.
* Added new execute region type which we may want to use in more places.
* Had to do some tweaks to drawing/layout code to get the dynamically
  sized region to work without glitches.
* Also correct navigation bar versioning code to use full area width for
  header.
2018-12-24 13:00:34 +01:00
Julian Eisel
ea282f4de0 Merge branch 'master' into userpref_redesign 2018-12-23 22:55:23 +01:00
Julian Eisel
2db613225d Merge branch 'master' into userpref_redesign 2018-12-23 18:13:09 +01:00
56a4e7cba0 More tweaks to Preferences
* Brighter navigation bar background to avoid conflicts with subpanel backgrounds
* Group together device input settings into one panel (with sub-panels)
* Group together view manipulation settings into one panel (with sub-panels)
* Move Save Preferences button back to header until own region for it is ready
2018-12-23 13:48:01 +01:00
Julian Eisel
6f6ad48c25 Merge branch 'master' into userpref_redesign 2018-12-23 12:38:43 +01:00
Julian Eisel
f7cbe35aec Merge branch 'blender2.8' into userpref_redesign 2018-12-21 00:25:56 +01:00
Julian Eisel
13b3869b5f Use term "Preferences" instead of "User Preferences" in the UI 2018-12-21 00:24:04 +01:00
b5d4bd8ac3 More small tweaks/fixes to new Preferences
* Fixed gap in Viewports panel
* Made all buttons that install things (Addons, Lights, Keymaps, Themes, etc.) use the same icon and an ellipsis (...) to communicate that they open a dialog.
* Fixed own code error in Input > View panel
* Re-ordered panels in Input section - now related Mouse and Devices panels are next to each other
* Renamed User Preferences in the Editor list to just Preferences - consistent with Edit > Preferences
* Removed icon for auto-key toggle - doesn't fit here
2018-12-21 00:09:01 +01:00
Julian Eisel
0178cfa305 Hide Preferences header when opened in temporary window
Also, use full width of the area again for the header.
2018-12-20 02:11:41 +01:00
Julian Eisel
0b3603dcc1 Remove colons in Preferences section group names 2018-12-20 01:54:18 +01:00
Julian Eisel
dd6d1ac32c Shrink horizontal size of Preferences Window 2018-12-20 01:49:42 +01:00
e5b553a8b7 Various improvements to Preferences buttons/layouts
* Header is now fully redundant. Added buttons to add studio lights under the Lights category.
* Removed redundant theme category dropdown
* Made the theme layout use layout flow, so it goes to single column when narrow, but multiple columns as you make it wider
* Made all the theme layouts consistent, all using property split to fit with the rest of 2.8
* Fix UI Scale property so it doesn't flicker, by making it a number value rather than a slider - it's more correct this way anyway
2018-12-20 01:41:53 +01:00
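The "layout flow" behaviour described above can be sketched with grid_flow(): with columns=0, the properties pack into a single column when the region is narrow and into more columns as it widens. This is an illustrative snippet, not the actual theme drawing code; the widget color property names are assumptions.

def draw_widget_colors(layout, widget_colors):
    # columns=0 lets the column count adapt to the region width.
    layout.use_property_split = True
    flow = layout.grid_flow(row_major=False, columns=0, even_columns=True,
                            even_rows=False, align=False)
    for prop_name in ("inner", "inner_sel", "outline", "text", "text_sel"):
        flow.prop(widget_colors, prop_name)

A theme panel would call this from its draw() with something like self.layout and context.preferences.themes[0].user_interface.wcol_regular.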
Julian Eisel
1571f18a89 Merge branch 'blender2.8' into userpref_redesign 2018-12-19 22:29:22 +01:00
Julian Eisel
3cfb9e0ec5 Merge branch 'blender2.8' into userpref_redesign 2018-12-19 12:52:09 +01:00
Julian Eisel
b61dfee18a Use subpanels for User Interface theme options
The widget color subpanels are dynamically generated.
2018-12-17 00:02:22 +01:00
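A minimal sketch of how such dynamically generated subpanels can be built (not the branch's actual code): one Panel subclass is created per widget color set and attached to a parent theme panel via bl_parent_id. All identifiers are illustrative, and the 'PREFERENCES' space type is an assumption ('USER_PREFERENCES' in older builds).

import bpy

class USERPREF_PT_theme_ui_sketch(bpy.types.Panel):
    bl_label = "User Interface"
    bl_space_type = 'PREFERENCES'
    bl_region_type = 'WINDOW'

    def draw(self, context):
        self.layout.label(text="Widget colors live in the generated subpanels below")

def make_widget_subpanel(name):
    # Build a Panel subclass at runtime; bl_parent_id turns it into a subpanel.
    def draw(self, context, _name=name):
        layout = self.layout
        layout.use_property_split = True
        layout.label(text=_name)  # real code would draw the widget color properties

    return type(
        "USERPREF_PT_theme_widget_" + name.lower().replace(" ", "_"),
        (bpy.types.Panel,),
        {
            "bl_label": name,
            "bl_space_type": 'PREFERENCES',
            "bl_region_type": 'WINDOW',
            "bl_parent_id": USERPREF_PT_theme_ui_sketch.__name__,
            "bl_options": {'DEFAULT_CLOSED'},
            "draw": draw,
        },
    )

# The parent panel must be registered before its subpanels.
for cls in [USERPREF_PT_theme_ui_sketch] + [make_widget_subpanel(n)
                                            for n in ("Box", "Menu Item", "Radio Buttons")]:
    bpy.utils.register_class(cls)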
Julian Eisel
7cf1f30a99 Initial panel & subpanel based layout for theme section
Panels/subpanels for the editor theming are dynamically generated from the RNA
API.
Widget theming doesn't use subpanels yet.
2018-12-16 22:09:15 +01:00
Julian Eisel
1b722f1ac8 Merge branch 'blender2.8' into userpref_redesign 2018-12-16 00:32:46 +01:00
Julian Eisel
e86d851fef Merge branch 'blender2.8' into userpref_redesign 2018-12-07 01:10:06 +01:00
Julian Eisel
6d6aad08d7 Split keymap settings from "Input" into separate section
Adds a new section "Keymap" and puts keymap settings into this.
2018-12-07 01:06:58 +01:00
c4bf66e542 Use panel layout in most Preferences sections
Also removes all buttons from the header (except editor switch) and puts
them into the corresponding sections.

Breaks the input section which will be fixed in a followup commit.
2018-12-06 23:19:00 +01:00
Julian Eisel
684f885e95 Panel based, single column layout for the editing category 2018-12-02 21:31:43 +01:00
Julian Eisel
90f8900146 Merge branch 'blender2.8' into userpref_redesign 2018-12-02 17:12:29 +01:00
Julian Eisel
9cd6ca299d Minor refactor of enum item grouping logic 2018-11-25 16:21:35 +01:00
Julian Eisel
a5b8a3bcfb Rename User Preferences to Settings in new menus 2018-11-25 15:33:58 +01:00
Julian Eisel
1f71fb7057 Bump subversion for adding navigation region correctly on file load 2018-11-25 14:59:18 +01:00
Julian Eisel
ba567f634a Merge branch 'blender2.8' into userpref_redesign 2018-11-25 14:45:23 +01:00
Julian Eisel
91e0dc0b8e Move File section into system group
And rename from "File" to "Files".
2018-11-24 02:32:27 +01:00
Julian Eisel
770f42fb86 Rename operator: "Save User Preferences" -> "Save Settings" 2018-11-24 01:40:15 +01:00
Julian Eisel
f091b0fad5 Move interface item back to top 2018-11-24 01:38:10 +01:00
Julian Eisel
cbe7ecb8bd Address review inline comments 2018-11-24 01:09:36 +01:00
Julian Eisel
183eab6eb7 Merge branch 'blender2.8' into userpref_redesign 2018-11-24 00:30:47 +01:00
Julian Eisel
96acf70159 Scale up layout & use icons for category groups
Icons were already prepared by @jendrzych. Thanks a lot!
2018-11-23 22:37:42 +01:00
Julian Eisel
9e6bfa4180 Update default theme for UserPref navigation bar
Uses a slightly darker grey than the main region. This color is used in
various other places too.
2018-11-23 01:12:35 +01:00
Julian Eisel
b59443d33f Use new navigation region type for UserPref navigation 2018-11-23 00:33:57 +01:00
Julian Eisel
a2b41f105c Fix merge conflicts
Used 'git stash' to stash a single file, which (for some reason) caused quite
some trouble. Should be fine again now.
2018-11-22 22:15:17 +01:00
Julian Eisel
37b62ac344 Merge branch 'blender2.8' into userpref_redesign 2018-11-22 19:35:03 +01:00
Julian Eisel
6a7e974a93 Merge branch 'blender2.8' into userpref_redesign 2018-11-22 19:06:52 +01:00
Julian Eisel
3f5f4f089d Merge branch 'blender2.8' into userpref_redesign 2018-04-09 11:22:38 +02:00
Julian Eisel
d532f76df7 Revert some rather experimental changes
* Revert name change of "Editing" section
* Comment out placeholder workspace sections
* Leave all system settings in a single section
2018-03-19 23:13:18 +01:00
Severin
07112ef2ed Merge branch 'blender2.8' into userpref_redesign 2018-03-19 19:44:57 +01:00
Severin
a2ce84c089 Get rid of uiLayout.prop group parameter
Just always use the grouping behavior; that should be fine. Also add colons to
the group title.
2018-03-19 01:49:10 +01:00
Severin
6b2907926e Address most minor points from review
* Increase size of settings window again (some sections are too squeezed).
* Use '...' suffix to indicate that menu entry opens new window.
* Stick to current naming convention for area/region variables.
* Simplify tooltip.
* Remove hyphen ("keymap", not "key-map")
2018-03-19 01:48:36 +01:00
Severin
5a5ac2b217 Merge branch 'blender2.8' into userpref_redesign 2018-03-18 23:45:30 +01:00
Julian Eisel
66a3142cfd Go back to old size of UserPref/Settings window
If we split up and reorganize sections a bit, the current window size should be
totally fine. Bringing it back for now.

Reverts rB93edc452920870b.
2018-02-28 17:52:27 +01:00
Julian Eisel
f064957e2e Merge branch 'blender2.8' into userpref_redesign 2018-02-27 23:49:07 +01:00
Julian Eisel
3e4c1209de Split up System section into General, Drawing and Devices 2018-02-27 23:39:07 +01:00
Julian Eisel
cef6fb72c8 Initial grouping/categorizing of settings sections
Made this work just like we define categories in menus: an enum item with only
the UI name set starts a new category with the given name. Also added another
uiLayout.prop option, "group"; not sure if that's such a nice way to do it though.
Will check during review.
2018-02-27 20:08:10 +01:00
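The convention can be illustrated with plain Python (a generic sketch, not Blender's RNA code): an item whose identifier is empty but whose UI name is set opens a new category, and the items that follow belong to it. All names below are illustrative.

SECTION_ITEMS = [
    ("", "General", ""),            # category header: only the UI name is set
    ("INTERFACE", "Interface", "Interface settings"),
    ("EDITING", "Editing", "Editing settings"),
    ("", "System", ""),             # starts the next category
    ("SYSTEM_GENERAL", "General", "General system settings"),
    ("FILES", "Files", "File path settings"),
]

def group_by_category(items):
    # Returns a list of (category_name, [items]) pairs.
    groups = []
    for identifier, name, description in items:
        if identifier == "" and name:
            groups.append((name, []))          # new category begins here
        elif groups:
            groups[-1][1].append((identifier, name, description))
    return groups

for category, members in group_by_category(SECTION_ITEMS):
    print(category, "->", [ident for ident, _, _ in members])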
Julian Eisel
a9d146910b Add placeholder sections for workspaces
Adds
* "Workspace Configuration File",
* "Workspace Add-on Overrides",
* "Workspace Key-map Overrides"
sections to the Blender settings.
2018-02-27 14:37:25 +01:00
Julian Eisel
33dc1214b1 Rename "Editing" section to "General"
Also moved to be first item.
2018-02-26 23:37:27 +01:00
Julian Eisel
104af87614 Rename "User Preferences" to "Settings" in the UI 2018-02-26 23:18:03 +01:00
Julian Eisel
93edc45292 UI: Increase default size of User-Preferences window
Increased it by factor 1.28 (so we get a width of 1024px).
WM_window_open_temp will ensure the window fits into the screen, so there's no need to ensure that here.
2018-02-23 11:02:34 +01:00
Julian Eisel
34743b1dfa UI: Move User-Preferences navigation tabs to a sidebar region
This is the first commit for a bigger User-Preferences redesign, see T54115.
2018-02-23 11:02:34 +01:00
4356 changed files with 94442 additions and 79308 deletions

.gitignore

@@ -11,10 +11,6 @@ __pycache__/
*.swo
*#
# Indexes for emacs, vi & others
TAGS
tags
# QtCreator
CMakeLists.txt.user


@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
#-----------------------------------------------------------------------------
@@ -406,7 +411,7 @@ option(WITH_CYCLES_CUDA_BINARIES "Build Cycles CUDA binaries" OFF)
option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 CACHE STRING "CUDA architectures to build binaries for")
set(CYCLES_CUDA_BINARIES_ARCH sm_30 sm_35 sm_37 sm_50 sm_52 sm_60 sm_61 sm_70 sm_72 sm_75 CACHE STRING "CUDA architectures to build binaries for")
mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
unset(PLATFORM_DEFAULT)
option(WITH_CYCLES_LOGGING "Build Cycles with logging support" ON)
@@ -758,8 +763,8 @@ if(WITH_PYTHON)
# Do this before main 'platform_*' checks,
# because UNIX will search for the old Python paths which may not exist.
# giving errors about missing paths before this case is met.
if(DEFINED PYTHON_VERSION AND "${PYTHON_VERSION}" VERSION_LESS "3.7")
message(FATAL_ERROR "At least Python 3.7 is required to build")
if(DEFINED PYTHON_VERSION AND "${PYTHON_VERSION}" VERSION_LESS "3.6")
message(FATAL_ERROR "At least Python 3.6 is required to build")
endif()
if(NOT EXISTS "${CMAKE_SOURCE_DIR}/release/scripts/addons/modules")
@@ -1053,7 +1058,7 @@ if(WITH_GL_PROFILE_ES20)
else()
if(OpenGL_GL_PREFERENCE STREQUAL "LEGACY" AND OPENGL_gl_LIBRARY)
list(APPEND BLENDER_GL_LIBRARIES ${OPENGL_gl_LIBRARY})
list(APPEND BLENDER_GL_LIBRARIES ${OPENGL_gl_LIBRARY} ${OPENGL_glx_LIBRARY})
else()
list(APPEND BLENDER_GL_LIBRARIES ${OPENGL_opengl_LIBRARY} ${OPENGL_glx_LIBRARY})
endif()
@@ -1117,10 +1122,7 @@ endif()
#-----------------------------------------------------------------------------
# Configure OpenMP.
if(WITH_OPENMP)
if(NOT OPENMP_CUSTOM)
find_package(OpenMP)
endif()
find_package(OpenMP)
if(OPENMP_FOUND)
if(NOT WITH_OPENMP_STATIC)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")


@@ -24,133 +24,11 @@
# ../build_linux_i386
# This is for users who like to configure & build blender with a single command.
define HELP_TEXT
Convenience Targets
Provided for building Blender, (multiple at once can be used).
* debug: Build a debug binary.
* full: Enable all supported dependencies & options.
* lite: Disable non essential features for a smaller binary and faster build.
* headless: Build without an interface (renderfarm or server automation).
* cycles: Build Cycles standalone only, without Blender.
* bpy: Build as a python module which can be loaded from python directly.
* deps: Build library dependencies (intended only for platform maintainers).
* config: Run cmake configuration tool to set build options.
Note: passing the argument 'BUILD_DIR=path' when calling make will override the default build dir.
Note: passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments.
Project Files
Generate poject files for development environments.
* project_qtcreator: QtCreator Project Files.
* project_netbeans: NetBeans Project Files.
* project_eclipse: Eclipse CDT4 Project Files.
Package Targets
* package_debian: Build a debian package.
* package_pacman: Build an arch linux pacman package.
* package_archive: Build an archive package.
Testing Targets
Not associated with building Blender.
* test:
Run ctest, currently tests import/export,
operator execution and that python modules load
* test_cmake:
Runs our own cmake file checker
which detects errors in the cmake file list definitions
* test_pep8:
Checks all python script are pep8
which are tagged to use the stricter formatting
* test_deprecated:
Checks for deprecation tags in our code which may need to be removed
* test_style_c:
Checks C/C++ conforms with blenders style guide:
https://wiki.blender.org/wiki/Source/Code_Style
* test_style_c_qtc:
Same as test_style but outputs QtCreator tasks format
* test_style_osl:
Checks OpenShadingLanguage conforms with blenders style guide:
https://wiki.blender.org/wiki/Source/Code_Style
* test_style_osl_qtc:
Checks OpenShadingLanguage conforms with blenders style guide:
https://wiki.blender.org/wiki/Source/Code_Style
Static Source Code Checking
Not associated with building Blender.
* check_cppcheck: Run blender source through cppcheck (C & C++).
* check_clang_array: Run blender source through clang array checking script (C & C++).
* check_splint: Run blenders source through splint (C only).
* check_sparse: Run blenders source through sparse (C only).
* check_smatch: Run blenders source through smatch (C only).
* check_spelling_c: Check for spelling errors (C/C++ only).
* check_spelling_c_qtc: Same as check_spelling_c but outputs QtCreator tasks format.
* check_spelling_osl: Check for spelling errors (OSL only).
* check_spelling_py: Check for spelling errors (Python only).
* check_descriptions: Check for duplicate/invalid descriptions.
Utilities
Not associated with building Blender.
* icons:
Updates PNG icons from SVG files.
Optionally pass in variables: 'BLENDER_BIN', 'INKSCAPE_BIN'
otherwise default paths are used.
Example
make icons INKSCAPE_BIN=/path/to/inkscape
* icons_geom:
Updates Geometry icons from BLEND file.
Optionally pass in variable: 'BLENDER_BIN'
otherwise default paths are used.
Example
make icons_geom BLENDER_BIN=/path/to/blender
* tgz:
Create a compressed archive of the source code.
* update:
updates git and all submodules
Environment Variables
* BUILD_CMAKE_ARGS: Arguments passed to CMake.
* BUILD_DIR: Override default build path.
* PYTHON: Use this for the Python command (used for checking tools).
* NPROCS: Number of processes to use building (auto-detect when omitted).
Documentation Targets
Not associated with building Blender.
* doc_py: Generate sphinx python api docs.
* doc_doxy: Generate doxygen C/C++ docs.
* doc_dna: Generate blender file format reference.
* doc_man: Generate manpage.
Information
* help: This help message.
* help_features: Show a list of optional features when building.
endef
# HELP_TEXT (end)
# System Vars
OS:=$(shell uname -s)
OS_NCASE:=$(shell uname -s | tr '[A-Z]' '[a-z]')
CPU:=$(shell uname -m)
# CPU:=$(shell uname -m) # UNUSED
# Source and Build DIR's
@@ -177,7 +55,7 @@ ifndef DEPS_INSTALL_DIR
ifneq ($(OS_NCASE),darwin)
# Add processor type to directory name
DEPS_INSTALL_DIR:=$(DEPS_INSTALL_DIR)_$(CPU)
DEPS_INSTALL_DIR:=$(DEPS_INSTALL_DIR)_$(shell uname -p)
endif
endif
@@ -220,12 +98,10 @@ endif
# -----------------------------------------------------------------------------
# Blender binary path
# Allow passing in own BLENDER_BIN so developers who don't
# use the default build path can still use utility helpers.
ifeq ($(OS), Darwin)
BLENDER_BIN?="$(BUILD_DIR)/bin/blender.app/Contents/MacOS/blender"
BLENDER_BIN="$(BUILD_DIR)/bin/blender.app/Contents/MacOS/blender"
else
BLENDER_BIN?="$(BUILD_DIR)/bin/blender"
BLENDER_BIN="$(BUILD_DIR)/bin/blender"
endif
@@ -320,9 +196,87 @@ config: .FORCE
# -----------------------------------------------------------------------------
# Help for build targets
export HELP_TEXT
help: .FORCE
@echo "$$HELP_TEXT"
@echo ""
@echo "Convenience targets provided for building blender, (multiple at once can be used)"
@echo " * debug - build a debug binary"
@echo " * full - enable all supported dependencies & options"
@echo " * lite - disable non essential features for a smaller binary and faster build"
@echo " * headless - build without an interface (renderfarm or server automation)"
@echo " * cycles - build Cycles standalone only, without Blender"
@echo " * bpy - build as a python module which can be loaded from python directly"
@echo " * deps - build library dependencies (intended only for platform maintainers)"
@echo ""
@echo " * config - run cmake configuration tool to set build options"
@echo ""
@echo " Note, passing the argument 'BUILD_DIR=path' when calling make will override the default build dir."
@echo " Note, passing the argument 'BUILD_CMAKE_ARGS=args' lets you add cmake arguments."
@echo ""
@echo ""
@echo "Project Files for IDE's"
@echo " * project_qtcreator - QtCreator Project Files"
@echo " * project_netbeans - NetBeans Project Files"
@echo " * project_eclipse - Eclipse CDT4 Project Files"
@echo ""
@echo "Package Targets"
@echo " * package_debian - build a debian package"
@echo " * package_pacman - build an arch linux pacman package"
@echo " * package_archive - build an archive package"
@echo ""
@echo "Testing Targets (not associated with building blender)"
@echo " * test - run ctest, currently tests import/export,"
@echo " operator execution and that python modules load"
@echo " * test_cmake - runs our own cmake file checker"
@echo " which detects errors in the cmake file list definitions"
@echo " * test_pep8 - checks all python script are pep8"
@echo " which are tagged to use the stricter formatting"
@echo " * test_deprecated - checks for deprecation tags in our code which may need to be removed"
@echo " * test_style_c - checks C/C++ conforms with blenders style guide:"
@echo " https://wiki.blender.org/wiki/Source/Code_Style"
@echo " * test_style_c_qtc - same as test_style but outputs QtCreator tasks format"
@echo " * test_style_osl - checks OpenShadingLanguage conforms with blenders style guide:"
@echo " https://wiki.blender.org/wiki/Source/Code_Style"
@echo " * test_style_osl_qtc - checks OpenShadingLanguage conforms with blenders style guide:"
@echo " https://wiki.blender.org/wiki/Source/Code_Style"
@echo ""
@echo "Static Source Code Checking (not associated with building blender)"
@echo " * check_cppcheck - run blender source through cppcheck (C & C++)"
@echo " * check_clang_array - run blender source through clang array checking script (C & C++)"
@echo " * check_splint - run blenders source through splint (C only)"
@echo " * check_sparse - run blenders source through sparse (C only)"
@echo " * check_smatch - run blenders source through smatch (C only)"
@echo " * check_spelling_c - check for spelling errors (C/C++ only)"
@echo " * check_spelling_c_qtc - same as check_spelling_c but outputs QtCreator tasks format"
@echo " * check_spelling_osl - check for spelling errors (OSL only)"
@echo " * check_spelling_py - check for spelling errors (Python only)"
@echo " * check_descriptions - check for duplicate/invalid descriptions"
@echo ""
@echo "Utilities (not associated with building blender)"
@echo " * icons - Updates PNG icons from SVG files."
@echo " Set environment variables 'BLENDER_BIN' and 'INKSCAPE_BIN'"
@echo " to define your own commands."
@echo " * icons_geom - Updates Geometry icons from BLEND file."
@echo " Set environment variable 'BLENDER_BIN'"
@echo " to define your own command."
@echo " * tgz - create a compressed archive of the source code."
@echo " * update - updates git and all submodules"
@echo ""
@echo "Environment Variables"
@echo " * BUILD_CMAKE_ARGS - arguments passed to CMake."
@echo " * BUILD_DIR - override default build path."
@echo " * PYTHON - use this for the Python command (used for checking tools)."
@echo " * NPROCS - number of processes to use building (auto-detect when omitted)."
@echo ""
@echo "Documentation Targets (not associated with building blender)"
@echo " * doc_py - generate sphinx python api docs"
@echo " * doc_doxy - generate doxygen C/C++ docs"
@echo " * doc_dna - generate blender file format reference"
@echo " * doc_man - generate manpage"
@echo ""
@echo "Information"
@echo " * help - this help message"
@echo " * help_features - show a list of optional features when building"
@echo ""
# -----------------------------------------------------------------------------
# Packages
@@ -486,12 +440,9 @@ check_descriptions: .FORCE
tgz: .FORCE
./build_files/utils/build_tgz.sh
INKSCAPE_BIN?="inkscape"
icons: .FORCE
BLENDER_BIN=$(BLENDER_BIN) INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/blender_icons_update.py"
BLENDER_BIN=$(BLENDER_BIN) INKSCAPE_BIN=$(INKSCAPE_BIN) \
"$(BLENDER_DIR)/release/datafiles/prvicons_update.py"
"$(BLENDER_DIR)/release/datafiles/blender_icons_update.py"
"$(BLENDER_DIR)/release/datafiles/prvicons_update.py"
icons_geom: .FORCE
BLENDER_BIN=$(BLENDER_BIN) \


@@ -114,7 +114,7 @@ endif()
if(NOT WIN32 OR ENABLE_MINGW64)
include(cmake/openjpeg.cmake)
if(NOT WIN32 OR BUILD_MODE STREQUAL Release)
if(BUILD_MODE STREQUAL Release)
if(WIN32)
include(cmake/zlib_mingw.cmake)
endif()


@@ -102,6 +102,7 @@ function(harvest from to)
FILES_MATCHING PATTERN ${pattern}
PATTERN "pkgconfig" EXCLUDE
PATTERN "cmake" EXCLUDE
PATTERN "clang" EXCLUDE
PATTERN "__pycache__" EXCLUDE
PATTERN "tests" EXCLUDE)
endif()
@@ -128,13 +129,8 @@ harvest(jemalloc/lib jemalloc/lib "*.a")
harvest(jpg/include jpeg/include "*.h")
harvest(jpg/lib jpeg/lib "libjpeg.a")
harvest(lame/lib ffmpeg/lib "*.a")
harvest(clang/bin llvm/bin "clang-format")
harvest(llvm/bin llvm/bin "llvm-config")
harvest(llvm/lib llvm/lib "libLLVM*.a")
if(APPLE)
harvest(openmp/lib openmp/lib "*")
harvest(openmp/include openmp/include "*.h")
endif()
harvest(ogg/lib ffmpeg/lib "*.a")
harvest(openal/include openal/include "*.h")
if(UNIX AND NOT APPLE)


@@ -23,8 +23,7 @@ ExternalProject_Add(external_openmp
URL_HASH MD5=${OPENMP_HASH}
PREFIX ${BUILD_DIR}/openmp
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${LIBDIR}/openmp ${DEFAULT_CMAKE_FLAGS}
INSTALL_COMMAND cd ${BUILD_DIR}/openmp/src/external_openmp-build && install_name_tool -id @executable_path/../Resources/lib/libomp.dylib runtime/src/libomp.dylib && make install
INSTALL_DIR ${LIBDIR}/openmp
INSTALL_DIR ${LIBDIR}/clang
)
add_dependencies(


@@ -127,7 +127,8 @@ else()
)
set(OSX_ARCHITECTURES x86_64)
set(OSX_DEPLOYMENT_TARGET 10.9)
set(OSX_SYSROOT ${XCODE_DEV_PATH}/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk)
set(OSX_SDK_VERSION 10.13)
set(OSX_SYSROOT ${XCODE_DEV_PATH}/Platforms/MacOSX.platform/Developer/SDKs/MacOSX${OSX_SDK_VERSION}.sdk)
set(PLATFORM_CFLAGS "-isysroot ${OSX_SYSROOT} -mmacosx-version-min=${OSX_DEPLOYMENT_TARGET}")
set(PLATFORM_CXXFLAGS "-isysroot ${OSX_SYSROOT} -mmacosx-version-min=${OSX_DEPLOYMENT_TARGET} -std=c++11 -stdlib=libc++")


@@ -36,7 +36,7 @@ add_dependencies(
external_zlib
)
if(WIN32 AND BUILD_MODE STREQUAL Debug)
if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(external_png after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/png/lib/libpng16_staticd${LIBEXT} ${LIBDIR}/png/lib/libpng16${LIBEXT}
DEPENDEES install


@@ -17,6 +17,7 @@
# ***** END GPL LICENSE BLOCK *****
if(WIN32)
set(PTHREAD_XCFLAGS /MD)
if(MSVC14) # vs2015 has timespec
set(PTHREAD_CPPFLAGS "/I. /DHAVE_CONFIG_H /D_TIMESPEC_DEFINED ")
@@ -24,7 +25,7 @@ if(WIN32)
set(PTHREAD_CPPFLAGS "/I. /DHAVE_CONFIG_H ")
endif()
set(PTHREADS_BUILD cd ${BUILD_DIR}/pthreads/src/external_pthreads/ && cd && nmake VC-static /e CPPFLAGS=${PTHREAD_CPPFLAGS} /e XLIBS=/NODEFAULTLIB:msvcr)
set(PTHREADS_BUILD cd ${BUILD_DIR}/pthreads/src/external_pthreads/ && cd && nmake VC /e CPPFLAGS=${PTHREAD_CPPFLAGS} /e XCFLAGS=${PTHREAD_XCFLAGS} /e XLIBS=/NODEFAULTLIB:msvcr)
ExternalProject_Add(external_pthreads
URL ${PTHREADS_URI}
@@ -34,7 +35,8 @@ if(WIN32)
CONFIGURE_COMMAND echo .
BUILD_COMMAND ${PTHREADS_BUILD}
INSTALL_COMMAND COMMAND
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/libpthreadVC3${LIBEXT} ${LIBDIR}/pthreads/lib/pthreadVC3${LIBEXT} &&
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/pthreadVC3.dll ${LIBDIR}/pthreads/lib/pthreadVC3.dll &&
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/pthreadVC3${LIBEXT} ${LIBDIR}/pthreads/lib/pthreadVC3${LIBEXT} &&
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/pthread.h ${LIBDIR}/pthreads/inc/pthread.h &&
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/sched.h ${LIBDIR}/pthreads/inc/sched.h &&
${CMAKE_COMMAND} -E copy ${BUILD_DIR}/pthreads/src/external_pthreads/semaphore.h ${LIBDIR}/pthreads/inc/semaphore.h &&


@@ -39,7 +39,7 @@ add_dependencies(
external_zlib
)
if(WIN32 AND BUILD_MODE STREQUAL Debug)
if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(external_tiff after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tiff/lib/tiffd${LIBEXT} ${LIBDIR}/tiff/lib/tiff${LIBEXT}
DEPENDEES install


@@ -40,8 +40,16 @@ if (WIN32)
)
endif()
else()
ExternalProject_Add_Step(external_zlib after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/zlib/lib/libz.a ${LIBDIR}/zlib/lib/libz_pic.a
DEPENDEES install
)
if(BUILD_MODE STREQUAL Debug)
ExternalProject_Add_Step(external_zlib after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/zlib/lib/zlibstaticd${LIBEXT} ${LIBDIR}/zlib/lib/${ZLIB_LIBRARY}
DEPENDEES install
)
endif()
if (UNIX)
ExternalProject_Add_Step(external_zlib after_install
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/zlib/lib/libz.a ${LIBDIR}/zlib/lib/libz_pic.a
DEPENDEES install
)
endif()
endif()


@@ -1,3 +1,4 @@
#
# VLMC RPM Finder
# Authors: Rohit Yadav <rohityadav89@gmail.com>
#


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton, M.G. Kishalmi
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton, M.G. Kishalmi
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -52,7 +52,7 @@ set(WITH_X11_XF86VMODE ON CACHE BOOL "" FORCE)
set(WITH_MEM_JEMALLOC ON CACHE BOOL "" FORCE)
set(WITH_CYCLES_CUDA_BINARIES ON CACHE BOOL "" FORCE)
set(CYCLES_CUDA_BINARIES_ARCH sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61;sm_70;sm_75 CACHE STRING "" FORCE)
set(CYCLES_CUDA_BINARIES_ARCH sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61;sm_70;sm_72;sm_75 CACHE STRING "" FORCE)
# platform dependent options
if(UNIX AND NOT APPLE)


@@ -0,0 +1,10 @@
#!/bin/bash
# filters CMake output to be more like nan-makefiles
FILTER="^\[ *[0-9]*%] \|^Built target \|^Scanning "
make $@ | \
sed -u -e 's/^Linking .*\//Linking /' | \
sed -u -e 's/^.*\// /' | \
grep --line-buffered -v "$FILTER"
echo "Build Done"


@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
macro(list_insert_after
@@ -1393,7 +1398,7 @@ function(find_python_package
NO_DEFAULT_PATH
)
if(NOT EXISTS "${PYTHON_${_upper_package}_PATH}")
if(NOT EXISTS "${PYTHON_${_upper_package}_PATH}")
message(WARNING
"Python package '${package}' path could not be found in:\n"
"'${PYTHON_LIBPATH}/python${PYTHON_VERSION}/site-packages/${package}', "


@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Apple.
@@ -382,27 +385,13 @@ if(WITH_CYCLES_EMBREE)
set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -Xlinker -stack_size -Xlinker 0x100000")
endif()
# CMake FindOpenMP doesn't know about AppleClang before 3.12, so provide custom flags.
if(WITH_OPENMP)
if(CMAKE_C_COMPILER_ID MATCHES "AppleClang" AND CMAKE_C_COMPILER_VERSION VERSION_GREATER_EQUAL "7.0")
# Use OpenMP from our precompiled libraries.
message(STATUS "Using ${LIBDIR}/openmp for OpenMP")
set(OPENMP_CUSTOM ON)
set(OPENMP_FOUND ON)
set(OpenMP_C_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
set(OpenMP_CXX_FLAGS "-Xclang -fopenmp -I'${LIBDIR}/openmp/include'")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -L'${LIBDIR}/openmp/lib' -lomp")
# Copy libomp.dylib to allow executables like datatoc to work.
if(CMAKE_MAKE_PROGRAM MATCHES "xcodebuild")
set(OPENMP_DYLIB_AUX_PATH "${CMAKE_BINARY_DIR}/bin")
else()
set(OPENMP_DYLIB_AUX_PATH "${CMAKE_BINARY_DIR}")
endif()
execute_process(
COMMAND mkdir -p ${OPENMP_DYLIB_AUX_PATH}/Resources/lib
COMMAND cp -p ${LIBDIR}/openmp/lib/libomp.dylib ${OPENMP_DYLIB_AUX_PATH}/Resources/lib/libomp.dylib)
execute_process(COMMAND ${CMAKE_C_COMPILER} --version OUTPUT_VARIABLE COMPILER_VENDOR)
string(SUBSTRING "${COMPILER_VENDOR}" 0 5 VENDOR_NAME) # truncate output
if(${VENDOR_NAME} MATCHES "Apple") # Apple does not support OpenMP reliable with gcc and not with clang
set(WITH_OPENMP OFF)
else() # vanilla gcc or clang_omp support OpenMP
message(STATUS "Using special OpenMP enabled compiler !") # letting find_package(OpenMP) module work for gcc
endif()
endif()


@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
# Xcode and system configuration for Apple.


@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for any *nix system including Linux and Unix.
@@ -39,10 +42,6 @@ if(EXISTS ${LIBDIR})
set(WITH_OPENMP_STATIC ON)
endif()
if(WITH_STATIC_LIBS)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libstdc++")
endif()
# Wrapper to prefer static libraries
macro(find_package_wrapper)
if(WITH_STATIC_LIBS)
@@ -246,17 +245,13 @@ if(WITH_OPENVDB)
find_package_wrapper(OpenVDB)
find_package_wrapper(TBB)
find_package_wrapper(Blosc)
if(NOT TBB_FOUND)
set(WITH_OPENVDB OFF)
set(WITH_OPENVDB_BLOSC OFF)
message(STATUS "TBB not found, disabling OpenVDB")
elseif(NOT OPENVDB_FOUND)
if(NOT OPENVDB_FOUND OR NOT TBB_FOUND)
set(WITH_OPENVDB OFF)
set(WITH_OPENVDB_BLOSC OFF)
message(STATUS "OpenVDB not found, disabling it")
elseif(NOT BLOSC_FOUND)
set(WITH_OPENVDB_BLOSC OFF)
message(STATUS "Blosc not found, disabling it for OpenVBD")
message(STATUS "Blosc not found, disabling it")
endif()
endif()


@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
# Libraries configuration for Windows.
@@ -170,10 +173,7 @@ if(NOT DEFINED LIBDIR)
set(LIBDIR_BASE "windows")
endif()
# Can be 1910..1912
if(MSVC_VERSION GREATER 1919)
message(STATUS "Visual Studio 2019 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc14)
elseif(MSVC_VERSION GREATER 1909)
if(MSVC_VERSION GREATER 1909)
message(STATUS "Visual Studio 2017 detected.")
set(LIBDIR ${CMAKE_SOURCE_DIR}/../lib/${LIBDIR_BASE}_vc14)
elseif(MSVC_VERSION EQUAL 1900)


@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton, M.G. Kishalmi
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -14,6 +14,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ***** END GPL LICENSE BLOCK *****
# <pep8 compliant>


@@ -29,7 +29,7 @@ try:
os.remove(package_archive)
if os.path.exists(package_dir):
shutil.rmtree(package_dir)
except Exception as ex:
except Exception, ex:
sys.stderr.write('Failed to clean up old package files: ' + str(ex) + '\n')
sys.exit(1)
@@ -40,7 +40,7 @@ try:
for f in os.listdir(package_dir):
if f.startswith('makes'):
os.remove(os.path.join(package_dir, f))
except Exception as ex:
except Exception, ex:
sys.stderr.write('Failed to copy install directory: ' + str(ex) + '\n')
sys.exit(1)
@@ -58,13 +58,13 @@ try:
sys.exit(-1)
subprocess.call(archive_cmd)
except Exception as ex:
except Exception, ex:
sys.stderr.write('Failed to create package archive: ' + str(ex) + '\n')
sys.exit(1)
# empty temporary package dir
try:
shutil.rmtree(package_dir)
except Exception as ex:
except Exception, ex:
sys.stderr.write('Failed to clean up package directory: ' + str(ex) + '\n')
sys.exit(1)


@@ -3,9 +3,6 @@ echo No explicit msvc version requested, autodetecting version.
call "%~dp0\detect_msvc2017.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete
call "%~dp0\detect_msvc2019.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete
call "%~dp0\detect_msvc2015.cmd"
if %ERRORLEVEL% EQU 0 goto DetectionComplete


@@ -1,6 +1,5 @@
if "%BUILD_VS_YEAR%"=="2015" set BUILD_VS_LIBDIRPOST=vc14
if "%BUILD_VS_YEAR%"=="2017" set BUILD_VS_LIBDIRPOST=vc14
if "%BUILD_VS_YEAR%"=="2019" set BUILD_VS_LIBDIRPOST=vc14
if "%BUILD_ARCH%"=="x64" (
set BUILD_VS_SVNDIR=win64_%BUILD_VS_LIBDIRPOST%


@@ -1,4 +1,4 @@
if NOT exist "%BLENDER_DIR%\source\tools\.git" (
if NOT exist "%BLENDER_DIR%/source/tools" (
echo Checking out sub-modules
if not "%GIT%" == "" (
"%GIT%" submodule update --init --recursive --progress


@@ -1,5 +1,3 @@
set BUILD_GENERATOR_POST=
set BUILD_PLATFORM_SELECT=
if "%BUILD_ARCH%"=="x64" (
set MSBUILD_PLATFORM=x64
) else if "%BUILD_ARCH%"=="x86" (
@@ -11,7 +9,7 @@ if "%BUILD_ARCH%"=="x64" (
)
if "%WITH_CLANG%"=="1" (
set CLANG_CMAKE_ARGS=-T"llvm"
set CLANG_CMAKE_ARGS=-T"LLVM-vs2017"
if "%WITH_ASAN%"=="1" (
set ASAN_CMAKE_ARGS=-DWITH_COMPILER_ASAN=On
)
@@ -25,14 +23,7 @@ if "%WITH_CLANG%"=="1" (
if "%WITH_PYDEBUG%"=="1" (
set PYDEBUG_CMAKE_ARGS=-DWINDOWS_PYTHON_DEBUG=On
)
if "%BUILD_VS_YEAR%"=="2019" (
set BUILD_PLATFORM_SELECT=-A %MSBUILD_PLATFORM%
) else (
set BUILD_GENERATOR_POST=%WINDOWS_ARCH%
)
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER% %BUILD_VS_YEAR%%BUILD_GENERATOR_POST%" %BUILD_PLATFORM_SELECT% %TESTS_CMAKE_ARGS% %CLANG_CMAKE_ARGS% %ASAN_CMAKE_ARGS% %PYDEBUG_CMAKE_ARGS%
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER% %BUILD_VS_YEAR%%WINDOWS_ARCH%" %TESTS_CMAKE_ARGS% %CLANG_CMAKE_ARGS% %ASAN_CMAKE_ARGS% %PYDEBUG_CMAKE_ARGS%
if NOT EXIST %BUILD_DIR%\nul (
mkdir %BUILD_DIR%
@@ -61,8 +52,8 @@ if "%MUST_CONFIGURE%"=="1" (
%BUILD_CMAKE_ARGS% ^
-H%BLENDER_DIR% ^
-B%BUILD_DIR%
if errorlevel 1 (
if %ERRORLEVEL% NEQ 0 (
echo "Configuration Failed"
exit /b 1
)


@@ -1,3 +1,76 @@
if NOT "%verbose%" == "" (
echo Detecting msvc 2017
)
set BUILD_VS_VER=15
set BUILD_VS_YEAR=2017
call "%~dp0\detect_msvc_vswhere.cmd"
set ProgramFilesX86=%ProgramFiles(x86)%
if not exist "%ProgramFilesX86%" set ProgramFilesX86=%ProgramFiles%
set vs_where=%ProgramFilesX86%\Microsoft Visual Studio\Installer\vswhere.exe
if not exist "%vs_where%" (
if NOT "%verbose%" == "" (
echo Visual Studio 2017 ^(15.2 or newer^) is not detected
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo "%vs_where%" -latest %VSWHERE_ARGS% -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64`
)
for /f "usebackq tokens=1* delims=: " %%i in (`"%vs_where%" -latest %VSWHERE_ARGS% -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64`) do (
if /i "%%i"=="installationPath" set VS_InstallDir=%%j
)
if "%VS_InstallDir%"=="" (
if NOT "%verbose%" == "" (
echo Visual Studio is detected but the "Desktop development with C++" workload has not been instlled
goto FAIL
)
)
set VCVARS=%VS_InstallDir%\VC\Auxiliary\Build\vcvarsall.bat
if exist "%VCVARS%" (
call "%VCVARS%" %BUILD_ARCH%
) else (
if NOT "%verbose%" == "" (
echo "%VCVARS%" not found
)
goto FAIL
)
rem try msbuild
msbuild /version > NUL
if errorlevel 1 (
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% msbuild not found
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% msbuild found
)
REM try the c++ compiler
cl 2> NUL 1>&2
if errorlevel 1 (
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% C/C++ Compiler not found
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% C/C++ Compiler found
)
if NOT "%verbose%" == "" (
echo Visual Studio 2017 is detected successfully
)
goto EOF
:FAIL
exit /b 1
:EOF


@@ -1,3 +0,0 @@
set BUILD_VS_VER=16
set BUILD_VS_YEAR=2019
call "%~dp0\detect_msvc_vswhere.cmd"


@@ -1,79 +0,0 @@
if NOT "%verbose%" == "" (
echo Detecting msvc %BUILD_VS_YEAR%
)
set ProgramFilesX86=%ProgramFiles(x86)%
if not exist "%ProgramFilesX86%" set ProgramFilesX86=%ProgramFiles%
set vs_where=%ProgramFilesX86%\Microsoft Visual Studio\Installer\vswhere.exe
if not exist "%vs_where%" (
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% is not detected
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo "%vs_where%" -latest %VSWHERE_ARGS% -version ^[%BUILD_VS_VER%.0^,%BUILD_VS_VER%.99^) -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64
)
for /f "usebackq tokens=1* delims=: " %%i in (`"%vs_where%" -latest -version ^[%BUILD_VS_VER%.0^,%BUILD_VS_VER%.99^) %VSWHERE_ARGS% -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64`) do (
if /i "%%i"=="installationPath" set VS_InstallDir=%%j
)
if NOT "%verbose%" == "" (
echo VS_Installdir="%VS_InstallDir%"
)
if "%VS_InstallDir%"=="" (
if NOT "%verbose%" == "" (
echo Visual Studio is detected but the "Desktop development with C++" workload has not been instlled
goto FAIL
)
)
set VCVARS=%VS_InstallDir%\VC\Auxiliary\Build\vcvarsall.bat
if exist "%VCVARS%" (
call "%VCVARS%" %BUILD_ARCH%
) else (
if NOT "%verbose%" == "" (
echo "%VCVARS%" not found
)
goto FAIL
)
rem try msbuild
msbuild /version > NUL
if errorlevel 1 (
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% msbuild not found
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% msbuild found
)
REM try the c++ compiler
cl 2> NUL 1>&2
if errorlevel 1 (
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% C/C++ Compiler not found
)
goto FAIL
)
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% C/C++ Compiler found
)
if NOT "%verbose%" == "" (
echo Visual Studio %BUILD_VS_YEAR% is detected successfully
)
goto EOF
:FAIL
exit /b 1
:EOF


@@ -50,17 +50,10 @@ if NOT "%1" == "" (
) else if "%1" == "2017pre" (
set BUILD_VS_YEAR=2017
set VSWHERE_ARGS=-prerelease
set BUILD_VS_YEAR=2017
) else if "%1" == "2017b" (
set BUILD_VS_YEAR=2017
set VSWHERE_ARGS=-products Microsoft.VisualStudio.Product.BuildTools
) else if "%1" == "2019" (
set BUILD_VS_YEAR=2019
) else if "%1" == "2019pre" (
set BUILD_VS_YEAR=2019
set VSWHERE_ARGS=-prerelease
) else if "%1" == "2019b" (
set BUILD_VS_YEAR=2019
set VSWHERE_ARGS=-products Microsoft.VisualStudio.Product.BuildTools
) else if "%1" == "2015" (
set BUILD_VS_YEAR=2015
) else if "%1" == "packagename" (


@@ -1,4 +1,4 @@
# Doxyfile 1.8.15
# Doxyfile 1.8.11
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
@@ -17,11 +17,11 @@
# Project related configuration options
#---------------------------------------------------------------------------
# This tag specifies the encoding used for all characters in the configuration
# file that follow. The default is UTF-8 which is also the encoding used for all
# text before the first occurrence of this tag. Doxygen uses libiconv (or the
# iconv built into libc) for the transcoding. See
# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
# This tag specifies the encoding used for all characters in the config file
# that follow. The default is UTF-8 which is also the encoding used for all text
# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
# for the list of possible encodings.
# The default value is: UTF-8.
DOXYFILE_ENCODING = UTF-8
@@ -93,14 +93,6 @@ ALLOW_UNICODE_NAMES = NO
OUTPUT_LANGUAGE = English
# The OUTPUT_TEXT_DIRECTION tag is used to specify the direction in which all
# documentation generated by doxygen is written. Doxygen will use this
# information to generate all generated output in the proper direction.
# Possible values are: None, LTR, RTL and Context.
# The default value is: None.
OUTPUT_TEXT_DIRECTION = None
# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
# descriptions after the members that are listed in the file and class
# documentation (similar to Javadoc). Set to NO to disable this.
@@ -244,12 +236,7 @@ TAB_SIZE = 4
# will allow you to put the command \sideeffect (or @sideeffect) in the
# documentation, which will result in a user-defined paragraph with heading
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines (in the resulting output). You can put ^^ in the value part of an
# alias to insert a newline as if a physical newline was in the original file.
# When you need a literal { or } or , in the value part of an alias you have to
# escape them by means of a backslash (\), this can lead to conflicts with the
# commands \{ and \} for these it is advised to use the version @{ and @} or use
# a double escape (\\{ and \\})
# newlines.
ALIASES =
@@ -287,26 +274,17 @@ OPTIMIZE_FOR_FORTRAN = NO
OPTIMIZE_OUTPUT_VHDL = NO
# Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice
# sources only. Doxygen will then generate output that is more tailored for that
# language. For instance, namespaces will be presented as modules, types will be
# separated into more groups, etc.
# The default value is: NO.
OPTIMIZE_OUTPUT_SLICE = NO
# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice,
# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:
# FortranFree, unknown formatted Fortran: Fortran. In the later case the parser
# tries to guess whether the code is fixed or free formatted code, this is the
# default for Fortran type files), VHDL, tcl. For instance to make doxygen treat
# .inc files as Fortran files (default is PHP), and .f files as C (default is
# Fortran), use: inc=Fortran f=C.
# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
# Fortran. In the later case the parser tries to guess whether the code is fixed
# or free formatted code, this is the default for Fortran type files), VHDL. For
# instance to make doxygen treat .inc files as Fortran files (default is PHP),
# and .f files as C (default is Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
@@ -317,7 +295,7 @@ EXTENSION_MAPPING =
# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See https://daringfireball.net/projects/markdown/ for details.
# documentation. See http://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibilities issues.
@@ -325,15 +303,6 @@ EXTENSION_MAPPING =
MARKDOWN_SUPPORT = YES
# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
# to that level are automatically included in the table of contents, even if
# they do not have an id attribute.
# Note: This feature currently applies only to Markdown headings.
# Minimum value: 0, maximum value: 99, default value: 0.
# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.
TOC_INCLUDE_HEADINGS = 0
# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
@@ -359,7 +328,7 @@ BUILTIN_STL_SUPPORT = NO
CPP_CLI_SUPPORT = NO
# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.
@@ -762,7 +731,7 @@ WARNINGS = YES
# will automatically be disabled.
# The default value is: YES.
WARN_IF_UNDOCUMENTED = NO
WARN_IF_UNDOCUMENTED = YES
# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
@@ -775,8 +744,7 @@ WARN_IF_DOC_ERROR = YES
# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation. If
# EXTRACT_ALL is set to YES then this flag will automatically be disabled.
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.
WARN_NO_PARAMDOC = NO
@@ -824,7 +792,7 @@ INPUT = doxygen.main.h \
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
# documentation (see: http://www.gnu.org/software/libiconv) for the list of
# possible encodings.
# The default value is: UTF-8.
@@ -841,8 +809,8 @@ INPUT_ENCODING = UTF-8
# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, *.qsf and *.ice.
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f, *.for, *.tcl,
# *.vhd, *.vhdl, *.ucf, *.qsf, *.as and *.js.
FILE_PATTERNS =
@@ -859,7 +827,7 @@ RECURSIVE = YES
# Note that relative paths are relative to the directory from which doxygen is
# run.
EXCLUDE = ../../build_files \
EXCLUDE = ../../build_files, \
../../release
# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
@@ -1000,7 +968,7 @@ INLINE_SOURCES = NO
STRIP_CODE_COMMENTS = YES
# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# entity all documented functions referencing it will be listed.
# function all documented functions referencing it will be listed.
# The default value is: NO.
REFERENCED_BY_RELATION = YES
@@ -1032,12 +1000,12 @@ SOURCE_TOOLTIPS = YES
# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see https://www.gnu.org/software/global/global.html). You will need version
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
@@ -1213,17 +1181,6 @@ HTML_COLORSTYLE_GAMMA = 79
HTML_TIMESTAMP = YES
# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
# documentation will contain a main index with vertical navigation menus that
# are dynamically created via Javascript. If disabled, the navigation index will
# consists of multiple levels of tabs that are statically embedded in every HTML
# page. Disable this option to support browsers that do not have Javascript,
# like the Qt help browser.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_DYNAMIC_MENUS = YES
# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
@@ -1247,13 +1204,13 @@ HTML_INDEX_NUM_ENTRIES = 100
# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: https://developer.apple.com/xcode/), introduced with OSX
# 10.5 (Leopard). To create a documentation set, doxygen will generate a
# environment (see: http://developer.apple.com/tools/xcode/), introduced with
# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
# genXcode/_index.html for more information.
# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
# for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
@@ -1292,7 +1249,7 @@ DOCSET_PUBLISHER_NAME = Publisher
# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: https://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
@@ -1368,7 +1325,7 @@ QCH_FILE =
# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1376,7 +1333,7 @@ QHP_NAMESPACE = org.doxygen.Project
# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-
# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
# folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1385,7 +1342,7 @@ QHP_VIRTUAL_FOLDER = doc
# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1393,7 +1350,7 @@ QHP_CUST_FILTER_NAME =
# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1401,7 +1358,7 @@ QHP_CUST_FILTER_ATTRS =
# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes).
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_SECT_FILTER_ATTRS =
@@ -1494,7 +1451,7 @@ EXT_LINKS_IN_WINDOW = NO
FORMULA_FONTSIZE = 10
# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# Use the FORMULA_TRANPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
@@ -1506,7 +1463,7 @@ FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# https://www.mathjax.org) which uses client side Javascript for the rendering
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want to formulas look prettier in the HTML output. When
# enabled you may also need to install MathJax separately and configure the path
@@ -1533,8 +1490,8 @@ MATHJAX_FORMAT = HTML-CSS
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from https://www.mathjax.org before deployment.
# The default value is: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/.
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_RELPATH = http://www.mathjax.org/mathjax
@@ -1595,7 +1552,7 @@ SERVER_BASED_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/).
# Xapian (see: http://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
@@ -1608,7 +1565,7 @@ EXTERNAL_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/). See the section "External Indexing and
# Xapian (see: http://xapian.org/). See the section "External Indexing and
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.
@@ -1660,34 +1617,21 @@ LATEX_OUTPUT = latex
# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked.
#
# Note that when not enabling USE_PDFLATEX the default is latex when enabling
# USE_PDFLATEX the default is pdflatex and when in the later case latex is
# chosen this is overwritten by pdflatex. For specific output languages the
# default can have been set differently, this depends on the implementation of
# the output language.
# Note that when enabling USE_PDFLATEX this option is only used for generating
# bitmaps for formulas in the HTML output, but not in the Makefile that is
# written to the output directory.
# The default file is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_CMD_NAME = latex
# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
# index for LaTeX.
# Note: This tag is used in the Makefile / make.bat.
# See also: LATEX_MAKEINDEX_CMD for the part in the generated output file
# (.tex).
# The default file is: makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
MAKEINDEX_CMD_NAME = makeindex
# The LATEX_MAKEINDEX_CMD tag can be used to specify the command name to
# generate index for LaTeX.
# Note: This tag is used in the generated output file (.tex).
# See also: MAKEINDEX_CMD_NAME for the part in the Makefile / make.bat.
# The default value is: \makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_MAKEINDEX_CMD = \makeindex
# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
# documents. This may be useful for small projects and may help to save some
# trees in general.
@@ -1822,14 +1766,6 @@ LATEX_BIB_STYLE = plain
LATEX_TIMESTAMP = NO
# The LATEX_EMOJI_DIRECTORY tag is used to specify the (relative or absolute)
# path from which the emoji images will be read. If a relative path is entered,
# it will be relative to the LATEX_OUTPUT directory. If left blank the
# LATEX_OUTPUT directory will be used.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_EMOJI_DIRECTORY =
#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------
@@ -1869,9 +1805,9 @@ COMPACT_RTF = NO
RTF_HYPERLINKS = NO
# Load stylesheet definitions from file. Syntax is similar to doxygen's
# configuration file, i.e. a series of assignments. You only have to provide
# replacements, missing definitions are set to their default value.
# Load stylesheet definitions from file. Syntax is similar to doxygen's config
# file, i.e. a series of assignments. You only have to provide replacements,
# missing definitions are set to their default value.
#
# See also section "Doxygen usage" for information on how to generate the
# default style sheet that doxygen normally uses.
@@ -1880,8 +1816,8 @@ RTF_HYPERLINKS = NO
RTF_STYLESHEET_FILE =
# Set optional variables used in the generation of an RTF document. Syntax is
# similar to doxygen's configuration file. A template extensions file can be
# generated using doxygen -e rtf extensionFile.
# similar to doxygen's config file. A template extensions file can be generated
# using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_EXTENSIONS_FILE =
@@ -1967,13 +1903,6 @@ XML_OUTPUT = xml
XML_PROGRAMLISTING = YES
# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
# namespace members in file scope as well, matching the HTML output.
# The default value is: NO.
# This tag requires that the tag GENERATE_XML is set to YES.
XML_NS_MEMB_FILE_SCOPE = NO
#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------
@@ -2006,9 +1935,9 @@ DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
# the structure of the code including all documentation. Note that this feature
# is still experimental and incomplete at the moment.
# AutoGen Definitions (see http://autogen.sf.net) file that captures the
# structure of the code including all documentation. Note that this feature is
# still experimental and incomplete at the moment.
# The default value is: NO.
GENERATE_AUTOGEN_DEF = NO
@@ -2431,11 +2360,6 @@ DIAFILE_DIRS =
PLANTUML_JAR_PATH =
# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
# configuration file for plantuml.
PLANTUML_CFG_FILE =
# When using plantuml, the specified paths are searched for files specified by
# the !include statement in a plantuml block.

View File

@@ -1,4 +1,10 @@
/** \defgroup blenderplayer Blender Player */
/** \defgroup blc bad level calls
* \ingroup blenderplayer
*/
/** \defgroup render Rendering
* \ingroup blender
*/
@@ -23,14 +29,15 @@
* \ingroup python
*/
/** \defgroup blpluginapi Blender pluginapi
* \ingroup blender
* \attention not in use currently
*/
/* ================================ */
/** \defgroup blender Blender */
/** \defgroup balembic BlenderAlembic
* \ingroup blender
*/
/** \defgroup blt BlenTranslation
* \ingroup blender
*/
@@ -74,10 +81,6 @@
* \ingroup blender
*/
/** \defgroup shader_fx Shader Effects
* \ingroup blender
*/
/** \defgroup data DNA, RNA and .blend access*/
/** \defgroup gpu GPU

View File

@@ -222,7 +222,7 @@ Support Overview
.. note::
Using the :mod:`bmesh` API is completely separate API from :mod:`bpy`,
Using the :mod:`bmesh` api is completely separate api from :mod:`bpy`,
typically you would use one or the other based on the level of editing needed,
not simply for a different way to access faces.
@@ -233,7 +233,7 @@ Creating
All 3 datatypes can be used for face creation.
- polygons are the most efficient way to create faces but the data structure is _very_ rigid and inflexible,
you must have all your vertices and faces ready and create them all at once.
you must have all your vertes and faces ready and create them all at once.
This is further complicated by the fact that each polygon does not store its own verts,
rather they reference an index and size in :class:`bpy.types.Mesh.loops` which are a fixed array too.
- bmesh-faces are most likely the easiest way for new scripts to create faces,
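As a rough illustration of the bmesh route mentioned above, here is a minimal sketch, assuming standard :mod:`bmesh` calls and a placeholder mesh name:
.. code-block:: python
import bpy
import bmesh
# Build a single triangle in a bmesh, then write it into a new mesh datablock.
mesh = bpy.data.meshes.new("SketchMesh")  # placeholder name
bm = bmesh.new()
verts = [bm.verts.new(co) for co in ((0, 0, 0), (1, 0, 0), (0, 1, 0))]
bm.faces.new(verts)
bm.to_mesh(mesh)
bm.free()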
@@ -534,7 +534,7 @@ Strange errors using 'threading' module
Python threading with Blender only works properly when the threads finish up before the script does.
By using ``threading.join()`` for example.
Here is an example of threading supported by Blender:
Heres an example of threading supported by Blender:
.. code-block:: python
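# Not part of the original hunk: a minimal, hypothetical illustration of the
# pattern described above: start worker threads, then join them all before
# the script finishes.
import threading
def worker(index):
    print("worker", index, "finished")
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # make sure every thread is done before the script ends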
@@ -604,17 +604,12 @@ so until its properly supported, best not make use of this.
Help! My script crashes Blender
===============================
**TL;DR:** Do not keep direct references to Blender data (of any kind) when modifying the container
of that data, and/or when some undo/redo may happen (e.g. during modal operators execution...).
Instead, use indices (or other data always stored by value in Python, like string keys...),
that allow you to get access to the desired data.
Ideally it would be impossible to crash Blender from Python;
however, there are some problems with the API where it can be made to crash.
Strictly speaking this is a bug in the API, but fixing it would mean adding memory verification
on every access, since most crashes are caused by the Python objects referencing Blender's memory directly,
whenever the memory is freed or re-allocated, further Python access to it can crash the script.
whenever the memory is freed, further Python access to it can crash the script.
But fixing this would make the scripts run very slowly,
or require writing a very different kind of API which doesn't reference the memory directly.
@@ -624,15 +619,10 @@ Here are some general hints to avoid running into these problems.
especially when working with large lists since Blender can crash simply by running out of memory.
- Many hard to fix crashes end up being because of referencing freed data,
when removing data be sure not to hold any references to it.
- Re-allocation can lead to the same issues
(e.g. if you add a lot of items to some Collection,
this can lead to re-allocating the underlying container's memory,
invalidating all previous references to existing items).
- Modules or classes that remain active while Blender is used
should not hold references to data the user may remove; instead,
fetch data from the context each time the script is activated (see the sketch after this list).
- Crashes may not happen every time, they may happen more on some configurations/operating-systems.
- Be wary of recursive patterns, those are very efficient at hiding the issues described here.
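A minimal sketch of the "fetch data from the context" hint above (hypothetical helper name, assuming the 2.8x context API):
.. code-block:: python
import bpy
# Look the object up from the context on every call instead of caching a
# reference at module level; a cached reference may point to freed data.
def active_object_name():
    obj = bpy.context.active_object
    return obj.name if obj is not None else None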
.. note::
@@ -642,55 +632,6 @@ Here are some general hints to avoid running into these problems.
While the crash may be in Blender's C/C++ code,
this can help a lot to track down the area of the script that causes the crash.
.. note::
Some container modifications are actually safe, because they will never re-allocate existing data
(e.g. linked lists containers will never re-allocate existing items when adding or removing others).
But knowing which cases are safe and which aren't implies a deep understanding of Blender's internals.
That's why, unless you are willing to dive into the RNA C implementation, it's simpler to
always assume that data references will become invalid when modifying their containers,
in any possible way.
**Dont:**
.. code-block:: python
class TestItems(bpy.types.PropertyGroup):
name: bpy.props.StringProperty()
bpy.utils.register_class(TestItems)
bpy.types.Scene.test_items = bpy.props.CollectionProperty(type=TestItems)
first_item = bpy.context.scene.test_items.add()
for i in range(100):
bpy.context.scene.test_items.add()
# This is likely to crash, as internal code may re-allocate
# the whole container (the collection) memory at some point.
first_item.name = "foobar"
**Do:**
.. code-block:: python
class TestItems(bpy.types.PropertyGroup):
name: bpy.props.StringProperty()
bpy.utils.register_class(TestItems)
bpy.types.Scene.test_items = bpy.props.CollectionProperty(type=TestItems)
first_item = bpy.context.scene.test_items.add()
for i in range(100):
bpy.context.scene.test_items.add()
# This is safe, we are getting again desired data *after*
# all modifications to its container are done.
first_item = bpy.context.scene.test_items[0]
first_item.name = "foobar"
Undo/Redo
---------
@@ -775,7 +716,7 @@ the object data but are most common when switching edit-mode.
Array Re-Allocation
-------------------
When adding new points to a curve or vertices/edges/polygons to a mesh,
When adding new points to a curve or vertices's/edges/polygons to a mesh,
internally the array which stores this data is re-allocated.
.. code-block:: python
@@ -788,7 +729,7 @@ internally the array which stores this data is re-allocated.
point.co = 1.0, 2.0, 3.0
This can be avoided by re-assigning the point variables after adding the new one or by storing
indices to the points rather than the points themselves.
indices's to the points rather than the points themselves.
The best way is to sidestep the problem altogether: add all the points to the curve at once.
This means you don't have to worry about array re-allocation and it's faster too.
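For instance, a minimal sketch of adding all points up front with a poly spline (placeholder names, assuming the standard curve API):
.. code-block:: python
import bpy
curve_data = bpy.data.curves.new("SketchCurve", type='CURVE')  # placeholder name
spline = curve_data.splines.new('POLY')
coords = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0), (2.0, 0.0, 1.0)]
spline.points.add(len(coords) - 1)  # a new poly spline starts with one point
# Poly/NURBS points store 4D coordinates (x, y, z, w).
spline.points.foreach_set("co", [f for co in coords for f in (*co, 1.0)])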
@@ -804,7 +745,7 @@ along with objects, scenes, collections, bones.. etc.
The ``remove()`` api calls will invalidate the data they free to prevent common mistakes.
The following example shows how this precaution works.
The following example shows how this precortion works.
.. code-block:: python
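# Not part of the original hunk: a minimal, hypothetical illustration of the
# invalidation behavior described above.
import bpy
mesh = bpy.data.meshes.new(name="SketchMesh")
bpy.data.meshes.remove(mesh)
try:
    print(mesh.name)  # accessing removed data raises an exception ...
except Exception as ex:  # ... instead of crashing Blender
    print("reference was invalidated:", ex)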

View File

@@ -14,6 +14,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>

View File

@@ -14,6 +14,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Campbell Barton
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>

View File

@@ -16,6 +16,8 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contributor(s): Bastien Montagne
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
# Libs that adhere to strict flags

View File

@@ -212,7 +212,7 @@ AUD_API const char* AUD_Sound_write(AUD_Sound* sound, const char* filename, AUD_
std::shared_ptr<IWriter> writer = FileWriter::createWriter(filename, specs, static_cast<Container>(container), static_cast<Codec>(codec), bitrate);
FileWriter::writeReader(reader, writer, 0, buffersize);
}
catch(Exception&)
catch(Exception& e)
{
return "An exception occured while writing.";
}

View File

@@ -140,6 +140,8 @@ Sound_data(Sound* self)
std::memcpy(data, buffer->getBuffer(), buffer->getSize());
Py_INCREF(array);
return reinterpret_cast<PyObject*>(array);
}

View File

@@ -261,8 +261,6 @@ FFMPEGWriter::FFMPEGWriter(std::string filename, DeviceSpecs specs, Container fo
case CHANNELS_SURROUND71:
channel_layout = AV_CH_LAYOUT_7POINT1;
break;
default:
AUD_THROW(FileException, "File couldn't be written, channel layout not supported.");
}
try

View File

@@ -61,58 +61,10 @@ bool OpenALDevice::OpenALHandle::pause(bool keep)
return false;
}
bool OpenALDevice::OpenALHandle::reinitialize()
{
DeviceSpecs specs = m_device->m_specs;
specs.specs = m_reader->getSpecs();
ALenum format;
if(!m_device->getFormat(format, specs.specs))
return true;
m_format = format;
// OpenAL playback code
alGenBuffers(CYCLE_BUFFERS, m_buffers);
if(alGetError() != AL_NO_ERROR)
return true;
m_device->m_buffer.assureSize(m_device->m_buffersize * AUD_DEVICE_SAMPLE_SIZE(specs));
int length;
bool eos;
for(m_current = 0; m_current < CYCLE_BUFFERS; m_current++)
{
length = m_device->m_buffersize;
m_reader->read(length, eos, m_device->m_buffer.getBuffer());
if(length == 0)
break;
alBufferData(m_buffers[m_current], m_format, m_device->m_buffer.getBuffer(), length * AUD_DEVICE_SAMPLE_SIZE(specs), specs.rate);
if(alGetError() != AL_NO_ERROR)
return true;
}
alGenSources(1, &m_source);
if(alGetError() != AL_NO_ERROR)
return true;
alSourceQueueBuffers(m_source, m_current, m_buffers);
if(alGetError() != AL_NO_ERROR)
return true;
alSourcei(m_source, AL_SOURCE_RELATIVE, m_relative);
return false;
}
OpenALDevice::OpenALHandle::OpenALHandle(OpenALDevice* device, ALenum format, std::shared_ptr<IReader> reader, bool keep) :
m_isBuffered(false), m_reader(reader), m_keep(keep), m_format(format),
m_eos(false), m_loopcount(0), m_stop(nullptr), m_stop_data(nullptr), m_status(STATUS_PLAYING),
m_relative(1), m_device(device)
m_device(device)
{
DeviceSpecs specs = m_device->m_specs;
specs.specs = m_reader->getSpecs();
@@ -210,9 +162,6 @@ bool OpenALDevice::OpenALHandle::stop()
if(!m_status)
return false;
if(m_stop)
m_stop(m_stop_data);
m_status = STATUS_INVALID;
alDeleteSources(1, &m_source);
@@ -576,6 +525,8 @@ bool OpenALDevice::OpenALHandle::setOrientation(const Quaternion& orientation)
bool OpenALDevice::OpenALHandle::isRelative()
{
int result;
if(!m_status)
return false;
@@ -584,9 +535,9 @@ bool OpenALDevice::OpenALHandle::isRelative()
if(!m_status)
return false;
alGetSourcei(m_source, AL_SOURCE_RELATIVE, &m_relative);
alGetSourcei(m_source, AL_SOURCE_RELATIVE, &result);
return m_relative;
return result;
}
bool OpenALDevice::OpenALHandle::setRelative(bool relative)
@@ -599,9 +550,7 @@ bool OpenALDevice::OpenALHandle::setRelative(bool relative)
if(!m_status)
return false;
m_relative = relative;
alSourcei(m_source, AL_SOURCE_RELATIVE, m_relative);
alSourcei(m_source, AL_SOURCE_RELATIVE, relative);
return true;
}
@@ -903,80 +852,6 @@ void OpenALDevice::updateStreams()
{
lock();
if(m_checkDisconnect)
{
ALCint connected;
alcGetIntegerv(m_device, alcGetEnumValue(m_device, "ALC_CONNECTED"), 1, &connected);
if(!connected)
{
// quit OpenAL
alcMakeContextCurrent(nullptr);
alcDestroyContext(m_context);
alcCloseDevice(m_device);
// restart
if(m_name.empty())
m_device = alcOpenDevice(nullptr);
else
m_device = alcOpenDevice(m_name.c_str());
// if device opening failed, there's really nothing we can do
if(m_device)
{
// at least try to set the frequency
ALCint attribs[] = { ALC_FREQUENCY, (ALCint)specs.rate, 0 };
ALCint* attributes = attribs;
if(specs.rate == RATE_INVALID)
attributes = nullptr;
m_context = alcCreateContext(m_device, attributes);
alcMakeContextCurrent(m_context);
m_checkDisconnect = alcIsExtensionPresent(m_device, "ALC_EXT_disconnect");
alcGetIntegerv(m_device, ALC_FREQUENCY, 1, (ALCint*)&specs.rate);
// check for specific formats and channel counts to be played back
if(alIsExtensionPresent("AL_EXT_FLOAT32") == AL_TRUE)
specs.format = FORMAT_FLOAT32;
else
specs.format = FORMAT_S16;
// if the format of the device changed, all handles are invalidated
// this is unlikely to happen though
if(specs.format != m_specs.format)
stopAll();
m_useMC = alIsExtensionPresent("AL_EXT_MCFORMATS") == AL_TRUE;
if((!m_useMC && specs.channels > CHANNELS_STEREO) ||
specs.channels == CHANNELS_STEREO_LFE ||
specs.channels == CHANNELS_SURROUND5)
specs.channels = CHANNELS_STEREO;
alGetError();
alcGetError(m_device);
m_specs = specs;
std::list<std::shared_ptr<OpenALHandle> > stopSounds;
for(auto& handle : m_playingSounds)
if(handle->reinitialize())
stopSounds.push_back(handle);
for(auto& handle : m_pausedSounds)
if(handle->reinitialize())
stopSounds.push_back(handle);
for(auto& sound : stopSounds)
sound->stop();
}
}
}
alcSuspendContext(m_context);
cerr = alcGetError(m_device);
if(cerr == ALC_NO_ERROR)
@@ -1082,14 +957,12 @@ void OpenALDevice::updateStreams()
// if it really stopped
if(sound->m_eos && info != AL_INITIAL)
{
if(sound->m_stop)
sound->m_stop(sound->m_stop_data);
// pause or
if(sound->m_keep)
{
if(sound->m_stop)
sound->m_stop(sound->m_stop_data);
pauseSounds.push_back(sound);
}
// stop
else
stopSounds.push_back(sound);
@@ -1132,16 +1005,16 @@ void OpenALDevice::updateStreams()
/******************************************************************************/
OpenALDevice::OpenALDevice(DeviceSpecs specs, int buffersize, std::string name) :
m_name(name), m_playing(false), m_buffersize(buffersize)
m_playing(false), m_buffersize(buffersize)
{
// cannot determine how many channels or which format OpenAL uses, but
// it at least is able to play 16 bit stereo audio
specs.format = FORMAT_S16;
if(m_name.empty())
if(name.empty())
m_device = alcOpenDevice(nullptr);
else
m_device = alcOpenDevice(m_name.c_str());
m_device = alcOpenDevice(name.c_str());
if(!m_device)
AUD_THROW(DeviceException, "The audio device couldn't be opened with OpenAL.");
@@ -1155,8 +1028,6 @@ OpenALDevice::OpenALDevice(DeviceSpecs specs, int buffersize, std::string name)
m_context = alcCreateContext(m_device, attributes);
alcMakeContextCurrent(m_context);
m_checkDisconnect = alcIsExtensionPresent(m_device, "ALC_EXT_disconnect");
alcGetIntegerv(m_device, ALC_FREQUENCY, 1, (ALCint*)&specs.rate);
// check for specific formats and channel counts to be played back

View File

@@ -95,16 +95,11 @@ private:
/// Current status of the handle
Status m_status;
/// Whether the source is relative or not.
ALint m_relative;
/// Own device.
OpenALDevice* m_device;
AUD_LOCAL bool pause(bool keep);
AUD_LOCAL bool reinitialize();
// delete copy constructor and operator=
OpenALHandle(const OpenALHandle&) = delete;
OpenALHandle& operator=(const OpenALHandle&) = delete;
@@ -178,21 +173,11 @@ private:
*/
DeviceSpecs m_specs;
/**
* The device name.
*/
std::string m_name;
/**
* Whether the device has the AL_EXT_MCFORMATS extension.
*/
bool m_useMC;
/**
* Whether the ALC_EXT_disconnect extension is present and device disconnect should be checked repeatedly.
*/
bool m_checkDisconnect;
/**
* The list of sounds that are currently playing.
*/

View File

@@ -296,77 +296,72 @@ const Channel* ChannelMapperReader::CHANNEL_MAPS[] =
ChannelMapperReader::SURROUND71_MAP
};
constexpr float deg2rad(double angle)
{
return float(angle * M_PI / 180.0);
}
const float ChannelMapperReader::MONO_ANGLES[] =
{
deg2rad(0.0)
0.0f * M_PI / 180.0f
};
const float ChannelMapperReader::STEREO_ANGLES[] =
{
deg2rad(-90.0),
deg2rad( 90.0)
-90.0f * M_PI / 180.0f,
90.0f * M_PI / 180.0f
};
const float ChannelMapperReader::STEREO_LFE_ANGLES[] =
{
deg2rad(-90.0),
deg2rad( 90.0),
deg2rad( 0.0)
-90.0f * M_PI / 180.0f,
90.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f
};
const float ChannelMapperReader::SURROUND4_ANGLES[] =
{
deg2rad( -45.0),
deg2rad( 45.0),
deg2rad(-135.0),
deg2rad( 135.0)
-45.0f * M_PI / 180.0f,
45.0f * M_PI / 180.0f,
-135.0f * M_PI / 180.0f,
135.0f * M_PI / 180.0f
};
const float ChannelMapperReader::SURROUND5_ANGLES[] =
{
deg2rad( -30.0),
deg2rad( 30.0),
deg2rad( 0.0),
deg2rad(-110.0),
deg2rad( 110.0)
-30.0f * M_PI / 180.0f,
30.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
-110.0f * M_PI / 180.0f,
110.0f * M_PI / 180.0f
};
const float ChannelMapperReader::SURROUND51_ANGLES[] =
{
deg2rad( -30.0),
deg2rad( 30.0),
deg2rad( 0.0),
deg2rad( 0.0),
deg2rad(-110.0),
deg2rad( 110.0)
-30.0f * M_PI / 180.0f,
30.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
-110.0f * M_PI / 180.0f,
110.0f * M_PI / 180.0f
};
const float ChannelMapperReader::SURROUND61_ANGLES[] =
{
deg2rad( -30.0),
deg2rad( 30.0),
deg2rad( 0.0),
deg2rad( 0.0),
deg2rad( 180.0),
deg2rad(-110.0),
deg2rad( 110.0)
-30.0f * M_PI / 180.0f,
30.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
180.0f * M_PI / 180.0f,
-110.0f * M_PI / 180.0f,
110.0f * M_PI / 180.0f
};
const float ChannelMapperReader::SURROUND71_ANGLES[] =
{
deg2rad( -30.0),
deg2rad( 30.0),
deg2rad( 0.0),
deg2rad( 0.0),
deg2rad(-110.0),
deg2rad( 110.0),
deg2rad(-150.0),
deg2rad( 150.0)
-30.0f * M_PI / 180.0f,
30.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
0.0f * M_PI / 180.0f,
-110.0f * M_PI / 180.0f,
110.0f * M_PI / 180.0f,
-150.0f * M_PI / 180.0f,
150.0f * M_PI / 180.0f
};
const float* ChannelMapperReader::CHANNEL_ANGLES[] =

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurai, Erwin Coumans
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,10 @@
#
# The Original Code is Copyright (C) 2012, Blender Foundation
# All rights reserved.
#
# Contributor(s): Blender Foundation,
# Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
# NOTE: This file is automatically generated by bundle.sh script

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -1490,3 +1490,4 @@ int curve_fit_cubic_to_points_refit_fl(
return result;
}

View File

@@ -16,6 +16,10 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Blender Foundation,
# Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2013, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jason Wilkins
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,10 @@
#
# The Original Code is Copyright (C) 2016, Blender Foundation
# All rights reserved.
#
# Contributor(s): Blender Foundation,
# Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2014, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,9 @@
#
# The Original Code is Copyright (C) 2014, Blender Foundation
# All rights reserved.
#
# Contributor(s): Sergey Sharybin
#
# ***** END GPL LICENSE BLOCK *****
# avoid noisy warnings

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Daniel Genrich
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Daniel Genrich
#
# ***** END GPL LICENSE BLOCK *****
remove_strict_flags()

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2013, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): none yet.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -15,6 +17,10 @@
*
* The Original Code is Copyright (C) 2013 Blender Foundation.
* All rights reserved.
*
* Contributor(s): none yet.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __WCWIDTH_H__

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2012, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Sergey Sharybin.
#
# ***** END GPL LICENSE BLOCK *****
set(INC

View File

@@ -16,6 +16,11 @@
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Jacques Beaurain.
#
# ***** END GPL LICENSE BLOCK *****
# add_subdirectory(atomic) # header only

View File

@@ -24,6 +24,9 @@
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -42,13 +45,17 @@
* All rights reserved.
*
* The Original Code is: adapted from jemalloc.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
/**
* \file atomic_ops.h
* \ingroup Atomic
*
* \brief Provides wrapper around system-specific atomic primitives,
* and some extensions (faked-atomic operations over float numbers).
* \author Copyright (C) 2016 Blender Foundation, adapted from jemalloc.
* \brief Provides wrapper around system-specific atomic primitives, and some extensions (faked-atomic operations
* over float numbers).
*/
#ifndef __ATOMIC_OPS_H__

View File

@@ -24,6 +24,9 @@
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -42,6 +45,8 @@
* All rights reserved.
*
* The Original Code is: adapted from jemalloc.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __ATOMIC_OPS_EXT_H__

View File

@@ -24,6 +24,9 @@
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -42,6 +45,8 @@
* All rights reserved.
*
* The Original Code is: adapted from jemalloc.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __ATOMIC_OPS_UNIX_H__

View File

@@ -24,6 +24,9 @@
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -42,6 +45,8 @@
* All rights reserved.
*
* The Original Code is: adapted from jemalloc.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __ATOMIC_OPS_UTILS_H__

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* Copyright 2009-2011 Jörg Hermann Müller
*
* This file is part of AudaSpace.
@@ -16,10 +18,12 @@
* You should have received a copy of the GNU General Public License
* along with Audaspace; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
* \ingroup audaspaceintern
/** \file audaspace/intern/AUD_PyInit.cpp
* \ingroup audaspaceintern
*/
#include "AUD_PyInit.h"

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* Copyright 2009-2011 Jörg Hermann Müller
*
* This file is part of AudaSpace.
@@ -16,10 +18,12 @@
* You should have received a copy of the GNU General Public License
* along with Audaspace; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
* \ingroup audaspaceintern
/** \file audaspace/intern/AUD_PyInit.h
* \ingroup audaspaceintern
*/

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* Copyright 2009-2011 Jörg Hermann Müller
*
* This file is part of AudaSpace.
@@ -16,10 +18,12 @@
* You should have received a copy of the GNU General Public License
* along with Audaspace; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
* \ingroup audaspaceintern
/** \file audaspace/intern/AUD_Set.cpp
* \ingroup audaspaceintern
*/
#include <set>

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* Copyright 2009-2011 Jörg Hermann Müller
*
* This file is part of AudaSpace.
@@ -16,10 +18,12 @@
* You should have received a copy of the GNU General Public License
* along with Audaspace; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
* \ingroup audaspace
/** \file AUD_Set.h
* \ingroup audaspace
*/
#ifndef __AUD_SET_H__

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -12,13 +14,15 @@
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __CLG_LOG_H__
#define __CLG_LOG_H__
#ifndef __CLOG_H__
#define __CLOG_H__
/** \file
* \ingroup clog
/** \file clog/CLG_log.h
* \ingroup clog
*
* C Logging Library (clog)
* ========================
@@ -49,6 +53,8 @@
* - `WARN`: General warnings (which aren't necessary to show to users).
* - `ERROR`: An error we can recover from, should not happen.
* - `FATAL`: Similar to assert. This logs the message, then a stack trace and abort.
*
*
* Verbosity Level
* ---------------
*
@@ -84,6 +90,11 @@ extern "C" {
# define _CLOG_ATTR_PRINTF_FORMAT(format_param, dots_param)
#endif
#if defined(_MSC_VER) && !defined(__func__)
# define __func__MSVC
# define __func__ __FUNCTION__
#endif
#define STRINGIFY_ARG(x) "" #x
#define STRINGIFY_APPEND(a, b) "" a #b
#define STRINGIFY(x) STRINGIFY_APPEND("", x)
@@ -134,7 +145,6 @@ void CLG_exit(void);
void CLG_output_set(void *file_handle);
void CLG_output_use_basename_set(int value);
void CLG_output_use_timestamp_set(int value);
void CLG_fatal_fn_set(void (*fatal_fn)(void *file_handle));
void CLG_backtrace_fn_set(void (*fatal_fn)(void *file_handle));
@@ -164,7 +174,7 @@ void CLG_logref_init(CLG_LogRef *clg_ref);
#define CLOG_STR_AT_SEVERITY(clg_ref, severity, verbose_level, str) { \
CLG_LogType *_lg_ty = CLOG_ENSURE(clg_ref); \
if (((_lg_ty->flag & CLG_FLAG_USE) && (_lg_ty->level >= verbose_level)) || (severity >= CLG_SEVERITY_WARN)) { \
CLG_log_str(_lg_ty, severity, __FILE__ ":" STRINGIFY(__LINE__), __func__, str); \
CLG_log_str(lg, severity, __FILE__ ":" STRINGIFY(__LINE__), __func__, str); \
} \
} ((void)0)
@@ -182,19 +192,23 @@ void CLG_logref_init(CLG_LogRef *clg_ref);
#define CLOG_ERROR(clg_ref, ...) CLOG_AT_SEVERITY(clg_ref, CLG_SEVERITY_ERROR, 0, __VA_ARGS__)
#define CLOG_FATAL(clg_ref, ...) CLOG_AT_SEVERITY(clg_ref, CLG_SEVERITY_FATAL, 0, __VA_ARGS__)
#define CLOG_STR_INFO(clg_ref, level, str) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_INFO, level, str)
#define CLOG_STR_WARN(clg_ref, str) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_WARN, 0, str)
#define CLOG_STR_ERROR(clg_ref, str) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_ERROR, 0, str)
#define CLOG_STR_FATAL(clg_ref, str) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_FATAL, 0, str)
#define CLOG_STR_INFO(clg_ref, level, ...) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_INFO, level, __VA_ARGS__)
#define CLOG_STR_WARN(clg_ref, ...) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_WARN, 0, __VA_ARGS__)
#define CLOG_STR_ERROR(clg_ref, ...) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_ERROR, 0, __VA_ARGS__)
#define CLOG_STR_FATAL(clg_ref, ...) CLOG_STR_AT_SEVERITY(clg_ref, CLG_SEVERITY_FATAL, 0, __VA_ARGS__)
/* Allocated string which is immediately freed. */
#define CLOG_STR_INFO_N(clg_ref, level, str) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_INFO, level, str)
#define CLOG_STR_WARN_N(clg_ref, str) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_WARN, 0, str)
#define CLOG_STR_ERROR_N(clg_ref, str) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_ERROR, 0, str)
#define CLOG_STR_FATAL_N(clg_ref, str) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_FATAL, 0, str)
#define CLOG_STR_INFO_N(clg_ref, level, ...) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_INFO, level, __VA_ARGS__)
#define CLOG_STR_WARN_N(clg_ref, ...) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_WARN, 0, __VA_ARGS__)
#define CLOG_STR_ERROR_N(clg_ref, ...) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_ERROR, 0, __VA_ARGS__)
#define CLOG_STR_FATAL_N(clg_ref, ...) CLOG_STR_AT_SEVERITY_N(clg_ref, CLG_SEVERITY_FATAL, 0, __VA_ARGS__)
#ifdef __func__MSVC
# undef __func__MSVC
#endif
#ifdef __cplusplus
}
#endif
#endif /* __CLG_LOG_H__ */
#endif /* __CLOG_H__ */

View File

@@ -18,7 +18,6 @@
set(INC
.
../atomic
../guardedalloc
)
@@ -32,7 +31,4 @@ set(SRC
CLG_log.h
)
# Disabled for makesdna/makesrna.
add_definitions(-DWITH_CLOG_PTHREADS)
blender_add_lib(bf_intern_clog "${SRC}" "${INC}" "${INC_SYS}")

View File

@@ -1,4 +1,6 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
@@ -12,10 +14,12 @@
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file
* \ingroup clog
/** \file clog/clog.c
* \ingroup clog
*/
#include <stdarg.h>
@@ -24,29 +28,14 @@
#include <stdint.h>
#include <assert.h>
/* Disable for small single threaded programs
* to avoid having to link with pthreads. */
#ifdef WITH_CLOG_PTHREADS
# include <pthread.h>
# include "atomic_ops.h"
#endif
/* For 'isatty' to check for color. */
#if defined(__unix__) || defined(__APPLE__) || defined(__HAIKU__)
# include <unistd.h>
# include <sys/time.h>
#endif
#if defined(_MSC_VER)
# include <io.h>
# include <windows.h>
#endif
/* For printing timestamp. */
#define __STDC_FORMAT_MACROS
#include <inttypes.h>
/* Only other dependency (could use regular malloc too). */
#include "MEM_guardedalloc.h"
@@ -76,23 +65,15 @@ typedef struct CLG_IDFilter {
typedef struct CLogContext {
/** Single linked list of types. */
CLG_LogType *types;
#ifdef WITH_CLOG_PTHREADS
pthread_mutex_t types_lock;
#endif
/* exclude, include filters. */
CLG_IDFilter *filters[2];
bool use_color;
bool use_basename;
bool use_timestamp;
/** Borrowed, not owned. */
int output;
FILE *output_file;
/** For timer (use_timestamp). */
uint64_t timestamp_tick_start;
/** For new types. */
struct {
int level;
@@ -365,35 +346,12 @@ static void clg_ctx_backtrace(CLogContext *ctx)
fflush(ctx->output_file);
}
static uint64_t clg_timestamp_ticks_get(void)
{
uint64_t tick;
#if defined(_MSC_VER)
tick = GetTickCount64();
#else
struct timeval tv;
gettimeofday(&tv, NULL);
tick = tv.tv_sec * 1000 + tv.tv_usec / 1000;
#endif
return tick;
}
/** \} */
/* -------------------------------------------------------------------- */
/** \name Logging API
* \{ */
static void write_timestamp(CLogStringBuf *cstr, const uint64_t timestamp_tick_start)
{
char timestamp_str[64];
const uint64_t timestamp = clg_timestamp_ticks_get() - timestamp_tick_start;
const uint timestamp_len = snprintf(
timestamp_str, sizeof(timestamp_str), "%" PRIu64 ".%03u ",
timestamp / 1000, (uint)(timestamp % 1000));
clg_str_append_with_len(cstr, timestamp_str, timestamp_len);
}
static void write_severity(CLogStringBuf *cstr, enum CLG_Severity severity, bool use_color)
{
assert((unsigned int)severity < CLG_SEVERITY_LEN);
@@ -445,10 +403,6 @@ void CLG_log_str(
char cstr_stack_buf[CLOG_BUF_LEN_INIT];
clg_str_init(&cstr, cstr_stack_buf, sizeof(cstr_stack_buf));
if (lg->ctx->use_timestamp) {
write_timestamp(&cstr, lg->ctx->timestamp_tick_start);
}
write_severity(&cstr, severity, lg->ctx->use_color);
write_type(&cstr, lg);
@@ -481,10 +435,6 @@ void CLG_logf(
char cstr_stack_buf[CLOG_BUF_LEN_INIT];
clg_str_init(&cstr, cstr_stack_buf, sizeof(cstr_stack_buf));
if (lg->ctx->use_timestamp) {
write_timestamp(&cstr, lg->ctx->timestamp_tick_start);
}
write_severity(&cstr, severity, lg->ctx->use_color);
write_type(&cstr, lg);
@@ -533,14 +483,6 @@ static void CLG_ctx_output_use_basename_set(CLogContext *ctx, int value)
ctx->use_basename = (bool)value;
}
static void CLG_ctx_output_use_timestamp_set(CLogContext *ctx, int value)
{
ctx->use_timestamp = (bool)value;
if (ctx->use_timestamp) {
ctx->timestamp_tick_start = clg_timestamp_ticks_get();
}
}
/** Action on fatal severity. */
static void CLG_ctx_fatal_fn_set(CLogContext *ctx, void (*fatal_fn)(void *file_handle))
{
@@ -585,9 +527,6 @@ static void CLG_ctx_level_set(CLogContext *ctx, int level)
static CLogContext *CLG_ctx_init(void)
{
CLogContext *ctx = MEM_callocN(sizeof(*ctx), __func__);
#ifdef WITH_CLOG_PTHREADS
pthread_mutex_init(&ctx->types_lock, NULL);
#endif
ctx->use_color = true;
ctx->default_type.level = 1;
CLG_ctx_output_set(ctx, stdout);
@@ -610,9 +549,6 @@ static void CLG_ctx_free(CLogContext *ctx)
MEM_freeN(item);
}
}
#ifdef WITH_CLOG_PTHREADS
pthread_mutex_destroy(&ctx->types_lock);
#endif
MEM_freeN(ctx);
}
@@ -649,10 +585,6 @@ void CLG_output_use_basename_set(int value)
CLG_ctx_output_use_basename_set(g_ctx, value);
}
void CLG_output_use_timestamp_set(int value)
{
CLG_ctx_output_use_timestamp_set(g_ctx, value);
}
void CLG_fatal_fn_set(void (*fatal_fn)(void *file_handle))
{
@@ -689,24 +621,9 @@ void CLG_level_set(int level)
void CLG_logref_init(CLG_LogRef *clg_ref)
{
#ifdef WITH_CLOG_PTHREADS
/* Only runs once when initializing a static type in most cases. */
pthread_mutex_lock(&g_ctx->types_lock);
#endif
if (clg_ref->type == NULL) {
CLG_LogType *clg_ty = clg_ctx_type_find_by_name(g_ctx, clg_ref->identifier);
if (clg_ty == NULL) {
clg_ty = clg_ctx_type_register(g_ctx, clg_ref->identifier);
}
#ifdef WITH_CLOG_PTHREADS
atomic_cas_ptr((void **)&clg_ref->type, clg_ref->type, clg_ty);
#else
clg_ref->type = clg_ty;
#endif
}
#ifdef WITH_CLOG_PTHREADS
pthread_mutex_unlock(&g_ctx->types_lock);
#endif
assert(clg_ref->type == NULL);
CLG_LogType *clg_ty = clg_ctx_type_find_by_name(g_ctx, clg_ref->identifier);
clg_ref->type = clg_ty ? clg_ty : clg_ctx_type_register(g_ctx, clg_ref->identifier);
}
/** \} */

View File

@@ -36,7 +36,7 @@ if(WITH_CYCLES_OSL)
endif()
if(NOT CYCLES_STANDALONE_REPOSITORY)
list(APPEND LIBRARIES bf_intern_glew_mx bf_intern_guardedalloc bf_intern_numaapi)
list(APPEND LIBRARIES bf_intern_glew_mx bf_intern_guardedalloc)
endif()
if(WITH_CYCLES_LOGGING)

View File

@@ -63,7 +63,7 @@ public:
bool fast_math;
};
static bool compile_cuda(CompilationSettings &settings)
bool compile_cuda(CompilationSettings &settings)
{
const char* headers[] = {"stdlib.h" , "float.h", "math.h", "stdio.h"};
const char* header_content[] = {"\n", "\n", "\n", "\n"};
@@ -99,7 +99,7 @@ static bool compile_cuda(CompilationSettings &settings)
headers); // includeNames
if(result != NVRTC_SUCCESS) {
fprintf(stderr, "Error: nvrtcCreateProgram failed (%d)\n\n", (int)result);
fprintf(stderr, "Error: nvrtcCreateProgram failed (%x)\n\n", result);
return false;
}
@@ -112,7 +112,7 @@ static bool compile_cuda(CompilationSettings &settings)
result = nvrtcCompileProgram(prog, options.size(), &opts[0]);
if(result != NVRTC_SUCCESS) {
fprintf(stderr, "Error: nvrtcCompileProgram failed (%d)\n\n", (int)result);
fprintf(stderr, "Error: nvrtcCompileProgram failed (%x)\n\n", result);
size_t log_size;
nvrtcGetProgramLogSize(prog, &log_size);
@@ -128,14 +128,14 @@ static bool compile_cuda(CompilationSettings &settings)
size_t ptx_size;
result = nvrtcGetPTXSize(prog, &ptx_size);
if(result != NVRTC_SUCCESS) {
fprintf(stderr, "Error: nvrtcGetPTXSize failed (%d)\n\n", (int)result);
fprintf(stderr, "Error: nvrtcGetPTXSize failed (%x)\n\n", result);
return false;
}
vector<char> ptx_code(ptx_size);
result = nvrtcGetPTX(prog, &ptx_code[0]);
if(result != NVRTC_SUCCESS) {
fprintf(stderr, "Error: nvrtcGetPTX failed (%d)\n\n", (int)result);
fprintf(stderr, "Error: nvrtcGetPTX failed (%x)\n\n", result);
return false;
}
@@ -148,7 +148,7 @@ static bool compile_cuda(CompilationSettings &settings)
return true;
}
static bool link_ptxas(CompilationSettings &settings)
bool link_ptxas(CompilationSettings &settings)
{
string cudapath = "";
if(settings.cuda_toolkit_dir.size())
@@ -166,7 +166,7 @@ static bool link_ptxas(CompilationSettings &settings)
int pxresult = system(ptx.c_str());
if(pxresult) {
fprintf(stderr, "Error: ptxas failed (%d)\n\n", pxresult);
fprintf(stderr, "Error: ptxas failed (%x)\n\n", pxresult);
return false;
}
@@ -177,19 +177,17 @@ static bool link_ptxas(CompilationSettings &settings)
return true;
}
static bool init(CompilationSettings &settings)
bool init(CompilationSettings &settings)
{
#ifdef _MSC_VER
if(settings.cuda_toolkit_dir.size()) {
SetDllDirectory((settings.cuda_toolkit_dir + "/bin").c_str());
}
#else
(void)settings;
#endif
int cuewresult = cuewInit(CUEW_INIT_NVRTC);
if(cuewresult != CUEW_SUCCESS) {
fprintf(stderr, "Error: cuew init fialed (0x%d)\n\n", cuewresult);
fprintf(stderr, "Error: cuew init fialed (0x%x)\n\n", cuewresult);
return false;
}
@@ -231,7 +229,7 @@ static bool init(CompilationSettings &settings)
return true;
}
static bool parse_parameters(int argc, const char **argv, CompilationSettings &settings)
bool parse_parameters(int argc, const char **argv, CompilationSettings &settings)
{
OIIO::ArgParse ap;
ap.options("Usage: cycles_cubin_cc [options]",

View File

@@ -363,8 +363,13 @@ static void options_parse(int argc, const char **argv)
string devicename = "CPU";
bool list = false;
/* List devices for which support is compiled in. */
vector<DeviceType> types = Device::available_types();
vector<DeviceType>& types = Device::available_types();
/* TODO(sergey): Here's a feedback loop happens: on the one hand we want
* the device list to be printed in help message, on the other hand logging
* is not initialized yet so we wouldn't have debug log happening in the
* device initialization.
*/
foreach(DeviceType type, types) {
if(device_names != "")
device_names += ", ";
@@ -416,7 +421,7 @@ static void options_parse(int argc, const char **argv)
}
if(list) {
vector<DeviceInfo> devices = Device::available_devices();
vector<DeviceInfo>& devices = Device::available_devices();
printf("Devices:\n");
foreach(DeviceInfo& info, devices) {
@@ -451,12 +456,15 @@ static void options_parse(int argc, const char **argv)
/* find matching device */
DeviceType device_type = Device::type_from_string(devicename.c_str());
vector<DeviceInfo> devices = Device::available_devices(DEVICE_MASK(device_type));
vector<DeviceInfo>& devices = Device::available_devices();
bool device_available = false;
if (!devices.empty()) {
options.session_params.device = devices.front();
device_available = true;
foreach(DeviceInfo& device, devices) {
if(device_type == device.type) {
options.session_params.device = device;
device_available = true;
break;
}
}
/* handle invalid configurations */

View File

@@ -17,7 +17,6 @@ set(INC_SYS
set(SRC
blender_camera.cpp
blender_device.cpp
blender_mesh.cpp
blender_object.cpp
blender_object_cull.cpp
@@ -41,7 +40,6 @@ set(SRC
set(ADDON_FILES
addon/__init__.py
addon/engine.py
addon/operators.py
addon/osl.py
addon/presets.py
addon/properties.py
@@ -51,10 +49,6 @@ set(ADDON_FILES
add_definitions(${GL_DEFINITIONS})
if(WITH_CYCLES_DEVICE_OPENCL)
add_definitions(-DWITH_OPENCL)
endif()
if(WITH_CYCLES_NETWORK)
add_definitions(-DWITH_NETWORK)
endif()

View File

@@ -37,8 +37,6 @@ if "bpy" in locals():
importlib.reload(version_update)
if "ui" in locals():
importlib.reload(ui)
if "operators" in locals():
importlib.reload(operators)
if "properties" in locals():
importlib.reload(properties)
if "presets" in locals():
@@ -121,7 +119,6 @@ classes = (
def register():
from bpy.utils import register_class
from . import ui
from . import operators
from . import properties
from . import presets
import atexit
@@ -134,7 +131,6 @@ def register():
properties.register()
ui.register()
operators.register()
presets.register()
for cls in classes:
@@ -146,7 +142,6 @@ def register():
def unregister():
from bpy.utils import unregister_class
from . import ui
from . import operators
from . import properties
from . import presets
import atexit
@@ -154,7 +149,6 @@ def unregister():
bpy.app.handlers.version_update.remove(version_update.do_versions)
ui.unregister()
operators.unregister()
properties.unregister()
presets.unregister()

View File

@@ -270,11 +270,14 @@ def register_passes(engine, scene, srl):
engine.register_pass(scene, srl, "Noisy Image", 4, "RGBA", 'COLOR')
if crl.denoising_store_passes:
engine.register_pass(scene, srl, "Denoising Normal", 3, "XYZ", 'VECTOR')
engine.register_pass(scene, srl, "Denoising Normal Variance", 3, "XYZ", 'VECTOR')
engine.register_pass(scene, srl, "Denoising Albedo", 3, "RGB", 'COLOR')
engine.register_pass(scene, srl, "Denoising Albedo Variance", 3, "RGB", 'COLOR')
engine.register_pass(scene, srl, "Denoising Depth", 1, "Z", 'VALUE')
engine.register_pass(scene, srl, "Denoising Shadowing", 1, "X", 'VALUE')
engine.register_pass(scene, srl, "Denoising Variance", 3, "RGB", 'COLOR')
engine.register_pass(scene, srl, "Denoising Intensity", 1, "X", 'VALUE')
engine.register_pass(scene, srl, "Denoising Depth Variance", 1, "Z", 'VALUE')
engine.register_pass(scene, srl, "Denoising Shadow A", 3, "XYV", 'VECTOR')
engine.register_pass(scene, srl, "Denoising Shadow B", 3, "XYV", 'VECTOR')
engine.register_pass(scene, srl, "Denoising Image Variance", 3, "RGB", 'COLOR')
clean_options = ("denoising_diffuse_direct", "denoising_diffuse_indirect",
"denoising_glossy_direct", "denoising_glossy_indirect",
"denoising_transmission_direct", "denoising_transmission_indirect",

View File

@@ -1,141 +0,0 @@
#
# Copyright 2011-2019 Blender Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# <pep8 compliant>

import bpy
from bpy.types import Operator
from bpy.props import StringProperty


class CYCLES_OT_use_shading_nodes(Operator):
    """Enable nodes on a material, world or light"""
    bl_idname = "cycles.use_shading_nodes"
    bl_label = "Use Nodes"

    @classmethod
    def poll(cls, context):
        return (getattr(context, "material", False) or getattr(context, "world", False) or
                getattr(context, "light", False))

    def execute(self, context):
        if context.material:
            context.material.use_nodes = True
        elif context.world:
            context.world.use_nodes = True
        elif context.light:
            context.light.use_nodes = True

        return {'FINISHED'}


class CYCLES_OT_denoise_animation(Operator):
    "Denoise rendered animation sequence using current scene and view " \
    "layer settings. Requires denoising data passes and output to " \
    "OpenEXR multilayer files"
    bl_idname = "cycles.denoise_animation"
    bl_label = "Denoise Animation"

    input_filepath: StringProperty(
        name='Input Filepath',
        description='File path for image to denoise. If not specified, uses the render file path and frame range from the scene',
        default='',
        subtype='FILE_PATH')

    output_filepath: StringProperty(
        name='Output Filepath',
        description='If not specified, renders will be denoised in-place',
        default='',
        subtype='FILE_PATH')

    def execute(self, context):
        import os

        preferences = context.preferences
        scene = context.scene
        view_layer = context.view_layer

        in_filepath = self.input_filepath
        out_filepath = self.output_filepath

        in_filepaths = []
        out_filepaths = []

        if in_filepath != '':
            # Denoise a single file
            if out_filepath == '':
                out_filepath = in_filepath
            in_filepaths.append(in_filepath)
            out_filepaths.append(out_filepath)
        else:
            # Denoise animation sequence with expanded frames matching
            # Blender render output file naming.
            in_filepath = scene.render.filepath
            if out_filepath == '':
                out_filepath = in_filepath

            # Backup since we will overwrite the scene path temporarily
            original_filepath = scene.render.filepath

            for frame in range(scene.frame_start, scene.frame_end + 1):
                scene.render.filepath = in_filepath
                filepath = scene.render.frame_path(frame=frame)
                in_filepaths.append(filepath)

                if not os.path.isfile(filepath):
                    scene.render.filepath = original_filepath
                    self.report({'ERROR'}, f"Frame '{filepath}' not found, animation must be complete.")
                    return {'CANCELLED'}

                scene.render.filepath = out_filepath
                filepath = scene.render.frame_path(frame=frame)
                out_filepaths.append(filepath)

            scene.render.filepath = original_filepath

        # Run denoiser
        # TODO: support cancel and progress reports.
        import _cycles
        try:
            _cycles.denoise(preferences.as_pointer(),
                            scene.as_pointer(),
                            view_layer.as_pointer(),
                            input=in_filepaths,
                            output=out_filepaths)
        except Exception as e:
            self.report({'ERROR'}, str(e))
            return {'FINISHED'}

        self.report({'INFO'}, "Denoising completed.")
        return {'FINISHED'}


classes = (
    CYCLES_OT_use_shading_nodes,
    CYCLES_OT_denoise_animation
)


def register():
    from bpy.utils import register_class
    for cls in classes:
        register_class(cls)


def unregister():
    from bpy.utils import unregister_class
    for cls in classes:
        unregister_class(cls)
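
The CYCLES_OT_denoise_animation operator above is driven by its two string properties, falling back to the scene's render path and frame range when input_filepath is empty. A minimal usage sketch from Blender's Python console (file paths are placeholders; the scene is assumed to have been rendered to OpenEXR multilayer files with denoising data passes enabled):

import bpy

# Denoise the whole animation in-place, using the scene's render path and frame range.
bpy.ops.cycles.denoise_animation()

# Or denoise a single frame into a separate file (paths are placeholders).
bpy.ops.cycles.denoise_animation(
    input_filepath="//render/frame_0001.exr",
    output_filepath="//render/frame_0001_denoised.exr",
)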

View File

@@ -24,7 +24,7 @@ class AddPresetIntegrator(AddPresetBase, Operator):
'''Add an Integrator Preset'''
bl_idname = "render.cycles_integrator_preset_add"
bl_label = "Add Integrator Preset"
preset_menu = "CYCLES_PT_integrator_presets"
preset_menu = "CYCLES_MT_integrator_presets"
preset_defines = [
"cycles = bpy.context.scene.cycles"
@@ -49,7 +49,7 @@ class AddPresetSampling(AddPresetBase, Operator):
'''Add a Sampling Preset'''
bl_idname = "render.cycles_sampling_preset_add"
bl_label = "Add Sampling Preset"
preset_menu = "CYCLES_PT_sampling_presets"
preset_menu = "CYCLES_MT_sampling_presets"
preset_defines = [
"cycles = bpy.context.scene.cycles"

View File

@@ -360,8 +360,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
description="Distance between volume shader samples when rendering the volume "
"(lower values give more accurate and detailed results, but also increased render time)",
default=0.1,
min=0.0000001, max=100000.0, soft_min=0.01, soft_max=1.0, precision=4,
unit='LENGTH'
min=0.0000001, max=100000.0, soft_min=0.01, soft_max=1.0, precision=4
)
volume_max_steps: IntProperty(
@@ -377,14 +376,14 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
description="Size of a micropolygon in pixels",
min=0.1, max=1000.0, soft_min=0.5,
default=1.0,
subtype='PIXEL'
subtype="PIXEL"
)
preview_dicing_rate: FloatProperty(
name="Preview Dicing Rate",
description="Size of a micropolygon in pixels during preview render",
min=0.1, max=1000.0, soft_min=0.5,
default=8.0,
subtype='PIXEL'
subtype="PIXEL"
)
max_subdivisions: IntProperty(
@@ -457,7 +456,6 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
description="Pixel filter width",
min=0.01, max=10.0,
default=1.5,
subtype='PIXEL'
)
seed: IntProperty(
@@ -504,7 +502,6 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
"progressively increasing it to the full viewport size",
min=8, max=16384,
default=64,
subtype='PIXEL'
)
debug_reset_timeout: FloatProperty(
@@ -599,8 +596,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
name="Camera Cull Margin",
description="Margin for the camera space culling",
default=0.1,
min=0.0, max=5.0,
subtype='FACTOR'
min=0.0, max=5.0
)
use_distance_cull: BoolProperty(
@@ -613,8 +609,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
name="Cull Distance",
description="Cull objects which are further away from camera than this distance",
default=50,
min=0.0,
unit='LENGTH'
min=0.0
)
motion_blur_position: EnumProperty(
@@ -644,7 +639,6 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
description="Scanline \"exposure\" time for the rolling shutter effect",
default=0.1,
min=0.0, max=1.0,
subtype='FACTOR',
)
texture_limit: EnumProperty(
@@ -724,7 +718,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
debug_opencl_kernel_single_program: BoolProperty(
name="Single Program",
default=False,
default=True,
update=_devices_update_callback,
)
del _devices_update_callback
@@ -895,7 +889,7 @@ class CyclesMaterialSettings(bpy.types.PropertyGroup):
name="Displacement Method",
description="Method to use for the displacement",
items=enum_displacement_methods,
default='BUMP',
default='DISPLACEMENT',
)
@classmethod
@@ -1202,14 +1196,12 @@ class CyclesCurveRenderSettings(bpy.types.PropertyGroup):
description="Minimal pixel width for strands (0 - deactivated)",
min=0.0, max=100.0,
default=0.0,
subtype='PIXEL'
)
maximum_width: FloatProperty(
name="Maximal width",
description="Maximum extension that strand radius can be increased by",
min=0.0, max=100.0,
default=0.1,
subtype='PIXEL'
)
subdivisions: IntProperty(
name="Subdivisions",
@@ -1344,7 +1336,6 @@ class CyclesRenderLayerSettings(bpy.types.PropertyGroup):
description="Size of the image area that's used to denoise a pixel (higher values are smoother, but might lose detail and are slower)",
min=1, max=25,
default=8,
subtype="PIXEL",
)
denoising_relative_pca: BoolProperty(
name="Relative filter",
@@ -1357,12 +1348,6 @@ class CyclesRenderLayerSettings(bpy.types.PropertyGroup):
default=False,
update=update_render_passes,
)
denoising_neighbor_frames: IntProperty(
name="Neighbor Frames",
description="Number of neighboring frames to use for denoising animations (more frames produce smoother results at the cost of performance)",
min=0, max=7,
default=0,
)
use_pass_crypto_object: BoolProperty(
name="Cryptomatte Object",
description="Render cryptomatte object pass, for isolating objects in compositing",
@@ -1461,7 +1446,7 @@ class CyclesPreferences(bpy.types.AddonPreferences):
def get_devices(self):
import _cycles
# Layout of the device tuples: (Name, Type, Persistent ID)
device_list = _cycles.available_devices(self.compute_device_type)
device_list = _cycles.available_devices()
# Make sure device entries are up to date and not referenced before
# we know we don't add new devices. This way we guarantee to not
# hold pointers to a resized array.
@@ -1485,7 +1470,7 @@ class CyclesPreferences(bpy.types.AddonPreferences):
def get_num_gpu_devices(self):
import _cycles
device_list = _cycles.available_devices(self.compute_device_type)
device_list = _cycles.available_devices()
num = 0
for device in device_list:
if device[1] != self.compute_device_type:
@@ -1498,32 +1483,25 @@ class CyclesPreferences(bpy.types.AddonPreferences):
def has_active_device(self):
return self.get_num_gpu_devices() > 0
def _draw_devices(self, layout, device_type, devices):
box = layout.box()
found_device = False
for device in devices:
if device.type == device_type:
found_device = True
break
if not found_device:
box.label(text="No compatible GPUs found", icon='INFO')
return
for device in devices:
box.prop(device, "use", text=device.name)
def draw_impl(self, layout, context):
row = layout.row()
row.prop(self, "compute_device_type", expand=True)
available_device_types = self.get_device_types(context)
if len(available_device_types) == 1:
layout.label(text="No compatible GPUs found", icon='INFO')
return
layout.row().prop(self, "compute_device_type", expand=True)
cuda_devices, opencl_devices = self.get_devices()
row = layout.row()
if self.compute_device_type == 'CUDA':
self._draw_devices(row, 'CUDA', cuda_devices)
elif self.compute_device_type == 'OPENCL':
self._draw_devices(row, 'OPENCL', opencl_devices)
if self.compute_device_type == 'CUDA' and cuda_devices:
box = row.box()
for device in cuda_devices:
box.prop(device, "use", text=device.name)
if self.compute_device_type == 'OPENCL' and opencl_devices:
box = row.box()
for device in opencl_devices:
box.prop(device, "use", text=device.name)
def draw(self, context):
self.draw_impl(self.layout, context)
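
The get_devices() and get_num_gpu_devices() hunks above rely on the tuple layout noted in the comment, (Name, Type, Persistent ID). A small sketch of filtering that list by type from inside Blender, using the argument-less available_devices() form that appears on one side of the hunk ('CUDA' is just an example value):

import _cycles

def device_names_of_type(device_type):
    # Each entry is a (name, type, persistent id) tuple, per the comment above.
    devices = _cycles.available_devices()
    return [device[0] for device in devices if device[1] == device_type]

print(device_names_of_type('CUDA'))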

View File

@@ -19,8 +19,12 @@
import bpy
from bpy_extras.node_utils import find_node_input
from bl_operators.presets import PresetMenu
import _cycles
from bpy.types import Panel
from bpy.types import (
Panel,
Operator,
)
class CYCLES_PT_sampling_presets(PresetMenu):
@@ -632,8 +636,6 @@ class CYCLES_RENDER_PT_performance_acceleration_structure(CyclesButtonsPanel, Pa
bl_parent_id = "CYCLES_RENDER_PT_performance"
def draw(self, context):
import _cycles
layout = self.layout
layout.use_property_split = True
layout.use_property_decorate = False
@@ -1275,6 +1277,27 @@ class CYCLES_OBJECT_PT_cycles_settings_performance(CyclesButtonsPanel, Panel):
col.prop(cob, "use_distance_cull")
class CYCLES_OT_use_shading_nodes(Operator):
"""Enable nodes on a material, world or light"""
bl_idname = "cycles.use_shading_nodes"
bl_label = "Use Nodes"
@classmethod
def poll(cls, context):
return (getattr(context, "material", False) or getattr(context, "world", False) or
getattr(context, "light", False))
def execute(self, context):
if context.material:
context.material.use_nodes = True
elif context.world:
context.world.use_nodes = True
elif context.light:
context.light.use_nodes = True
return {'FINISHED'}
def panel_node_draw(layout, id_data, output_type, input_name):
if not id_data.use_nodes:
layout.operator("cycles.use_shading_nodes", icon='NODETREE')
@@ -1743,41 +1766,6 @@ class CYCLES_RENDER_PT_bake(CyclesButtonsPanel, Panel):
bl_options = {'DEFAULT_CLOSED'}
COMPAT_ENGINES = {'CYCLES'}
def draw(self, context):
layout = self.layout
layout.use_property_split = True
layout.use_property_decorate = False # No animation.
scene = context.scene
cscene = scene.cycles
cbk = scene.render.bake
rd = scene.render
if rd.use_bake_multires:
layout.operator("object.bake_image", icon='RENDER_STILL')
layout.prop(rd, "use_bake_multires")
layout.prop(rd, "bake_type")
else:
layout.operator("object.bake", icon='RENDER_STILL').type = cscene.bake_type
layout.prop(rd, "use_bake_multires")
layout.prop(cscene, "bake_type")
class CYCLES_RENDER_PT_bake_influence(CyclesButtonsPanel, Panel):
bl_label = "Influence"
bl_context = "render"
bl_parent_id = "CYCLES_RENDER_PT_bake"
COMPAT_ENGINES = {'CYCLES'}
@classmethod
def poll(cls, context):
scene = context.scene
cscene = scene.cycles
rd = scene.render
if rd.use_bake_multires == False and cscene.bake_type in {
'NORMAL', 'COMBINED', 'DIFFUSE', 'GLOSSY', 'TRANSMISSION', 'SUBSURFACE'}:
return True
def draw(self, context):
layout = self.layout
layout.use_property_split = True
@@ -1789,104 +1777,75 @@ class CYCLES_RENDER_PT_bake_influence(CyclesButtonsPanel, Panel):
rd = scene.render
col = layout.column()
if cscene.bake_type == 'NORMAL':
col.prop(cbk, "normal_space", text="Space")
sub = col.column(align=True)
sub.prop(cbk, "normal_r", text="Swizzle R")
sub.prop(cbk, "normal_g", text="G")
sub.prop(cbk, "normal_b", text="B")
elif cscene.bake_type == 'COMBINED':
row = col.row(align=True)
row.use_property_split = False
row.prop(cbk, "use_pass_direct", toggle=True)
row.prop(cbk, "use_pass_indirect", toggle=True)
flow = col.grid_flow(row_major=False, columns=0, even_columns=False, even_rows=False, align=True)
flow.active = cbk.use_pass_direct or cbk.use_pass_indirect
flow.prop(cbk, "use_pass_diffuse")
flow.prop(cbk, "use_pass_glossy")
flow.prop(cbk, "use_pass_transmission")
flow.prop(cbk, "use_pass_subsurface")
flow.prop(cbk, "use_pass_ambient_occlusion")
flow.prop(cbk, "use_pass_emit")
elif cscene.bake_type in {'DIFFUSE', 'GLOSSY', 'TRANSMISSION', 'SUBSURFACE'}:
row = col.row(align=True)
row.use_property_split = False
row.prop(cbk, "use_pass_direct", toggle=True)
row.prop(cbk, "use_pass_indirect", toggle=True)
row.prop(cbk, "use_pass_color", toggle=True)
class CYCLES_RENDER_PT_bake_selected_to_active(CyclesButtonsPanel, Panel):
bl_label = "Selected to Active"
bl_context = "render"
bl_parent_id = "CYCLES_RENDER_PT_bake"
bl_options = {'DEFAULT_CLOSED'}
COMPAT_ENGINES = {'CYCLES'}
@classmethod
def poll(cls, context):
scene = context.scene
rd = scene.render
return rd.use_bake_multires == False
def draw_header(self, context):
scene = context.scene
cbk = scene.render.bake
self.layout.prop(cbk, "use_selected_to_active", text="")
def draw(self, context):
layout = self.layout
layout.use_property_split = True
layout.use_property_decorate = False # No animation.
scene = context.scene
cscene = scene.cycles
cbk = scene.render.bake
rd = scene.render
layout.active = cbk.use_selected_to_active
col = layout.column()
col.prop(cbk, "use_cage", text="Cage")
if cbk.use_cage:
col.prop(cbk, "cage_extrusion", text="Extrusion")
col.prop(cbk, "cage_object", text="Cage Object")
else:
col.prop(cbk, "cage_extrusion", text="Ray Distance")
class CYCLES_RENDER_PT_bake_output(CyclesButtonsPanel, Panel):
bl_label = "Output"
bl_context = "render"
bl_parent_id = "CYCLES_RENDER_PT_bake"
COMPAT_ENGINES = {'CYCLES'}
def draw(self, context):
layout = self.layout
layout.use_property_split = True
layout.use_property_decorate = False # No animation.
scene = context.scene
cscene = scene.cycles
cbk = scene.render.bake
rd = scene.render
col.prop(rd, "use_bake_multires")
if rd.use_bake_multires:
layout.prop(rd, "bake_margin")
layout.prop(rd, "use_bake_clear", text="Clear Image")
col.prop(rd, "bake_type")
col = layout.column()
col.prop(rd, "bake_margin")
col.prop(rd, "use_bake_clear")
if rd.bake_type == 'DISPLACEMENT':
col.prop(rd, "use_bake_lores_mesh")
else:
layout.prop(cbk, "margin")
layout.prop(cbk, "use_clear", text="Clear Image")
col.operator("object.bake_image", icon='RENDER_STILL')
else:
col.prop(cscene, "bake_type")
col = layout.column()
if cscene.bake_type == 'NORMAL':
col.prop(cbk, "normal_space", text="Space")
sub = col.column(align=True)
sub.prop(cbk, "normal_r", text="Swizzle R")
sub.prop(cbk, "normal_g", text="G")
sub.prop(cbk, "normal_b", text="B")
elif cscene.bake_type == 'COMBINED':
row = col.row(align=True)
row.use_property_split = False
row.prop(cbk, "use_pass_direct", toggle=True)
row.prop(cbk, "use_pass_indirect", toggle=True)
col = col.column()
col.active = cbk.use_pass_direct or cbk.use_pass_indirect
col.prop(cbk, "use_pass_diffuse")
col.prop(cbk, "use_pass_glossy")
col.prop(cbk, "use_pass_transmission")
col.prop(cbk, "use_pass_subsurface")
col.prop(cbk, "use_pass_ambient_occlusion")
col.prop(cbk, "use_pass_emit")
elif cscene.bake_type in {'DIFFUSE', 'GLOSSY', 'TRANSMISSION', 'SUBSURFACE'}:
row = col.row(align=True)
row.use_property_split = False
row.prop(cbk, "use_pass_direct", toggle=True)
row.prop(cbk, "use_pass_indirect", toggle=True)
row.prop(cbk, "use_pass_color", toggle=True)
layout.separator()
col = layout.column()
col.prop(cbk, "margin")
col.prop(cbk, "use_clear", text="Clear Image")
col.separator()
col.prop(cbk, "use_selected_to_active")
sub = col.column()
sub.active = cbk.use_selected_to_active
sub.prop(cbk, "use_cage", text="Cage")
if cbk.use_cage:
sub.prop(cbk, "cage_extrusion", text="Extrusion")
sub.prop(cbk, "cage_object", text="Cage Object")
else:
sub.prop(cbk, "cage_extrusion", text="Ray Distance")
layout.separator()
layout.operator("object.bake", icon='RENDER_STILL').type = cscene.bake_type
class CYCLES_RENDER_PT_debug(CyclesButtonsPanel, Panel):
@@ -1927,7 +1886,8 @@ class CYCLES_RENDER_PT_debug(CyclesButtonsPanel, Panel):
col.separator()
col = layout.column()
col.label(text='OpenCL Flags:')
col.label(text="OpenCL Flags:")
col.prop(cscene, "debug_opencl_kernel_type", text="Kernel")
col.prop(cscene, "debug_opencl_device_type", text="Device")
col.prop(cscene, "debug_opencl_kernel_single_program", text="Single Program")
col.prop(cscene, "debug_use_opencl_debug", text="Debug")
@@ -1975,7 +1935,7 @@ class CYCLES_RENDER_PT_simplify_viewport(CyclesButtonsPanel, Panel):
col.prop(rd, "simplify_child_particles", text="Child Particles")
col.prop(cscene, "texture_limit", text="Texture Limit")
col.prop(cscene, "ao_bounces", text="AO Bounces")
col.prop(rd, "use_simplify_smoke_highres")
class CYCLES_RENDER_PT_simplify_render(CyclesButtonsPanel, Panel):
bl_label = "Render"
@@ -2176,6 +2136,7 @@ classes = (
CYCLES_OBJECT_PT_cycles_settings,
CYCLES_OBJECT_PT_cycles_settings_ray_visibility,
CYCLES_OBJECT_PT_cycles_settings_performance,
CYCLES_OT_use_shading_nodes,
CYCLES_LIGHT_PT_preview,
CYCLES_LIGHT_PT_light,
CYCLES_LIGHT_PT_nodes,
@@ -2197,9 +2158,6 @@ classes = (
CYCLES_MATERIAL_PT_settings_surface,
CYCLES_MATERIAL_PT_settings_volume,
CYCLES_RENDER_PT_bake,
CYCLES_RENDER_PT_bake_influence,
CYCLES_RENDER_PT_bake_selected_to_active,
CYCLES_RENDER_PT_bake_output,
CYCLES_RENDER_PT_debug,
CYCLES_NODE_PT_settings,
CYCLES_NODE_PT_settings_surface,

View File

@@ -22,39 +22,50 @@ import math
from bpy.app.handlers import persistent
def foreach_cycles_nodetree_group(nodetree, traversed):
def foreach_notree_node(nodetree, callback, traversed):
if nodetree in traversed:
return
traversed.add(nodetree)
for node in nodetree.nodes:
callback(node)
if node.bl_idname == 'ShaderNodeGroup':
foreach_notree_node(node.node_tree, callback, traversed)
def foreach_cycles_node(callback):
traversed = set()
for material in bpy.data.materials:
if material.node_tree:
foreach_notree_node(
material.node_tree,
callback,
traversed,
)
for world in bpy.data.worlds:
if world.node_tree:
foreach_notree_node(
world.node_tree,
callback,
traversed,
)
for light in bpy.data.lights:
if light.node_tree:
foreach_notree_node(
light.node_tree,
callback,
traversed,
)
def displacement_node_insert(material, nodetree, traversed):
if nodetree in traversed:
return
traversed.add(nodetree)
for node in nodetree.nodes:
if node.bl_idname == 'ShaderNodeGroup':
group = node.node_tree
if group and group not in traversed:
traversed.add(group)
yield group, group.library
yield from foreach_cycles_nodetree_group(group, traversed)
displacement_node_insert(material, node.node_tree, traversed)
def foreach_cycles_nodetree():
traversed = set()
for material in bpy.data.materials:
nodetree = material.node_tree
if nodetree:
yield nodetree, material.library
yield from foreach_cycles_nodetree_group(nodetree, traversed)
for world in bpy.data.worlds:
nodetree = world.node_tree
if nodetree:
yield nodetree, world.library
foreach_cycles_nodetree_group(nodetree, traversed)
for light in bpy.data.lights:
nodetree = light.node_tree
if nodetree:
yield nodetree, light.library
foreach_cycles_nodetree_group(nodetree, traversed)
def displacement_node_insert(nodetree):
# Gather links to replace
displacement_links = []
for link in nodetree.links:
@@ -84,6 +95,13 @@ def displacement_node_insert(nodetree):
nodetree.links.new(node.outputs['Displacement'], to_socket)
def displacement_nodes_insert():
traversed = set()
for material in bpy.data.materials:
if material.node_tree:
displacement_node_insert(material, material.node_tree, traversed)
def displacement_principled_nodes(node):
if node.bl_idname == 'ShaderNodeDisplacement':
if node.space != 'WORLD':
@@ -93,7 +111,11 @@ def displacement_principled_nodes(node):
node.subsurface_method = 'BURLEY'
def square_roughness_node_insert(nodetree):
def square_roughness_node_insert(material, nodetree, traversed):
if nodetree in traversed:
return
traversed.add(nodetree)
roughness_node_types = {
'ShaderNodeBsdfAnisotropic',
'ShaderNodeBsdfGlass',
@@ -102,7 +124,9 @@ def square_roughness_node_insert(nodetree):
# Update default values
for node in nodetree.nodes:
if node.bl_idname in roughness_node_types:
if node.bl_idname == 'ShaderNodeGroup':
square_roughness_node_insert(material, node.node_tree, traversed)
elif node.bl_idname in roughness_node_types:
roughness_input = node.inputs['Roughness']
roughness_input.default_value = math.sqrt(max(roughness_input.default_value, 0.0))
@@ -132,6 +156,13 @@ def square_roughness_node_insert(nodetree):
nodetree.links.new(node.outputs['Value'], to_socket)
def square_roughness_nodes_insert():
traversed = set()
for material in bpy.data.materials:
if material.node_tree:
square_roughness_node_insert(material, material.node_tree, traversed)
def mapping_node_order_flip(node):
"""
Flip euler order of mapping shader node
@@ -213,12 +244,18 @@ def custom_bake_remap(scene):
scene.render.bake.use_pass_indirect = False
def ambient_occlusion_node_relink(nodetree):
def ambient_occlusion_node_relink(material, nodetree, traversed):
if nodetree in traversed:
return
traversed.add(nodetree)
for node in nodetree.nodes:
if node.bl_idname == 'ShaderNodeAmbientOcclusion':
node.samples = 1
node.only_local = False
node.inputs['Distance'].default_value = 0.0
elif node.bl_idname == 'ShaderNodeGroup':
ambient_occlusion_node_relink(material, node.node_tree, traversed)
# Gather links to replace
ao_links = []
@@ -235,6 +272,13 @@ def ambient_occlusion_node_relink(nodetree):
nodetree.links.new(from_node.outputs['Color'], to_socket)
def ambient_occlusion_nodes_relink():
traversed = set()
for material in bpy.data.materials:
if material.node_tree:
ambient_occlusion_node_relink(material, material.node_tree, traversed)
@persistent
def do_versions(self):
if bpy.context.preferences.version <= (2, 78, 1):
@@ -260,199 +304,160 @@ def do_versions(self):
if not bpy.data.is_saved:
return
# Map of versions used by libraries.
library_versions = {}
library_versions[bpy.data.version] = [None]
for library in bpy.data.libraries:
library_versions.setdefault(library.version, []).append(library)
# Do versioning per library, since they might have different versions.
max_need_versioning = (2, 79, 6)
for version, libraries in library_versions.items():
if version > max_need_versioning:
continue
# Scenes
# Clamp Direct/Indirect separation in 270
if bpy.data.version <= (2, 70, 0):
for scene in bpy.data.scenes:
if scene.library not in libraries:
continue
cscene = scene.cycles
sample_clamp = cscene.get("sample_clamp", False)
if (sample_clamp and
not cscene.is_property_set("sample_clamp_direct") and
not cscene.is_property_set("sample_clamp_indirect")):
# Clamp Direct/Indirect separation in 270
if version <= (2, 70, 0):
cscene = scene.cycles
sample_clamp = cscene.get("sample_clamp", False)
if (sample_clamp and
not cscene.is_property_set("sample_clamp_direct") and
not cscene.is_property_set("sample_clamp_indirect")):
cscene.sample_clamp_direct = sample_clamp
cscene.sample_clamp_indirect = sample_clamp
cscene.sample_clamp_direct = sample_clamp
cscene.sample_clamp_indirect = sample_clamp
# Change of Volume Bounces in 271
if version <= (2, 71, 0):
cscene = scene.cycles
if not cscene.is_property_set("volume_bounces"):
cscene.volume_bounces = 1
# Change of Volume Bounces in 271
if bpy.data.version <= (2, 71, 0):
for scene in bpy.data.scenes:
cscene = scene.cycles
if not cscene.is_property_set("volume_bounces"):
cscene.volume_bounces = 1
# Caustics Reflective/Refractive separation in 272
if version <= (2, 72, 0):
cscene = scene.cycles
if (cscene.get("no_caustics", False) and
not cscene.is_property_set("caustics_reflective") and
# Caustics Reflective/Refractive separation in 272
if bpy.data.version <= (2, 72, 0):
for scene in bpy.data.scenes:
cscene = scene.cycles
if (cscene.get("no_caustics", False) and
not cscene.is_property_set("caustics_reflective") and
not cscene.is_property_set("caustics_refractive")):
cscene.caustics_reflective = False
cscene.caustics_refractive = False
# Baking types changed
if version <= (2, 76, 6):
custom_bake_remap(scene)
cscene.caustics_reflective = False
cscene.caustics_refractive = False
# Several default changes for 2.77
if version <= (2, 76, 8):
cscene = scene.cycles
# Euler order was ZYX in previous versions.
if bpy.data.version <= (2, 73, 4):
foreach_cycles_node(mapping_node_order_flip)
# Samples
if not cscene.is_property_set("samples"):
cscene.samples = 10
if bpy.data.version <= (2, 76, 5):
foreach_cycles_node(vector_curve_node_remap)
# Preview Samples
if not cscene.is_property_set("preview_samples"):
cscene.preview_samples = 10
# Baking types changed
if bpy.data.version <= (2, 76, 6):
for scene in bpy.data.scenes:
custom_bake_remap(scene)
# Filter
if not cscene.is_property_set("filter_type"):
cscene.pixel_filter_type = 'GAUSSIAN'
# Several default changes for 2.77
if bpy.data.version <= (2, 76, 8):
for scene in bpy.data.scenes:
cscene = scene.cycles
# Tile Order
if not cscene.is_property_set("tile_order"):
cscene.tile_order = 'CENTER'
# Samples
if not cscene.is_property_set("samples"):
cscene.samples = 10
if version <= (2, 76, 10):
cscene = scene.cycles
if cscene.is_property_set("filter_type"):
if not cscene.is_property_set("pixel_filter_type"):
cscene.pixel_filter_type = cscene.filter_type
if cscene.filter_type == 'BLACKMAN_HARRIS':
cscene.filter_type = 'GAUSSIAN'
# Preview Samples
if not cscene.is_property_set("preview_samples"):
cscene.preview_samples = 10
if version <= (2, 78, 2):
cscene = scene.cycles
if not cscene.is_property_set("light_sampling_threshold"):
cscene.light_sampling_threshold = 0.0
# Filter
if not cscene.is_property_set("filter_type"):
cscene.pixel_filter_type = 'GAUSSIAN'
if version <= (2, 79, 0):
cscene = scene.cycles
# Default changes
if not cscene.is_property_set("aa_samples"):
cscene.aa_samples = 4
if not cscene.is_property_set("preview_aa_samples"):
cscene.preview_aa_samples = 4
if not cscene.is_property_set("blur_glossy"):
cscene.blur_glossy = 0.0
if not cscene.is_property_set("sample_clamp_indirect"):
cscene.sample_clamp_indirect = 0.0
# Tile Order
if not cscene.is_property_set("tile_order"):
cscene.tile_order = 'CENTER'
# Lamps
for light in bpy.data.lights:
if light.library not in libraries:
continue
clight = light.cycles
if version <= (2, 76, 5):
clight = light.cycles
# MIS
if not clight.is_property_set("use_multiple_importance_sampling"):
clight.use_multiple_importance_sampling = False
# MIS
if not clight.is_property_set("use_multiple_importance_sampling"):
clight.use_multiple_importance_sampling = False
# Worlds
for world in bpy.data.worlds:
if world.library not in libraries:
continue
if version <= (2, 76, 9):
cworld = world.cycles
# World MIS Samples
if not cworld.is_property_set("samples"):
cworld.samples = 4
# World MIS Resolution
if not cworld.is_property_set("sample_map_resolution"):
cworld.sample_map_resolution = 256
if version <= (2, 79, 4) or \
(version >= (2, 80, 0) and version <= (2, 80, 18)):
cworld = world.cycles
# World MIS
if not cworld.is_property_set("sampling_method"):
if cworld.get("sample_as_light", True):
cworld.sampling_method = 'MANUAL'
else:
cworld.sampling_method = 'NONE'
# Materials
for mat in bpy.data.materials:
if mat.library not in libraries:
continue
cmat = mat.cycles
if version <= (2, 76, 5):
cmat = mat.cycles
# Volume Sampling
if not cmat.is_property_set("volume_sampling"):
cmat.volume_sampling = 'DISTANCE'
# Volume Sampling
if not cmat.is_property_set("volume_sampling"):
cmat.volume_sampling = 'DISTANCE'
if version <= (2, 79, 2):
cmat = mat.cycles
if not cmat.is_property_set("displacement_method"):
cmat.displacement_method = 'BUMP'
if bpy.data.version <= (2, 76, 9):
for world in bpy.data.worlds:
cworld = world.cycles
# Change default to bump again.
if version <= (2, 79, 6) or \
(version >= (2, 80, 0) and version <= (2, 80, 41)):
cmat = mat.cycles
if not cmat.is_property_set("displacement_method"):
cmat.displacement_method = 'DISPLACEMENT'
# World MIS Samples
if not cworld.is_property_set("samples"):
cworld.samples = 4
# Nodes
for nodetree, library in foreach_cycles_nodetree():
if library not in libraries:
continue
# World MIS Resolution
if not cworld.is_property_set("sample_map_resolution"):
cworld.sample_map_resolution = 256
# Euler order was ZYX in previous versions.
if version <= (2, 73, 4):
for node in nodetree.nodes:
mapping_node_order_flip(node)
if bpy.data.version <= (2, 76, 10):
for scene in bpy.data.scenes:
cscene = scene.cycles
if cscene.is_property_set("filter_type"):
if not cscene.is_property_set("pixel_filter_type"):
cscene.pixel_filter_type = cscene.filter_type
if cscene.filter_type == 'BLACKMAN_HARRIS':
cscene.filter_type = 'GAUSSIAN'
if version <= (2, 76, 5):
for node in nodetree.nodes:
vector_curve_node_remap(node)
if bpy.data.version <= (2, 78, 2):
for scene in bpy.data.scenes:
cscene = scene.cycles
if not cscene.is_property_set("light_sampling_threshold"):
cscene.light_sampling_threshold = 0.0
if version <= (2, 79, 1) or \
(version >= (2, 80, 0) and version <= (2, 80, 3)):
displacement_node_insert(nodetree)
if bpy.data.version <= (2, 79, 0):
for scene in bpy.data.scenes:
cscene = scene.cycles
# Default changes
if not cscene.is_property_set("aa_samples"):
cscene.aa_samples = 4
if not cscene.is_property_set("preview_aa_samples"):
cscene.preview_aa_samples = 4
if not cscene.is_property_set("blur_glossy"):
cscene.blur_glossy = 0.0
if not cscene.is_property_set("sample_clamp_indirect"):
cscene.sample_clamp_indirect = 0.0
if version <= (2, 79, 2):
for node in nodetree.nodes:
displacement_principled_nodes(node)
if bpy.data.version <= (2, 79, 1) or \
(bpy.data.version >= (2, 80, 0) and bpy.data.version <= (2, 80, 3)):
displacement_nodes_insert()
if version <= (2, 79, 3) or \
(version >= (2, 80, 0) and version <= (2, 80, 4)):
# Switch to squared roughness convention
square_roughness_node_insert(nodetree)
if bpy.data.version <= (2, 79, 2):
for mat in bpy.data.materials:
cmat = mat.cycles
if not cmat.is_property_set("displacement_method"):
cmat.displacement_method = 'BUMP'
if version <= (2, 79, 4):
ambient_occlusion_node_relink(nodetree)
foreach_cycles_node(displacement_principled_nodes)
# Particles
if bpy.data.version <= (2, 79, 3) or \
(bpy.data.version >= (2, 80, 0) and bpy.data.version <= (2, 80, 4)):
# Switch to squared roughness convention
square_roughness_nodes_insert()
if bpy.data.version <= (2, 80, 15):
# Copy cycles hair settings to internal settings
for part in bpy.data.particles:
if part.library not in libraries:
continue
cpart = part.get("cycles", None)
if cpart:
part.shape = cpart.get("shape", 0.0)
part.root_radius = cpart.get("root_width", 1.0)
part.tip_radius = cpart.get("tip_width", 0.0)
part.radius_scale = cpart.get("radius_scale", 0.01)
part.use_close_tip = cpart.get("use_closetip", True)
# Copy cycles hair settings to internal settings
if version <= (2, 80, 15):
cpart = part.get("cycles", None)
if cpart:
part.shape = cpart.get("shape", 0.0)
part.root_radius = cpart.get("root_width", 1.0)
part.tip_radius = cpart.get("tip_width", 0.0)
part.radius_scale = cpart.get("radius_scale", 0.01)
part.use_close_tip = cpart.get("use_closetip", True)
if bpy.data.version <= (2, 79, 4) or \
(bpy.data.version >= (2, 80, 0) and bpy.data.version <= (2, 80, 18)):
for world in bpy.data.worlds:
cworld = world.cycles
# World MIS
if not cworld.is_property_set("sampling_method"):
if cworld.get("sample_as_light", True):
cworld.sampling_method = 'MANUAL'
else:
cworld.sampling_method = 'NONE'
ambient_occlusion_nodes_relink()
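
One side of the versioning hunks above swaps the callback-style traversal for a generator: foreach_cycles_nodetree() yields (nodetree, library) pairs and recurses into node groups with yield from, while a shared traversed set keeps each group from being visited twice. A stripped-down sketch of that pattern (function names here are illustrative, not the ones in the diff):

import bpy

def walk_groups(nodetree, traversed):
    # Recurse into node groups, visiting each shared group only once.
    for node in nodetree.nodes:
        if node.bl_idname == 'ShaderNodeGroup':
            group = node.node_tree
            if group and group not in traversed:
                traversed.add(group)
                yield group
                yield from walk_groups(group, traversed)

def walk_material_trees():
    traversed = set()
    for material in bpy.data.materials:
        if material.node_tree:
            yield material.node_tree
            yield from walk_groups(material.node_tree, traversed)

# Example: count nodes across all material node trees, including nested groups.
print(sum(len(tree.nodes) for tree in walk_material_trees()))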

View File

@@ -228,17 +228,12 @@ static void blender_camera_from_object(BlenderCamera *bcam,
bcam->sensor_fit = BlenderCamera::HORIZONTAL;
else
bcam->sensor_fit = BlenderCamera::VERTICAL;
}
else if(b_ob_data.is_a(&RNA_Light)) {
/* Can also look through spot light. */
BL::SpotLight b_light(b_ob_data);
float lens = 16.0f / tanf(b_light.spot_size() * 0.5f);
if (lens > 0.0f) {
bcam->lens = lens;
}
}
bcam->motion_steps = object_motion_steps(b_ob, b_ob);
bcam->motion_steps = object_motion_steps(b_ob, b_ob);
}
else {
/* from light not implemented yet */
}
}
static Transform blender_camera_matrix(const Transform& tfm,
@@ -648,7 +643,7 @@ static void blender_camera_from_view(BlenderCamera *bcam,
if(b_rv3d.view_perspective() == BL::RegionView3D::view_perspective_CAMERA) {
/* camera view */
BL::Object b_ob = (b_v3d.use_local_camera())? b_v3d.camera(): b_scene.camera();
BL::Object b_ob = (b_v3d.lock_camera_and_layers())? b_scene.camera(): b_v3d.camera();
if(b_ob) {
blender_camera_from_object(bcam, b_engine, b_ob, skip_panorama);
@@ -784,7 +779,7 @@ static void blender_camera_border(BlenderCamera *bcam,
return;
}
BL::Object b_ob = (b_v3d.use_local_camera())? b_v3d.camera(): b_scene.camera();
BL::Object b_ob = (b_v3d.lock_camera_and_layers())? b_scene.camera(): b_v3d.camera();
if(!b_ob)
return;

View File

@@ -185,7 +185,9 @@ static bool ObtainCacheParticleData(Mesh *mesh,
float3 cKey = make_float3(nco[0], nco[1], nco[2]);
cKey = transform_point(&itfm, cKey);
if(step_no > 0) {
const float step_length = len(cKey - pcKey);
float step_length = len(cKey - pcKey);
if(step_length == 0.0f)
continue;
curve_length += step_length;
}
CData->curvekey_co.push_back_slow(cKey);
@@ -334,6 +336,9 @@ static void ExportCurveTrianglePlanes(Mesh *mesh, ParticleCurveData *CData,
/* compute and reserve size of arrays */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
numverts += 2 + (CData->curve_keynum[curve] - 1)*2;
numtris += (CData->curve_keynum[curve] - 1)*2;
}
@@ -344,6 +349,9 @@ static void ExportCurveTrianglePlanes(Mesh *mesh, ParticleCurveData *CData,
/* actually export */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
float3 xbasis;
float3 v1;
float time = 0.0f;
@@ -413,6 +421,9 @@ static void ExportCurveTriangleGeometry(Mesh *mesh,
/* compute and reserve size of arrays */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
numverts += (CData->curve_keynum[curve] - 1)*resolution + resolution;
numtris += (CData->curve_keynum[curve] - 1)*2*resolution;
}
@@ -423,6 +434,9 @@ static void ExportCurveTriangleGeometry(Mesh *mesh,
/* actually export */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
float3 firstxbasis = cross(make_float3(1.0f,0.0f,0.0f),CData->curvekey_co[CData->curve_firstkey[curve]+1] - CData->curvekey_co[CData->curve_firstkey[curve]]);
if(!is_zero(firstxbasis))
firstxbasis = normalize(firstxbasis);
@@ -550,6 +564,9 @@ static void ExportCurveSegments(Scene *scene, Mesh *mesh, ParticleCurveData *CDa
/* compute and reserve size of arrays */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
num_keys += CData->curve_keynum[curve];
num_curves++;
}
@@ -567,27 +584,19 @@ static void ExportCurveSegments(Scene *scene, Mesh *mesh, ParticleCurveData *CDa
/* actually export */
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
size_t num_curve_keys = 0;
for(int curvekey = CData->curve_firstkey[curve];
curvekey < CData->curve_firstkey[curve] + CData->curve_keynum[curve];
curvekey++)
{
const float3 ickey_loc = CData->curvekey_co[curvekey];
const float curve_time = CData->curvekey_time[curvekey];
const float curve_length = CData->curve_length[curve];
const float time = (curve_length > 0.0f)
? curve_time / curve_length
: 0.0f;
float radius = shaperadius(CData->psys_shape[sys],
CData->psys_rootradius[sys],
CData->psys_tipradius[sys],
time);
if(CData->psys_closetip[sys] &&
(curvekey == CData->curve_firstkey[curve] + CData->curve_keynum[curve] - 1))
{
for(int curvekey = CData->curve_firstkey[curve]; curvekey < CData->curve_firstkey[curve] + CData->curve_keynum[curve]; curvekey++) {
float3 ickey_loc = CData->curvekey_co[curvekey];
float time = CData->curvekey_time[curvekey]/CData->curve_length[curve];
float radius = shaperadius(CData->psys_shape[sys], CData->psys_rootradius[sys], CData->psys_tipradius[sys], time);
if(CData->psys_closetip[sys] && (curvekey == CData->curve_firstkey[curve] + CData->curve_keynum[curve] - 1))
radius = 0.0f;
}
mesh->add_curve_key(ickey_loc, radius);
if(attr_intercept)
attr_intercept->add(time);
@@ -614,10 +623,8 @@ static void ExportCurveSegments(Scene *scene, Mesh *mesh, ParticleCurveData *CDa
static float4 CurveSegmentMotionCV(ParticleCurveData *CData, int sys, int curve, int curvekey)
{
const float3 ickey_loc = CData->curvekey_co[curvekey];
const float curve_time = CData->curvekey_time[curvekey];
const float curve_length = CData->curve_length[curve];
float time = (curve_length > 0.0f) ? curve_time / curve_length : 0.0f;
float3 ickey_loc = CData->curvekey_co[curvekey];
float time = CData->curvekey_time[curvekey]/CData->curve_length[curve];
float radius = shaperadius(CData->psys_shape[sys], CData->psys_rootradius[sys], CData->psys_tipradius[sys], time);
if(CData->psys_closetip[sys] && (curvekey == CData->curve_firstkey[curve] + CData->curve_keynum[curve] - 1))
@@ -676,7 +683,13 @@ static void ExportCurveSegmentsMotion(Mesh *mesh, ParticleCurveData *CData, int
int num_curves = 0;
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
if(CData->psys_curvenum[sys] == 0)
continue;
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
/* Curve lengths may not match! Curves can be clipped. */
int curve_key_end = (num_curves+1 < (int)mesh->curve_first_key.size() ? mesh->curve_first_key[num_curves+1] : (int)mesh->curve_keys.size());
const int num_center_curve_keys = curve_key_end - mesh->curve_first_key[num_curves];
@@ -702,10 +715,7 @@ static void ExportCurveSegmentsMotion(Mesh *mesh, ParticleCurveData *CData, int
else {
/* Number of keys has changed. Generate an interpolated version

* to preserve motion blur. */
const float step_size =
num_center_curve_keys > 1
? 1.0f / (num_center_curve_keys - 1)
: 0.0f;
float step_size = 1.0f / (num_center_curve_keys-1);
for(int step_index = 0;
step_index < num_center_curve_keys;
++step_index)
@@ -764,10 +774,11 @@ static void ExportCurveTriangleUV(ParticleCurveData *CData,
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
for(int curvekey = CData->curve_firstkey[curve]; curvekey < CData->curve_firstkey[curve] + CData->curve_keynum[curve] - 1; curvekey++) {
const float curve_time = CData->curvekey_time[curvekey];
const float curve_length = CData->curve_length[curve];
time = (curve_length > 0.0f) ? curve_time / curve_length : 0.0f;
time = CData->curvekey_time[curvekey]/CData->curve_length[curve];
for(int section = 0; section < resol; section++) {
uvdata[vertexindex] = CData->curve_uv[curve];
@@ -808,6 +819,9 @@ static void ExportCurveTriangleVcol(ParticleCurveData *CData,
for(int sys = 0; sys < CData->psys_firstcurve.size(); sys++) {
for(int curve = CData->psys_firstcurve[sys]; curve < CData->psys_firstcurve[sys] + CData->psys_curvenum[sys]; curve++) {
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
for(int curvekey = CData->curve_firstkey[curve]; curvekey < CData->curve_firstkey[curve] + CData->curve_keynum[curve] - 1; curvekey++) {
for(int section = 0; section < resol; section++) {
/* Encode vertex color using the sRGB curve. */
@@ -1035,9 +1049,9 @@ void BlenderSync::sync_curves(Mesh *mesh,
size_t i = 0;
/* Encode vertex color using the sRGB curve. */
for(size_t curve = 0; curve < CData.curve_vcol.size(); curve++) {
fdata[i++] = color_srgb_to_linear_v3(CData.curve_vcol[curve]);
}
for(size_t curve = 0; curve < CData.curve_vcol.size(); curve++)
if(!(CData.curve_keynum[curve] <= 1 || CData.curve_length[curve] == 0.0f))
fdata[i++] = color_srgb_to_linear_v3(CData.curve_vcol[curve]);
}
}
}
@@ -1080,9 +1094,9 @@ void BlenderSync::sync_curves(Mesh *mesh,
if(uv) {
size_t i = 0;
for(size_t curve = 0; curve < CData.curve_uv.size(); curve++) {
uv[i++] = CData.curve_uv[curve];
}
for(size_t curve = 0; curve < CData.curve_uv.size(); curve++)
if(!(CData.curve_keynum[curve] <= 1 || CData.curve_length[curve] == 0.0f))
uv[i++] = CData.curve_uv[curve];
}
}
}

View File

@@ -1,109 +0,0 @@
/*
* Copyright 2011-2013 Blender Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "blender/blender_device.h"
#include "blender/blender_util.h"
CCL_NAMESPACE_BEGIN
int blender_device_threads(BL::Scene& b_scene)
{
BL::RenderSettings b_r = b_scene.render();
if(b_r.threads_mode() == BL::RenderSettings::threads_mode_FIXED)
return b_r.threads();
else
return 0;
}
DeviceInfo blender_device_info(BL::Preferences& b_preferences, BL::Scene& b_scene, bool background)
{
PointerRNA cscene = RNA_pointer_get(&b_scene.ptr, "cycles");
/* Default to CPU device. */
DeviceInfo device = Device::available_devices(DEVICE_MASK_CPU).front();
if(get_enum(cscene, "device") == 2) {
/* Find network device. */
vector<DeviceInfo> devices = Device::available_devices(DEVICE_MASK_NETWORK);
if(!devices.empty()) {
device = devices.front();
}
}
else if(get_enum(cscene, "device") == 1) {
/* Find cycles preferences. */
PointerRNA cpreferences;
BL::Preferences::addons_iterator b_addon_iter;
for(b_preferences.addons.begin(b_addon_iter); b_addon_iter != b_preferences.addons.end(); ++b_addon_iter) {
if(b_addon_iter->module() == "cycles") {
cpreferences = b_addon_iter->preferences().ptr;
break;
}
}
/* Test if we are using GPU devices. */
enum ComputeDevice {
COMPUTE_DEVICE_CPU = 0,
COMPUTE_DEVICE_CUDA = 1,
COMPUTE_DEVICE_OPENCL = 2,
COMPUTE_DEVICE_NUM = 3,
};
ComputeDevice compute_device = (ComputeDevice)get_enum(cpreferences,
"compute_device_type",
COMPUTE_DEVICE_NUM,
COMPUTE_DEVICE_CPU);
if(compute_device != COMPUTE_DEVICE_CPU) {
/* Query GPU devices with matching types. */
uint mask = DEVICE_MASK_CPU;
if(compute_device == COMPUTE_DEVICE_CUDA) {
mask |= DEVICE_MASK_CUDA;
}
else if(compute_device == COMPUTE_DEVICE_OPENCL) {
mask |= DEVICE_MASK_OPENCL;
}
vector<DeviceInfo> devices = Device::available_devices(mask);
/* Match device preferences and available devices. */
vector<DeviceInfo> used_devices;
RNA_BEGIN(&cpreferences, device, "devices") {
if(get_boolean(device, "use")) {
string id = get_string(device, "id");
foreach(DeviceInfo& info, devices) {
if(info.id == id) {
used_devices.push_back(info);
break;
}
}
}
} RNA_END;
if(!used_devices.empty()) {
int threads = blender_device_threads(b_scene);
device = Device::get_multi_device(used_devices,
threads,
background);
}
/* Else keep using the CPU device that was set before. */
}
}
return device;
}
CCL_NAMESPACE_END

View File

@@ -1,37 +0,0 @@
/*
* Copyright 2011-2013 Blender Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef __BLENDER_DEVICE_H__
#define __BLENDER_DEVICE_H__
#include "MEM_guardedalloc.h"
#include "RNA_types.h"
#include "RNA_access.h"
#include "RNA_blender_cpp.h"
#include "device/device.h"
CCL_NAMESPACE_BEGIN
/* Get number of threads to use for rendering. */
int blender_device_threads(BL::Scene& b_scene);
/* Convert Blender settings to device specification. */
DeviceInfo blender_device_info(BL::Preferences& b_preferences, BL::Scene& b_scene, bool background);
CCL_NAMESPACE_END
#endif /* __BLENDER_DEVICE_H__ */
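
blender_device_info() above resolves the scene's "device" setting against the devices enabled in the cycles add-on preferences (the RNA_BEGIN loop over "devices" checking each entry's "use" flag). The same data is reachable from Python; a sketch, assuming the add-on preferences expose compute_device_type and a devices collection as the add-on code earlier in this diff does:

import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
enabled = [device.name for device in prefs.devices
           if device.use and device.type == prefs.compute_device_type]
print(prefs.compute_device_type, enabled)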

View File

@@ -756,11 +756,13 @@ static void create_mesh(Scene *scene,
return;
}
BL::Mesh::vertices_iterator v;
BL::Mesh::polygons_iterator p;
if(!subdivision) {
numtris = numfaces;
}
else {
BL::Mesh::polygons_iterator p;
for(b_mesh.polygons.begin(p); p != b_mesh.polygons.end(); ++p) {
numngons += (p->loop_total() == 4)? 0: 1;
numcorners += p->loop_total();
@@ -772,7 +774,6 @@ static void create_mesh(Scene *scene,
mesh->reserve_subd_faces(numfaces, numngons, numcorners);
/* create vertex coordinates and normals */
BL::Mesh::vertices_iterator v;
for(b_mesh.vertices.begin(v); v != b_mesh.vertices.end(); ++v)
mesh->add_vertex(get_float3(v->co()));
@@ -807,10 +808,13 @@ static void create_mesh(Scene *scene,
}
/* create faces */
vector<int> nverts(numfaces);
if(!subdivision) {
BL::Mesh::loop_triangles_iterator t;
int ti = 0;
for(b_mesh.loop_triangles.begin(t); t != b_mesh.loop_triangles.end(); ++t) {
for(b_mesh.loop_triangles.begin(t); t != b_mesh.loop_triangles.end(); ++t, ++ti) {
BL::MeshPolygon p = b_mesh.polygons[t->polygon_index()];
int3 vi = get_int3(t->vertices());
@@ -831,10 +835,10 @@ static void create_mesh(Scene *scene,
* NOTE: Autosmooth is already taken care of.
*/
mesh->add_triangle(vi[0], vi[1], vi[2], shader, smooth);
nverts[ti] = 3;
}
}
else {
BL::Mesh::polygons_iterator p;
vector<int> vi;
for(b_mesh.polygons.begin(p); p != b_mesh.polygons.end(); ++p) {
@@ -1061,26 +1065,37 @@ Mesh *BlenderSync::sync_mesh(BL::Depsgraph& b_depsgraph,
mesh->name = ustring(b_ob_data.name().c_str());
if(requested_geometry_flags != Mesh::GEOMETRY_NONE) {
/* Adaptive subdivision setup. Not for baking since that requires
* exact mapping to the Blender mesh. */
if(scene->bake_manager->get_baking()) {
mesh->subdivision_type = Mesh::SUBDIVISION_NONE;
}
else {
mesh->subdivision_type = object_subdivision_type(b_ob, preview, experimental);
/* Mesh objects have special handling in the dependency graph:
* they are ensured to be properly updated.
*
* Updating meshes here would end up with the derived mesh referencing
* freed data from the Blender side.
*/
if(preview && b_ob.type() != BL::Object::type_MESH) {
b_ob.update_from_editmode(b_data);
}
/* For some reason, meshes do not need this... */
bool apply_modifiers = (b_ob.type() != BL::Object::type_MESH);
bool need_undeformed = mesh->need_attribute(scene, ATTR_STD_GENERATED);
mesh->subdivision_type = object_subdivision_type(b_ob, preview, experimental);
/* Disable adaptive subdivision while baking as the baking system
* currently doesn't support the topology and will crash.
*/
if(scene->bake_manager->get_baking()) {
mesh->subdivision_type = Mesh::SUBDIVISION_NONE;
}
BL::Mesh b_mesh = object_to_mesh(b_data,
b_ob,
b_depsgraph,
apply_modifiers,
need_undeformed,
mesh->subdivision_type);
if(b_mesh) {
/* Sync mesh itself. */
if(view_layer.use_surfaces && show_self) {
if(mesh->subdivision_type != Mesh::SUBDIVISION_NONE)
create_subd_mesh(scene, mesh, b_ob, b_mesh, used_shaders,
@@ -1091,12 +1106,11 @@ Mesh *BlenderSync::sync_mesh(BL::Depsgraph& b_depsgraph,
create_mesh_volume_attributes(scene, b_ob, mesh, b_scene.frame_current());
}
/* Sync hair curves. */
if(view_layer.use_hair && show_particles && mesh->subdivision_type == Mesh::SUBDIVISION_NONE) {
if(view_layer.use_hair && show_particles && mesh->subdivision_type == Mesh::SUBDIVISION_NONE)
sync_curves(mesh, b_mesh, b_ob, false);
}
free_object_to_mesh(b_data, b_ob, b_mesh);
/* free derived mesh */
b_data.meshes.remove(b_mesh, false, true, false);
}
}
mesh->geometry_flags = requested_geometry_flags;
@@ -1162,6 +1176,7 @@ void BlenderSync::sync_mesh_motion(BL::Depsgraph& b_depsgraph,
b_ob,
b_depsgraph,
false,
false,
Mesh::SUBDIVISION_NONE);
}
@@ -1272,7 +1287,7 @@ void BlenderSync::sync_mesh_motion(BL::Depsgraph& b_depsgraph,
sync_curves(mesh, b_mesh, b_ob, true, motion_step);
/* free derived mesh */
free_object_to_mesh(b_data, b_ob, b_mesh);
b_data.meshes.remove(b_mesh, false, true, false);
}
CCL_NAMESPACE_END

View File

@@ -461,7 +461,7 @@ Object *BlenderSync::sync_object(BL::Depsgraph& b_depsgraph,
uint motion_steps;
if(need_motion == Scene::MOTION_BLUR) {
if(scene->need_motion() == Scene::MOTION_BLUR) {
motion_steps = object_motion_steps(b_parent, b_ob);
mesh->motion_steps = motion_steps;
if(motion_steps && object_use_deform_motion(b_parent, b_ob)) {

View File

@@ -18,12 +18,9 @@
#include "blender/CCL_api.h"
#include "blender/blender_device.h"
#include "blender/blender_sync.h"
#include "blender/blender_session.h"
#include "render/denoising.h"
#include "util/util_debug.h"
#include "util/util_foreach.h"
#include "util/util_logging.h"
@@ -40,10 +37,6 @@
#include <OSL/oslconfig.h>
#endif
#ifdef WITH_OPENCL
#include "device/device_intern.h"
#endif
CCL_NAMESPACE_BEGIN
namespace {
@@ -67,6 +60,7 @@ bool debug_flags_sync_from_scene(BL::Scene b_scene)
PointerRNA cscene = RNA_pointer_get(&b_scene.ptr, "cycles");
/* Backup some settings for comparison. */
DebugFlags::OpenCL::DeviceType opencl_device_type = flags.opencl.device_type;
DebugFlags::OpenCL::KernelType opencl_kernel_type = flags.opencl.kernel_type;
/* Synchronize shared flags. */
flags.viewport_static_bvh = get_enum(cscene, "debug_bvh_type");
/* Synchronize CPU flags. */
@@ -80,6 +74,18 @@ bool debug_flags_sync_from_scene(BL::Scene b_scene)
/* Synchronize CUDA flags. */
flags.cuda.adaptive_compile = get_boolean(cscene, "debug_use_cuda_adaptive_compile");
flags.cuda.split_kernel = get_boolean(cscene, "debug_use_cuda_split_kernel");
/* Synchronize OpenCL kernel type. */
switch(get_enum(cscene, "debug_opencl_kernel_type")) {
case 0:
flags.opencl.kernel_type = DebugFlags::OpenCL::KERNEL_DEFAULT;
break;
case 1:
flags.opencl.kernel_type = DebugFlags::OpenCL::KERNEL_MEGA;
break;
case 2:
flags.opencl.kernel_type = DebugFlags::OpenCL::KERNEL_SPLIT;
break;
}
/* Synchronize OpenCL device type. */
switch(get_enum(cscene, "debug_opencl_device_type")) {
case 0:
@@ -105,7 +111,8 @@ bool debug_flags_sync_from_scene(BL::Scene b_scene)
flags.opencl.debug = get_boolean(cscene, "debug_use_opencl_debug");
flags.opencl.mem_limit = ((size_t)get_int(cscene, "debug_opencl_mem_limit"))*1024*1024;
flags.opencl.single_program = get_boolean(cscene, "debug_opencl_kernel_single_program");
return flags.opencl.device_type != opencl_device_type;
return flags.opencl.device_type != opencl_device_type ||
flags.opencl.kernel_type != opencl_kernel_type;
}
/* Reset debug flags to default values.
@@ -116,8 +123,10 @@ bool debug_flags_reset()
DebugFlagsRef flags = DebugFlags();
/* Backup some settings for comparison. */
DebugFlags::OpenCL::DeviceType opencl_device_type = flags.opencl.device_type;
DebugFlags::OpenCL::KernelType opencl_kernel_type = flags.opencl.kernel_type;
flags.reset();
return flags.opencl.device_type != opencl_device_type;
return flags.opencl.device_type != opencl_device_type ||
flags.opencl.kernel_type != opencl_kernel_type;
}
} /* namespace */
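
debug_flags_sync_from_scene() above reads its values from custom properties on scene.cycles. Those properties can be toggled from Python before a render to exercise the OpenCL debug paths; a sketch, assuming a build with OpenCL support (the enum identifiers below are assumptions inferred from the property names used in this diff, not values taken from it):

import bpy

cscene = bpy.context.scene.cycles
cscene.debug_use_opencl_debug = True
cscene.debug_opencl_kernel_single_program = True
cscene.debug_opencl_mem_limit = 0  # 0 = no artificial memory limit
# Enum identifiers here are assumptions, not part of the diff.
cscene.debug_opencl_device_type = 'ALL'
cscene.debug_opencl_kernel_type = 'DEFAULT'
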
@@ -194,10 +203,10 @@ static PyObject *exit_func(PyObject * /*self*/, PyObject * /*args*/)
static PyObject *create_func(PyObject * /*self*/, PyObject *args)
{
PyObject *pyengine, *pypreferences, *pydata, *pyregion, *pyv3d, *pyrv3d;
PyObject *pyengine, *pyuserpref, *pydata, *pyregion, *pyv3d, *pyrv3d;
int preview_osl;
if(!PyArg_ParseTuple(args, "OOOOOOi", &pyengine, &pypreferences, &pydata,
if(!PyArg_ParseTuple(args, "OOOOOOi", &pyengine, &pyuserpref, &pydata,
&pyregion, &pyv3d, &pyrv3d, &preview_osl))
{
return NULL;
@@ -208,9 +217,9 @@ static PyObject *create_func(PyObject * /*self*/, PyObject *args)
RNA_pointer_create(NULL, &RNA_RenderEngine, (void*)PyLong_AsVoidPtr(pyengine), &engineptr);
BL::RenderEngine engine(engineptr);
PointerRNA preferencesptr;
RNA_pointer_create(NULL, &RNA_Preferences, (void*)PyLong_AsVoidPtr(pypreferences), &preferencesptr);
BL::Preferences preferences(preferencesptr);
PointerRNA userprefptr;
RNA_pointer_create(NULL, &RNA_Preferences, (void*)PyLong_AsVoidPtr(pyuserpref), &userprefptr);
BL::Preferences userpref(userprefptr);
PointerRNA dataptr;
RNA_main_pointer_create((Main*)PyLong_AsVoidPtr(pydata), &dataptr);
@@ -236,11 +245,11 @@ static PyObject *create_func(PyObject * /*self*/, PyObject *args)
int width = region.width();
int height = region.height();
session = new BlenderSession(engine, preferences, data, v3d, rv3d, width, height);
session = new BlenderSession(engine, userpref, data, v3d, rv3d, width, height);
}
else {
/* offline session or preview render */
session = new BlenderSession(engine, preferences, data, preview_osl);
session = new BlenderSession(engine, userpref, data, preview_osl);
}
return PyLong_FromVoidPtr(session);
@@ -379,18 +388,9 @@ static PyObject *sync_func(PyObject * /*self*/, PyObject *args)
Py_RETURN_NONE;
}
static PyObject *available_devices_func(PyObject * /*self*/, PyObject * args)
static PyObject *available_devices_func(PyObject * /*self*/, PyObject * /*args*/)
{
const char *type_name;
if(!PyArg_ParseTuple(args, "s", &type_name)) {
return NULL;
}
DeviceType type = Device::type_from_string(type_name);
uint mask = (type == DEVICE_NONE) ? DEVICE_MASK_ALL : DEVICE_MASK(type);
mask |= DEVICE_MASK_CPU;
vector<DeviceInfo> devices = Device::available_devices(mask);
vector<DeviceInfo>& devices = Device::available_devices();
PyObject *ret = PyTuple_New(devices.size());
for(size_t i = 0; i < devices.size(); i++) {
@@ -616,148 +616,8 @@ static PyObject *opencl_disable_func(PyObject * /*self*/, PyObject * /*value*/)
DebugFlags().opencl.device_type = DebugFlags::OpenCL::DEVICE_NONE;
Py_RETURN_NONE;
}
static PyObject *opencl_compile_func(PyObject * /*self*/, PyObject *args)
{
PyObject *sequence = PySequence_Fast(args, "Arguments must be a sequence");
if(sequence == NULL) {
Py_RETURN_FALSE;
}
vector<string> parameters;
for(Py_ssize_t i = 0; i < PySequence_Fast_GET_SIZE(sequence); i++) {
PyObject *item = PySequence_Fast_GET_ITEM(sequence, i);
PyObject *item_as_string = PyObject_Str(item);
const char *parameter_string = PyUnicode_AsUTF8(item_as_string);
parameters.push_back(parameter_string);
Py_DECREF(item_as_string);
}
Py_DECREF(sequence);
if (device_opencl_compile_kernel(parameters)) {
Py_RETURN_TRUE;
}
else {
Py_RETURN_FALSE;
}
}
#endif
static bool denoise_parse_filepaths(PyObject *pyfilepaths, vector<string>& filepaths)
{
if(PyUnicode_Check(pyfilepaths)) {
const char *filepath = PyUnicode_AsUTF8(pyfilepaths);
filepaths.push_back(filepath);
return true;
}
PyObject *sequence = PySequence_Fast(pyfilepaths, "File paths must be a string or sequence of strings");
if(sequence == NULL) {
return false;
}
for(Py_ssize_t i = 0; i < PySequence_Fast_GET_SIZE(sequence); i++) {
PyObject *item = PySequence_Fast_GET_ITEM(sequence, i);
const char *filepath = PyUnicode_AsUTF8(item);
if(filepath == NULL) {
PyErr_SetString(PyExc_ValueError, "File paths must be a string or sequence of strings.");
Py_DECREF(sequence);
return false;
}
filepaths.push_back(filepath);
}
Py_DECREF(sequence);
return true;
}
static PyObject *denoise_func(PyObject * /*self*/, PyObject *args, PyObject *keywords)
{
static const char *keyword_list[] = {"preferences", "scene", "view_layer",
"input", "output",
"tile_size", "samples", NULL};
PyObject *pypreferences, *pyscene, *pyviewlayer;
PyObject *pyinput, *pyoutput = NULL;
int tile_size = 0, samples = 0;
if (!PyArg_ParseTupleAndKeywords(args, keywords, "OOOO|Oii", (char**)keyword_list,
&pypreferences, &pyscene, &pyviewlayer,
&pyinput, &pyoutput,
&tile_size, &samples)) {
return NULL;
}
/* Get device specification from preferences and scene. */
PointerRNA preferencesptr;
RNA_pointer_create(NULL, &RNA_Preferences, (void*)PyLong_AsVoidPtr(pypreferences), &preferencesptr);
BL::Preferences b_preferences(preferencesptr);
PointerRNA sceneptr;
RNA_id_pointer_create((ID*)PyLong_AsVoidPtr(pyscene), &sceneptr);
BL::Scene b_scene(sceneptr);
DeviceInfo device = blender_device_info(b_preferences, b_scene, true);
/* Get denoising parameters from view layer. */
PointerRNA viewlayerptr;
RNA_pointer_create((ID*)PyLong_AsVoidPtr(pyscene), &RNA_ViewLayer, PyLong_AsVoidPtr(pyviewlayer), &viewlayerptr);
PointerRNA cviewlayer = RNA_pointer_get(&viewlayerptr, "cycles");
DenoiseParams params;
params.radius = get_int(cviewlayer, "denoising_radius");
params.strength = get_float(cviewlayer, "denoising_strength");
params.feature_strength = get_float(cviewlayer, "denoising_feature_strength");
params.relative_pca = get_boolean(cviewlayer, "denoising_relative_pca");
params.neighbor_frames = get_int(cviewlayer, "denoising_neighbor_frames");
/* Parse file paths list. */
vector<string> input, output;
if(!denoise_parse_filepaths(pyinput, input)) {
return NULL;
}
if(pyoutput) {
if(!denoise_parse_filepaths(pyoutput, output)) {
return NULL;
}
}
else {
output = input;
}
if(input.empty()) {
PyErr_SetString(PyExc_ValueError, "No input file paths specified.");
return NULL;
}
if(input.size() != output.size()) {
PyErr_SetString(PyExc_ValueError, "Number of input and output file paths does not match.");
return NULL;
}
/* Create denoiser. */
Denoiser denoiser(device);
denoiser.params = params;
denoiser.input = input;
denoiser.output = output;
if (tile_size > 0) {
denoiser.tile_size = make_int2(tile_size, tile_size);
}
if (samples > 0) {
denoiser.samples_override = samples;
}
/* Run denoiser. */
if(!denoiser.run()) {
PyErr_SetString(PyExc_ValueError, denoiser.error.c_str());
return NULL;
}
Py_RETURN_NONE;
}
static PyObject *debug_flags_update_func(PyObject * /*self*/, PyObject *args)
{
PyObject *pyscene;
@@ -886,11 +746,11 @@ static PyObject *enable_print_stats_func(PyObject * /*self*/, PyObject * /*args*
static PyObject *get_device_types_func(PyObject * /*self*/, PyObject * /*args*/)
{
vector<DeviceType> device_types = Device::available_types();
vector<DeviceInfo>& devices = Device::available_devices();
bool has_cuda = false, has_opencl = false;
foreach(DeviceType device_type, device_types) {
has_cuda |= (device_type == DEVICE_CUDA);
has_opencl |= (device_type == DEVICE_OPENCL);
for(int i = 0; i < devices.size(); i++) {
has_cuda |= (devices[i].type == DEVICE_CUDA);
has_opencl |= (devices[i].type == DEVICE_OPENCL);
}
PyObject *list = PyTuple_New(2);
PyTuple_SET_ITEM(list, 0, PyBool_FromLong(has_cuda));
@@ -912,16 +772,12 @@ static PyMethodDef methods[] = {
{"osl_update_node", osl_update_node_func, METH_VARARGS, ""},
{"osl_compile", osl_compile_func, METH_VARARGS, ""},
#endif
{"available_devices", available_devices_func, METH_VARARGS, ""},
{"available_devices", available_devices_func, METH_NOARGS, ""},
{"system_info", system_info_func, METH_NOARGS, ""},
#ifdef WITH_OPENCL
{"opencl_disable", opencl_disable_func, METH_NOARGS, ""},
{"opencl_compile", opencl_compile_func, METH_VARARGS, ""},
#endif
/* Standalone denoising */
{"denoise", (PyCFunction)denoise_func, METH_VARARGS|METH_KEYWORDS, ""},
/* Debugging routines */
{"debug_flags_update", debug_flags_update_func, METH_VARARGS, ""},
{"debug_flags_reset", debug_flags_reset_func, METH_NOARGS, ""},
@@ -945,7 +801,7 @@ static struct PyModuleDef module = {
"Blender cycles render integration",
-1,
methods,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL
};
CCL_NAMESPACE_END


@@ -30,7 +30,6 @@
#include "render/shader.h"
#include "render/stats.h"
#include "util/util_algorithm.h"
#include "util/util_color.h"
#include "util/util_foreach.h"
#include "util/util_function.h"
@@ -115,6 +114,9 @@ BlenderSession::~BlenderSession()
void BlenderSession::create()
{
create_session();
if(b_v3d)
session->start();
}
void BlenderSession::create_session()
@@ -393,41 +395,6 @@ static void add_cryptomatte_layer(BL::RenderResult& b_rr, string name, string ma
render_add_metadata(b_rr, prefix+"manifest", manifest);
}
void BlenderSession::stamp_view_layer_metadata(Scene *scene, const string& view_layer_name)
{
BL::RenderResult b_rr = b_engine.get_result();
string prefix = "cycles." + view_layer_name + ".";
/* Configured number of samples for the view layer. */
b_rr.stamp_data_add_field(
(prefix + "samples").c_str(),
to_string(session->params.samples).c_str());
/* Store sample range information. */
if(session->tile_manager.range_num_samples != -1) {
b_rr.stamp_data_add_field(
(prefix + "range_start_sample").c_str(),
to_string(session->tile_manager.range_start_sample).c_str());
b_rr.stamp_data_add_field(
(prefix + "range_num_samples").c_str(),
to_string(session->tile_manager.range_num_samples).c_str());
}
/* Write cryptomatte metadata. */
if(scene->film->cryptomatte_passes & CRYPT_OBJECT) {
add_cryptomatte_layer(b_rr, view_layer_name + ".CryptoObject",
scene->object_manager->get_cryptomatte_objects(scene));
}
if(scene->film->cryptomatte_passes & CRYPT_MATERIAL) {
add_cryptomatte_layer(b_rr, view_layer_name + ".CryptoMaterial",
scene->shader_manager->get_cryptomatte_materials(scene));
}
if(scene->film->cryptomatte_passes & CRYPT_ASSET) {
add_cryptomatte_layer(b_rr, view_layer_name + ".CryptoAsset",
scene->object_manager->get_cryptomatte_assets(scene));
}
}
void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
{
b_depsgraph = b_depsgraph_;
@@ -443,6 +410,9 @@ void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
/* render each layer */
BL::ViewLayer b_view_layer = b_depsgraph.view_layer_eval();
/* We write some special meta attributes when we only have a single layer. */
const bool is_single_layer = (b_scene.view_layers.length() == 1);
/* temporary render result to find needed passes and views */
BL::RenderResult b_rr = begin_render_result(b_engine, 0, 0, 1, 1, b_view_layer.name().c_str(), NULL);
BL::RenderResult::layers_iterator b_single_rlay;
@@ -455,27 +425,26 @@ void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
buffer_params.passes = passes;
PointerRNA crl = RNA_pointer_get(&b_view_layer.ptr, "cycles");
bool full_denoising = get_boolean(crl, "use_denoising");
bool write_denoising_passes = get_boolean(crl, "denoising_store_passes");
bool use_denoising = get_boolean(crl, "use_denoising");
bool denoising_passes = use_denoising || get_boolean(crl, "denoising_store_passes");
bool run_denoising = full_denoising || write_denoising_passes;
session->tile_manager.schedule_denoising = run_denoising;
buffer_params.denoising_data_pass = run_denoising;
session->tile_manager.schedule_denoising = use_denoising;
buffer_params.denoising_data_pass = denoising_passes;
buffer_params.denoising_clean_pass = (scene->film->denoising_flags & DENOISING_CLEAN_ALL_PASSES);
buffer_params.denoising_prefiltered_pass = write_denoising_passes;
session->params.run_denoising = run_denoising;
session->params.full_denoising = full_denoising;
session->params.write_denoising_passes = write_denoising_passes;
session->params.denoising.radius = get_int(crl, "denoising_radius");
session->params.denoising.strength = get_float(crl, "denoising_strength");
session->params.denoising.feature_strength = get_float(crl, "denoising_feature_strength");
session->params.denoising.relative_pca = get_boolean(crl, "denoising_relative_pca");
session->params.use_denoising = use_denoising;
session->params.denoising_passes = denoising_passes;
session->params.denoising_radius = get_int(crl, "denoising_radius");
session->params.denoising_strength = get_float(crl, "denoising_strength");
session->params.denoising_feature_strength = get_float(crl, "denoising_feature_strength");
session->params.denoising_relative_pca = get_boolean(crl, "denoising_relative_pca");
scene->film->denoising_data_pass = buffer_params.denoising_data_pass;
scene->film->denoising_clean_pass = buffer_params.denoising_clean_pass;
scene->film->denoising_prefiltered_pass = buffer_params.denoising_prefiltered_pass;
session->params.denoising_radius = get_int(crl, "denoising_radius");
session->params.denoising_strength = get_float(crl, "denoising_strength");
session->params.denoising_feature_strength = get_float(crl, "denoising_feature_strength");
session->params.denoising_relative_pca = get_boolean(crl, "denoising_relative_pca");
scene->film->pass_alpha_threshold = b_view_layer.pass_alpha_threshold();
scene->film->tag_passes_update(scene, passes);
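As a rough illustration, the per-view-layer settings read through crl above map to Cycles' view-layer properties in Python; the property names are taken from the get_*() calls in this hunk, while the access path is an assumption:

# Hypothetical sketch: the same view-layer denoising properties, seen from Python.
import bpy

cvl = bpy.context.view_layer.cycles     # corresponds to the "cycles" pointer in crl
cvl.use_denoising = True                # drives the use/full denoising flags
cvl.denoising_store_passes = False      # drives the store/write-passes flags
cvl.denoising_radius = 8
cvl.denoising_strength = 0.5
cvl.denoising_feature_strength = 0.5
cvl.denoising_relative_pca = False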
@@ -483,12 +452,6 @@ void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
scene->integrator->tag_update(scene);
BL::RenderResult::views_iterator b_view_iter;
int num_views = 0;
for(b_rr.views.begin(b_view_iter); b_view_iter != b_rr.views.end(); ++b_view_iter) {
num_views++;
}
int view_index = 0;
for(b_rr.views.begin(b_view_iter); b_view_iter != b_rr.views.end(); ++b_view_iter, ++view_index) {
b_rview_name = b_view_iter->name();
@@ -510,12 +473,7 @@ void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
/* Attempt to free all data which is held by the Blender side, since at this
* point we know that we've got everything needed to render the current view layer.
*/
/* At the moment we only free if we are not doing multi-view (or if we are rendering the last view).
* See T58142/D4239 for discussion.
*/
if(view_index == num_views - 1) {
free_blender_memory_if_possible();
}
free_blender_memory_if_possible();
/* Make sure all views have different noise patterns. - hardcoded value just to make it random */
if(view_index != 0) {
@@ -553,8 +511,28 @@ void BlenderSession::render(BL::Depsgraph& b_depsgraph_)
break;
}
/* add metadata */
stamp_view_layer_metadata(scene, b_rlay_name);
if(is_single_layer) {
BL::RenderResult b_rr = b_engine.get_result();
string num_aa_samples = string_printf("%d", session->params.samples);
b_rr.stamp_data_add_field("Cycles Samples", num_aa_samples.c_str());
/* TODO(sergey): Report whether we're doing resumable render
* and also start/end sample if so.
*/
}
/* Write cryptomatte metadata. */
if(scene->film->cryptomatte_passes & CRYPT_OBJECT) {
add_cryptomatte_layer(b_rr, b_rlay_name+".CryptoObject",
scene->object_manager->get_cryptomatte_objects(scene));
}
if(scene->film->cryptomatte_passes & CRYPT_MATERIAL) {
add_cryptomatte_layer(b_rr, b_rlay_name+".CryptoMaterial",
scene->shader_manager->get_cryptomatte_materials(scene));
}
if(scene->film->cryptomatte_passes & CRYPT_ASSET) {
add_cryptomatte_layer(b_rr, b_rlay_name+".CryptoAsset",
scene->object_manager->get_cryptomatte_assets(scene));
}
/* free result without merging */
end_render_result(b_engine, b_rr, true, true, false);
@@ -705,14 +683,10 @@ void BlenderSession::bake(BL::Depsgraph& b_depsgraph_,
}
}
/* Object might have been disabled for rendering or excluded in some
* other way; in that case Blender will report a warning afterwards. */
if (object_index != OBJECT_NONE) {
int object = object_index;
int object = object_index;
bake_data = scene->bake_manager->init(object, tri_offset, num_pixels);
populate_bake_data(bake_data, object_id, pixel_array, num_pixels);
}
bake_data = scene->bake_manager->init(object, tri_offset, num_pixels);
populate_bake_data(bake_data, object_id, pixel_array, num_pixels);
/* set number of samples */
session->tile_manager.set_samples(session_params.samples);
@@ -723,7 +697,7 @@ void BlenderSession::bake(BL::Depsgraph& b_depsgraph_,
}
/* Perform bake. Check cancel to avoid crash with incomplete scene data. */
if(!session->progress.get_cancel() && bake_data) {
if(!session->progress.get_cancel()) {
scene->bake_manager->bake(scene->device, &scene->dscene, scene, session->progress, shader_type, bake_pass_filter, bake_data, result);
}
@@ -830,6 +804,7 @@ void BlenderSession::synchronize(BL::Depsgraph& b_depsgraph_)
{
free_session();
create_session();
session->start();
return;
}
@@ -882,10 +857,6 @@ void BlenderSession::synchronize(BL::Depsgraph& b_depsgraph_)
/* reset time */
start_resize_time = 0.0;
}
/* Start rendering thread, if it's not running already. Do this
* after all scene data has been synced at least once. */
session->start();
}
bool BlenderSession::draw(int w, int h)
@@ -1413,15 +1384,9 @@ void BlenderSession::update_resumable_tile_manager(int num_samples)
return;
}
if (num_resumable_chunks > num_samples) {
fprintf(stderr, "Cycles warning: more sample chunks (%d) than samples (%d); "
"this will cause some samples to be included in multiple chunks.\n",
num_resumable_chunks, num_samples);
}
const int num_samples_per_chunk = (int)ceilf((float)num_samples / num_resumable_chunks);
const float num_samples_per_chunk = (float)num_samples / num_resumable_chunks;
float range_start_sample, range_num_samples;
int range_start_sample, range_num_samples;
if(current_resumable_chunk != 0) {
/* Single chunk rendering. */
range_start_sample = num_samples_per_chunk * (current_resumable_chunk - 1);
@@ -1433,25 +1398,19 @@ void BlenderSession::update_resumable_tile_manager(int num_samples)
range_start_sample = num_samples_per_chunk * (start_resumable_chunk - 1);
range_num_samples = num_chunks * num_samples_per_chunk;
}
/* Round after doing the multiplications with num_chunks and num_samples_per_chunk
* to allow for many small chunks. */
int rounded_range_start_sample = (int)floor(range_start_sample + 0.5f);
int rounded_range_num_samples = max((int)floor(range_num_samples + 0.5f), 1);
/* Make sure we don't overshoot. */
if(rounded_range_start_sample + rounded_range_num_samples > num_samples) {
rounded_range_num_samples = num_samples - rounded_range_num_samples;
if(range_start_sample + range_num_samples > num_samples) {
range_num_samples = num_samples - range_num_samples;
}
VLOG(1) << "Samples range start is " << range_start_sample << ", "
<< "number of samples to render is " << range_num_samples;
scene->integrator->start_sample = rounded_range_start_sample;
scene->integrator->start_sample = range_start_sample;
scene->integrator->tag_update(scene);
session->tile_manager.range_start_sample = rounded_range_start_sample;
session->tile_manager.range_num_samples = rounded_range_num_samples;
session->tile_manager.range_start_sample = range_start_sample;
session->tile_manager.range_num_samples = range_num_samples;
}
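A small worked sketch of the chunk arithmetic above, in the rounded variant (fractional samples per chunk, rounding only after the multiplication), using illustrative numbers:

# Hypothetical Python sketch mirroring the single-chunk branch above.
from math import floor

num_samples = 250
num_resumable_chunks = 7
current_resumable_chunk = 3   # 1-based chunk index

num_samples_per_chunk = num_samples / num_resumable_chunks                   # ~35.71
range_start_sample = num_samples_per_chunk * (current_resumable_chunk - 1)   # ~71.43
range_num_samples = num_samples_per_chunk                                    # ~35.71

# Round only after the multiplication so many small chunks stay accurate.
rounded_start = int(floor(range_start_sample + 0.5))        # 71
rounded_num = max(int(floor(range_num_samples + 0.5)), 1)   # 36

print(rounded_start, rounded_num)   # chunk 3 renders samples [71, 107)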
void BlenderSession::free_blender_memory_if_possible()


@@ -151,8 +151,6 @@ public:
static bool print_render_stats;
protected:
void stamp_view_layer_metadata(Scene *scene, const string& view_layer_name);
void do_write_update_render_result(BL::RenderResult& b_rr,
BL::RenderLayer& b_rlay,
RenderTile& rtile,

Some files were not shown because too many files have changed in this diff.