Compare commits


61 Commits

Author SHA1 Message Date
3a97c4056f Proof of Concept: File Browser thumbnail mode using grid view
This was meant as an experiment to see how feasible it is to rewrite the
File Browser UI code to be based on views, starting with the grid view
for thumbnail mode. See T99890.
My initial conclusion is that porting to views is quite doable, but
we'll need some further UI code features to make certain things
possible, like big "composed" icons, where a file-type icon is displayed
on top of a big, generic file icon.

There is a fair bit of stuff here that I'm not happy with. Plus, things
like selection, double-clicking to open, and renaming don't work yet.
It's a start, a proof of concept even :)
2022-07-21 17:16:10 +02:00
9dbcefb10e Merge branch 'asset-browser-grid-view' into file-browser-grid-view 2022-07-20 17:25:31 +02:00
85f0b2ef5d Merge branch 'master' into asset-browser-grid-view 2022-07-20 17:17:12 +02:00
bffc1fbf31 Apply changes to catalog tree view from master 2022-07-20 16:01:05 +02:00
6bae10ef45 Merge branch 'master' into asset-browser-grid-view 2022-07-20 15:42:58 +02:00
0f8e58e7b8 Merge commit 'c355be6faeac~1' into asset-browser-grid-view 2022-07-20 15:42:48 +02:00
6fc388743d Merge branch 'master' into asset-browser-grid-view 2022-06-15 20:20:21 +02:00
66204860ca Merge branch 'master' into asset-browser-grid-view 2022-06-02 13:01:29 +02:00
e1ced645fa Merge branch 'asset-browser-grid-view' into file-browser-grid-view 2022-05-27 11:33:26 +02:00
5162632a20 Merge branch 'master' into asset-browser-grid-view 2022-05-27 11:33:05 +02:00
3f7015a79f Merge branch 'asset-browser-grid-view' into file-browser-grid-view 2022-05-20 12:01:21 +02:00
560c20e067 Fix wrong data passed after recent changes in master 2022-05-20 12:00:52 +02:00
63cc972402 Fix wrong data passed after recent changes in master 2022-05-20 12:00:16 +02:00
c8c8783088 Merge branch 'master' into asset-browser-grid-view 2022-05-20 11:46:53 +02:00
1049b0c818 Merge branch 'master' into asset-browser-grid-view 2022-05-18 22:43:03 +02:00
46934eaf25 Merge branch 'master' into asset-browser-grid-view 2022-05-10 17:09:32 +02:00
c67a650718 Merge branch 'master' into asset-browser-grid-view 2022-05-10 17:09:13 +02:00
065bc42ce5 Merge branch 'master' into asset-browser-grid-view 2022-04-06 11:37:28 +02:00
063689b8a8 Cleanup: Use new license header convention 2022-03-28 00:14:49 +02:00
eaa58a1607 Adapt to RNA prototype changes in master (fix compile error) 2022-03-28 00:13:49 +02:00
72e67691ed Merge branch 'master' into asset-browser-grid-view 2022-03-27 23:56:26 +02:00
e1a8a15945 Merge branch 'master' into asset-browser-grid-view 2022-03-02 15:07:28 +01:00
e91e4bf8e5 Merge branch 'master' into asset-browser-grid-view 2022-02-28 14:32:46 +01:00
9ae2259db5 Solve redraw performance issue in huge asset libraries
In a big asset library (>3000 assets in the one I'm testing with), the asset
browser would get a noticeable redraw lag due to the O(n^2) complexity of the
tree-item matching (to recognize items over multiple redraws and keep state
like selection, highlights, renaming, ...).
2022-02-17 23:50:41 +01:00
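The O(n^2) described above comes from comparing every item of the new redraw against every item from the previous one. The usual fix, sketched below with illustrative names (not Blender's actual view code), is to index the previous items by identifier once, so each new item is matched in O(1) on average:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Minimal stand-in for per-item UI state that must survive redraws
// (selection, highlight, rename state, ...).
struct ItemState {
  bool selected = false;
  bool renaming = false;
};

// Instead of scanning the previous item list for every new item (O(n^2)),
// build a hash map from item identifier to its old state once (O(n)).
std::unordered_map<std::string, ItemState> index_by_id(
    const std::vector<std::pair<std::string, ItemState>> &prev_items)
{
  std::unordered_map<std::string, ItemState> map;
  map.reserve(prev_items.size());
  for (const auto &[id, state] : prev_items) {
    map.emplace(id, state);
  }
  return map;
}

// Carry over state for items that still exist; new items get default state.
ItemState match_item(const std::unordered_map<std::string, ItemState> &prev,
                     const std::string &id)
{
  auto it = prev.find(id);
  return it != prev.end() ? it->second : ItemState{};
}
```

This reduces the matching over one redraw from O(n^2) to O(n) overall, which is what makes the difference once libraries grow into the thousands of assets.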
9bac0894f6 Merge branch 'master' into asset-browser-grid-view 2022-02-17 21:43:07 +01:00
731c1be92a Speedup preview icon loading
Significantly speeds up loading of previews, not just for assets but also
for Python-loaded custom previews. Patch will be submitted for master.
2022-02-17 21:41:22 +01:00
3d31ad823a Fix missing redraw when switching asset libraries 2022-02-17 21:40:17 +01:00
fdc5301205 Fix crash when loading asset library takes multiple redraws 2022-02-17 21:39:43 +01:00
2ce7f02a06 Fix missing (re)load when changing asset libraries
Logic to compare asset library references was incorrect.
2022-02-17 21:35:34 +01:00
91853d95a9 Merge branch 'master' into asset-browser-grid-view 2022-02-16 18:13:12 +01:00
eafd98c920 Remove preview-caching from asset-list API
This isn't needed anymore, see previous commit.
2022-02-16 18:10:40 +01:00
39eab45c8e Let UI do lazy-loading of previews, rather than file-list cache
This is an important step to decouple the asset views from the file browser
backend. One downside is that the UI preview loading is much slower than the
file browser's; however, I'm pretty sure I know how to address this.
2022-02-16 17:59:27 +01:00
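The UI-side lazy loading mentioned above can be sketched as a small cache that only requests a preview the first time an item is actually drawn, instead of having the backend preload everything. This is an illustrative stand-in with hypothetical names, not the actual asset-view code:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Previews are loaded on first draw of an item, not up front by a
// file-list cache. The loader callback stands in for the (slow) real
// preview read; here it just produces a placeholder string.
class PreviewCache {
 public:
  explicit PreviewCache(std::function<std::string(const std::string &)> loader)
      : loader_(std::move(loader)) {}

  // Called from the item's draw code; triggers the load only once per item.
  const std::string &preview_for(const std::string &asset_id)
  {
    auto it = cache_.find(asset_id);
    if (it == cache_.end()) {
      it = cache_.emplace(asset_id, loader_(asset_id)).first;
      load_count_++;
    }
    return it->second;
  }

  // How many previews were actually loaded (for the test below).
  int load_count() const { return load_count_; }

 private:
  std::function<std::string(const std::string &)> loader_;
  std::map<std::string, std::string> cache_;
  int load_count_ = 0;
};
```

Combined with only drawing items that are in view, this means at most one expensive load per visible item, regardless of library size.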
ef58467594 Store active item, asset metadata sidebar
Also:
- Fix double-free bug when closing Blender.
- Fix issues with identifying assets with the same name.
- Add functions to store the asset handle as a pointer in the asset list
  storage; the asset browser exposes it via context.
2022-02-15 21:44:08 +01:00
8f48dd8f72 Merge branch 'master' into asset-browser-grid-view 2022-02-14 17:57:52 +01:00
d46357dd25 Merge branch 'master' into asset-browser-grid-view 2022-02-14 17:57:16 +01:00
696295f849 Support active item
Makes activating assets in the Asset Browser work. The active item is only
stored at the UI level now, so it's not stored in files, not accessible via
context, and is lost when changing editors. The Asset Browser itself will have
to get a way to store that.
2022-02-14 17:48:51 +01:00
c9c332f422 Fix failed assert with small region size
If there's not enough space to draw at least one item per row, there would be a
failed assert in the code that skips drawing items scrolled out of view.
2022-02-14 16:15:29 +01:00
18f8749fb7 Only add items to layout that are visible on screen
Basically we skip adding buttons for items that aren't visible because they
are scrolled out of view. This already makes scrolling in very large
libraries smoother. However, it also prepares the next step, where we
only load previews for items that are currently in view, which should
make the experience with large asset libraries better.
2022-02-11 20:30:54 +01:00
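The approach in the commit above (often called UI virtualization) boils down to computing which index range of items intersects the viewport and only creating buttons for those. A minimal sketch, with hypothetical names and parameters rather than the real layout code:

```cpp
#include <algorithm>
#include <cassert>

// Half-open range [first, last) of item indices that should get buttons.
struct VisibleRange {
  int first;  // index of first visible item
  int last;   // one past the last visible item
};

// Derive the visible range from the scroll offset and viewport size.
// All items are assumed to have the same fixed row height, which is
// what makes this O(1) instead of measuring every item.
VisibleRange visible_items(int scroll_y, int view_height, int row_height,
                           int columns, int total_items)
{
  const int first_row = scroll_y / row_height;
  // +2 rows: one partially visible row at the top and one at the bottom.
  const int visible_rows = view_height / row_height + 2;
  const int first = std::min(first_row * columns, total_items);
  const int last = std::min((first_row + visible_rows) * columns, total_items);
  return {first, last};
}
```

The layout loop then iterates only over `[first, last)`, so the cost of a redraw depends on the viewport size, not on the library size.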
947c73578c Merge branch 'master' into asset-browser-grid-view 2022-02-11 18:18:59 +01:00
f41368cf02 Fix null-reference use 2022-02-11 18:18:15 +01:00
f29fa9895f Bring back editor pulldown menus, get operators to work, T for nav-bar
- Adds the View, Edit & Select pulldown menus
- Makes asset operators work (e.g. catalog management)
- Correct notifiers (fixing missing redraws)
- Add T shortcut to toggle toolbar
2022-02-11 18:13:48 +01:00
ff213c802c Fix failing assert when splitting new asset browser in 2
This is an issue in master already, but for many layouts it just isn't
triggered. Pretty sure the assert just shouldn't be executed in extreme
cases where there is no width for the layout and max-sizes become < 0.
2022-02-09 19:35:07 +01:00
5fcf6822dd Reduce brightness of grid item highlight
Also show theme settings for the view item in the Theme Preferences.
Called it "Data-View Item" for now, to be evaluated.
2022-02-09 19:33:58 +01:00
9530fb60ad Fix crash when loading files with asset browser open
Some fun with static memory. For a while now, the asset-library service
has been destructed when loading a file. For the old asset browser
that wasn't a problem, since its storage was recreated from scratch. But
the new asset browser accesses the global asset library storage of the
asset system, which is static and thus stays alive when a different file
is loaded.

For now just destruct the global asset library storage when loading
a new file.
2022-02-09 19:01:12 +01:00
86ea1ad5df Support reading and writing asset browsers from/to files 2022-02-09 18:53:21 +01:00
2011d1f6d0 Various GUI tweaks
- Tweak tile size to match previous Asset Browser better
- Reduce margins between preview tiles (grid view was ignoring `align`
  parameter).
- Improve placement of text below previews (allowing previews to be
  slightly bigger).
- Tweak margins of main asset browser layout.
2022-02-09 17:59:01 +01:00
f17ea3da02 Add theme colors for view items 2022-02-09 15:04:27 +01:00
400d7235c3 Mouse hover highlight for grid items
Like the tree-view rows, grid items use an overlapping layout to draw
the background and a custom layout on top. There is a new dedicated
button type for the grid view items.
Adds some related bits needed for persistent view-item state storage.
Also, a bunch of the code for the grid item button type could be shared with
the tree-row one, but I prefer doing that separately.
2022-02-09 14:41:11 +01:00
29fdd43605 Get changes in catalog tree to update the UI correctly
The catalog tree-view now sends appropriate notifiers and messages when
changing data. Either itself or other UIs (like the main asset browser
region) can then listen to these and update.
I try to design this in a way that the views become independent of the
editor displaying them, so the views can be reused in multiple places.
2022-02-08 16:29:49 +01:00
eab2a8479a Move asset catalog filtering to editors/assets
Such general asset-view functionality can go to the general editors
level; I think that makes the most sense.
2022-02-07 18:58:25 +01:00
ee013f44b5 Show asset catalog tree-view in navigation sidebar
Selecting catalogs doesn't work yet.
Includes some temporary changes needed to avoid conflicts between the old
File/Asset Browser and the new Asset Browser code.
2022-02-07 17:18:08 +01:00
f20814892a Merge branch 'master' into asset-browser-grid-view 2022-02-07 12:20:07 +01:00
39e3580065 Enforce a fixed column number for the grid layout
Avoid stretching items to fit the layout; they should always have a
fixed size.
2022-02-07 12:17:17 +01:00
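Deriving a fixed column count from the available width, rather than stretching tiles to fill each row, can be sketched as follows. The function and its parameters are illustrative only:

```cpp
#include <algorithm>
#include <cassert>

// Number of fixed-width tiles that fit in the region. Tiles that don't
// fit simply wrap to the next row; leftover space stays empty instead
// of stretching the tiles.
int fixed_column_count(int region_width, int tile_width, int margin)
{
  const int usable = region_width - 2 * margin;
  // Always keep at least one column, even in very narrow regions.
  return std::max(1, usable / tile_width);
}
```

Clamping to a minimum of one column also avoids the degenerate zero-width layouts that the earlier assert fixes in this branch deal with.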
d2c4918d77 Merge branch 'master' into asset-browser-grid-view 2022-02-07 11:13:31 +01:00
11d0c91ba5 Merge branch 'master' into asset-browser-grid-view 2022-02-04 16:58:04 +01:00
925b82efb0 Basic grid layout with big preview tiles
Draws the asset previews in a grid layout, similar to the "old" Asset
Browser. No interactivity is supported yet.
The layout is managed through the grid-view.
2022-02-04 15:18:40 +01:00
3df2e4e888 Display a basic list of assets, set up scrolling and notifier listening
* Display a non-interactive list of assets, updates as assets get
  loaded.
* Notifier listening happens via the view, so the grid-view supports
  listening to notifiers. This is what triggers regular redraws as
  assets get loaded.
* Scrolling may need more fine-tuning, so that scroll-bars are
  hidden if they don't apply (e.g. don't show a horizontal scroll-bar
  for the vertically expanding grid-view layout).
2022-02-03 15:36:05 +01:00
a54bd5fe19 Merge branch 'master' into asset-browser-grid-view 2022-01-31 23:51:12 +01:00
4b43bd820e Merge branch 'master' into asset-browser-grid-view 2022-01-31 23:45:08 +01:00
b8b7b0af70 Add basic asset library loading, general cleanups
Adds an asset library selector and prints the list of assets when drawing the
Asset Browser main window.
2022-01-31 23:39:29 +01:00
b3597b1128 Add Asset Browser as own editor
This is part of a (tentative) plan to split off the Asset Browser from
the File Browser, while adding a new grid-view API that generalizes most
of the UI. Both editors can use this and only have to implement their
case-specific logic. This then allows us to add a proper asset system
that's not tied to the file browser backend anymore.
2022-01-28 19:27:37 +01:00
2892 changed files with 182561 additions and 122873 deletions

.clang-format

@@ -266,12 +266,11 @@ ForEachMacros:
 - MAP_SLOT_PROBING_BEGIN
 - VECTOR_SET_SLOT_PROBING_BEGIN
 - WL_ARRAY_FOR_EACH
-- FOREACH_SPECTRUM_CHANNEL
 
 StatementMacros:
 - PyObject_HEAD
 - PyObject_VAR_HEAD
 - ccl_gpu_kernel_postfix
 
-MacroBlockBegin: "^OSL_CLOSURE_STRUCT_BEGIN$"
-MacroBlockEnd: "^OSL_CLOSURE_STRUCT_END$"
+MacroBlockBegin: "^BSDF_CLOSURE_CLASS_BEGIN$"
+MacroBlockEnd: "^BSDF_CLOSURE_CLASS_END$"

CMakeLists.txt

@@ -1,12 +1,11 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright 2006 Blender Foundation. All rights reserved.
 
-# -----------------------------------------------------------------------------
-# Early Initialization
-# NOTE: We don't allow in-source builds. This causes no end of troubles because
+#-----------------------------------------------------------------------------
+# We don't allow in-source builds. This causes no end of troubles because
 # all out-of-source builds will use the CMakeCache.txt file there and even
 # build the libs and objects in it.
 if(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
   if(NOT DEFINED WITH_IN_SOURCE_BUILD)
     message(FATAL_ERROR
@@ -26,6 +25,13 @@ endif()
 cmake_minimum_required(VERSION 3.10)
 
+# Prefer LEGACY OpenGL to be compatible with all the existing releases and
+# platforms which don't have GLVND yet. Only do it if preference was not set
+# externally.
+if(NOT DEFINED OpenGL_GL_PREFERENCE)
+  set(OpenGL_GL_PREFERENCE "LEGACY")
+endif()
+
 if(NOT EXECUTABLE_OUTPUT_PATH)
   set(FIRST_RUN TRUE)
 else()
@@ -36,7 +42,7 @@ endif()
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/build_files/cmake/Modules")
 list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/build_files/cmake/platform")
 
-# Avoid having an empty `CMAKE_BUILD_TYPE`.
+# avoid having empty buildtype
 if(NOT DEFINED CMAKE_BUILD_TYPE_INIT)
   set(CMAKE_BUILD_TYPE_INIT "Release")
   # Internal logic caches this variable, avoid showing it by default
@@ -60,8 +66,7 @@ set_property(DIRECTORY APPEND PROPERTY COMPILE_DEFINITIONS
   $<$<CONFIG:RelWithDebInfo>:NDEBUG>
 )
 
-
-# -----------------------------------------------------------------------------
+#-----------------------------------------------------------------------------
 # Set policy
 # see "cmake --help-policy CMP0003"
@@ -91,16 +96,13 @@ endif()
 if(POLICY CMP0087)
   cmake_policy(SET CMP0087 NEW)
 endif()
 
-# -----------------------------------------------------------------------------
-# Load Blender's Local Macros
+#-----------------------------------------------------------------------------
+# Load some macros.
 include(build_files/cmake/macros.cmake)
 
-# -----------------------------------------------------------------------------
-# Initialize Project
+#-----------------------------------------------------------------------------
+# Initialize project.
 
 blender_project_hack_pre()
@@ -110,15 +112,8 @@ blender_project_hack_post()
 
 enable_testing()
 
-# -----------------------------------------------------------------------------
-# Test Compiler/Library Features
-
-include(build_files/cmake/have_features.cmake)
-
-# -----------------------------------------------------------------------------
-# Redirect Output Files
+#-----------------------------------------------------------------------------
+# Redirect output files
 
 set(EXECUTABLE_OUTPUT_PATH ${CMAKE_BINARY_DIR}/bin CACHE INTERNAL "" FORCE)
 set(LIBRARY_OUTPUT_PATH ${CMAKE_BINARY_DIR}/lib CACHE INTERNAL "" FORCE)
@@ -130,15 +125,14 @@ else()
 set(TESTS_OUTPUT_DIR ${EXECUTABLE_OUTPUT_PATH}/tests/ CACHE INTERNAL "" FORCE)
 endif()
 
-# -----------------------------------------------------------------------------
-# Set Default Configuration Options
+#-----------------------------------------------------------------------------
+# Set default config options
 
 get_blender_version()
 
-# -----------------------------------------------------------------------------
-# Declare Options
+#-----------------------------------------------------------------------------
+# Options
 
 # Blender internal features
 option(WITH_BLENDER "Build blender (disable to build only the blender player)" ON)
@@ -164,6 +158,9 @@ mark_as_advanced(WITH_PYTHON_SECURITY) # some distributions see this as a secur
 option(WITH_PYTHON_SAFETY "Enable internal API error checking to track invalid data to prevent crash on access (at the expense of some efficiency, only enable for development)." OFF)
 mark_as_advanced(WITH_PYTHON_SAFETY)
 option(WITH_PYTHON_MODULE "Enable building as a python module which runs without a user interface, like running regular blender in background mode (experimental, only enable for development), installs to PYTHON_SITE_PACKAGES (or CMAKE_INSTALL_PREFIX if WITH_INSTALL_PORTABLE is enabled)." OFF)
+if(APPLE)
+  option(WITH_PYTHON_FRAMEWORK "Enable building using the Python available in the framework (OSX only)" OFF)
+endif()
 
 option(WITH_BUILDINFO "Include extra build details (only disable for development & faster builds)" ON)
 set(BUILDINFO_OVERRIDE_DATE "" CACHE STRING "Use instead of the current date for reproducible builds (empty string disables this option)")
@@ -199,7 +196,7 @@ endif()
 option(WITH_GMP "Enable features depending on GMP (Exact Boolean)" ON)
 
 # Compositor
-option(WITH_COMPOSITOR_CPU "Enable the tile based CPU nodal compositor" ON)
+option(WITH_COMPOSITOR "Enable the tile based nodal compositor" ON)
 option(WITH_OPENIMAGEDENOISE "Enable the OpenImageDenoise compositing node" ON)
 option(WITH_OPENSUBDIV "Enable OpenSubdiv for surface subdivision" ON)
@@ -226,7 +223,7 @@ if(UNIX AND NOT (APPLE OR HAIKU))
   option(WITH_GHOST_WAYLAND "Enable building Blender against Wayland for windowing (under development)" OFF)
   mark_as_advanced(WITH_GHOST_WAYLAND)
 
-  if(WITH_GHOST_WAYLAND)
+  if (WITH_GHOST_WAYLAND)
     option(WITH_GHOST_WAYLAND_LIBDECOR "Optionally build with LibDecor window decorations" OFF)
     mark_as_advanced(WITH_GHOST_WAYLAND_LIBDECOR)
@@ -265,6 +262,7 @@ if(WITH_GHOST_X11)
   option(WITH_X11_XINPUT "Enable X11 Xinput (tablet support and unicode input)" ON)
   option(WITH_X11_XF86VMODE "Enable X11 video mode switching" ON)
   option(WITH_X11_XFIXES "Enable X11 XWayland cursor warping workaround" ON)
+  option(WITH_X11_ALPHA "Enable X11 transparent background" ON)
 endif()
 
 if(UNIX AND NOT APPLE)
@@ -368,7 +366,7 @@ if(WIN32 OR APPLE)
 endif()
 option(WITH_INPUT_NDOF "Enable NDOF input devices (SpaceNavigator and friends)" ON)
 if(UNIX AND NOT APPLE)
-  option(WITH_INSTALL_PORTABLE "Install redistributable runtime, otherwise install into CMAKE_INSTALL_PREFIX" ON)
+  option(WITH_INSTALL_PORTABLE "Install redistributeable runtime, otherwise install into CMAKE_INSTALL_PREFIX" ON)
   option(WITH_STATIC_LIBS "Try to link with static libraries, as much as possible, to make blender more portable across distributions" OFF)
   if(WITH_STATIC_LIBS)
     option(WITH_BOOST_ICU "Boost uses ICU library (required for linking with static Boost built with libicu)." OFF)
@@ -439,16 +437,10 @@ if(NOT APPLE)
   option(WITH_CYCLES_CUBIN_COMPILER "Build cubins with nvrtc based compiler instead of nvcc" OFF)
   option(WITH_CYCLES_CUDA_BUILD_SERIAL "Build cubins one after another (useful on machines with limited RAM)" OFF)
   option(WITH_CUDA_DYNLOAD "Dynamically load CUDA libraries at runtime (for developers, makes cuda-gdb work)" ON)
-  set(OPTIX_ROOT_DIR "" CACHE PATH "Path to the OptiX SDK root directory, for building Cycles OptiX kernels.")
-  set(CYCLES_RUNTIME_OPTIX_ROOT_DIR "" CACHE PATH "Path to the OptiX SDK root directory. When set, this path will be used at runtime to compile OptiX kernels.")
 
   mark_as_advanced(CYCLES_CUDA_BINARIES_ARCH)
   mark_as_advanced(WITH_CYCLES_CUBIN_COMPILER)
   mark_as_advanced(WITH_CYCLES_CUDA_BUILD_SERIAL)
   mark_as_advanced(WITH_CUDA_DYNLOAD)
-  mark_as_advanced(OPTIX_ROOT_DIR)
-  mark_as_advanced(CYCLES_RUNTIME_OPTIX_ROOT_DIR)
 endif()
 
 # AMD HIP
@@ -472,8 +464,8 @@ if(NOT APPLE)
   option(WITH_CYCLES_ONEAPI_SYCL_HOST_ENABLED "Enable use of SYCL host (CPU) device execution by oneAPI implementation. This option is for debugging purposes and impacts GPU execution." OFF)
 
   # https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-and-reference/top/compilation/ahead-of-time-compilation.html
-  set(CYCLES_ONEAPI_SPIR64_GEN_DEVICES "dg2" CACHE STRING "oneAPI Intel GPU architectures to build binaries for")
-  set(CYCLES_ONEAPI_SYCL_TARGETS spir64 spir64_gen CACHE STRING "oneAPI targets to build AOT binaries for")
+  SET (CYCLES_ONEAPI_SPIR64_GEN_DEVICES "dg2" CACHE STRING "oneAPI Intel GPU architectures to build binaries for")
+  SET (CYCLES_ONEAPI_SYCL_TARGETS spir64 spir64_gen CACHE STRING "oneAPI targets to build AOT binaries for")
   mark_as_advanced(WITH_CYCLES_ONEAPI_SYCL_HOST_ENABLED)
   mark_as_advanced(CYCLES_ONEAPI_SPIR64_GEN_DEVICES)
@@ -544,27 +536,71 @@ endif()
 
 # OpenGL
 
+# Experimental EGL option.
+option(WITH_GL_EGL "Use the EGL OpenGL system library instead of the platform specific OpenGL system library (CGL, GLX or WGL)" OFF)
+mark_as_advanced(WITH_GL_EGL)
+
+if(WITH_GHOST_WAYLAND)
+  # Wayland can only use EGL to create OpenGL contexts, not GLX.
+  set(WITH_GL_EGL ON)
+endif()
+
+if(UNIX AND NOT APPLE)
+  if(WITH_GL_EGL)
+    # GLEW can only be built with either GLX or EGL support. Most binary distributions are
+    # built with GLX support and we have no automated way to detect this. So always build
+    # GLEW from source to be sure it has EGL support.
+    set(WITH_SYSTEM_GLEW OFF)
+  else()
+    option(WITH_SYSTEM_GLEW "Use GLEW OpenGL wrapper library provided by the operating system" OFF)
+  endif()
+
+  option(WITH_SYSTEM_GLES "Use OpenGL ES library provided by the operating system" ON)
+else()
+  # System GLEW and GLES not an option on other platforms.
+  set(WITH_SYSTEM_GLEW OFF)
+  set(WITH_SYSTEM_GLES OFF)
+endif()
+
 option(WITH_OPENGL "When off limits visibility of the opengl headers to just bf_gpu and gawain (temporary option for development purposes)" ON)
+option(WITH_GLEW_ES "Switches to experimental copy of GLEW that has support for OpenGL ES. (temporary option for development purposes)" OFF)
+option(WITH_GL_PROFILE_ES20 "Support using OpenGL ES 2.0. (through either EGL or the AGL/WGL/XGL 'es20' profile)" OFF)
 option(WITH_GPU_BUILDTIME_SHADER_BUILDER "Shader builder is a developer option enabling linting on GLSL during compilation" OFF)
 
 mark_as_advanced(
   WITH_OPENGL
+  WITH_GLEW_ES
+  WITH_GL_PROFILE_ES20
   WITH_GPU_BUILDTIME_SHADER_BUILDER
 )
 
+if(WITH_HEADLESS)
+  set(WITH_OPENGL OFF)
+endif()
+
 # Metal
-if(APPLE)
+if (APPLE)
   option(WITH_METAL_BACKEND "Use Metal for graphics instead of (or as well as) OpenGL on macOS." OFF)
   mark_as_advanced(WITH_METAL_BACKEND)
 else()
   set(WITH_METAL_BACKEND OFF)
 endif()
 
-if(WITH_METAL_BACKEND)
+if (WITH_METAL_BACKEND)
   set(CMAKE_OSX_DEPLOYMENT_TARGET "10.15" CACHE STRING "Minimum OS X deployment version" FORCE)
 endif()
 
+if(WIN32)
+  option(WITH_GL_ANGLE "Link with the ANGLE library, an OpenGL ES 2.0 implementation based on Direct3D, instead of the system OpenGL library." OFF)
+  mark_as_advanced(WITH_GL_ANGLE)
+endif()
+
+if(WITH_GLEW_ES AND WITH_SYSTEM_GLEW)
+  message(WARNING Ignoring WITH_SYSTEM_GLEW and using WITH_GLEW_ES)
+  set(WITH_SYSTEM_GLEW OFF)
+endif()
+
 if(WIN32)
   getDefaultWindowsPrefixBase(CMAKE_GENERIC_PROGRAM_FILES)
   set(CPACK_INSTALL_PREFIX ${CMAKE_GENERIC_PROGRAM_FILES}/${})
@@ -741,13 +777,6 @@ if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
   endif()
 endif()
 
-# Effective install path including config folder, as a generator expression.
-get_property(GENERATOR_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG)
-if(GENERATOR_IS_MULTI_CONFIG)
-  string(REPLACE "\${BUILD_TYPE}" "$<CONFIG>" CMAKE_INSTALL_PREFIX_WITH_CONFIG ${CMAKE_INSTALL_PREFIX})
-else()
-  string(REPLACE "\${BUILD_TYPE}" "" CMAKE_INSTALL_PREFIX_WITH_CONFIG ${CMAKE_INSTALL_PREFIX})
-endif()
 
 # Apple
@@ -757,8 +786,8 @@ if(APPLE)
 endif()
 
-# -----------------------------------------------------------------------------
-# Check for Conflicting/Unsupported Configurations
+#-----------------------------------------------------------------------------
+# Check for conflicting/unsupported configurations
 
 if(NOT WITH_BLENDER AND NOT WITH_CYCLES_STANDALONE AND NOT WITH_CYCLES_HYDRA_RENDER_DELEGATE)
   message(FATAL_ERROR
@@ -868,6 +897,7 @@ if(WITH_GHOST_SDL OR WITH_HEADLESS)
   set(WITH_X11_XINPUT OFF)
   set(WITH_X11_XF86VMODE OFF)
   set(WITH_X11_XFIXES OFF)
+  set(WITH_X11_ALPHA OFF)
   set(WITH_GHOST_XDND OFF)
   set(WITH_INPUT_IME OFF)
   set(WITH_XR_OPENXR OFF)
@@ -892,11 +922,7 @@ endif()
 if(WITH_CYCLES AND WITH_CYCLES_DEVICE_CUDA AND NOT WITH_CUDA_DYNLOAD)
   find_package(CUDA)
   if(NOT CUDA_FOUND)
-    message(
-      STATUS
-      "CUDA toolkit not found, "
-      "using dynamic runtime loading of libraries (WITH_CUDA_DYNLOAD) instead"
-    )
+    message(STATUS "CUDA toolkit not found, using dynamic runtime loading of libraries (WITH_CUDA_DYNLOAD) instead")
     set(WITH_CUDA_DYNLOAD ON)
   endif()
 endif()
@@ -906,16 +932,14 @@ if(WITH_CYCLES_DEVICE_HIP)
   set(WITH_HIP_DYNLOAD ON)
 endif()
 
-# -----------------------------------------------------------------------------
-# Check if Sub-modules are Cloned
+#-----------------------------------------------------------------------------
+# Check if submodules are cloned.
 
 if(WITH_INTERNATIONAL)
   file(GLOB RESULT "${CMAKE_SOURCE_DIR}/release/datafiles/locale")
   list(LENGTH RESULT DIR_LEN)
   if(DIR_LEN EQUAL 0)
-    message(
-      WARNING
+    message(WARNING
       "Translation path '${CMAKE_SOURCE_DIR}/release/datafiles/locale' is missing, "
       "This is a 'git submodule', which are known not to work with bridges to other version "
       "control systems, disabling 'WITH_INTERNATIONAL'."
@@ -933,17 +957,13 @@ if(WITH_PYTHON)
   # because UNIX will search for the old Python paths which may not exist.
   # giving errors about missing paths before this case is met.
   if(DEFINED PYTHON_VERSION AND "${PYTHON_VERSION}" VERSION_LESS "3.10")
-    message(
-      FATAL_ERROR
-      "At least Python 3.10 is required to build, but found Python ${PYTHON_VERSION}"
-    )
+    message(FATAL_ERROR "At least Python 3.10 is required to build, but found Python ${PYTHON_VERSION}")
   endif()
 
   file(GLOB RESULT "${CMAKE_SOURCE_DIR}/release/scripts/addons")
   list(LENGTH RESULT DIR_LEN)
   if(DIR_LEN EQUAL 0)
-    message(
-      WARNING
+    message(WARNING
       "Addons path '${CMAKE_SOURCE_DIR}/release/scripts/addons' is missing, "
       "This is a 'git submodule', which are known not to work with bridges to other version "
       "control systems: * CONTINUING WITHOUT ADDONS *"
@@ -951,9 +971,8 @@ if(WITH_PYTHON)
   endif()
 endif()
 
-# -----------------------------------------------------------------------------
-# Initialize Un-cached Vars, Avoid Unused Warning
+#-----------------------------------------------------------------------------
+# Initialize un-cached vars, avoid unused warning
 
 # linux only, not cached
 set(WITH_BINRELOC OFF)
@@ -1028,7 +1047,6 @@ if(WITH_CPU_SIMD)
   endif()
 endif()
 
 # ----------------------------------------------------------------------------
 # Main Platform Checks
 #
@@ -1044,9 +1062,8 @@ elseif(APPLE)
   include(platform_apple)
 endif()
 
-# -----------------------------------------------------------------------------
-# Common Checks for Compatible Options
+#-----------------------------------------------------------------------------
+# Common.
 
 if(NOT WITH_FFTW3 AND WITH_MOD_OCEANSIM)
   message(FATAL_ERROR "WITH_MOD_OCEANSIM requires WITH_FFTW3 to be ON")
@@ -1054,15 +1071,13 @@ endif()
 if(WITH_CYCLES)
   if(NOT WITH_OPENIMAGEIO)
-    message(
-      FATAL_ERROR
+    message(FATAL_ERROR
       "Cycles requires WITH_OPENIMAGEIO, the library may not have been found. "
       "Configure OIIO or disable WITH_CYCLES"
     )
   endif()
   if(NOT WITH_BOOST)
-    message(
-      FATAL_ERROR
+    message(FATAL_ERROR
       "Cycles requires WITH_BOOST, the library may not have been found. "
       "Configure BOOST or disable WITH_CYCLES"
     )
@@ -1070,8 +1085,7 @@ if(WITH_CYCLES)
   if(WITH_CYCLES_OSL)
     if(NOT WITH_LLVM)
-      message(
-        FATAL_ERROR
+      message(FATAL_ERROR
         "Cycles OSL requires WITH_LLVM, the library may not have been found. "
         "Configure LLVM or disable WITH_CYCLES_OSL"
       )
@@ -1081,8 +1095,7 @@ endif()
 if(WITH_INTERNATIONAL)
   if(NOT WITH_BOOST)
-    message(
-      FATAL_ERROR
+    message(FATAL_ERROR
       "Internationalization requires WITH_BOOST, the library may not have been found. "
       "Configure BOOST or disable WITH_INTERNATIONAL"
     )
@@ -1183,8 +1196,7 @@ if(WITH_OPENVDB)
   list(APPEND OPENVDB_INCLUDE_DIRS
     ${BOOST_INCLUDE_DIR}
     ${TBB_INCLUDE_DIRS}
-    ${OPENEXR_INCLUDE_DIRS}
-  )
+    ${OPENEXR_INCLUDE_DIRS})
   list(APPEND OPENVDB_LIBRARIES ${OPENEXR_LIBRARIES} ${ZLIB_LIBRARIES})
@@ -1196,19 +1208,145 @@ if(WITH_OPENVDB)
   list(APPEND OPENVDB_LIBRARIES ${BOOST_LIBRARIES} ${TBB_LIBRARIES})
 endif()
 
-# -----------------------------------------------------------------------------
-# Configure OpenGL
+#-----------------------------------------------------------------------------
+# Configure OpenGL.
+
+find_package(OpenGL)
+blender_include_dirs_sys("${OPENGL_INCLUDE_DIR}")
 
 if(WITH_OPENGL)
   add_definitions(-DWITH_OPENGL)
 endif()
 
+if(WITH_SYSTEM_GLES)
+  find_package_wrapper(OpenGLES)
+endif()
+
+if(WITH_GL_PROFILE_ES20)
+  if(WITH_SYSTEM_GLES)
+    if(NOT OPENGLES_LIBRARY)
+      message(FATAL_ERROR
+        "Unable to find OpenGL ES libraries. "
+        "Install them or disable WITH_SYSTEM_GLES."
+      )
+    endif()
+    list(APPEND BLENDER_GL_LIBRARIES "${OPENGLES_LIBRARY}")
+  else()
+    set(OPENGLES_LIBRARY "" CACHE FILEPATH "OpenGL ES 2.0 library file")
+    mark_as_advanced(OPENGLES_LIBRARY)
+    list(APPEND BLENDER_GL_LIBRARIES "${OPENGLES_LIBRARY}")
+    if(NOT OPENGLES_LIBRARY)
+      message(FATAL_ERROR
+        "To compile WITH_GL_EGL you need to set OPENGLES_LIBRARY "
+        "to the file path of an OpenGL ES 2.0 library."
+      )
+    endif()
+  endif()
+
+  if(WIN32)
+    # Setup paths to files needed to install and redistribute Windows Blender with OpenGL ES
+    set(OPENGLES_DLL "" CACHE FILEPATH "OpenGL ES 2.0 redistributable DLL file")
+    mark_as_advanced(OPENGLES_DLL)
+    if(NOT OPENGLES_DLL)
+      message(FATAL_ERROR
+        "To compile WITH_GL_PROFILE_ES20 you need to set OPENGLES_DLL to the file "
+        "path of an OpenGL ES 2.0 runtime dynamic link library (DLL)."
+      )
+    endif()
+
+    if(WITH_GL_ANGLE)
+      list(APPEND GL_DEFINITIONS -DWITH_ANGLE)
+      set(D3DCOMPILER_DLL "" CACHE FILEPATH "Direct3D Compiler redistributable DLL file (needed by ANGLE)")
+      get_filename_component(D3DCOMPILER_FILENAME "${D3DCOMPILER_DLL}" NAME)
+      list(APPEND GL_DEFINITIONS "-DD3DCOMPILER=\"\\\"${D3DCOMPILER_FILENAME}\\\"\"")
+      mark_as_advanced(D3DCOMPILER_DLL)
+      if(D3DCOMPILER_DLL STREQUAL "")
+        message(FATAL_ERROR
+          "To compile WITH_GL_ANGLE you need to set D3DCOMPILER_DLL to the file "
+          "path of a copy of the DirectX redistributable DLL file: D3DCompiler_46.dll"
+        )
+      endif()
+    endif()
+  endif()
+else()
+  if(OpenGL_GL_PREFERENCE STREQUAL "LEGACY" AND OPENGL_gl_LIBRARY)
+    list(APPEND BLENDER_GL_LIBRARIES ${OPENGL_gl_LIBRARY})
+  else()
+    list(APPEND BLENDER_GL_LIBRARIES ${OPENGL_opengl_LIBRARY} ${OPENGL_glx_LIBRARY})
+  endif()
+endif()
+
+if(WITH_GL_EGL)
+  find_package(OpenGL REQUIRED EGL)
+  list(APPEND BLENDER_GL_LIBRARIES OpenGL::EGL)
+  list(APPEND GL_DEFINITIONS -DWITH_GL_EGL -DGLEW_EGL -DGLEW_INC_EGL)
+
+  if(WITH_SYSTEM_GLES)
+    if(NOT OPENGLES_EGL_LIBRARY)
+      message(FATAL_ERROR
+        "Unable to find OpenGL ES libraries. "
+        "Install them or disable WITH_SYSTEM_GLES."
+      )
+    endif()
+    list(APPEND BLENDER_GL_LIBRARIES ${OPENGLES_EGL_LIBRARY})
+  else()
+    set(OPENGLES_EGL_LIBRARY "" CACHE FILEPATH "EGL library file")
+    mark_as_advanced(OPENGLES_EGL_LIBRARY)
+    list(APPEND BLENDER_GL_LIBRARIES "${OPENGLES_LIBRARY}" "${OPENGLES_EGL_LIBRARY}")
+    if(NOT OPENGLES_EGL_LIBRARY)
+      message(FATAL_ERROR
+        "To compile WITH_GL_EGL you need to set OPENGLES_EGL_LIBRARY "
+        "to the file path of an EGL library."
+      )
+    endif()
+  endif()
+
+  if(WIN32)
+    # Setup paths to files needed to install and redistribute Windows Blender with OpenGL ES
+    set(OPENGLES_EGL_DLL "" CACHE FILEPATH "EGL redistributable DLL file")
+    mark_as_advanced(OPENGLES_EGL_DLL)
+    if(NOT OPENGLES_EGL_DLL)
+      message(FATAL_ERROR
+        "To compile WITH_GL_EGL you need to set OPENGLES_EGL_DLL "
+        "to the file path of an EGL runtime dynamic link library (DLL)."
+      )
+    endif()
+  endif()
+endif()
+
+if(WITH_GL_PROFILE_ES20)
+  list(APPEND GL_DEFINITIONS -DWITH_GL_PROFILE_ES20)
+else()
+  list(APPEND GL_DEFINITIONS -DWITH_GL_PROFILE_CORE)
+endif()
+
-# -----------------------------------------------------------------------------
-# Configure Metal
-
-if(WITH_METAL_BACKEND)
+#-----------------------------------------------------------------------------
+# Configure Metal.
+if (WITH_METAL_BACKEND)
   add_definitions(-DWITH_METAL_BACKEND)
 
   # No need to add frameworks here, all the ones we need for Metal and
@@ -1216,10 +1354,8 @@ if(WITH_METAL_BACKEND)
   # build_files/cmake/platform/platform_apple.cmake
 endif()
 
-# -----------------------------------------------------------------------------
-# Configure OpenMP
+#-----------------------------------------------------------------------------
+# Configure OpenMP.
 if(WITH_OPENMP)
   if(NOT OPENMP_CUSTOM)
     find_package(OpenMP)
@@ -1251,8 +1387,67 @@ if(WITH_OPENMP)
   )
 endif()
 
-# -----------------------------------------------------------------------------
+#-----------------------------------------------------------------------------
+# Configure GLEW
+
+if(WITH_SYSTEM_GLEW)
+  find_package(GLEW)
+
+  # Note: There is an assumption here that the system GLEW is not a static library.
+
+  if(NOT GLEW_FOUND)
+    message(FATAL_ERROR "GLEW is required to build Blender. Install it or disable WITH_SYSTEM_GLEW.")
+  endif()
+
+  set(GLEW_INCLUDE_PATH "${GLEW_INCLUDE_DIR}")
+  set(BLENDER_GLEW_LIBRARIES ${GLEW_LIBRARY})
+else()
+  if(WITH_GLEW_ES)
+    set(GLEW_INCLUDE_PATH "${CMAKE_SOURCE_DIR}/extern/glew-es/include")
+    list(APPEND GL_DEFINITIONS -DGLEW_STATIC -DWITH_GLEW_ES)
+
+    # These definitions remove APIs from glew.h, making GLEW smaller, and catching unguarded API usage
+    if(WITH_GL_PROFILE_ES20)
+      list(APPEND GL_DEFINITIONS -DGLEW_ES_ONLY)
+    else()
+      # No ES functions are needed
+      list(APPEND GL_DEFINITIONS -DGLEW_NO_ES)
+    endif()
+
+    if(WITH_GL_PROFILE_ES20)
+      if(WITH_GL_EGL)
+        list(APPEND GL_DEFINITIONS -DGLEW_USE_LIB_ES20)
+      endif()
+
+      # ToDo: This is an experiment to eliminate ES 1 symbols,
+      # GLEW doesn't really properly provide this level of control
+      # (for example, without modification it eliminates too many symbols)
+      # so there are lots of modifications to GLEW to make this work,
+      # and no attempt to make it work beyond Blender at this point.
+      list(APPEND GL_DEFINITIONS -DGL_ES_VERSION_1_0=0 -DGL_ES_VERSION_CL_1_1=0 -DGL_ES_VERSION_CM_1_1=0)
+    endif()
+
+    set(BLENDER_GLEW_LIBRARIES extern_glew_es bf_intern_glew_mx)
+  else()
+    set(GLEW_INCLUDE_PATH "${CMAKE_SOURCE_DIR}/extern/glew/include")
+    list(APPEND GL_DEFINITIONS -DGLEW_STATIC)
+
+    # This won't affect the non-experimental glew library,
+    # but is used for conditional compilation elsewhere.
+    list(APPEND GL_DEFINITIONS -DGLEW_NO_ES)
+
+    set(BLENDER_GLEW_LIBRARIES extern_glew)
+  endif()
+endif()
+
+list(APPEND GL_DEFINITIONS -DGLEW_NO_GLU)
+
+#-----------------------------------------------------------------------------
 # Configure Bullet
 
 if(WITH_BULLET AND WITH_SYSTEM_BULLET)
@@ -1266,17 +1461,15 @@ else()
   # set(BULLET_LIBRARIES "")
 endif()
 
-# -----------------------------------------------------------------------------
-# Configure Python
+#-----------------------------------------------------------------------------
+# Configure Python.
 
 if(WITH_PYTHON_MODULE)
   add_definitions(-DPy_ENABLE_SHARED)
 endif()
 
-# -----------------------------------------------------------------------------
-# Configure `GLog/GFlags`
+#-----------------------------------------------------------------------------
+# Configure GLog/GFlags
 
 if(WITH_LIBMV OR WITH_GTESTS OR (WITH_CYCLES AND WITH_CYCLES_LOGGING))
   if(WITH_SYSTEM_GFLAGS)
@@ -1284,7 +1477,7 @@ if(WITH_LIBMV OR WITH_GTESTS OR (WITH_CYCLES AND WITH_CYCLES_LOGGING))
     if(NOT GFLAGS_FOUND)
       message(FATAL_ERROR "System wide Gflags is requested but was not found")
     endif()
-    # `FindGflags` does not define this, and we are not even sure what to use here.
+    # FindGflags does not define this, and we are not even sure what to use here.
     set(GFLAGS_DEFINES)
   else()
     set(GFLAGS_DEFINES
@@ -1302,7 +1495,7 @@ if(WITH_LIBMV OR WITH_GTESTS OR (WITH_CYCLES AND WITH_CYCLES_LOGGING))
     if(NOT GLOG_FOUND)
       message(FATAL_ERROR "System wide Glog is requested but was not found")
     endif()
-    # `FindGlog` does not define this, and we are not even sure what to use here.
+    # FindGlog does not define this, and we are not even sure what to use here.
     set(GLOG_DEFINES)
   else()
     set(GLOG_DEFINES
@@ -1317,13 +1510,9 @@ if(WITH_LIBMV OR WITH_GTESTS OR (WITH_CYCLES AND WITH_CYCLES_LOGGING))
   endif()
 endif()
 
-# -----------------------------------------------------------------------------
-# Ninja Job Limiting
+#-----------------------------------------------------------------------------
 
 # Extra limits to number of jobs running in parallel for some kind os tasks.
 # Only supported by Ninja build system currently.
 if("${CMAKE_GENERATOR}" MATCHES "Ninja" AND WITH_NINJA_POOL_JOBS)
   if(NOT NINJA_MAX_NUM_PARALLEL_COMPILE_JOBS AND
      NOT NINJA_MAX_NUM_PARALLEL_COMPILE_HEAVY_JOBS AND
@@ -1335,8 +1524,7 @@ if("${CMAKE_GENERATOR}" MATCHES "Ninja" AND WITH_NINJA_POOL_JOBS)
     # Note: this gives mem in MB.
     cmake_host_system_information(RESULT _TOT_MEM QUERY TOTAL_PHYSICAL_MEMORY)
-    # Heuristics: the more cores we have, the more free memory we have to keep
-    # for the non-heavy tasks too.
+    # Heuristics... the more cores we have, the more free mem we have to keep for the non-heavy tasks too.
     if(${_TOT_MEM} LESS 8000 AND ${_NUM_CORES} GREATER 2)
       set(_compile_heavy_jobs "1")
     elseif(${_TOT_MEM} LESS 16000 AND ${_NUM_CORES} GREATER 4)
@@ -1356,8 +1544,7 @@ if("${CMAKE_GENERATOR}" MATCHES "Ninja" AND WITH_NINJA_POOL_JOBS)
   mark_as_advanced(NINJA_MAX_NUM_PARALLEL_COMPILE_HEAVY_JOBS)
   set(_compile_heavy_jobs)
 
-  # Only set regular compile jobs if we set heavy jobs,
-  # otherwise default (using all cores) if fine.
+  # Only set regular compile jobs if we set heavy jobs, otherwise default (using all cores) if fine.
   if(NINJA_MAX_NUM_PARALLEL_COMPILE_HEAVY_JOBS)
     math(EXPR _compile_jobs "${_NUM_CORES} - 1")
   else()
@@ -1368,8 +1555,8 @@ if("${CMAKE_GENERATOR}" MATCHES "Ninja" AND WITH_NINJA_POOL_JOBS)
   mark_as_advanced(NINJA_MAX_NUM_PARALLEL_COMPILE_JOBS)
   set(_compile_jobs)
 
-  # In practice, even when there is RAM available,
-  # this proves to be quicker than running in parallel (due to slow disks accesses).
+  # In practice, even when there is RAM available, this proves to be quicker than running in parallel
+  # (due to slow disks accesses).
   set(NINJA_MAX_NUM_PARALLEL_LINK_JOBS "1" CACHE STRING
       "Define the maximum number of concurrent link jobs, for ninja build system." FORCE)
   mark_as_advanced(NINJA_MAX_NUM_PARALLEL_LINK_JOBS)
@@ -1393,9 +1580,8 @@ if("${CMAKE_GENERATOR}" MATCHES "Ninja" AND WITH_NINJA_POOL_JOBS)
   endif()
 endif()
 
-# -----------------------------------------------------------------------------
-# Extra Compile Flags
+#-----------------------------------------------------------------------------
+# Extra compile flags
 
 if(CMAKE_COMPILER_IS_GNUCC)
@@ -1485,7 +1671,7 @@ if(CMAKE_COMPILER_IS_GNUCC)
   endif()
 
-  # ---------------------
+  #----------------------
   # Suppress Strict Flags
   #
   # Exclude the following warnings from this list:
@@ -1551,7 +1737,7 @@ elseif(CMAKE_C_COMPILER_ID MATCHES "Clang")
   # ADD_CHECK_C_COMPILER_FLAG(C_WARNINGS C_WARN_UNUSED_MACROS -Wunused-macros)
   # ADD_CHECK_CXX_COMPILER_FLAG(CXX_WARNINGS CXX_WARN_UNUSED_MACROS -Wunused-macros)
 
-  # ---------------------
+  #----------------------
   # Suppress Strict Flags
 
   # flags to undo strict flags
@@ -1642,8 +1828,7 @@ endif()
 # be most problematic.
 if(WITH_PYTHON)
   if(NOT EXISTS "${PYTHON_INCLUDE_DIR}/Python.h")
-    message(
-      FATAL_ERROR
+    message(FATAL_ERROR
       "Missing: \"${PYTHON_INCLUDE_DIR}/Python.h\",\n"
       "Set the cache entry 'PYTHON_INCLUDE_DIR' to point "
       "to a valid python include path. Containing "
@@ -1651,8 +1836,8 @@ if(WITH_PYTHON)
     )
   endif()
 
-  if(WIN32)
-    # Always use numpy bundled in precompiled libs.
+  if(WIN32 OR APPLE)
+    # Windows and macOS have this bundled with Python libraries.
   elseif((WITH_PYTHON_INSTALL AND WITH_PYTHON_INSTALL_NUMPY) OR WITH_PYTHON_NUMPY)
     if(("${PYTHON_NUMPY_PATH}" STREQUAL "") OR (${PYTHON_NUMPY_PATH} MATCHES NOTFOUND))
       find_python_package(numpy "core/include")
@@ -1660,13 +1845,13 @@ if(WITH_PYTHON)
   endif()
 
   if(WIN32 OR APPLE)
-    # Always copy from precompiled libs.
+    # pass, we have this in lib/python/site-packages
   elseif(WITH_PYTHON_INSTALL_REQUESTS)
     find_python_package(requests "")
   endif()
 
   if(WIN32 OR APPLE)
-    # Always copy from precompiled libs.
+    # pass, we have this in lib/python/site-packages
   elseif(WITH_PYTHON_INSTALL_ZSTANDARD)
     find_python_package(zstandard "")
   endif()
@@ -1712,11 +1897,9 @@ if(WITH_COMPILER_SHORT_FILE_MACRO)
   if(XCODE AND ${XCODE_VERSION} VERSION_LESS 12.0)
     # Developers may have say LLVM Clang-10.0.1 toolchain (which supports the flag)
     # with Xcode-11 (the Clang of which doesn't support the flag).
-    message(
-      WARNING
+    message(WARNING
       "-fmacro-prefix-map flag is NOT supported by Clang shipped with Xcode-${XCODE_VERSION}."
-      " Some Xcode functionality in Product menu may not work. "
-      "Disabling WITH_COMPILER_SHORT_FILE_MACRO."
+      " Some Xcode functionality in Product menu may not work. Disabling WITH_COMPILER_SHORT_FILE_MACRO."
     )
     set(WITH_COMPILER_SHORT_FILE_MACRO OFF)
   endif()
@@ -1732,8 +1915,7 @@ if(WITH_COMPILER_SHORT_FILE_MACRO)
     unset(_bin_dir)
   endif()
 else()
-  message(
-    WARNING
+  message(WARNING
     "-fmacro-prefix-map flag is NOT supported by C/C++ compiler."
     " Disabling WITH_COMPILER_SHORT_FILE_MACRO."
   )
@@ -1763,8 +1945,7 @@ mark_as_advanced(
   LLVM_VERSION
 )
 
-# -------------------------------------------------------------------------------
+#-------------------------------------------------------------------------------
 # Global Defines
 
 # better not set includes here but this debugging option is off by default.
@@ -1780,9 +1961,8 @@ endif()
 # message(STATUS "Using CFLAGS: ${CMAKE_C_FLAGS}")
 # message(STATUS "Using CXXFLAGS: ${CMAKE_CXX_FLAGS}")
 
-# -----------------------------------------------------------------------------
-# Add Sub-Directories
+#-----------------------------------------------------------------------------
+# Libraries
 
 if(WITH_BLENDER)
   add_subdirectory(intern)
@@ -1792,6 +1972,7 @@ if(WITH_BLENDER)
   # internal and external library information first, for test linking
   add_subdirectory(source)
 elseif(WITH_CYCLES_STANDALONE OR WITH_CYCLES_HYDRA_RENDER_DELEGATE)
+  add_subdirectory(intern/glew-mx)
   add_subdirectory(intern/guardedalloc)
   add_subdirectory(intern/libc_compat)
   add_subdirectory(intern/sky)
@@ -1809,43 +1990,38 @@ elseif(WITH_CYCLES_STANDALONE OR WITH_CYCLES_HYDRA_RENDER_DELEGATE)
   if(WITH_HIP_DYNLOAD)
     add_subdirectory(extern/hipew)
   endif()
+  if(NOT WITH_SYSTEM_GLEW)
+    add_subdirectory(extern/glew)
+  endif()
 endif()
 
-# -----------------------------------------------------------------------------
-# Add Testing Directory
+#-----------------------------------------------------------------------------
+# Testing
 
 add_subdirectory(tests)
 
-# -----------------------------------------------------------------------------
-# Add Blender Application
+#-----------------------------------------------------------------------------
+# Blender Application
 if(WITH_BLENDER)
   add_subdirectory(source/creator)
 endif()
 
-# -----------------------------------------------------------------------------
-# Define 'heavy' sub-modules (for Ninja builder when using pools)
+#-----------------------------------------------------------------------------
+# Define 'heavy' submodules (for Ninja builder when using pools).
 setup_heavy_lib_pool()
 
-# -----------------------------------------------------------------------------
+#-----------------------------------------------------------------------------
 # CPack for generating packages
 include(build_files/cmake/packaging.cmake)
 
-# -----------------------------------------------------------------------------
-# Use Dynamic Loading for OpenMP
+#-----------------------------------------------------------------------------
+# Use dynamic loading for OpenMP
 if(WITH_BLENDER)
   openmp_delayload(blender)
 endif()
 
-# -----------------------------------------------------------------------------
+#-----------------------------------------------------------------------------
 # Print Final Configuration
 
 if(FIRST_RUN)
@@ -1909,6 +2085,8 @@ if(FIRST_RUN)
   info_cfg_option(WITH_INSTALL_PORTABLE)
   info_cfg_option(WITH_MEM_JEMALLOC)
   info_cfg_option(WITH_MEM_VALGRIND)
+  info_cfg_option(WITH_SYSTEM_GLEW)
+  info_cfg_option(WITH_X11_ALPHA)
   info_cfg_option(WITH_X11_XF86VMODE)
   info_cfg_option(WITH_X11_XFIXES)
   info_cfg_option(WITH_X11_XINPUT)
@@ -1941,6 +2119,9 @@ if(FIRST_RUN)
   info_cfg_option(WITH_LZO)
 
   info_cfg_text("Python:")
+  if(APPLE)
+    info_cfg_option(WITH_PYTHON_FRAMEWORK)
+  endif()
   info_cfg_option(WITH_PYTHON_INSTALL)
   info_cfg_option(WITH_PYTHON_INSTALL_NUMPY)
   info_cfg_option(WITH_PYTHON_INSTALL_ZSTANDARD)
@@ -1952,6 +2133,14 @@ if(FIRST_RUN)
   info_cfg_option(WITH_MOD_OCEANSIM)
   info_cfg_option(WITH_MOD_REMESH)
 
+  info_cfg_text("OpenGL:")
+  if(WIN32)
+    info_cfg_option(WITH_GL_ANGLE)
+  endif()
+  info_cfg_option(WITH_GL_EGL)
+  info_cfg_option(WITH_GL_PROFILE_ES20)
+  info_cfg_option(WITH_GLEW_ES)
+
   info_cfg_text("")
 
   message("${_config_msg}")

View File

@@ -162,7 +162,6 @@ CPU:=$(shell uname -m)
 # Source and Build DIR's
 BLENDER_DIR:=$(shell pwd -P)
 BUILD_TYPE:=Release
-BLENDER_IS_PYTHON_MODULE:=
 
 # CMake arguments, assigned to local variable to make it mutable.
 CMAKE_CONFIG_ARGS := $(BUILD_CMAKE_ARGS)
@@ -230,18 +229,9 @@ endif
 # -----------------------------------------------------------------------------
-# Additional targets for the build configuration
+# additional targets for the build configuration
 
-# NOTE: These targets can be combined and are applied in reverse order listed here.
-# So it's important that `bpy` comes before `release` (for example)
-# `make bpy release` first loads `release` configuration, then `bpy`.
-# This is important as `bpy` will turn off some settings enabled by release.
-
-ifneq "$(findstring bpy, $(MAKECMDGOALS))" ""
-	BUILD_DIR:=$(BUILD_DIR)_bpy
-	CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/bpy_module.cmake" $(CMAKE_CONFIG_ARGS)
-	BLENDER_IS_PYTHON_MODULE:=1
-endif
-
+# support 'make debug'
 ifneq "$(findstring debug, $(MAKECMDGOALS))" ""
 	BUILD_DIR:=$(BUILD_DIR)_debug
 	BUILD_TYPE:=Debug
@@ -266,6 +256,10 @@ ifneq "$(findstring headless, $(MAKECMDGOALS))" ""
 	BUILD_DIR:=$(BUILD_DIR)_headless
 	CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/blender_headless.cmake" $(CMAKE_CONFIG_ARGS)
 endif
 
+ifneq "$(findstring bpy, $(MAKECMDGOALS))" ""
+	BUILD_DIR:=$(BUILD_DIR)_bpy
+	CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/bpy_module.cmake" $(CMAKE_CONFIG_ARGS)
+endif
+
 ifneq "$(findstring developer, $(MAKECMDGOALS))" ""
 	CMAKE_CONFIG_ARGS:=-C"$(BLENDER_DIR)/build_files/cmake/config/blender_developer.cmake" $(CMAKE_CONFIG_ARGS)
@@ -303,10 +297,8 @@ endif
 # use the default build path can still use utility helpers.
 ifeq ($(OS), Darwin)
 	BLENDER_BIN?="$(BUILD_DIR)/bin/Blender.app/Contents/MacOS/Blender"
-	BLENDER_BIN_DIR?="$(BUILD_DIR)/bin/Blender.app/Contents/MacOS/Blender"
 else
 	BLENDER_BIN?="$(BUILD_DIR)/bin/blender"
-	BLENDER_BIN_DIR?="$(BUILD_DIR)/bin"
 endif
@@ -363,12 +355,8 @@ all: .FORCE
 	@echo Building Blender ...
 	$(BUILD_COMMAND) -C "$(BUILD_DIR)" -j $(NPROCS) install
 	@echo
-	@echo Edit build configuration with: \"$(BUILD_DIR)/CMakeCache.txt\" run make again to rebuild.
-	@if test "$(BLENDER_IS_PYTHON_MODULE)" == ""; then \
-		echo Blender successfully built, run from: $(BLENDER_BIN); \
-	else \
-		echo Blender successfully built as a Python module, \"bpy\" can be imported from: $(BLENDER_BIN_DIR); \
-	fi
+	@echo edit build configuration with: "$(BUILD_DIR)/CMakeCache.txt" run make again to rebuild.
+	@echo Blender successfully built, run from: $(BLENDER_BIN)
 	@echo
 
 debug: all

View File

@@ -53,8 +53,8 @@ include(cmake/imath.cmake)
 include(cmake/openexr.cmake)
 include(cmake/brotli.cmake)
 include(cmake/freetype.cmake)
-include(cmake/epoxy.cmake)
 include(cmake/freeglut.cmake)
+include(cmake/glew.cmake)
 include(cmake/alembic.cmake)
 include(cmake/opensubdiv.cmake)
 include(cmake/sdl.cmake)
@@ -139,7 +139,6 @@ if(NOT WIN32 OR ENABLE_MINGW64)
   include(cmake/vpx.cmake)
   include(cmake/x264.cmake)
   include(cmake/xvidcore.cmake)
-  include(cmake/aom.cmake)
   include(cmake/ffmpeg.cmake)
   include(cmake/fftw.cmake)
   include(cmake/sndfile.cmake)

View File

@@ -42,5 +42,4 @@ endif()
 add_dependencies(
   external_alembic
   external_openexr
-  external_imath
 )

View File

@@ -1,45 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-or-later
-
-if(WIN32)
-  # The default generator on windows is msbuild, which we do not
-  # want to use for this dep, as needs to build with mingw
-  set(AOM_GENERATOR "Ninja")
-  # The default flags are full of MSVC options given this will be
-  # building with mingw, it'll have an unhappy time with that and
-  # we need to clear them out.
-  set(AOM_CMAKE_FLAGS )
-  # CMake will correctly identify phreads being available, however
-  # we do not want to use them, as that gains a dependency on
-  # libpthreadswin.dll which we do not want. when pthreads is not
-  # available oam will use a pthreads emulation layer using win32 threads
-  set(AOM_EXTRA_ARGS_WIN32 -DCMAKE_HAVE_PTHREAD_H=OFF)
-else()
-  set(AOM_GENERATOR "Unix Makefiles")
-  set(AOM_CMAKE_FLAGS ${DEFAULT_CMAKE_FLAGS})
-endif()
-
-set(AOM_EXTRA_ARGS
-  -DENABLE_TESTDATA=OFF
-  -DENABLE_TESTS=OFF
-  -DENABLE_TOOLS=OFF
-  -DENABLE_EXAMPLES=OFF
-  ${AOM_EXTRA_ARGS_WIN32}
-)
-
-# This is slightly different from all other deps in the way that
-# aom uses cmake as a build system, but still needs the environment setup
-# to include perl so we manually setup the environment and call
-# cmake directly for the configure, build and install commands.
-
-ExternalProject_Add(external_aom
-  URL file://${PACKAGE_DIR}/${AOM_FILE}
-  DOWNLOAD_DIR ${DOWNLOAD_DIR}
-  URL_HASH ${AOM_HASH_TYPE}=${AOM_HASH}
-  PREFIX ${BUILD_DIR}/aom
-  CONFIGURE_COMMAND ${CONFIGURE_ENV} &&
-    cd ${BUILD_DIR}/aom/src/external_aom-build/ &&
-    ${CMAKE_COMMAND} -G "${AOM_GENERATOR}" -DCMAKE_INSTALL_PREFIX=${LIBDIR}/aom ${AOM_CMAKE_FLAGS} ${AOM_EXTRA_ARGS} ${BUILD_DIR}/aom/src/external_aom/
-  BUILD_COMMAND ${CMAKE_COMMAND} --build .
-  INSTALL_COMMAND ${CMAKE_COMMAND} --build . --target install
-  INSTALL_DIR ${LIBDIR}/aom
-)

View File

@@ -12,13 +12,21 @@ if(UNIX)
     automake
     bison
     ${_libtoolize_name}
-    meson
-    ninja
     pkg-config
     tclsh
     yasm
   )
 
+  if(NOT APPLE)
+    set(_required_software
+      ${_required_software}
+
+      # Needed for Mesa.
+      meson
+      ninja
+    )
+  endif()
+
   foreach(_software ${_required_software})
     find_program(_software_find NAMES ${_software})
     if(NOT _software_find)
@@ -49,7 +57,7 @@ if(UNIX)
       "  apt install autoconf automake libtool yasm tcl ninja-build meson python3-mako\n"
       "\n"
       "On macOS (with homebrew):\n"
-      "  brew install autoconf automake bison flex libtool meson ninja pkg-config yasm\n"
+      "  brew install autoconf automake bison flex libtool pkg-config yasm\n"
       "\n"
       "Other platforms:\n"
       "  Install equivalent packages.\n")

View File

@@ -36,7 +36,7 @@ download_source(BLOSC)
 download_source(PTHREADS)
 download_source(OPENEXR)
 download_source(FREETYPE)
-download_source(EPOXY)
+download_source(GLEW)
 download_source(FREEGLUT)
 download_source(ALEMBIC)
 download_source(OPENSUBDIV)
@@ -94,7 +94,6 @@ download_source(GMP)
 download_source(POTRACE)
 download_source(HARU)
 download_source(ZSTD)
-download_source(SSE2NEON)
 download_source(FLEX)
 download_source(BROTLI)
 download_source(FMT)
@@ -117,4 +116,3 @@ download_source(IGC_SPIRV_TOOLS)
download_source(IGC_SPIRV_TRANSLATOR) download_source(IGC_SPIRV_TRANSLATOR)
download_source(GMMLIB) download_source(GMMLIB)
download_source(OCLOC) download_source(OCLOC)
download_source(AOM)

View File

@@ -1,25 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-or-later
-
-if(WIN32)
-  set(EPOXY_LIB_TYPE shared)
-else()
-  set(EPOXY_LIB_TYPE static)
-endif()
-
-ExternalProject_Add(external_epoxy
-  URL file://${PACKAGE_DIR}/${EPOXY_FILE}
-  DOWNLOAD_DIR ${DOWNLOAD_DIR}
-  URL_HASH ${EPOXY_HASH_TYPE}=${EPOXY_HASH}
-  PREFIX ${BUILD_DIR}/epoxy
-  PATCH_COMMAND ${PATCH_CMD} -p 1 -N -d ${BUILD_DIR}/epoxy/src/external_epoxy/ < ${PATCH_DIR}/epoxy.diff
-  CONFIGURE_COMMAND ${CONFIGURE_ENV} && meson setup --prefix ${LIBDIR}/epoxy --default-library ${EPOXY_LIB_TYPE} --libdir lib ${BUILD_DIR}/epoxy/src/external_epoxy-build ${BUILD_DIR}/epoxy/src/external_epoxy -Dtests=false
-  BUILD_COMMAND ninja
-  INSTALL_COMMAND ninja install
-)
-
-if(BUILD_MODE STREQUAL Release AND WIN32)
-  ExternalProject_Add_Step(external_epoxy after_install
-    COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/epoxy/include ${HARVEST_TARGET}/epoxy/include
-    COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/epoxy/bin/epoxy-0.dll ${HARVEST_TARGET}/epoxy/bin/epoxy-0.dll
-    COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/epoxy/lib/epoxy.lib ${HARVEST_TARGET}/epoxy/lib/epoxy.lib
-    DEPENDEES install
-  )
-endif()

View File

@@ -1,9 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 
-set(FFMPEG_CFLAGS "-I${mingw_LIBDIR}/lame/include -I${mingw_LIBDIR}/openjpeg/include/ -I${mingw_LIBDIR}/ogg/include -I${mingw_LIBDIR}/vorbis/include -I${mingw_LIBDIR}/theora/include -I${mingw_LIBDIR}/opus/include -I${mingw_LIBDIR}/vpx/include -I${mingw_LIBDIR}/x264/include -I${mingw_LIBDIR}/xvidcore/include -I${mingw_LIBDIR}/zlib/include -I${mingw_LIBDIR}/aom/include")
-set(FFMPEG_LDFLAGS "-L${mingw_LIBDIR}/lame/lib -L${mingw_LIBDIR}/openjpeg/lib -L${mingw_LIBDIR}/ogg/lib -L${mingw_LIBDIR}/vorbis/lib -L${mingw_LIBDIR}/theora/lib -L${mingw_LIBDIR}/opus/lib -L${mingw_LIBDIR}/vpx/lib -L${mingw_LIBDIR}/x264/lib -L${mingw_LIBDIR}/xvidcore/lib -L${mingw_LIBDIR}/zlib/lib -L${mingw_LIBDIR}/aom/lib")
+set(FFMPEG_CFLAGS "-I${mingw_LIBDIR}/lame/include -I${mingw_LIBDIR}/openjpeg/include/ -I${mingw_LIBDIR}/ogg/include -I${mingw_LIBDIR}/vorbis/include -I${mingw_LIBDIR}/theora/include -I${mingw_LIBDIR}/opus/include -I${mingw_LIBDIR}/vpx/include -I${mingw_LIBDIR}/x264/include -I${mingw_LIBDIR}/xvidcore/include -I${mingw_LIBDIR}/zlib/include")
+set(FFMPEG_LDFLAGS "-L${mingw_LIBDIR}/lame/lib -L${mingw_LIBDIR}/openjpeg/lib -L${mingw_LIBDIR}/ogg/lib -L${mingw_LIBDIR}/vorbis/lib -L${mingw_LIBDIR}/theora/lib -L${mingw_LIBDIR}/opus/lib -L${mingw_LIBDIR}/vpx/lib -L${mingw_LIBDIR}/x264/lib -L${mingw_LIBDIR}/xvidcore/lib -L${mingw_LIBDIR}/zlib/lib")
 set(FFMPEG_EXTRA_FLAGS --pkg-config-flags=--static --extra-cflags=${FFMPEG_CFLAGS} --extra-ldflags=${FFMPEG_LDFLAGS})
-set(FFMPEG_ENV PKG_CONFIG_PATH=${mingw_LIBDIR}/openjpeg/lib/pkgconfig:${mingw_LIBDIR}/x264/lib/pkgconfig:${mingw_LIBDIR}/vorbis/lib/pkgconfig:${mingw_LIBDIR}/ogg/lib/pkgconfig:${mingw_LIBDIR}:${mingw_LIBDIR}/vpx/lib/pkgconfig:${mingw_LIBDIR}/theora/lib/pkgconfig:${mingw_LIBDIR}/openjpeg/lib/pkgconfig:${mingw_LIBDIR}/opus/lib/pkgconfig:${mingw_LIBDIR}/aom/lib/pkgconfig:)
+set(FFMPEG_ENV PKG_CONFIG_PATH=${mingw_LIBDIR}/openjpeg/lib/pkgconfig:${mingw_LIBDIR}/x264/lib/pkgconfig:${mingw_LIBDIR}/vorbis/lib/pkgconfig:${mingw_LIBDIR}/ogg/lib/pkgconfig:${mingw_LIBDIR}:${mingw_LIBDIR}/vpx/lib/pkgconfig:${mingw_LIBDIR}/theora/lib/pkgconfig:${mingw_LIBDIR}/openjpeg/lib/pkgconfig:${mingw_LIBDIR}/opus/lib/pkgconfig:)
 
 if(WIN32)
   set(FFMPEG_ENV set ${FFMPEG_ENV} &&)
@@ -79,7 +79,6 @@ ExternalProject_Add(external_ffmpeg
     --disable-librtmp
     --enable-libx264
     --enable-libxvid
-    --enable-libaom
     --disable-libopencore-amrnb
     --disable-libopencore-amrwb
     --disable-libdc1394
@@ -126,7 +125,6 @@ add_dependencies(
   external_vorbis
   external_ogg
   external_lame
-  external_aom
 )
 
 if(WIN32)
   add_dependencies(

View File

@@ -5,6 +5,8 @@ ExternalProject_Add(external_flex
   URL_HASH ${FLEX_HASH_TYPE}=${FLEX_HASH}
   DOWNLOAD_DIR ${DOWNLOAD_DIR}
   PREFIX ${BUILD_DIR}/flex
+  # This patch fixes build with some versions of glibc (https://github.com/westes/flex/commit/24fd0551333e7eded87b64dd36062da3df2f6380)
+  PATCH_COMMAND ${PATCH_CMD} -d ${BUILD_DIR}/flex/src/external_flex < ${PATCH_DIR}/flex.diff
   CONFIGURE_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && ${CONFIGURE_COMMAND} --prefix=${LIBDIR}/flex
   BUILD_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && make -j${MAKE_THREADS}
   INSTALL_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/flex/src/external_flex/ && make install

View File

@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+set(GLEW_EXTRA_ARGS
+  -DBUILD_UTILS=Off
+  -DBUILD_SHARED_LIBS=Off
+)
+
+ExternalProject_Add(external_glew
+  URL file://${PACKAGE_DIR}/${GLEW_FILE}
+  DOWNLOAD_DIR ${DOWNLOAD_DIR}
+  URL_HASH ${GLEW_HASH_TYPE}=${GLEW_HASH}
+  PATCH_COMMAND COMMAND ${CMAKE_COMMAND} -E copy ${PATCH_DIR}/cmakelists_glew.txt ${BUILD_DIR}/glew/src/external_glew/CMakeLists.txt
+  PREFIX ${BUILD_DIR}/glew
+  CMAKE_ARGS -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_INSTALL_PREFIX=${LIBDIR}/glew ${DEFAULT_CMAKE_FLAGS} ${GLEW_EXTRA_ARGS}
+  INSTALL_DIR ${LIBDIR}/glew
+)
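
The recipe above relies on ExternalProject's `URL_HASH ${GLEW_HASH_TYPE}=${GLEW_HASH}` to reject a corrupted or tampered tarball before unpacking it. The same check can be sketched in plain shell; `verify_md5` and the scratch file are illustrative, not part of the build scripts:

```shell
# verify_md5 <file> <expected>: succeed only when the file's MD5 digest
# matches, which is essentially what URL_HASH MD5=... enforces.
verify_md5() {
  actual=$(md5sum "$1" | cut -d' ' -f1)
  [ "$actual" = "$2" ]
}

# Scratch file with known contents ("hello" has a well-known MD5).
printf 'hello' > /tmp/gr_demo.bin
if verify_md5 /tmp/gr_demo.bin 5d41402abc4b2a76b9719d911017c592; then
  echo "hash ok"
else
  echo "hash MISMATCH" >&2
fi
```

The MD5 entries in versions.cmake below play the `<expected>` role for every archive the build downloads.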

View File

@@ -22,6 +22,12 @@ if(BUILD_MODE STREQUAL Release)
       # freeglut-> opengl
       ${CMAKE_COMMAND} -E copy ${LIBDIR}/freeglut/lib/freeglut_static.lib ${HARVEST_TARGET}/opengl/lib/freeglut_static.lib &&
       ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/freeglut/include/ ${HARVEST_TARGET}/opengl/include/ &&
+      # glew-> opengl
+      ${CMAKE_COMMAND} -E copy ${LIBDIR}/glew/lib/libglew32.lib ${HARVEST_TARGET}/opengl/lib/glew.lib &&
+      ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/glew/include/ ${HARVEST_TARGET}/opengl/include/ &&
       # tiff
       ${CMAKE_COMMAND} -E copy ${LIBDIR}/tiff/lib/tiff.lib ${HARVEST_TARGET}/tiff/lib/libtiff.lib &&
       ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/tiff/include/ ${HARVEST_TARGET}/tiff/include/
     DEPENDS
   )
 endif()
@@ -40,8 +46,7 @@ function(harvest from to)
     install(
       FILES ${LIBDIR}/${from}
       DESTINATION ${HARVEST_TARGET}/${dirpath}
-      RENAME $(unknown)
-    )
+      RENAME $(unknown))
   else()
     install(
       DIRECTORY ${LIBDIR}/${from}/
@@ -51,8 +56,7 @@ function(harvest from to)
       PATTERN "pkgconfig" EXCLUDE
       PATTERN "cmake" EXCLUDE
       PATTERN "__pycache__" EXCLUDE
-      PATTERN "tests" EXCLUDE
-    )
+      PATTERN "tests" EXCLUDE)
   endif()
 endfunction()
@@ -72,8 +76,8 @@ harvest(fftw3/lib fftw3/lib "*.a")
 harvest(flac/lib sndfile/lib "libFLAC.a")
 harvest(freetype/include freetype/include "*.h")
 harvest(freetype/lib/libfreetype2ST.a freetype/lib/libfreetype.a)
-harvest(epoxy/include epoxy/include "*.h")
-harvest(epoxy/lib epoxy/lib "*.a")
+harvest(glew/include glew/include "*.h")
+harvest(glew/lib glew/lib "*.a")
 harvest(gmp/include gmp/include "*.h")
 harvest(gmp/lib gmp/lib "*.a")
 harvest(jemalloc/include jemalloc/include "*.h")
@@ -173,7 +177,6 @@ harvest(opus/lib ffmpeg/lib "*.a")
 harvest(vpx/lib ffmpeg/lib "*.a")
 harvest(x264/lib ffmpeg/lib "*.a")
 harvest(xvidcore/lib ffmpeg/lib "*.a")
-harvest(aom/lib ffmpeg/lib "*.a")
 harvest(webp/lib webp/lib "*.a")
 harvest(webp/include webp/include "*.h")
 harvest(usd/include usd/include "*.h")
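
The `harvest()` function edited above keeps its two branches: a path ending in a file name is installed under a new name, while a directory is mirrored with `pkgconfig`/`cmake`/`tests` subtrees excluded. A rough shell equivalent of those two branches, assuming GNU coreutils; the helper names and `/tmp` paths are illustrative, and the real function drives CMake's `install()`:

```shell
# File branch: copy a single library and rename it on the way
# (like harvest(freetype/lib/libfreetype2ST.a freetype/lib/libfreetype.a)).
harvest_file() {
  mkdir -p "$(dirname "$2")" && cp "$1" "$2"
}

# Directory branch: mirror a tree, skipping pkgconfig subdirectories
# (standing in for the PATTERN ... EXCLUDE rules). Uses GNU cp --parents.
harvest_dir() {
  mkdir -p "$2"
  (cd "$1" && find . -type f ! -path '*/pkgconfig/*' -exec cp --parents {} "$2" \;)
}

mkdir -p /tmp/libdir/freetype/lib /tmp/libdir/gmp/include/pkgconfig
touch /tmp/libdir/freetype/lib/libfreetype2ST.a
touch /tmp/libdir/gmp/include/gmp.h /tmp/libdir/gmp/include/pkgconfig/gmp.pc
harvest_file /tmp/libdir/freetype/lib/libfreetype2ST.a /tmp/harvest/freetype/lib/libfreetype.a
harvest_dir /tmp/libdir/gmp/include /tmp/harvest/gmp/include
```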

View File

@@ -18,15 +18,9 @@ if(WIN32)
   set(PNG_LIBNAME libpng16_static${LIBEXT})
   set(OIIO_SIMD_FLAGS -DUSE_SIMD=sse2)
   set(OPENJPEG_POSTFIX _msvc)
-  if(BUILD_MODE STREQUAL Debug)
-    set(TIFF_POSTFIX d)
-  else()
-    set(TIFF_POSTFIX)
-  endif()
 else()
   set(PNG_LIBNAME libpng${LIBEXT})
   set(OIIO_SIMD_FLAGS)
-  set(TIFF_POSTFIX)
 endif()
 
 if(MSVC)
@@ -71,7 +65,7 @@ set(OPENIMAGEIO_EXTRA_ARGS
   -DZLIB_INCLUDE_DIR=${LIBDIR}/zlib/include
   -DPNG_LIBRARY=${LIBDIR}/png/lib/${PNG_LIBNAME}
   -DPNG_PNG_INCLUDE_DIR=${LIBDIR}/png/include
-  -DTIFF_LIBRARY=${LIBDIR}/tiff/lib/${LIBPREFIX}tiff${TIFF_POSTFIX}${LIBEXT}
+  -DTIFF_LIBRARY=${LIBDIR}/tiff/lib/${LIBPREFIX}tiff${LIBEXT}
   -DTIFF_INCLUDE_DIR=${LIBDIR}/tiff/include
   -DJPEG_LIBRARY=${LIBDIR}/jpeg/lib/${JPEG_LIBRARY}
   -DJPEG_INCLUDE_DIR=${LIBDIR}/jpeg/include

View File

@@ -27,7 +27,6 @@ if(WIN32)
     PREFIX ${BUILD_DIR}/python
     CONFIGURE_COMMAND ""
     BUILD_COMMAND cd ${BUILD_DIR}/python/src/external_python/pcbuild/ && set IncludeTkinter=false && call build.bat -e -p x64 -c ${BUILD_MODE}
-    PATCH_COMMAND ${PATCH_CMD} --verbose -p1 -d ${BUILD_DIR}/python/src/external_python < ${PATCH_DIR}/python_windows.diff
     INSTALL_COMMAND ${PYTHON_BINARY_INTERNAL} ${PYTHON_SRC}/PC/layout/main.py -b ${PYTHON_SRC}/PCbuild/amd64 -s ${PYTHON_SRC} -t ${PYTHON_SRC}/tmp/ --include-stable --include-pip --include-dev --include-launchers --include-venv --include-symbols ${PYTHON_EXTRA_INSTLAL_FLAGS} --copy ${LIBDIR}/python
   )

View File

@@ -1,9 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 
 ExternalProject_Add(external_sse2neon
-  URL file://${PACKAGE_DIR}/${SSE2NEON_FILE}
+  GIT_REPOSITORY ${SSE2NEON_GIT}
+  GIT_TAG ${SSE2NEON_GIT_HASH}
   DOWNLOAD_DIR ${DOWNLOAD_DIR}
-  URL_HASH ${SSE2NEON_HASH_TYPE}=${SSE2NEON_HASH}
   PREFIX ${BUILD_DIR}/sse2neon
   CONFIGURE_COMMAND echo sse2neon - Nothing to configure
   BUILD_COMMAND echo sse2neon - nothing to build

View File

@@ -3,8 +3,6 @@
 set(TIFF_EXTRA_ARGS
   -DZLIB_LIBRARY=${LIBDIR}/zlib/lib/${ZLIB_LIBRARY}
   -DZLIB_INCLUDE_DIR=${LIBDIR}/zlib/include
-  -DJPEG_LIBRARY=${LIBDIR}/jpeg/lib/${JPEG_LIBRARY}
-  -DJPEG_INCLUDE_DIR=${LIBDIR}/jpeg/include
   -DPNG_STATIC=ON
   -DBUILD_SHARED_LIBS=OFF
   -Dlzma=OFF
@@ -26,12 +24,10 @@ add_dependencies(
   external_tiff
   external_zlib
 )
-if(WIN32)
-  if(BUILD_MODE STREQUAL Release)
-    ExternalProject_Add_Step(external_tiff after_install
-      COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tiff/lib/tiff.lib ${HARVEST_TARGET}/tiff/lib/libtiff.lib &&
-        ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/tiff/include/ ${HARVEST_TARGET}/tiff/include/
-      DEPENDEES install
-    )
-  endif()
-endif()
+if(WIN32 AND BUILD_MODE STREQUAL Debug)
+  ExternalProject_Add_Step(external_tiff after_install
+    COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tiff/lib/tiffd${LIBEXT} ${LIBDIR}/tiff/lib/tiff${LIBEXT}
+    DEPENDEES install
+  )
+endif()

View File

@@ -45,15 +45,15 @@ set(PTHREADS_HASH f3bf81bb395840b3446197bcf4ecd653)
 set(PTHREADS_HASH_TYPE MD5)
 set(PTHREADS_FILE pthreads4w-code-${PTHREADS_VERSION}.zip)
 
-set(OPENEXR_VERSION 3.1.5)
+set(OPENEXR_VERSION 3.1.4)
 set(OPENEXR_URI https://github.com/AcademySoftwareFoundation/openexr/archive/v${OPENEXR_VERSION}.tar.gz)
-set(OPENEXR_HASH a92f38eedd43e56c0af56d4852506886)
+set(OPENEXR_HASH e990be1ff765797bc2d93a8060e1c1f2)
 set(OPENEXR_HASH_TYPE MD5)
 set(OPENEXR_FILE openexr-${OPENEXR_VERSION}.tar.gz)
 
-set(IMATH_VERSION 3.1.5)
+set(IMATH_VERSION 3.1.4)
 set(IMATH_URI https://github.com/AcademySoftwareFoundation/Imath/archive/v${OPENEXR_VERSION}.tar.gz)
-set(IMATH_HASH dd375574276c54872b7b3d54053baff0)
+set(IMATH_HASH fddf14ec73e12c34e74c3c175e311a3f)
 set(IMATH_HASH_TYPE MD5)
 set(IMATH_FILE imath-${IMATH_VERSION}.tar.gz)
@@ -80,11 +80,11 @@ set(FREETYPE_HASH bd4e3b007474319909a6b79d50908e85)
 set(FREETYPE_HASH_TYPE MD5)
 set(FREETYPE_FILE freetype-${FREETYPE_VERSION}.tar.gz)
 
-set(EPOXY_VERSION 1.5.10)
-set(EPOXY_URI https://github.com/anholt/libepoxy/archive/refs/tags/${EPOXY_VERSION}.tar.gz)
-set(EPOXY_HASH f0730aad115c952e77591fcc805b1dc1)
-set(EPOXY_HASH_TYPE MD5)
-set(EPOXY_FILE libepoxy-${EPOXY_VERSION}.tar.gz)
+set(GLEW_VERSION 1.13.0)
+set(GLEW_URI http://prdownloads.sourceforge.net/glew/glew/${GLEW_VERSION}/glew-${GLEW_VERSION}.tgz)
+set(GLEW_HASH 7cbada3166d2aadfc4169c4283701066)
+set(GLEW_HASH_TYPE MD5)
+set(GLEW_FILE glew-${GLEW_VERSION}.tgz)
 
 set(FREEGLUT_VERSION 3.0.0)
 set(FREEGLUT_URI http://prdownloads.sourceforge.net/freeglut/freeglut/${FREEGLUT_VERSION}/freeglut-${FREEGLUT_VERSION}.tar.gz)
@@ -163,9 +163,9 @@ set(ROBINMAP_HASH c08ec4b1bf1c85eb0d6432244a6a89862229da1cb834f3f90fba8dc35d8c8e
 set(ROBINMAP_HASH_TYPE SHA256)
 set(ROBINMAP_FILE robinmap-${ROBINMAP_VERSION}.tar.gz)
 
-set(TIFF_VERSION 4.4.0)
+set(TIFF_VERSION 4.3.0)
 set(TIFF_URI http://download.osgeo.org/libtiff/tiff-${TIFF_VERSION}.tar.gz)
-set(TIFF_HASH 376f17f189e9d02280dfe709b2b2bbea)
+set(TIFF_HASH 0a2e4744d1426a8fc8211c0cdbc3a1b3)
 set(TIFF_HASH_TYPE MD5)
 set(TIFF_FILE tiff-${TIFF_VERSION}.tar.gz)
@@ -488,11 +488,8 @@ set(ZSTD_HASH 5194fbfa781fcf45b98c5e849651aa7b3b0a008c6b72d4a0db760f3002291e94)
 set(ZSTD_HASH_TYPE SHA256)
 set(ZSTD_FILE zstd-${ZSTD_VERSION}.tar.gz)
 
-set(SSE2NEON_VERSION fe5ff00bb8d19b327714a3c290f3e2ce81ba3525)
-set(SSE2NEON_URI https://github.com/DLTcollab/sse2neon/archive/${SSE2NEON_VERSION}.tar.gz)
-set(SSE2NEON_HASH 0780253525d299c31775ef95853698d03db9c7739942af8570000f4a25a5d605)
-set(SSE2NEON_HASH_TYPE SHA256)
-set(SSE2NEON_FILE sse2neon-${SSE2NEON_VERSION}.tar.gz)
+set(SSE2NEON_GIT https://github.com/DLTcollab/sse2neon.git)
+set(SSE2NEON_GIT_HASH fe5ff00bb8d19b327714a3c290f3e2ce81ba3525)
 
 set(BROTLI_VERSION v1.0.9)
 set(BROTLI_URI https://github.com/google/brotli/archive/refs/tags/${BROTLI_VERSION}.tar.gz)
@@ -506,9 +503,9 @@ set(LEVEL_ZERO_HASH c39bb05a8e5898aa6c444e1704105b93d3f1888b9c333f8e7e73825ffbfb
 set(LEVEL_ZERO_HASH_TYPE SHA256)
 set(LEVEL_ZERO_FILE level-zero-${LEVEL_ZERO_VERSION}.tar.gz)
 
-set(DPCPP_VERSION 20220812)
+set(DPCPP_VERSION 20220620)
 set(DPCPP_URI https://github.com/intel/llvm/archive/refs/tags/sycl-nightly/${DPCPP_VERSION}.tar.gz)
-set(DPCPP_HASH 0e3c95346c295f5cf80f3a42d80b1c49481955898530242636ddc002627248d6)
+set(DPCPP_HASH a5f41abd5229d28afa92cbd8a5d8d786ee698bf239f722929fd686276bad692c)
 set(DPCPP_HASH_TYPE SHA256)
 set(DPCPP_FILE DPCPP-${DPCPP_VERSION}.tar.gz)
@@ -636,9 +633,3 @@ set(OCLOC_URI https://github.com/intel/compute-runtime/archive/refs/tags/${OCLOC
 set(OCLOC_HASH ab22b8bf2560a57fdd3def0e35a62ca75991406f959c0263abb00cd6cd9ae998)
 set(OCLOC_HASH_TYPE SHA256)
 set(OCLOC_FILE ocloc-${OCLOC_VERSION}.tar.gz)
-
-set(AOM_VERSION 3.4.0)
-set(AOM_URI https://storage.googleapis.com/aom-releases/libaom-${AOM_VERSION}.tar.gz)
-set(AOM_HASH bd754b58c3fa69f3ffd29da77de591bd9c26970e3b18537951336d6c0252e354)
-set(AOM_HASH_TYPE SHA256)
-set(AOM_FILE libaom-${AOM_VERSION}.tar.gz)

View File

@@ -1,13 +1,11 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 
 if(WIN32)
-  # VPX is determined to use pthreads which it will tell ffmpeg to dynamically
-  # link, which is not something we're super into distribution wise. However
-  # if it cannot find pthread.h it'll happily provide a pthread emulation
-  # layer using win32 threads. So all this patch does is make it not find
-  # pthead.h
-  set(VPX_PATCH ${PATCH_CMD} -p 1 -d ${BUILD_DIR}/vpx/src/external_vpx < ${PATCH_DIR}/vpx_windows.diff)
-  set(VPX_EXTRA_FLAGS --target=x86_64-win64-gcc )
+  if("${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
+    set(VPX_EXTRA_FLAGS --target=x86_64-win64-gcc --disable-multithread)
+  else()
+    set(VPX_EXTRA_FLAGS --target=x86-win32-gcc --disable-multithread)
+  endif()
 else()
   if(APPLE)
     if("${CMAKE_OSX_ARCHITECTURES}" STREQUAL "arm64")
@@ -20,16 +18,6 @@ else()
   endif()
 endif()
 
-if(NOT BLENDER_PLATFORM_ARM)
-  list(APPEND VPX_EXTRA_FLAGS
-    --enable-sse4_1
-    --enable-sse3
-    --enable-ssse3
-    --enable-avx
-    --enable-avx2
-  )
-endif()
-
 ExternalProject_Add(external_vpx
   URL file://${PACKAGE_DIR}/${VPX_FILE}
   DOWNLOAD_DIR ${DOWNLOAD_DIR}
@@ -42,6 +30,11 @@ ExternalProject_Add(external_vpx
     --enable-static
     --disable-install-bins
     --disable-install-srcs
+    --disable-sse4_1
+    --disable-sse3
+    --disable-ssse3
+    --disable-avx
+    --disable-avx2
     --disable-unit-tests
     --disable-examples
     --enable-vp8
@@ -49,7 +42,6 @@ ExternalProject_Add(external_vpx
     ${VPX_EXTRA_FLAGS}
   BUILD_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/vpx/src/external_vpx/ && make -j${MAKE_THREADS}
   INSTALL_COMMAND ${CONFIGURE_ENV} && cd ${BUILD_DIR}/vpx/src/external_vpx/ && make install
-  PATCH_COMMAND ${VPX_PATCH}
   INSTALL_DIR ${LIBDIR}/vpx
 )

View File

@@ -136,7 +136,7 @@ ARGUMENTS_INFO="\"COMMAND LINE ARGUMENTS:
         Build and install the OpenImageDenoise libraries.
 
     --with-nanovdb
-        Build and install NanoVDB together with OpenVDB.
+        Build and install the NanoVDB branch of OpenVDB (instead of official release of OpenVDB).
 
     --with-jack
         Install the jack libraries.
@@ -385,7 +385,7 @@ CLANG_FORMAT_VERSION="10.0"
 CLANG_FORMAT_VERSION_MIN="6.0"
 CLANG_FORMAT_VERSION_MEX="14.0"
 
-PYTHON_VERSION="3.10.6"
+PYTHON_VERSION="3.10.2"
 PYTHON_VERSION_SHORT="3.10"
 PYTHON_VERSION_MIN="3.10"
 PYTHON_VERSION_MEX="3.12"
@@ -425,7 +425,7 @@ PYTHON_ZSTANDARD_VERSION_MIN="0.15.2"
 PYTHON_ZSTANDARD_VERSION_MEX="0.20.0"
 PYTHON_ZSTANDARD_NAME="zstandard"
 
-PYTHON_NUMPY_VERSION="1.23.2"
+PYTHON_NUMPY_VERSION="1.22.0"
 PYTHON_NUMPY_VERSION_MIN="1.14"
 PYTHON_NUMPY_VERSION_MEX="2.0"
 PYTHON_NUMPY_NAME="numpy"
@@ -453,8 +453,8 @@ PYTHON_MODULES_PIP=(
 )
 
-BOOST_VERSION="1.80.0"
-BOOST_VERSION_SHORT="1.80"
+BOOST_VERSION="1.78.0"
+BOOST_VERSION_SHORT="1.78"
 BOOST_VERSION_MIN="1.49"
 BOOST_VERSION_MEX="2.0"
 BOOST_FORCE_BUILD=false
@@ -478,7 +478,7 @@ OCIO_FORCE_BUILD=false
 OCIO_FORCE_REBUILD=false
 OCIO_SKIP=false
 
-IMATH_VERSION="3.1.5"
+IMATH_VERSION="3.1.4"
 IMATH_VERSION_SHORT="3.1"
 IMATH_VERSION_MIN="3.0"
 IMATH_VERSION_MEX="4.0"
@@ -487,7 +487,7 @@ IMATH_FORCE_REBUILD=false
 IMATH_SKIP=false
 _with_built_imath=false
 
-OPENEXR_VERSION="3.1.5"
+OPENEXR_VERSION="3.1.4"
 OPENEXR_VERSION_SHORT="3.1"
 OPENEXR_VERSION_MIN="3.0"
 OPENEXR_VERSION_MEX="4.0"
@@ -496,7 +496,7 @@ OPENEXR_FORCE_REBUILD=false
 OPENEXR_SKIP=false
 _with_built_openexr=false
 
-OIIO_VERSION="2.3.18.0"
+OIIO_VERSION="2.3.13.0"
 OIIO_VERSION_SHORT="2.3"
 OIIO_VERSION_MIN="2.1.12"
 OIIO_VERSION_MEX="2.4.0"
@@ -534,10 +534,10 @@ OSD_SKIP=false
 # OpenVDB needs to be compiled for now
 OPENVDB_BLOSC_VERSION="1.21.1"
 
-OPENVDB_VERSION="9.1.0"
-OPENVDB_VERSION_SHORT="9.1"
+OPENVDB_VERSION="9.0.0"
+OPENVDB_VERSION_SHORT="9.0"
 OPENVDB_VERSION_MIN="9.0"
-OPENVDB_VERSION_MEX="9.2"
+OPENVDB_VERSION_MEX="9.1"
 OPENVDB_FORCE_BUILD=false
 OPENVDB_FORCE_REBUILD=false
 OPENVDB_SKIP=false
@@ -627,9 +627,6 @@ WEBP_DEV=""
 VPX_USE=false
 VPX_VERSION_MIN=0.9.7
 VPX_DEV=""
-AOM_USE=false
-AOM_VERSION_MIN=3.3.0
-AOM_DEV=""
 OPUS_USE=false
 OPUS_VERSION_MIN=1.1.1
 OPUS_DEV=""
@@ -1193,7 +1190,7 @@ Those libraries should be available as packages in all recent distributions (opt
     * libx11, libxcursor, libxi, libxrandr, libxinerama (and other libx... as needed).
     * libwayland-client0, libwayland-cursor0, libwayland-egl1, libxkbcommon0, libdbus-1-3, libegl1 (Wayland)
     * libsqlite3, libzstd, libbz2, libssl, libfftw3, libxml2, libtinyxml, yasm, libyaml-cpp, flex.
-    * libsdl2, libepoxy, libpugixml, libpotrace, [libgmp], fontconfig, [libharu/libhpdf].\""
+    * libsdl2, libglew, libpugixml, libpotrace, [libgmp], fontconfig, [libharu/libhpdf].\""
 
 DEPS_SPECIFIC_INFO="\"BUILDABLE DEPENDENCIES:
@@ -1212,7 +1209,7 @@ You may also want to build them yourself (optional ones are [between brackets]):
     ** [NumPy $PYTHON_NUMPY_VERSION] (use pip).
     * Boost $BOOST_VERSION (from $BOOST_SOURCE, modules: $BOOST_BUILD_MODULES).
     * TBB $TBB_VERSION (from $TBB_SOURCE).
-    * [FFMpeg $FFMPEG_VERSION (needs libvorbis, libogg, libtheora, libx264, libmp3lame, libxvidcore, libvpx, libaom, libwebp, ...)] (from $FFMPEG_SOURCE).
+    * [FFMpeg $FFMPEG_VERSION (needs libvorbis, libogg, libtheora, libx264, libmp3lame, libxvidcore, libvpx, libwebp, ...)] (from $FFMPEG_SOURCE).
     * [OpenColorIO $OCIO_VERSION] (from $OCIO_SOURCE).
     * Imath $IMATH_VERSION (from $IMATH_SOURCE).
     * OpenEXR $OPENEXR_VERSION (from $OPENEXR_SOURCE).
@@ -2919,10 +2916,6 @@ compile_OPENVDB() {
     cmake_d="$cmake_d -D CMAKE_INSTALL_PREFIX=$_inst"
     cmake_d="$cmake_d -D USE_STATIC_DEPENDENCIES=OFF"
     cmake_d="$cmake_d -D OPENVDB_BUILD_BINARIES=OFF"
-    # Unfortunately OpenVDB currently forces using recent oneTBB over older versions when it finds it,
-    # even when TBB_ROOT is specified. So have to prevent any check for system library -
-    # in the hope it will not break in some other cases.
-    cmake_d="$cmake_d -D DISABLE_CMAKE_SEARCH_PATHS=ON"
 
     if [ "$WITH_NANOVDB" = true ]; then
       cmake_d="$cmake_d -D USE_NANOVDB=ON"
@@ -2935,6 +2928,7 @@ compile_OPENVDB() {
       cmake_d="$cmake_d -D Boost_USE_MULTITHREADED=ON"
       cmake_d="$cmake_d -D Boost_NO_SYSTEM_PATHS=ON"
       cmake_d="$cmake_d -D Boost_NO_BOOST_CMAKE=ON"
+      cmake_d="$cmake_d -D Boost_NO_BOOST_CMAKE=ON"
     fi
 
     if [ -d $INST/tbb ]; then
       cmake_d="$cmake_d -D TBB_ROOT=$INST/tbb"
@@ -3006,7 +3000,7 @@ compile_ALEMBIC() {
   fi
 
   # To be changed each time we make edits that would modify the compiled result!
-  alembic_magic=3
+  alembic_magic=2
   _init_alembic
 
   # Force having own builds for the dependencies.
@@ -3054,7 +3048,7 @@ compile_ALEMBIC() {
     fi
 
     if [ "$_with_built_openexr" = true ]; then
       cmake_d="$cmake_d -D USE_ARNOLD=OFF"
-      cmake_d="$cmake_d -D USE_BINARIES=ON"  # Tests use some Alembic binaries...
+      cmake_d="$cmake_d -D USE_BINARIES=OFF"
       cmake_d="$cmake_d -D USE_EXAMPLES=OFF"
       cmake_d="$cmake_d -D USE_HDF5=OFF"
       cmake_d="$cmake_d -D USE_MAYA=OFF"
@@ -3198,7 +3192,7 @@ _init_opencollada() {
   _inst_shortcut=$INST/opencollada
 }
 
-_update_deps_opencollada() {
+_update_deps_collada() {
   :
 }
@@ -3640,7 +3634,7 @@ compile_FFmpeg() {
   fi
 
   # To be changed each time we make edits that would modify the compiled result!
-  ffmpeg_magic=6
+  ffmpeg_magic=5
   _init_ffmpeg
 
   # Force having own builds for the dependencies.
@@ -3693,10 +3687,6 @@ compile_FFmpeg() {
     extra="$extra --enable-libvpx"
   fi
 
-  if [ "$AOM_USE" = true ]; then
-    extra="$extra --enable-libaom"
-  fi
-
   if [ "$WEBP_USE" = true ]; then
     extra="$extra --enable-libwebp"
   fi
@@ -4065,7 +4055,7 @@ install_DEB() {
       libxcursor-dev libxi-dev wget libsqlite3-dev libxrandr-dev libxinerama-dev \
       libwayland-dev wayland-protocols libegl-dev libxkbcommon-dev libdbus-1-dev linux-libc-dev \
       libbz2-dev libncurses5-dev libssl-dev liblzma-dev libreadline-dev \
-      libopenal-dev libepoxy-dev yasm \
+      libopenal-dev libglew-dev yasm \
      libsdl2-dev libfftw3-dev patch bzip2 libxml2-dev libtinyxml-dev libjemalloc-dev \
      libgmp-dev libpugixml-dev libpotrace-dev libhpdf-dev libzstd-dev libpystring-dev"
@@ -4150,34 +4140,30 @@ install_DEB() {
     WEBP_USE=true
   fi
 
-  XVID_DEV="libxvidcore-dev"
-  check_package_DEB $XVID_DEV
-  if [ $? -eq 0 ]; then
-    XVID_USE=true
-  fi
-
-  MP3LAME_DEV="libmp3lame-dev"
-  check_package_DEB $MP3LAME_DEV
-  if [ $? -eq 0 ]; then
-    MP3LAME_USE=true
-  fi
-
-  VPX_DEV="libvpx-dev"
-  check_package_version_ge_DEB $VPX_DEV $VPX_VERSION_MIN
-  if [ $? -eq 0 ]; then
-    VPX_USE=true
-  fi
-
-  AOM_DEV="libaom-dev"
-  check_package_version_ge_DEB $AOM_DEV $AOM_VERSION_MIN
-  if [ $? -eq 0 ]; then
-    AOM_USE=true
-  fi
-
-  OPUS_DEV="libopus-dev"
-  check_package_version_ge_DEB $OPUS_DEV $OPUS_VERSION_MIN
-  if [ $? -eq 0 ]; then
-    OPUS_USE=true
-  fi
+  if [ "$WITH_ALL" = true ]; then
+    XVID_DEV="libxvidcore-dev"
+    check_package_DEB $XVID_DEV
+    if [ $? -eq 0 ]; then
+      XVID_USE=true
+    fi
+
+    MP3LAME_DEV="libmp3lame-dev"
+    check_package_DEB $MP3LAME_DEV
+    if [ $? -eq 0 ]; then
+      MP3LAME_USE=true
+    fi
+
+    VPX_DEV="libvpx-dev"
+    check_package_version_ge_DEB $VPX_DEV $VPX_VERSION_MIN
+    if [ $? -eq 0 ]; then
+      VPX_USE=true
+    fi
+
+    OPUS_DEV="libopus-dev"
+    check_package_version_ge_DEB $OPUS_DEV $OPUS_VERSION_MIN
+    if [ $? -eq 0 ]; then
+      OPUS_USE=true
+    fi
+  fi
 
   # Check cmake version and disable features for older distros.
@@ -4560,9 +4546,6 @@ install_DEB() {
   if [ "$VPX_USE" = true ]; then
     _packages="$_packages $VPX_DEV"
   fi
-  if [ "$AOM_USE" = true ]; then
-    _packages="$_packages $AOM_DEV"
-  fi
   if [ "$OPUS_USE" = true ]; then
     _packages="$_packages $OPUS_DEV"
   fi
@@ -4775,7 +4758,7 @@ install_RPM() {
       libX11-devel libXi-devel libXcursor-devel libXrandr-devel libXinerama-devel \
       wayland-devel wayland-protocols-devel mesa-libEGL-devel libxkbcommon-devel dbus-devel kernel-headers \
       wget ncurses-devel readline-devel $OPENJPEG_DEV openal-soft-devel \
-      libepoxy-devel yasm patch \
+      glew-devel yasm patch \
       libxml2-devel yaml-cpp-devel tinyxml-devel jemalloc-devel \
       gmp-devel pugixml-devel potrace-devel libharu-devel libzstd-devel pystring-devel"
@@ -4863,27 +4846,21 @@ install_RPM() {
   WEBP_USE=true
 fi

-VPX_DEV="libvpx-devel"
-check_package_version_ge_RPM $VPX_DEV $VPX_VERSION_MIN
-if [ $? -eq 0 ]; then
-  VPX_USE=true
-fi
-
-AOM_DEV="libaom-devel"
-check_package_version_ge_RPM $AOM_DEV $AOM_VERSION_MIN
-if [ $? -eq 0 ]; then
-  AOM_USE=true
-fi
-
-OPUS_DEV="libopus-devel"
-check_package_version_ge_RPM $OPUS_DEV $OPUS_VERSION_MIN
-if [ $? -eq 0 ]; then
-  OPUS_USE=true
-fi
-
 if [ "$WITH_ALL" = true ]; then
+  VPX_DEV="libvpx-devel"
+  check_package_version_ge_RPM $VPX_DEV $VPX_VERSION_MIN
+  if [ $? -eq 0 ]; then
+    VPX_USE=true
+  fi
+
   PRINT ""
   install_packages_RPM libspnav-devel
+
+  OPUS_DEV="libopus-devel"
+  check_package_version_ge_RPM $OPUS_DEV $OPUS_VERSION_MIN
+  if [ $? -eq 0 ]; then
+    OPUS_USE=true
+  fi
 fi
 PRINT ""
@@ -5268,9 +5245,6 @@ install_RPM() {
 if [ "$VPX_USE" = true ]; then
   _packages="$_packages $VPX_DEV"
 fi
-if [ "$AOM_USE" = true ]; then
-  _packages="$_packages $AOM_DEV"
-fi
 if [ "$OPUS_USE" = true ]; then
   _packages="$_packages $OPUS_DEV"
 fi
@@ -5415,7 +5389,7 @@ install_ARCH() {
 fi

 _packages="$BASE_DEVEL git cmake fontconfig flex \
-  libxi libxcursor libxrandr libxinerama libepoxy libpng libtiff wget openal \
+  libxi libxcursor libxrandr libxinerama glew libpng libtiff wget openal \
   $OPENJPEG_DEV yasm sdl2 fftw \
   libxml2 yaml-cpp tinyxml python-requests jemalloc gmp potrace pugixml libharu \
   zstd pystring"
@@ -5460,34 +5434,30 @@ install_ARCH() {
   WEBP_USE=true
 fi

-XVID_DEV="xvidcore"
-check_package_ARCH $XVID_DEV
-if [ $? -eq 0 ]; then
-  XVID_USE=true
-fi
-
-MP3LAME_DEV="lame"
-check_package_ARCH $MP3LAME_DEV
-if [ $? -eq 0 ]; then
-  MP3LAME_USE=true
-fi
-
-VPX_DEV="libvpx"
-check_package_version_ge_ARCH $VPX_DEV $VPX_VERSION_MIN
-if [ $? -eq 0 ]; then
-  VPX_USE=true
-fi
-
-AOM_DEV="libaom"
-check_package_version_ge_ARCH $AOM_DEV $AOM_VERSION_MIN
-if [ $? -eq 0 ]; then
-  AOM_USE=true
-fi
-
-OPUS_DEV="opus"
-check_package_version_ge_ARCH $OPUS_DEV $OPUS_VERSION_MIN
-if [ $? -eq 0 ]; then
-  OPUS_USE=true
-fi
+if [ "$WITH_ALL" = true ]; then
+  XVID_DEV="xvidcore"
+  check_package_ARCH $XVID_DEV
+  if [ $? -eq 0 ]; then
+    XVID_USE=true
+  fi
+
+  MP3LAME_DEV="lame"
+  check_package_ARCH $MP3LAME_DEV
+  if [ $? -eq 0 ]; then
+    MP3LAME_USE=true
+  fi
+
+  VPX_DEV="libvpx"
+  check_package_version_ge_ARCH $VPX_DEV $VPX_VERSION_MIN
+  if [ $? -eq 0 ]; then
+    VPX_USE=true
+  fi
+
+  OPUS_DEV="opus"
+  check_package_version_ge_ARCH $OPUS_DEV $OPUS_VERSION_MIN
+  if [ $? -eq 0 ]; then
+    OPUS_USE=true
+  fi
+fi
@@ -5865,9 +5835,6 @@ install_ARCH() {
 if [ "$VPX_USE" = true ]; then
   _packages="$_packages $VPX_DEV"
 fi
-if [ "$AOM_USE" = true ]; then
-  _packages="$_packages $AOM_DEV"
-fi
 if [ "$OPUS_USE" = true ]; then
   _packages="$_packages $OPUS_DEV"
 fi
@@ -6218,7 +6185,7 @@ print_info() {
 fi

 if [ -d $INST/nanovdb ]; then
   _1="-D WITH_NANOVDB=ON"
-  _2="-D NANOVDB_ROOT_DIR=$INST/openvdb"
+  _2="-D NANOVDB_ROOT_DIR=$INST/nanovdb"
   PRINT "  $_1"
   PRINT "  $_2"
   _buildargs="$_buildargs $_1 $_2"
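The surrounding `print_info()` hunks build `_buildargs` by appending a pair of `-D` definitions for each library found under the install prefix. A minimal standalone sketch of that accumulation, with a temporary directory standing in for the real `$INST` prefix:

```shell
#!/bin/sh
# A temporary directory stands in for the real $INST prefix.
INST=$(mktemp -d)
mkdir -p "$INST/nanovdb"

_buildargs=""
if [ -d "$INST/nanovdb" ]; then
  _1="-D WITH_NANOVDB=ON"
  _2="-D NANOVDB_ROOT_DIR=$INST/nanovdb"
  # Each detected library appends its own -D pair.
  _buildargs="$_buildargs $_1 $_2"
fi

echo "cmake$_buildargs"
rm -r "$INST"
```

The fixed hunk above matters precisely because `_2` is what ends up on the CMake command line: pointing `NANOVDB_ROOT_DIR` at `$INST/openvdb` instead of `$INST/nanovdb` would hand CMake the wrong search root.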


@@ -0,0 +1,2 @@
cmake_minimum_required (VERSION 2.4)
add_subdirectory(build/cmake)


@@ -1,3 +1,21 @@
diff -Naur external_dpcpp.orig/sycl/source/CMakeLists.txt external_dpcpp/sycl/source/CMakeLists.txt
--- external_dpcpp.orig/sycl/source/CMakeLists.txt 2022-05-20 04:19:45.067771362 +0000
+++ external_dpcpp/sycl/source/CMakeLists.txt 2022-05-20 04:21:49.708025048 +0000
@@ -66,10 +66,10 @@
target_compile_options(${LIB_OBJ_NAME} PUBLIC
-fvisibility=hidden -fvisibility-inlines-hidden)
set(linker_script "${CMAKE_CURRENT_SOURCE_DIR}/ld-version-script.txt")
- set(abi_linker_script "${CMAKE_CURRENT_SOURCE_DIR}/abi_replacements_linux.txt")
- target_link_libraries(
- ${LIB_NAME} PRIVATE "-Wl,${abi_linker_script}")
- set_target_properties(${LIB_NAME} PROPERTIES LINK_DEPENDS ${abi_linker_script})
+# set(abi_linker_script "${CMAKE_CURRENT_SOURCE_DIR}/abi_replacements_linux.txt")
+# target_link_libraries(
+# ${LIB_NAME} PRIVATE "-Wl,${abi_linker_script}")
+# set_target_properties(${LIB_NAME} PROPERTIES LINK_DEPENDS ${abi_linker_script})
target_link_libraries(
${LIB_NAME} PRIVATE "-Wl,--version-script=${linker_script}")
set_target_properties(${LIB_NAME} PROPERTIES LINK_DEPENDS ${linker_script})
 diff -Naur llvm-sycl-nightly-20220501.orig\opencl/CMakeLists.txt llvm-sycl-nightly-20220501\opencl/CMakeLists.txt
 --- llvm-sycl-nightly-20220501.orig/opencl/CMakeLists.txt	2022-04-29 13:47:11 -0600
 +++ llvm-sycl-nightly-20220501/opencl/CMakeLists.txt	2022-05-21 15:25:06 -0600


@@ -1,19 +0,0 @@
--- a/src/dispatch_wgl.c 2022-08-04 17:45:13.144924705 +0200
+++ b/src/dispatch_wgl.c 2022-08-04 17:45:47.657482971 +0200
@@ -78,6 +78,8 @@
if (!first_context_current) {
first_context_current = true;
} else {
+ /* BLENDER: disable slow dispatch table switching. */
+#if 0
if (!already_switched_to_dispatch_table) {
already_switched_to_dispatch_table = true;
gl_switch_to_dispatch_table();
@@ -86,6 +88,7 @@
gl_init_dispatch_table();
wgl_init_dispatch_table();
+#endif
}
}


@@ -0,0 +1,15 @@
diff --git a/configure.ac b/configure.ac
index c6f12d644..3c977a4e3 100644
--- a/configure.ac
+++ b/configure.ac
@@ -25,8 +25,10 @@
# autoconf requirements and initialization
AC_INIT([the fast lexical analyser generator],[2.6.4],[flex-help@lists.sourceforge.net],[flex])
+AC_PREREQ([2.60])
AC_CONFIG_SRCDIR([src/scan.l])
AC_CONFIG_AUX_DIR([build-aux])
+AC_USE_SYSTEM_EXTENSIONS
LT_INIT
AM_INIT_AUTOMAKE([1.15 -Wno-portability foreign std-options dist-lzip parallel-tests subdir-objects])
AC_CONFIG_HEADER([src/config.h])


@@ -1,24 +0,0 @@
diff -Naur orig/PCbuild/get_externals.bat Python-3.10.2/PCbuild/get_externals.bat
--- orig/PCbuild/get_externals.bat 2022-01-13 11:52:14 -0700
+++ Python-3.10.2/PCbuild/get_externals.bat 2022-08-17 11:24:42 -0600
@@ -51,7 +51,7 @@
echo.Fetching external libraries...
set libraries=
-set libraries=%libraries% bzip2-1.0.6
+set libraries=%libraries% bzip2-1.0.8
if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.3.0
if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1m
set libraries=%libraries% sqlite-3.35.5.0
diff -Naur orig/PCbuild/python.props external_python/PCbuild/python.props
--- orig/PCbuild/python.props 2022-01-13 11:52:14 -0700
+++ external_python/PCbuild/python.props 2022-08-17 11:38:38 -0600
@@ -58,7 +58,7 @@
<ExternalsDir Condition="$(ExternalsDir) == ''">$([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`))</ExternalsDir>
<ExternalsDir Condition="!HasTrailingSlash($(ExternalsDir))">$(ExternalsDir)\</ExternalsDir>
<sqlite3Dir>$(ExternalsDir)sqlite-3.35.5.0\</sqlite3Dir>
- <bz2Dir>$(ExternalsDir)bzip2-1.0.6\</bz2Dir>
+ <bz2Dir>$(ExternalsDir)bzip2-1.0.8\</bz2Dir>
<lzmaDir>$(ExternalsDir)xz-5.2.2\</lzmaDir>
<libffiDir>$(ExternalsDir)libffi-3.3.0\</libffiDir>
<libffiOutDir>$(ExternalsDir)libffi-3.3.0\$(ArchName)\</libffiOutDir>


@@ -1,11 +0,0 @@
diff -Naur orig/configure external_vpx/configure
--- orig/configure 2022-07-06 09:22:04 -0600
+++ external_vpx/configure 2022-07-06 09:24:12 -0600
@@ -270,7 +270,6 @@
HAVE_LIST="
${ARCH_EXT_LIST}
vpx_ports
- pthread_h
unistd_h
"
EXPERIMENT_LIST="


@@ -1,47 +0,0 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright 2022 Blender Foundation.
# This module defines
# Epoxy_INCLUDE_DIRS, where to find epoxy/gl.h
# Epoxy_LIBRARY, where to find the epoxy library.
# Epoxy_ROOT_DIR, The base directory to search for epoxy.
# This can also be an environment variable.
# Epoxy_FOUND, If false, do not try to use epoxy.
IF(NOT EPOXY_ROOT_DIR AND NOT $ENV{EPOXY_ROOT_DIR} STREQUAL "")
SET(EPOXY_ROOT_DIR $ENV{EPOXY_ROOT_DIR})
ENDIF()
FIND_PATH(Epoxy_INCLUDE_DIR
NAMES
epoxy/gl.h
HINTS
${EPOXY_ROOT_DIR}
PATH_SUFFIXES
include
)
FIND_LIBRARY(Epoxy_LIBRARY
NAMES
epoxy
HINTS
${EPOXY_ROOT_DIR}
PATH_SUFFIXES
lib64 lib
)
# handle the QUIETLY and REQUIRED arguments and set Epoxy_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(Epoxy DEFAULT_MSG
Epoxy_LIBRARY Epoxy_INCLUDE_DIR)
IF(Epoxy_FOUND)
SET(Epoxy_INCLUDE_DIRS ${Epoxy_INCLUDE_DIR})
SET(Epoxy_LIBRARIES ${Epoxy_LIBRARY})
ENDIF()
MARK_AS_ADVANCED(
Epoxy_INCLUDE_DIR
Epoxy_LIBRARY
)


@@ -0,0 +1,58 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright 2014 Blender Foundation.
# - Find GLEW library
# Find the native Glew includes and library
# This module defines
# GLEW_INCLUDE_DIRS, where to find glew.h, Set when
# GLEW_INCLUDE_DIR is found.
# GLEW_ROOT_DIR, The base directory to search for Glew.
# This can also be an environment variable.
# GLEW_FOUND, If false, do not try to use Glew.
#
# also defined,
# GLEW_LIBRARY, where to find the Glew library.
# If GLEW_ROOT_DIR was defined in the environment, use it.
IF(NOT GLEW_ROOT_DIR AND NOT $ENV{GLEW_ROOT_DIR} STREQUAL "")
SET(GLEW_ROOT_DIR $ENV{GLEW_ROOT_DIR})
ENDIF()
SET(_glew_SEARCH_DIRS
${GLEW_ROOT_DIR}
)
FIND_PATH(GLEW_INCLUDE_DIR
NAMES
GL/glew.h
HINTS
${_glew_SEARCH_DIRS}
PATH_SUFFIXES
include
)
FIND_LIBRARY(GLEW_LIBRARY
NAMES
GLEW
HINTS
${_glew_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
# handle the QUIETLY and REQUIRED arguments and set GLEW_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(GLEW DEFAULT_MSG
GLEW_LIBRARY GLEW_INCLUDE_DIR)
IF(GLEW_FOUND)
SET(GLEW_INCLUDE_DIRS ${GLEW_INCLUDE_DIR})
ENDIF()
MARK_AS_ADVANCED(
GLEW_INCLUDE_DIR
GLEW_LIBRARY
)
UNSET(_glew_SEARCH_DIRS)


@@ -1,47 +0,0 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright 2022 Blender Foundation.
# This module defines
# LibEpoxy_INCLUDE_DIRS, where to find epoxy/gl.h
# LibEpoxy_LIBRARY, where to find the epoxy library.
# LibEpoxy_ROOT_DIR, The base directory to search for libepoxy.
# This can also be an environment variable.
# LibEpoxy_FOUND, If false, do not try to use libepoxy.
IF(NOT EPOXY_ROOT_DIR AND NOT $ENV{EPOXY_ROOT_DIR} STREQUAL "")
SET(EPOXY_ROOT_DIR $ENV{EPOXY_ROOT_DIR})
ENDIF()
FIND_PATH(LibEpoxy_INCLUDE_DIR
NAMES
epoxy/gl.h
HINTS
${EPOXY_ROOT_DIR}
PATH_SUFFIXES
include
)
FIND_LIBRARY(LibEpoxy_LIBRARY
NAMES
epoxy
HINTS
${EPOXY_ROOT_DIR}
PATH_SUFFIXES
lib64 lib
)
# handle the QUIETLY and REQUIRED arguments and set LibEpoxy_FOUND to TRUE if
# all listed variables are TRUE
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(LibEpoxy DEFAULT_MSG
LibEpoxy_LIBRARY LibEpoxy_INCLUDE_DIR)
IF(LibEpoxy_FOUND)
SET(LibEpoxy_INCLUDE_DIRS ${LibEpoxy_INCLUDE_DIR})
SET(LibEpoxy_LIBRARIES ${LibEpoxy_LIBRARY})
ENDIF()
MARK_AS_ADVANCED(
LibEpoxy_INCLUDE_DIR
LibEpoxy_LIBRARY
)


@@ -0,0 +1,80 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright 2014 Blender Foundation.
# - Try to find OpenGLES
# Once done this will define
#
# OPENGLES_FOUND - system has OpenGLES and EGL
# OPENGL_EGL_FOUND - system has EGL
# OPENGLES_INCLUDE_DIR - the GLES include directory
# OPENGLES_LIBRARY - the GLES library
# OPENGLES_EGL_INCLUDE_DIR - the EGL include directory
# OPENGLES_EGL_LIBRARY - the EGL library
# OPENGLES_LIBRARIES - all libraries needed for OpenGLES
# OPENGLES_INCLUDES - all includes needed for OpenGLES
# If OPENGLES_ROOT_DIR was defined in the environment, use it.
IF(NOT OPENGLES_ROOT_DIR AND NOT $ENV{OPENGLES_ROOT_DIR} STREQUAL "")
SET(OPENGLES_ROOT_DIR $ENV{OPENGLES_ROOT_DIR})
ENDIF()
SET(_opengles_SEARCH_DIRS
${OPENGLES_ROOT_DIR}
)
FIND_PATH(OPENGLES_INCLUDE_DIR
NAMES
GLES2/gl2.h
HINTS
${_opengles_SEARCH_DIRS}
)
FIND_LIBRARY(OPENGLES_LIBRARY
NAMES
GLESv2
PATHS
${_opengles_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
FIND_PATH(OPENGLES_EGL_INCLUDE_DIR
NAMES
EGL/egl.h
HINTS
${_opengles_SEARCH_DIRS}
)
FIND_LIBRARY(OPENGLES_EGL_LIBRARY
NAMES
EGL
HINTS
${_opengles_SEARCH_DIRS}
PATH_SUFFIXES
lib64 lib
)
IF(OPENGLES_EGL_LIBRARY AND OPENGLES_EGL_INCLUDE_DIR)
SET(OPENGL_EGL_FOUND "YES")
ELSE()
SET(OPENGL_EGL_FOUND "NO")
ENDIF()
IF(OPENGLES_LIBRARY AND OPENGLES_INCLUDE_DIR AND
OPENGLES_EGL_LIBRARY AND OPENGLES_EGL_INCLUDE_DIR)
SET(OPENGLES_LIBRARIES ${OPENGLES_LIBRARY} ${OPENGLES_LIBRARIES}
${OPENGLES_EGL_LIBRARY})
SET(OPENGLES_INCLUDES ${OPENGLES_INCLUDE_DIR} ${OPENGLES_EGL_INCLUDE_DIR})
SET(OPENGLES_FOUND "YES")
ELSE()
SET(OPENGLES_FOUND "NO")
ENDIF()
MARK_AS_ADVANCED(
OPENGLES_EGL_INCLUDE_DIR
OPENGLES_EGL_LIBRARY
OPENGLES_LIBRARY
OPENGLES_INCLUDE_DIR
)
UNSET(_opengles_SEARCH_DIRS)


@@ -34,17 +34,11 @@ SET(PYTHON_VERSION 3.10 CACHE STRING "Python Version (major and minor only)")
 MARK_AS_ADVANCED(PYTHON_VERSION)

-if(APPLE)
-  if(WITH_PYTHON_MODULE)
-    set(PYTHON_LINKFLAGS "-undefined dynamic_lookup")
-  else()
-    set(PYTHON_LINKFLAGS)
-  endif()
-else()
-  # See: http://docs.python.org/extending/embedding.html#linking-requirements
-  SET(PYTHON_LINKFLAGS "-Xlinker -export-dynamic" CACHE STRING "Linker flags for python")
-  MARK_AS_ADVANCED(PYTHON_LINKFLAGS)
-endif()
+# See: http://docs.python.org/extending/embedding.html#linking-requirements
+# for why this is needed
+SET(PYTHON_LINKFLAGS "-Xlinker -export-dynamic" CACHE STRING "Linker flags for python")
+MARK_AS_ADVANCED(PYTHON_LINKFLAGS)

 # if the user passes these defines as args, we don't want to overwrite
 SET(_IS_INC_DEF OFF)


@@ -268,8 +268,7 @@ same as the Google Test name (i.e. ``suite.testcase``); see also
 cmake_policy(PUSH)
 cmake_policy(SET CMP0057 NEW) # if IN_LIST

-# -----------------------------------------------------------------------------
+#------------------------------------------------------------------------------
 function(gtest_add_tests)
   if(ARGC LESS 1)


@@ -10,77 +10,40 @@ import tempfile
 from typing import (
     Any,
     List,
-    Tuple,
 )

-USE_VERBOSE = (os.environ.get("VERBOSE", None) is not None)
-# Could make configurable.
-USE_VERBOSE_PROGRESS = True
-
-CHECKER_BIN = "cppcheck"
+USE_QUIET = (os.environ.get("QUIET", None) is not None)

 CHECKER_IGNORE_PREFIX = [
     "extern",
 ]

-CHECKER_EXCLUDE_SOURCE_FILES = set(os.path.join(*f.split("/")) for f in (
-    # These files hang (taking longer than 5min with v2.8.2 at time of writing).
-    # All other files process in under around 10seconds.
-    "source/blender/editors/space_text/text_format_pov.c",
-    "source/blender/editors/space_text/text_format_pov_ini.c",
-))
+CHECKER_BIN = "cppcheck"

 CHECKER_ARGS = [
-    # Speed up execution.
-    # As Blender has many defines, the total number of configurations is large making execution unreasonably slow.
-    # This could be increased but do so with care.
-    "--max-configs=1",
-
-    # Enable this when includes are missing.
-    # "--check-config",
-
-    # Shows many pedantic issues, some are quite useful.
-    "--enable=all",
-
-    # Also shows useful messages, even if some are false-positives.
-    "--inconclusive",
-
+    # not sure why this is needed, but it is.
+    "-I" + os.path.join(project_source_info.SOURCE_DIR, "extern", "glew", "include"),
+    "--suppress=*:%s/extern/glew/include/GL/glew.h:241" % project_source_info.SOURCE_DIR,
+    "--max-configs=1",  # speeds up execution
+    # "--check-config",  # when includes are missing
+    "--enable=all",  # if you want sixty hundred pedantic suggestions
     # Quiet output, otherwise all defines/includes are printed (overly verbose).
     # Only enable this for troubleshooting (if defines are not set as expected for example).
-    *(() if USE_VERBOSE else ("--quiet",))
+    "--quiet",
     # NOTE: `--cppcheck-build-dir=<dir>` is added later as a temporary directory.
 ]

+if USE_QUIET:
+    CHECKER_ARGS.append("--quiet")

-def source_info_filter(
-    source_info: List[Tuple[str, List[str], List[str]]],
-) -> List[Tuple[str, List[str], List[str]]]:
-    source_dir = project_source_info.SOURCE_DIR
-    if not source_dir.endswith(os.sep):
-        source_dir += os.sep
-    source_info_result = []
-    for i, item in enumerate(source_info):
-        c = item[0]
-        if c.startswith(source_dir):
-            c_relative = c[len(source_dir):]
-            if c_relative in CHECKER_EXCLUDE_SOURCE_FILES:
-                CHECKER_EXCLUDE_SOURCE_FILES.remove(c_relative)
-                continue
-        source_info_result.append(item)
-    if CHECKER_EXCLUDE_SOURCE_FILES:
-        sys.stderr.write("Error: exclude file(s) are missing: %r\n" % list(sorted(CHECKER_EXCLUDE_SOURCE_FILES)))
-        sys.exit(1)
-    return source_info_result
-

 def cppcheck() -> None:
     source_info = project_source_info.build_info(ignore_prefix_list=CHECKER_IGNORE_PREFIX)
     source_defines = project_source_info.build_defines_as_args()

-    # Apply exclusion.
-    source_info = source_info_filter(source_info)
-
     check_commands = []
     for c, inc_dirs, defs in source_info:
         cmd = (
@@ -97,7 +60,7 @@ def cppcheck() -> None:
     process_functions = []

     def my_process(i: int, c: str, cmd: List[str]) -> subprocess.Popen[Any]:
-        if USE_VERBOSE_PROGRESS:
+        if not USE_QUIET:
             percent = 100.0 * (i / len(check_commands))
             percent_str = "[" + ("%.2f]" % percent).rjust(7) + " %:"
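The `QUIET`/`VERBOSE` switches in this hunk follow a common pattern: any non-empty environment variable counts as "set", mirroring `os.environ.get("VERBOSE", None) is not None` in the Python checker. A shell sketch of the same toggle, written as a parameterized function so both branches are visible (names are illustrative, not taken from the script):

```shell
#!/bin/sh
# checker_args builds the cppcheck argument list; the first argument plays
# the role of the VERBOSE environment variable (empty = unset).
checker_args() {
  _args="--max-configs=1 --enable=all"
  if [ -z "$1" ]; then
    # Default to quiet output; verbose runs print all defines/includes.
    _args="$_args --quiet"
  fi
  echo "$_args"
}

checker_args ""   # quiet run:   --max-configs=1 --enable=all --quiet
checker_args "1"  # verbose run: --max-configs=1 --enable=all
```

This is also why the hunk flips `if USE_VERBOSE_PROGRESS:` to `if not USE_QUIET:` in `my_process()`: the two variables express the same toggle with opposite polarity.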


@@ -13,7 +13,7 @@ set(WITH_BULLET ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_AVI ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_FFMPEG ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_SNDFILE ON CACHE BOOL "" FORCE)
-set(WITH_COMPOSITOR_CPU ON CACHE BOOL "" FORCE)
+set(WITH_COMPOSITOR ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES_EMBREE ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES_OSL ON CACHE BOOL "" FORCE)


@@ -7,6 +7,8 @@
 # cmake -C../blender/build_files/cmake/config/blender_lite.cmake ../blender
 #

+set(WITH_INSTALL_PORTABLE ON CACHE BOOL "" FORCE)
 set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
 set(WITH_AUDASPACE OFF CACHE BOOL "" FORCE)
 set(WITH_BLENDER_THUMBNAILER OFF CACHE BOOL "" FORCE)
@@ -16,7 +18,7 @@ set(WITH_BULLET OFF CACHE BOOL "" FORCE)
 set(WITH_CODEC_AVI OFF CACHE BOOL "" FORCE)
 set(WITH_CODEC_FFMPEG OFF CACHE BOOL "" FORCE)
 set(WITH_CODEC_SNDFILE OFF CACHE BOOL "" FORCE)
-set(WITH_COMPOSITOR_CPU OFF CACHE BOOL "" FORCE)
+set(WITH_COMPOSITOR OFF CACHE BOOL "" FORCE)
 set(WITH_COREAUDIO OFF CACHE BOOL "" FORCE)
 set(WITH_CYCLES OFF CACHE BOOL "" FORCE)
 set(WITH_DRACO OFF CACHE BOOL "" FORCE)
@@ -33,7 +35,6 @@ set(WITH_IMAGE_OPENEXR OFF CACHE BOOL "" FORCE)
 set(WITH_IMAGE_OPENJPEG OFF CACHE BOOL "" FORCE)
 set(WITH_IMAGE_TIFF OFF CACHE BOOL "" FORCE)
 set(WITH_IMAGE_WEBP OFF CACHE BOOL "" FORCE)
-set(WITH_INPUT_IME OFF CACHE BOOL "" FORCE)
 set(WITH_INPUT_NDOF OFF CACHE BOOL "" FORCE)
 set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
 set(WITH_IO_STL OFF CACHE BOOL "" FORCE)


@@ -14,7 +14,7 @@ set(WITH_BULLET ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_AVI ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_FFMPEG ON CACHE BOOL "" FORCE)
 set(WITH_CODEC_SNDFILE ON CACHE BOOL "" FORCE)
-set(WITH_COMPOSITOR_CPU ON CACHE BOOL "" FORCE)
+set(WITH_COMPOSITOR ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES_EMBREE ON CACHE BOOL "" FORCE)
 set(WITH_CYCLES_OSL ON CACHE BOOL "" FORCE)
@@ -78,6 +78,11 @@ if(UNIX AND NOT APPLE)
   set(WITH_PULSEAUDIO ON CACHE BOOL "" FORCE)
   set(WITH_X11_XINPUT ON CACHE BOOL "" FORCE)
   set(WITH_X11_XF86VMODE ON CACHE BOOL "" FORCE)
+
+  # Disable oneAPI on Linux for the time being.
+  # The AoT compilation takes too long to be used officially in the buildbot CI/CD and the JIT
+  # compilation has ABI compatibility issues when running builds made on centOS on Ubuntu.
+  set(WITH_CYCLES_DEVICE_ONEAPI OFF CACHE BOOL "" FORCE)
 endif()

 if(NOT APPLE)
   set(WITH_XR_OPENXR ON CACHE BOOL "" FORCE)
@@ -87,5 +92,7 @@ if(NOT APPLE)
   set(WITH_CYCLES_CUBIN_COMPILER OFF CACHE BOOL "" FORCE)
   set(WITH_CYCLES_HIP_BINARIES ON CACHE BOOL "" FORCE)
   set(WITH_CYCLES_DEVICE_ONEAPI ON CACHE BOOL "" FORCE)
-  set(WITH_CYCLES_ONEAPI_BINARIES ON CACHE BOOL "" FORCE)
+  # Disable AoT kernels compilations until buildbot can deliver them in a reasonabel time.
+  set(WITH_CYCLES_ONEAPI_BINARIES OFF CACHE BOOL "" FORCE)
 endif()


@@ -8,81 +8,41 @@
 set(WITH_PYTHON_MODULE ON CACHE BOOL "" FORCE)

-# -----------------------------------------------------------------------------
-# Installation Configuration.
-#
-# NOTE: `WITH_INSTALL_PORTABLE` always defaults to ON when building as a Python module and
-# isn't set here as it makes changing the setting impractical.
-# Python-developers could prefer either ON/OFF depending on their usage:
-#
-# - When using the system's Python, disabling will install into their `site-packages`,
-#   allowing them to run Python from any directory and `import bpy`.
-# - When using Blender's bundled Python in `./../lib/` it will install there
-#   which isn't especially useful as it requires running Python from this directory too.
-#
-# So default `WITH_INSTALL_PORTABLE` to ON, and developers who don't use Python from `./../lib/`
-# can disable it if they wish to install into their systems Python.
-
-# There is no point in copying python into Python.
+# install into the systems python dir
+set(WITH_INSTALL_PORTABLE OFF CACHE BOOL "" FORCE)
+
+# no point int copying python into python
 set(WITH_PYTHON_INSTALL OFF CACHE BOOL "" FORCE)

+# disable audio, its possible some devs may want this but for now disable
+# so the python module doesn't hold the audio device and loads quickly.
+set(WITH_AUDASPACE OFF CACHE BOOL "" FORCE)
+set(WITH_CODEC_FFMPEG OFF CACHE BOOL "" FORCE)
+set(WITH_CODEC_SNDFILE OFF CACHE BOOL "" FORCE)
+set(WITH_COREAUDIO OFF CACHE BOOL "" FORCE)
+set(WITH_JACK OFF CACHE BOOL "" FORCE)
+set(WITH_OPENAL OFF CACHE BOOL "" FORCE)
+set(WITH_PULSEAUDIO OFF CACHE BOOL "" FORCE)
+set(WITH_SDL OFF CACHE BOOL "" FORCE)
+set(WITH_WASAPI OFF CACHE BOOL "" FORCE)
+
+# other features which are not especially useful as a python module
+set(WITH_ALEMBIC OFF CACHE BOOL "" FORCE)
+set(WITH_BULLET OFF CACHE BOOL "" FORCE)
+set(WITH_INPUT_NDOF OFF CACHE BOOL "" FORCE)
+set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)
+set(WITH_NANOVDB OFF CACHE BOOL "" FORCE)
+set(WITH_OPENCOLLADA OFF CACHE BOOL "" FORCE)
+set(WITH_OPENVDB OFF CACHE BOOL "" FORCE)
+set(WITH_X11_XINPUT OFF CACHE BOOL "" FORCE)
+
 # Depends on Python install, do this to quiet warning.
 set(WITH_DRACO OFF CACHE BOOL "" FORCE)

-if(WIN32)
-  set(WITH_WINDOWS_BUNDLE_CRT OFF CACHE BOOL "" FORCE)
-endif()
-
-# -----------------------------------------------------------------------------
-# Library Compatibility.
-
-# JEMALLOC does not work with `dlopen()` of Python modules:
+# Jemalloc does not work with dlopen() of Python modules:
 # https://github.com/jemalloc/jemalloc/issues/1237
 set(WITH_MEM_JEMALLOC OFF CACHE BOOL "" FORCE)

-# -----------------------------------------------------------------------------
-# Application Support.
-
-# Not useful to include with the Python module.
-# Although a way to extract this from Python could be handle,
-# this would be better exposed directly via the Python API.
-set(WITH_BLENDER_THUMBNAILER OFF CACHE BOOL "" FORCE)
-
-# -----------------------------------------------------------------------------
-# Audio Support.
-
-# Disable audio, its possible some developers may want this but for now disable
-# so the Python module doesn't hold the audio device and loads quickly.
-set(WITH_AUDASPACE OFF CACHE BOOL "" FORCE)
-set(WITH_JACK OFF CACHE BOOL "" FORCE)
-set(WITH_OPENAL OFF CACHE BOOL "" FORCE)
-set(WITH_SDL OFF CACHE BOOL "" FORCE)
-if(UNIX AND NOT APPLE)
-  set(WITH_PULSEAUDIO OFF CACHE BOOL "" FORCE)
-endif()
 if(WIN32)
-  set(WITH_WASAPI OFF CACHE BOOL "" FORCE)
+  set(WITH_WINDOWS_BUNDLE_CRT OFF CACHE BOOL "" FORCE)
 endif()
-if(APPLE)
-  set(WITH_COREAUDIO OFF CACHE BOOL "" FORCE)
-endif()
-
-# -----------------------------------------------------------------------------
-# Input Device Support.
-
-# Other features which are not especially useful as a python module.
-set(WITH_INPUT_NDOF OFF CACHE BOOL "" FORCE)
-if(WIN32 OR APPLE)
-  set(WITH_INPUT_IME OFF CACHE BOOL "" FORCE)
-endif()
-
-# -----------------------------------------------------------------------------
-# Language Support.
-
-set(WITH_INTERNATIONAL OFF CACHE BOOL "" FORCE)


@@ -1,33 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-or-later
# Copyright 2022 Blender Foundation. All rights reserved.
# This file is used to test the system for headers & symbols.
# Variables should use the `HAVE_` prefix.
# Defines should use the same name as the CMAKE variable.
include(CheckSymbolExists)
# Used for: `intern/guardedalloc/intern/mallocn_intern.h`.
# Function `malloc_stats` is only available on GLIBC,
# so check that before defining `HAVE_MALLOC_STATS`.
check_symbol_exists(malloc_stats "malloc.h" HAVE_MALLOC_STATS_H)
# Used for: `source/creator/creator_signals.c`.
# The function `feenableexcept` is not present non-GLIBC systems,
# hence we need to check if it's available in the `fenv.h` file.
set(HAVE_FEENABLEEXCEPT OFF)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
check_symbol_exists(feenableexcept "fenv.h" HAVE_FEENABLEEXCEPT)
endif()
# Used for: `source/blender/blenlib/intern/system.c`.
# `execinfo` is not available on non-GLIBC systems (at least not on MUSL-LIBC),
# so check the presence of the header before including it and using the it for back-trace.
set(HAVE_EXECINFO_H OFF)
if(NOT MSVC)
include(CheckIncludeFiles)
check_include_files("execinfo.h" HAVE_EXECINFO_H)
if(HAVE_EXECINFO_H)
add_definitions(-DHAVE_EXECINFO_H)
endif()
endif()


@@ -418,13 +418,6 @@ function(blender_add_test_lib
     library_deps
   )

-  # Not currently supported for Python module due to different required
-  # Python link flags.
-  if(WITH_PYTHON_MODULE)
-    add_custom_target(${name})
-    return()
-  endif()
-
   add_cc_flags_custom_test(${name} PARENT_SCOPE)

   # Otherwise external projects will produce warnings that we cannot fix.
@@ -471,13 +464,6 @@ function(blender_add_test_executable
     library_deps
   )

-  # Not currently supported for Python module due to different required
-  # Python link flags.
-  if(WITH_PYTHON_MODULE)
-    add_custom_target(${name})
-    return()
-  endif()
-
   add_cc_flags_custom_test(${name} PARENT_SCOPE)

   ## Otherwise external projects will produce warnings that we cannot fix.
@@ -1222,8 +1208,16 @@ endmacro()
 macro(without_system_libs_begin)
   set(CMAKE_IGNORE_PATH "${CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES};${CMAKE_SYSTEM_INCLUDE_PATH};${CMAKE_C_IMPLICIT_INCLUDE_DIRECTORIES};${CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES}")
+  if(APPLE)
+    # Avoid searching for headers in frameworks (like Mono), and libraries in LIBDIR.
+    set(CMAKE_FIND_FRAMEWORK NEVER)
+  endif()
 endmacro()

 macro(without_system_libs_end)
   unset(CMAKE_IGNORE_PATH)
+  if(APPLE)
+    # FIRST is the default.
+    set(CMAKE_FIND_FRAMEWORK FIRST)
+  endif()
 endmacro()


@@ -21,25 +21,9 @@ function(print_found_status
   endif()
 endfunction()

-# Utility to install precompiled shared libraries.
-macro(add_bundled_libraries library)
-  if(EXISTS ${LIBDIR})
-    set(_library_dir ${LIBDIR}/${library}/lib)
-    file(GLOB _all_library_versions ${_library_dir}/*\.dylib*)
-    list(APPEND PLATFORM_BUNDLED_LIBRARIES ${_all_library_versions})
-    list(APPEND PLATFORM_BUNDLED_LIBRARY_DIRS ${_library_dir})
-    unset(_all_library_versions)
-    unset(_library_dir)
-  endif()
-endmacro()
-
 # ------------------------------------------------------------------------
 # Find system provided libraries.

-# Avoid searching for headers since this would otherwise override our lib
-# directory as well as PYTHON_ROOT_DIR.
-set(CMAKE_FIND_FRAMEWORK NEVER)
-
 # Find system ZLIB, not the pre-compiled one supplied with OpenCollada.
 set(ZLIB_ROOT /usr)
 find_package(ZLIB REQUIRED)
@@ -79,11 +63,6 @@ if(NOT EXISTS "${LIBDIR}/")
   message(FATAL_ERROR "Mac OSX requires pre-compiled libs at: '${LIBDIR}'")
 endif()

-# Optionally use system Python if PYTHON_ROOT_DIR is specified.
-if(WITH_PYTHON AND (WITH_PYTHON_MODULE AND PYTHON_ROOT_DIR))
-  find_package(PythonLibsUnix REQUIRED)
-endif()
-
 # Prefer lib directory paths
 file(GLOB LIB_SUBDIRS ${LIBDIR}/*)
 set(CMAKE_PREFIX_PATH ${LIB_SUBDIRS})
@@ -132,8 +111,34 @@ if(WITH_CODEC_SNDFILE)
   unset(_sndfile_VORBISENC_LIBRARY)
 endif()

-if(WITH_PYTHON AND NOT (WITH_PYTHON_MODULE AND PYTHON_ROOT_DIR))
-  find_package(PythonLibsUnix REQUIRED)
+if(WITH_PYTHON)
+  # Use precompiled libraries by default.
+  set(PYTHON_VERSION 3.10)
+  if(NOT WITH_PYTHON_MODULE AND NOT WITH_PYTHON_FRAMEWORK)
+    # Normally cached but not since we include them with blender.
+    set(PYTHON_INCLUDE_DIR "${LIBDIR}/python/include/python${PYTHON_VERSION}")
+    set(PYTHON_EXECUTABLE "${LIBDIR}/python/bin/python${PYTHON_VERSION}")
+    set(PYTHON_LIBRARY ${LIBDIR}/python/lib/libpython${PYTHON_VERSION}.a)
+    set(PYTHON_LIBPATH "${LIBDIR}/python/lib/python${PYTHON_VERSION}")
+  else()
+    # Module must be compiled against Python framework.
+    set(_py_framework "/Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}")
+    set(PYTHON_INCLUDE_DIR "${_py_framework}/include/python${PYTHON_VERSION}")
+    set(PYTHON_EXECUTABLE "${_py_framework}/bin/python${PYTHON_VERSION}")
+    set(PYTHON_LIBPATH "${_py_framework}/lib/python${PYTHON_VERSION}")
+    unset(_py_framework)
+  endif()
+
+  # uncached vars
+  set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
+  set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}")
+
+  # needed for Audaspace, numpy is installed into python site-packages
+  set(PYTHON_NUMPY_INCLUDE_DIRS "${PYTHON_LIBPATH}/site-packages/numpy/core/include")
+
+  if(NOT EXISTS "${PYTHON_EXECUTABLE}")
+    message(FATAL_ERROR "Python executable missing: ${PYTHON_EXECUTABLE}")
+  endif()
 endif()

 if(WITH_FFTW3)
@@ -157,9 +162,6 @@ if(WITH_CODEC_FFMPEG)
     mp3lame ogg opus swresample swscale
     theora theoradec theoraenc vorbis vorbisenc
     vorbisfile vpx x264 xvidcore)
-  if(EXISTS ${LIBDIR}/ffmpeg/lib/libaom.a)
-    list(APPEND FFMPEG_FIND_COMPONENTS aom)
-  endif()
   find_package(FFmpeg)
 endif()
@@ -196,6 +198,11 @@ if(WITH_JACK)
   string(APPEND PLATFORM_LINKFLAGS " -F/Library/Frameworks -weak_framework jackmp")
 endif()

+if(WITH_PYTHON_MODULE OR WITH_PYTHON_FRAMEWORK)
+  # force cmake to link right framework
+  string(APPEND PLATFORM_LINKFLAGS " /Library/Frameworks/Python.framework/Versions/${PYTHON_VERSION}/Python")
+endif()
+
 if(WITH_OPENCOLLADA)
   find_package(OpenCOLLADA)
   find_library(PCRE_LIBRARIES NAMES pcre HINTS ${LIBDIR}/opencollada/lib)
@@ -216,9 +223,6 @@ if(WITH_SDL)
   endif()
 endif()

-set(EPOXY_ROOT_DIR ${LIBDIR}/epoxy)
-find_package(Epoxy REQUIRED)
-
 set(PNG_ROOT ${LIBDIR}/png)
 find_package(PNG REQUIRED)
@@ -405,7 +409,6 @@ if(WITH_OPENMP)
     set(OpenMP_LIBRARY_DIR "${LIBDIR}/openmp/lib/")
     set(OpenMP_LINKER_FLAGS "-L'${OpenMP_LIBRARY_DIR}' -lomp")
     set(OpenMP_LIBRARY "${OpenMP_LIBRARY_DIR}/libomp.dylib")
-    add_bundled_libraries(openmp)
   endif()
 endif()
@@ -440,9 +443,6 @@ if(EXISTS ${LIBDIR})
   without_system_libs_end()
 endif()

-# Restore to default.
-set(CMAKE_FIND_FRAMEWORK FIRST)
-
 # ---------------------------------------------------------------------
 # Set compiler and linker flags.
@@ -467,9 +467,8 @@ string(APPEND CMAKE_CXX_FLAGS " -ftemplate-depth=1024")
 # Avoid conflicts with Luxrender, and other plug-ins that may use the same
 # libraries as Blender with a different version or build options.
-set(PLATFORM_SYMBOLS_MAP ${CMAKE_SOURCE_DIR}/source/creator/symbols_apple.map)
 string(APPEND PLATFORM_LINKFLAGS
-  " -Wl,-unexported_symbols_list,'${PLATFORM_SYMBOLS_MAP}'"
+  " -Wl,-unexported_symbols_list,'${CMAKE_SOURCE_DIR}/source/creator/osx_locals.map'"
 )

 string(APPEND CMAKE_CXX_FLAGS " -stdlib=libc++")
@@ -498,27 +497,17 @@ if(WITH_COMPILER_CCACHE)
   endif()
 endif()

-if(WITH_COMPILER_ASAN)
-  list(APPEND PLATFORM_BUNDLED_LIBRARIES ${COMPILER_ASAN_LIBRARY})
-endif()
-
-if(PLATFORM_BUNDLED_LIBRARIES)
-  # For the installed Python module and installed Blender executable, we set the
-  # rpath to the location where install step will copy the shared libraries.
-  set(CMAKE_SKIP_INSTALL_RPATH FALSE)
-  if(WITH_PYTHON_MODULE)
-    list(APPEND CMAKE_INSTALL_RPATH "@loader_path/lib")
-  else()
-    list(APPEND CMAKE_INSTALL_RPATH "@loader_path/../Resources/lib")
-  endif()
-
-  # For binaries that are built but not installed (like makesdna or tests), we add
-  # the original directory of all shared libraries to the rpath. This is needed because
-  # these can be in different folders, and because the build and install folder may be
-  # different.
-  set(CMAKE_SKIP_BUILD_RPATH FALSE)
-  list(APPEND CMAKE_BUILD_RPATH ${PLATFORM_BUNDLED_LIBRARY_DIRS})
-endif()
+# For binaries that are built but not installed (also not distributed) (datatoc,
+# makesdna, tests, etc.), we add an rpath to the OpenMP library dir through
+# CMAKE_BUILD_RPATH. This avoids having to make many copies of the dylib next to each binary.
+#
+# For the installed Python module and installed Blender executable, CMAKE_INSTALL_RPATH
+# is modified to find the dylib in an adjacent folder. Install step puts the libraries there.
+set(CMAKE_SKIP_BUILD_RPATH FALSE)
+list(APPEND CMAKE_BUILD_RPATH "${OpenMP_LIBRARY_DIR}")
+
+set(CMAKE_SKIP_INSTALL_RPATH FALSE)
+list(APPEND CMAKE_INSTALL_RPATH "@loader_path/../Resources/${BLENDER_VERSION}/lib")

 # Same as `CFBundleIdentifier` in Info.plist.
 set(CMAKE_XCODE_ATTRIBUTE_PRODUCT_BUNDLE_IDENTIFIER "org.blenderfoundation.blender")


@@ -16,16 +16,9 @@ if(NOT DEFINED LIBDIR)
   # Choose the best suitable libraries.
   if(EXISTS ${LIBDIR_NATIVE_ABI})
     set(LIBDIR ${LIBDIR_NATIVE_ABI})
-    set(WITH_LIBC_MALLOC_HOOK_WORKAROUND True)
   elseif(EXISTS ${LIBDIR_CENTOS7_ABI})
     set(LIBDIR ${LIBDIR_CENTOS7_ABI})
     set(WITH_CXX11_ABI OFF)
-    if(WITH_MEM_JEMALLOC)
-      # jemalloc provides malloc hooks.
-      set(WITH_LIBC_MALLOC_HOOK_WORKAROUND False)
-    else()
-      set(WITH_LIBC_MALLOC_HOOK_WORKAROUND True)
-    endif()

     if(CMAKE_COMPILER_IS_GNUCC AND
        CMAKE_C_COMPILER_VERSION VERSION_LESS 9.3)
@@ -88,15 +81,6 @@ macro(find_package_wrapper)
   endif()
 endmacro()

-# Utility to install precompiled shared libraries.
-macro(add_bundled_libraries library)
-  if(EXISTS ${LIBDIR})
-    file(GLOB _all_library_versions ${LIBDIR}/${library}/lib/*\.so*)
-    list(APPEND PLATFORM_BUNDLED_LIBRARIES ${_all_library_versions})
-    unset(_all_library_versions)
-  endif()
-endmacro()
-
 # ----------------------------------------------------------------------------
 # Precompiled Libraries
 #
@@ -111,19 +95,6 @@ find_package_wrapper(JPEG REQUIRED)
 find_package_wrapper(PNG REQUIRED)
 find_package_wrapper(ZLIB REQUIRED)
 find_package_wrapper(Zstd REQUIRED)
-find_package_wrapper(Epoxy REQUIRED)
-
-function(check_freetype_for_brotli)
-  include(CheckSymbolExists)
-  set(CMAKE_REQUIRED_INCLUDES ${FREETYPE_INCLUDE_DIRS})
-  check_symbol_exists(FT_CONFIG_OPTION_USE_BROTLI "freetype/config/ftconfig.h" HAVE_BROTLI)
-  unset(CMAKE_REQUIRED_INCLUDES)
-  if(NOT HAVE_BROTLI)
-    unset(HAVE_BROTLI CACHE)
-    message(FATAL_ERROR "Freetype needs to be compiled with brotli support!")
-  endif()
-  unset(HAVE_BROTLI CACHE)
-endfunction()

 if(NOT WITH_SYSTEM_FREETYPE)
   # FreeType compiled with Brotli compression for woff2.
@@ -139,42 +110,17 @@ if(NOT WITH_SYSTEM_FREETYPE)
   #   ${BROTLI_LIBRARIES}
   # )
   endif()
-  check_freetype_for_brotli()
 endif()

 if(WITH_PYTHON)
-  # This could be used, see: D14954 for details.
-  # `find_package(PythonLibs)`
+  # No way to set py35, remove for now.
+  # find_package(PythonLibs)

-  # Use our own instead, since without Python is such a rare case,
-  # require this package.
+  # Use our own instead, since without py is such a rare case,
+  # require this package

-  # XXX: Linking errors with Debian static Python (sigh).
+  # XXX Linking errors with debian static python :/
   # find_package_wrapper(PythonLibsUnix REQUIRED)
   find_package(PythonLibsUnix REQUIRED)
-
-  if(WITH_PYTHON_MODULE AND NOT WITH_INSTALL_PORTABLE)
-    # Installing into `site-packages`, warn when installing into `./../lib/`
-    # which script authors almost certainly don't want.
-    if(EXISTS ${LIBDIR})
-      cmake_path(IS_PREFIX LIBDIR "${PYTHON_SITE_PACKAGES}" NORMALIZE _is_prefix)
-      if(_is_prefix)
-        message(WARNING "
-  Building Blender with the following configuration:
-    - WITH_PYTHON_MODULE=ON
-    - WITH_INSTALL_PORTABLE=OFF
-    - LIBDIR=\"${LIBDIR}\"
-    - PYTHON_SITE_PACKAGES=\"${PYTHON_SITE_PACKAGES}\"
-  In this case you may want to either:
-    - Use the system Python's site-packages, see:
-      python -c \"import site; print(site.getsitepackages()[0])\"
-    - Set WITH_INSTALL_PORTABLE=ON to create a stand-alone \"bpy\" module
-      which you will need to ensure is in Python's module search path.
-  Proceeding with PYTHON_SITE_PACKAGES install target, you have been warned!"
-      )
-    endif()
-    unset(_is_prefix)
-  endif()
 endif()

 if(WITH_IMAGE_OPENEXR)
@@ -256,9 +202,6 @@ if(WITH_CODEC_FFMPEG)
       vpx
       x264
       xvidcore)
-  if(EXISTS ${LIBDIR}/ffmpeg/lib/libaom.a)
-    list(APPEND FFMPEG_FIND_COMPONENTS aom)
-  endif()
 elseif(FFMPEG)
   # Old cache variable used for root dir, convert to new standard.
   set(FFMPEG_ROOT_DIR ${FFMPEG})
@@ -641,7 +584,6 @@ if(WITH_SYSTEM_FREETYPE)
   if(NOT FREETYPE_FOUND)
     message(FATAL_ERROR "Failed finding system FreeType version!")
   endif()
-  check_freetype_for_brotli()
 endif()

 if(WITH_LZO AND WITH_SYSTEM_LZO)
@@ -684,92 +626,46 @@ endif()
 if(WITH_GHOST_WAYLAND)
   find_package(PkgConfig)
-  pkg_check_modules(wayland-client wayland-client>=1.12)
-  pkg_check_modules(wayland-egl wayland-egl)
-  pkg_check_modules(wayland-scanner wayland-scanner)
-  pkg_check_modules(xkbcommon xkbcommon)
-  pkg_check_modules(wayland-cursor wayland-cursor)
-  pkg_check_modules(wayland-protocols wayland-protocols>=1.15)
+  pkg_check_modules(wayland-client REQUIRED wayland-client>=1.12)
+  pkg_check_modules(wayland-egl REQUIRED wayland-egl)
+  pkg_check_modules(wayland-scanner REQUIRED wayland-scanner)
+  pkg_check_modules(xkbcommon REQUIRED xkbcommon)
+  pkg_check_modules(wayland-cursor REQUIRED wayland-cursor)

-  if(${wayland-protocols_FOUND})
-    pkg_get_variable(WAYLAND_PROTOCOLS_DIR wayland-protocols pkgdatadir)
-  else()
-    # CentOS 7 packages have too old a version, a newer version exist in the
-    # precompiled libraries.
-    find_path(WAYLAND_PROTOCOLS_DIR
-      NAMES unstable/xdg-decoration/xdg-decoration-unstable-v1.xml
-      PATH_SUFFIXES share/wayland-protocols
-      PATHS ${LIBDIR}/wayland-protocols
-    )
-    if(EXISTS ${WAYLAND_PROTOCOLS_DIR})
-      set(wayland-protocols_FOUND ON)
-    endif()
-  endif()
-
-  if (NOT ${wayland-client_FOUND})
-    message(STATUS "wayland-client not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-  if (NOT ${wayland-egl_FOUND})
-    message(STATUS "wayland-egl not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-  if (NOT ${wayland-scanner_FOUND})
-    message(STATUS "wayland-scanner not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-  if (NOT ${wayland-cursor_FOUND})
-    message(STATUS "wayland-cursor not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-  if (NOT ${wayland-protocols_FOUND})
-    message(STATUS "wayland-protocols not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-  if (NOT ${xkbcommon_FOUND})
-    message(STATUS "xkbcommon not found, disabling WITH_GHOST_WAYLAND")
-    set(WITH_GHOST_WAYLAND OFF)
-  endif()
-
-  if(WITH_GHOST_WAYLAND)
-    if(WITH_GHOST_WAYLAND_DBUS)
-      pkg_check_modules(dbus REQUIRED dbus-1)
-    endif()
-
-    if(WITH_GHOST_WAYLAND_LIBDECOR)
-      pkg_check_modules(libdecor REQUIRED libdecor-0>=0.1)
-    endif()
-
-    if(NOT WITH_GHOST_WAYLAND_DYNLOAD)
-      list(APPEND PLATFORM_LINKLIBS
-        ${xkbcommon_LINK_LIBRARIES}
-      )
-    endif()
-
-    if(WITH_GHOST_WAYLAND_DBUS)
-      list(APPEND PLATFORM_LINKLIBS
-        ${dbus_LINK_LIBRARIES}
-      )
-      add_definitions(-DWITH_GHOST_WAYLAND_DBUS)
-    endif()
-
-    if(WITH_GHOST_WAYLAND_LIBDECOR)
-      if(NOT WITH_GHOST_WAYLAND_DYNLOAD)
-        list(APPEND PLATFORM_LINKLIBS
-          ${libdecor_LIBRARIES}
-        )
-      endif()
-      add_definitions(-DWITH_GHOST_WAYLAND_LIBDECOR)
-    endif()
-
-    pkg_get_variable(WAYLAND_SCANNER wayland-scanner wayland_scanner)
-  endif()
+  if(WITH_GHOST_WAYLAND_DBUS)
+    pkg_check_modules(dbus REQUIRED dbus-1)
+  endif()
+
+  if(WITH_GHOST_WAYLAND_LIBDECOR)
+    pkg_check_modules(libdecor REQUIRED libdecor-0>=0.1)
+  endif()
+
+  list(APPEND PLATFORM_LINKLIBS
+    ${xkbcommon_LINK_LIBRARIES}
+  )
+
+  if(NOT WITH_GHOST_WAYLAND_DYNLOAD)
+    list(APPEND PLATFORM_LINKLIBS
+      ${wayland-client_LINK_LIBRARIES}
+      ${wayland-egl_LINK_LIBRARIES}
+      ${wayland-cursor_LINK_LIBRARIES}
+    )
+  endif()
+
+  if(WITH_GHOST_WAYLAND_DBUS)
+    list(APPEND PLATFORM_LINKLIBS
+      ${dbus_LINK_LIBRARIES}
+    )
+    add_definitions(-DWITH_GHOST_WAYLAND_DBUS)
+  endif()
+
+  if(WITH_GHOST_WAYLAND_LIBDECOR)
+    if(NOT WITH_GHOST_WAYLAND_DYNLOAD)
+      list(APPEND PLATFORM_LINKLIBS
+        ${libdecor_LIBRARIES}
+      )
+    endif()
+    add_definitions(-DWITH_GHOST_WAYLAND_LIBDECOR)
+  endif()
 endif()
@@ -890,8 +786,7 @@ if(CMAKE_COMPILER_IS_GNUCC)
         "The mold linker could not find the directory containing the linker command "
         "(typically "
         "\"${MOLD_PREFIX}/libexec/mold/ld\") or "
-        "\"${MOLD_PREFIX}/lib/mold/ld\") using system linker."
-      )
+        "\"${MOLD_PREFIX}/lib/mold/ld\") using system linker.")
       set(WITH_LINKER_MOLD OFF)
     endif()
     unset(MOLD_PREFIX)
@@ -990,9 +885,8 @@ unset(_IS_LINKER_DEFAULT)
 # Avoid conflicts with Mesa llvmpipe, Luxrender, and other plug-ins that may
 # use the same libraries as Blender with a different version or build options.
-set(PLATFORM_SYMBOLS_MAP ${CMAKE_SOURCE_DIR}/source/creator/symbols_unix.map)
 set(PLATFORM_LINKFLAGS
-  "${PLATFORM_LINKFLAGS} -Wl,--version-script='${PLATFORM_SYMBOLS_MAP}'"
+  "${PLATFORM_LINKFLAGS} -Wl,--version-script='${CMAKE_SOURCE_DIR}/source/creator/blender.map'"
 )

 # Don't use position independent executable for portable install since file
@@ -1030,8 +924,7 @@ function(CONFIGURE_ATOMIC_LIB_IF_NEEDED)
     int main(int argc, char **argv) {
       std::atomic<uint64_t> uint64; uint64++;
       return 0;
-    }"
-  )
+    }")

   include(CheckCXXSourceCompiles)
   check_cxx_source_compiles("${_source}" ATOMIC_OPS_WITHOUT_LIBATOMIC)
@@ -1043,7 +936,6 @@ function(CONFIGURE_ATOMIC_LIB_IF_NEEDED)
     set(CMAKE_REQUIRED_LIBRARIES atomic)
     check_cxx_source_compiles("${_source}" ATOMIC_OPS_WITH_LIBATOMIC)
-    unset(CMAKE_REQUIRED_LIBRARIES)

     if(ATOMIC_OPS_WITH_LIBATOMIC)
       set(PLATFORM_LINKFLAGS "${PLATFORM_LINKFLAGS} -latomic" PARENT_SCOPE)
@@ -1057,16 +949,3 @@ function(CONFIGURE_ATOMIC_LIB_IF_NEEDED)
 endfunction()

 CONFIGURE_ATOMIC_LIB_IF_NEEDED()
-
-if(PLATFORM_BUNDLED_LIBRARIES)
-  # For the installed Python module and installed Blender executable, we set the
-  # rpath to the relative path where the install step will copy the shared libraries.
-  set(CMAKE_SKIP_INSTALL_RPATH FALSE)
-  list(APPEND CMAKE_INSTALL_RPATH $ORIGIN/lib)
-
-  # For executables that are built but not installed (mainly tests) we set an absolute
-  # rpath to the lib folder. This is needed because these can be in different folders,
-  # and because the build and install folder may be different.
-  set(CMAKE_SKIP_BUILD_RPATH FALSE)
-  list(APPEND CMAKE_BUILD_RPATH $ORIGIN/lib ${CMAKE_INSTALL_PREFIX_WITH_CONFIG}/lib)
-endif()


@@ -146,7 +146,7 @@ endif()
 if(WITH_COMPILER_ASAN AND MSVC AND NOT MSVC_CLANG)
   if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 19.28.29828)
     #set a flag so we don't have to do this comparison all the time
-    set(MSVC_ASAN ON)
+    SET(MSVC_ASAN ON)
     set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /fsanitize=address")
     set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /fsanitize=address")
     string(APPEND CMAKE_EXE_LINKER_FLAGS_DEBUG " /INCREMENTAL:NO")
@@ -185,20 +185,20 @@ endif()
 if(WITH_WINDOWS_SCCACHE)
-  set(CMAKE_C_COMPILER_LAUNCHER sccache)
-  set(CMAKE_CXX_COMPILER_LAUNCHER sccache)
-  set(SYMBOL_FORMAT /Z7)
-  set(SYMBOL_FORMAT_RELEASE /Z7)
-else()
-  unset(CMAKE_C_COMPILER_LAUNCHER)
-  unset(CMAKE_CXX_COMPILER_LAUNCHER)
-  if(MSVC_ASAN)
-    set(SYMBOL_FORMAT /Z7)
-    set(SYMBOL_FORMAT_RELEASE /Z7)
-  else()
-    set(SYMBOL_FORMAT /ZI)
-    set(SYMBOL_FORMAT_RELEASE /Zi)
-  endif()
+    set(CMAKE_C_COMPILER_LAUNCHER sccache)
+    set(CMAKE_CXX_COMPILER_LAUNCHER sccache)
+    set(SYMBOL_FORMAT /Z7)
+    set(SYMBOL_FORMAT_RELEASE /Z7)
+else()
+    unset(CMAKE_C_COMPILER_LAUNCHER)
+    unset(CMAKE_CXX_COMPILER_LAUNCHER)
+    if(MSVC_ASAN)
+      set(SYMBOL_FORMAT /Z7)
+      set(SYMBOL_FORMAT_RELEASE /Z7)
+    else()
+      set(SYMBOL_FORMAT /ZI)
+      set(SYMBOL_FORMAT_RELEASE /Zi)
+    endif()
 endif()

 if(WITH_WINDOWS_PDB)
@@ -323,13 +323,6 @@ if(NOT JPEG_FOUND)
   set(JPEG_LIBRARIES ${LIBDIR}/jpeg/lib/libjpeg.lib)
 endif()

-set(EPOXY_ROOT_DIR ${LIBDIR}/epoxy)
-windows_find_package(Epoxy REQUIRED)
-if(NOT EPOXY_FOUND)
-  set(Epoxy_INCLUDE_DIRS ${LIBDIR}/epoxy/include)
-  set(Epoxy_LIBRARIES ${LIBDIR}/epoxy/lib/epoxy.lib)
-endif()
-
 set(PTHREADS_INCLUDE_DIRS ${LIBDIR}/pthreads/include)
 set(PTHREADS_LIBRARIES ${LIBDIR}/pthreads/lib/pthreadVC3.lib)
@@ -423,7 +416,7 @@ if(WITH_CODEC_FFMPEG)
       ${LIBDIR}/ffmpeg/lib/avdevice.lib
       ${LIBDIR}/ffmpeg/lib/avutil.lib
       ${LIBDIR}/ffmpeg/lib/swscale.lib
     )
   endif()
 endif()
@@ -504,16 +497,12 @@ if(WITH_JACK)
 endif()

 if(WITH_PYTHON)
-  # Cache version for make_bpy_wheel.py to detect.
-  unset(PYTHON_VERSION CACHE)
-  set(PYTHON_VERSION "3.10" CACHE STRING "Python version")
+  set(PYTHON_VERSION 3.10) # CACHE STRING)

   string(REPLACE "." "" _PYTHON_VERSION_NO_DOTS ${PYTHON_VERSION})
   set(PYTHON_LIBRARY ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/libs/python${_PYTHON_VERSION_NO_DOTS}.lib)
   set(PYTHON_LIBRARY_DEBUG ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/libs/python${_PYTHON_VERSION_NO_DOTS}_d.lib)
-  set(PYTHON_EXECUTABLE ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/bin/python$<$<CONFIG:Debug>:_d>.exe)

   set(PYTHON_INCLUDE_DIR ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/include)
   set(PYTHON_NUMPY_INCLUDE_DIRS ${LIBDIR}/python/${_PYTHON_VERSION_NO_DOTS}/lib/site-packages/numpy/core/include)
   set(NUMPY_FOUND ON)
@@ -576,14 +565,12 @@ if(WITH_BOOST)
     if(WITH_CYCLES AND WITH_CYCLES_OSL)
       set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
         optimized ${BOOST_LIBPATH}/libboost_wave-${BOOST_POSTFIX}
-        debug ${BOOST_LIBPATH}/libboost_wave-${BOOST_DEBUG_POSTFIX}
-      )
+        debug ${BOOST_LIBPATH}/libboost_wave-${BOOST_DEBUG_POSTFIX})
     endif()
     if(WITH_INTERNATIONAL)
       set(BOOST_LIBRARIES ${BOOST_LIBRARIES}
         optimized ${BOOST_LIBPATH}/libboost_locale-${BOOST_POSTFIX}
-        debug ${BOOST_LIBPATH}/libboost_locale-${BOOST_DEBUG_POSTFIX}
-      )
+        debug ${BOOST_LIBPATH}/libboost_locale-${BOOST_DEBUG_POSTFIX})
     endif()
 else() # we found boost using find_package
   set(BOOST_INCLUDE_DIR ${Boost_INCLUDE_DIRS})
@@ -690,8 +677,7 @@ if(WITH_OPENIMAGEDENOISE)
     optimized ${OPENIMAGEDENOISE_LIBPATH}/dnnl.lib
     debug ${OPENIMAGEDENOISE_LIBPATH}/OpenImageDenoise_d.lib
     debug ${OPENIMAGEDENOISE_LIBPATH}/common_d.lib
-    debug ${OPENIMAGEDENOISE_LIBPATH}/dnnl_d.lib
-  )
+    debug ${OPENIMAGEDENOISE_LIBPATH}/dnnl_d.lib)
   set(OPENIMAGEDENOISE_DEFINITIONS)
 endif()
@@ -846,8 +832,7 @@ if(WITH_CYCLES AND WITH_CYCLES_EMBREE)
       debug ${LIBDIR}/embree/lib/math_d.lib
       debug ${LIBDIR}/embree/lib/simd_d.lib
       debug ${LIBDIR}/embree/lib/sys_d.lib
-      debug ${LIBDIR}/embree/lib/tasking_d.lib
-    )
+      debug ${LIBDIR}/embree/lib/tasking_d.lib)
 endif()
 endif()


@@ -3,22 +3,20 @@
 # First generate the manifest for tests since it will not need the dependency on the CRT.
 configure_file(${CMAKE_SOURCE_DIR}/release/windows/manifest/blender.exe.manifest.in ${CMAKE_CURRENT_BINARY_DIR}/tests.exe.manifest @ONLY)

-# Always detect system libraries, since they are also used by oneAPI.
-# But don't always install them, only for WITH_WINDOWS_BUNDLE_CRT=ON.
-set(CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP TRUE)
-set(CMAKE_INSTALL_UCRT_LIBRARIES TRUE)
-set(CMAKE_INSTALL_OPENMP_LIBRARIES ${WITH_OPENMP})
-
-# This sometimes can change when updates are installed and the compiler version
-# changes, so test if it exists and if not, give InstallRequiredSystemLibraries
-# another chance to figure out the path.
-if(MSVC_REDIST_DIR AND NOT EXISTS "${MSVC_REDIST_DIR}")
-  unset(MSVC_REDIST_DIR CACHE)
-endif()
-
-include(InstallRequiredSystemLibraries)
-
 if(WITH_WINDOWS_BUNDLE_CRT)
+  set(CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP TRUE)
+  set(CMAKE_INSTALL_UCRT_LIBRARIES TRUE)
+  set(CMAKE_INSTALL_OPENMP_LIBRARIES ${WITH_OPENMP})
+
+  # This sometimes can change when updates are installed and the compiler version
+  # changes, so test if it exists and if not, give InstallRequiredSystemLibraries
+  # another chance to figure out the path.
+  if(MSVC_REDIST_DIR AND NOT EXISTS "${MSVC_REDIST_DIR}")
+    unset(MSVC_REDIST_DIR CACHE)
+  endif()
+
+  include(InstallRequiredSystemLibraries)
+
   # ucrtbase(d).dll cannot be in the manifest, due to the way windows 10 handles
   # redirects for this dll, for details see T88813.
   foreach(lib ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS})


@@ -30,8 +30,6 @@ from typing import (
     cast,
 )

-import shlex
-
 SOURCE_DIR = join(dirname(__file__), "..", "..")
 SOURCE_DIR = normpath(SOURCE_DIR)
@@ -162,7 +160,7 @@ def build_info(
     for c in compilers:
         args = args.replace(c, fake_compiler)

-    args = shlex.split(args)
+    args = args.split()
     # end

     # remove compiler
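For context on the `shlex.split(args)` → `args.split()` change above: the two calls only agree while no argument contains quoted whitespace. A standalone comparison (the command line below is made up for illustration):

```python
import shlex

# A compiler command line with a quoted, space-containing include path.
args = '-DWITH_X=ON -I "/opt/My Libs/include" -O2'

# shlex.split() respects the quotes and keeps the path as one argument.
print(shlex.split(args))  # ['-DWITH_X=ON', '-I', '/opt/My Libs/include', '-O2']

# str.split() splits on every run of whitespace, breaking the quoted path
# and leaving the quote characters in the tokens.
print(args.split())  # ['-DWITH_X=ON', '-I', '"/opt/My', 'Libs/include"', '-O2']
```

So the simpler `args.split()` is only safe under the assumption that the build arguments never contain quoted paths with spaces.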


@@ -1,222 +0,0 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0-or-later
"""
Make Python wheel package (`*.whl`) file from Blender built with 'WITH_PYTHON_MODULE' enabled.
Example
=======
If the "bpy" module was built on Linux using the command:
make bpy lite
The command to package it as a wheel is:
./build_files/utils/make_bpy_wheel.py ../build_linux_bpy_lite/bin --output-dir=./
This will create a `*.whl` file in the current directory.
"""
import argparse
import make_utils
import os
import re
import platform
import string
import setuptools # type: ignore
import sys
from typing import (
Generator,
List,
Optional,
Sequence,
Tuple,
)
# ------------------------------------------------------------------------------
# Generic Functions
def find_dominating_file(
path: str,
search: Sequence[str],
) -> str:
while True:
for d in search:
if os.path.exists(os.path.join(path, d)):
return os.path.join(path, d)
path_next = os.path.normpath(os.path.join(path, ".."))
if path == path_next:
break
path = path_next
return ""
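The `find_dominating_file` helper above walks parent directories until it finds one of the search names or reaches the filesystem root. A self-contained sketch of the same logic against a throwaway directory tree (the paths here are temporary fixtures, not from the build):

```python
import os
import tempfile

def find_dominating_file(path, search):
    # Walk from `path` up toward the root, returning the first existing
    # candidate from `search`, or "" when the root is reached without a hit.
    while True:
        for d in search:
            if os.path.exists(os.path.join(path, d)):
                return os.path.join(path, d)
        path_next = os.path.normpath(os.path.join(path, ".."))
        if path == path_next:
            break
        path = path_next
    return ""

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "a", "b"))
    open(os.path.join(root, "CMakeCache.txt"), "w").close()
    # Found two levels above the starting directory.
    print(find_dominating_file(os.path.join(root, "a", "b"), ("CMakeCache.txt",)))
```

This is how the script locates `CMakeCache.txt` when `--build-dir` is omitted: the install directory usually lives inside or next to the build tree.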
# ------------------------------------------------------------------------------
# CMake Cache Access
def cmake_cache_var_iter(filepath_cmake_cache: str) -> Generator[Tuple[str, str, str], None, None]:
import re
re_cache = re.compile(r"([A-Za-z0-9_\-]+)?:?([A-Za-z0-9_\-]+)?=(.*)$")
with open(filepath_cmake_cache, "r", encoding="utf-8") as cache_file:
for l in cache_file:
match = re_cache.match(l.strip())
if match is not None:
var, type_, val = match.groups()
yield (var, type_ or "", val)
def cmake_cache_var(filepath_cmake_cache: str, var: str) -> Optional[str]:
for var_iter, type_iter, value_iter in cmake_cache_var_iter(filepath_cmake_cache):
if var == var_iter:
return value_iter
return None
def cmake_cache_var_or_exit(filepath_cmake_cache: str, var: str) -> str:
value = cmake_cache_var(filepath_cmake_cache, var)
if value is None:
sys.stderr.write("Unable to find %r in %r, abort!\n" % (var, filepath_cmake_cache))
sys.exit(1)
return value
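The cache accessors above are line-oriented; the same regex idea can be exercised against an in-memory cache snippet (the variable values below are illustrative, not read from a real `CMakeCache.txt`):

```python
import re

# Same pattern shape as cmake_cache_var_iter: NAME:TYPE=VALUE lines.
re_cache = re.compile(r"([A-Za-z0-9_\-]+)?:?([A-Za-z0-9_\-]+)?=(.*)$")

cache_text = """\
# This is the CMakeCache file.
PYTHON_VERSION:STRING=3.10
CMAKE_BUILD_TYPE:STRING=Release
"""

def cache_vars(text):
    # Collect {name: (type, value)}; comment lines have no '=' so the
    # pattern does not match and they are skipped.
    result = {}
    for line in text.splitlines():
        match = re_cache.match(line.strip())
        if match is not None:
            var, type_, val = match.groups()
            result[var] = (type_ or "", val)
    return result

print(cache_vars(cache_text)["PYTHON_VERSION"])  # ('STRING', '3.10')
```

`cmake_cache_var_or_exit` is then just a lookup over this iteration that aborts when the requested variable is absent.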
# ------------------------------------------------------------------------------
# Argument Parser
def argparse_create() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument(
"install_dir",
metavar='INSTALL_DIR',
type=str,
help="The installation directory containing the \"bpy\" package.",
)
parser.add_argument(
"--build-dir",
metavar='BUILD_DIR',
default=None,
help="The build directory containing 'CMakeCache.txt' (search parent directories of INSTALL_DIR when omitted).",
required=False,
)
parser.add_argument(
"--output-dir",
metavar='OUTPUT_DIR',
default=None,
help="The destination directory for the '*.whl' file (use INSTALL_DIR when omitted).",
required=False,
)
return parser
# ------------------------------------------------------------------------------
# Main Function
def main() -> None:
# Parse arguments.
args = argparse_create().parse_args()
install_dir = os.path.abspath(args.install_dir)
output_dir = os.path.abspath(args.output_dir) if args.output_dir else install_dir
if args.build_dir:
build_dir = os.path.abspath(args.build_dir)
filepath_cmake_cache = os.path.join(build_dir, "CMakeCache.txt")
del build_dir
if not os.path.exists(filepath_cmake_cache):
sys.stderr.write("File not found %r, abort!\n" % filepath_cmake_cache)
sys.exit(1)
else:
filepath_cmake_cache = find_dominating_file(install_dir, ("CMakeCache.txt",))
if not filepath_cmake_cache:
# Should never fail.
sys.stderr.write("Unable to find CMakeCache.txt in or above %r, abort!\n" % install_dir)
sys.exit(1)
# Get the major and minor Python version.
python_version = cmake_cache_var_or_exit(filepath_cmake_cache, "PYTHON_VERSION")
python_version_number = (
        tuple(int("".join(c for c in digit if c in string.digits)) for digit in python_version.split(".")) +
        # Support version without a minor version "3" (add zero).
        tuple((0, 0, 0))
    )
    python_version_str = "%d.%d" % python_version_number[:2]

    # Get Blender version.
    blender_version_str = str(make_utils.parse_blender_version())

    # Set platform tag following conventions.
    if sys.platform == "darwin":
        target = cmake_cache_var_or_exit(filepath_cmake_cache, "CMAKE_OSX_DEPLOYMENT_TARGET").split(".")
        machine = cmake_cache_var_or_exit(filepath_cmake_cache, "CMAKE_OSX_ARCHITECTURES")
        platform_tag = "macosx_%d_%d_%s" % (int(target[0]), int(target[1]), machine)
    elif sys.platform == "win32":
        platform_tag = "win_%s" % (platform.machine().lower())
    elif sys.platform == "linux":
        glibc = os.confstr("CS_GNU_LIBC_VERSION")
        if glibc is None:
            sys.stderr.write("Unable to find \"CS_GNU_LIBC_VERSION\", abort!\n")
            sys.exit(1)
        glibc = "%s_%s" % tuple(glibc.split()[1].split(".")[:2])
        platform_tag = "manylinux_%s_%s" % (glibc, platform.machine().lower())
    else:
        sys.stderr.write("Unsupported platform: %s, abort!\n" % (sys.platform))
        sys.exit(1)

    os.chdir(install_dir)

    # Include all files recursively.
    def package_files(root_dir: str) -> List[str]:
        paths = []
        for path, dirs, files in os.walk(root_dir):
            paths += [os.path.join("..", path, f) for f in files]
        return paths

    # Ensure this wheel is marked platform specific.
    class BinaryDistribution(setuptools.dist.Distribution):  # type: ignore
        def has_ext_modules(self) -> bool:
            return True

    # Build wheel.
    sys.argv = [sys.argv[0], "bdist_wheel"]

    setuptools.setup(
        name="bpy",
        version=blender_version_str,
        install_requires=["cython", "numpy", "requests", "zstandard"],
        python_requires="==%d.%d.*" % (python_version_number[0], python_version_number[1]),
        packages=["bpy"],
        package_data={"": package_files("bpy")},
        distclass=BinaryDistribution,
        options={"bdist_wheel": {"plat_name": platform_tag}},
        description="Blender as a Python module",
        license="GPL-3.0",
        author="Blender Foundation",
        author_email="bf-committers@blender.org",
        url="https://www.blender.org"
    )

    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # Move wheel to output directory.
    dist_dir = os.path.join(install_dir, "dist")
    for f in os.listdir(dist_dir):
        if f.endswith(".whl"):
            # No apparent way to override this ABI version with setuptools, so rename.
            sys_py = "cp%d%d" % (sys.version_info.major, sys.version_info.minor)
            sys_py_abi = sys_py + sys.abiflags
            blender_py = "cp%d%d" % (python_version_number[0], python_version_number[1])
            renamed_f = f.replace(sys_py_abi, blender_py).replace(sys_py, blender_py)
            os.rename(os.path.join(dist_dir, f), os.path.join(output_dir, renamed_f))


if __name__ == "__main__":
    main()
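The final rename step above swaps the build interpreter's Python/ABI tags for those of Blender's bundled Python. A minimal sketch of just that string manipulation, outside the script (the helper name and example filename are hypothetical):

```python
def rename_wheel_abi(filename: str, sys_py: str, sys_abi: str, blender_py: str) -> str:
    # Replace the ABI-qualified tag first (e.g. "cp311d"), then the bare tag,
    # mirroring the two .replace() calls in the script above.
    return filename.replace(sys_py + sys_abi, blender_py).replace(sys_py, blender_py)


# A wheel built with CPython 3.11, renamed to target a bundled Python 3.10:
print(rename_wheel_abi("bpy-3.3.0-cp311-cp311-linux_x86_64.whl", "cp311", "", "cp310"))
# bpy-3.3.0-cp310-cp310-linux_x86_64.whl
```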

View File

@@ -2,7 +2,7 @@
 # SPDX-License-Identifier: GPL-2.0-or-later

 import argparse
-import make_utils
+import dataclasses
 import os
 import re
 import subprocess
@@ -19,8 +19,6 @@ from typing import Iterable, TextIO, Optional, Any, Union
 SKIP_NAMES = {
     ".gitignore",
     ".gitmodules",
-    ".gitattributes",
-    ".git-blame-ignore-revs",
     ".arcconfig",
     ".svn",
 }
@@ -52,7 +50,7 @@ def main() -> None:
     print(f"Output dir: {curdir}")

-    version = make_utils.parse_blender_version()
+    version = parse_blender_version(blender_srcdir)
     tarball = tarball_path(curdir, version, cli_args)
     manifest = manifest_path(tarball)
     packages_dir = packages_path(curdir, cli_args)
@@ -64,7 +62,53 @@ def main() -> None:
     print("Done!")


-def tarball_path(output_dir: Path, version: make_utils.BlenderVersion, cli_args: Any) -> Path:
+@dataclasses.dataclass
+class BlenderVersion:
+    version: int  # 293 for 2.93.1
+    patch: int  # 1 for 2.93.1
+    cycle: str  # 'alpha', 'beta', 'release', maybe others.
+
+    @property
+    def is_release(self) -> bool:
+        return self.cycle == "release"
+
+    def __str__(self) -> str:
+        """Convert to version string.
+
+        >>> str(BlenderVersion(293, 1, "alpha"))
+        '2.93.1-alpha'
+        >>> str(BlenderVersion(327, 0, "release"))
+        '3.27.0'
+        """
+        version_major = self.version // 100
+        version_minor = self.version % 100
+        as_string = f"{version_major}.{version_minor}.{self.patch}"
+        if self.is_release:
+            return as_string
+        return f"{as_string}-{self.cycle}"
+
+
+def parse_blender_version(blender_srcdir: Path) -> BlenderVersion:
+    version_path = blender_srcdir / "source/blender/blenkernel/BKE_blender_version.h"
+
+    version_info = {}
+    line_re = re.compile(r"^#define (BLENDER_VERSION[A-Z_]*)\s+([0-9a-z]+)$")
+
+    with version_path.open(encoding="utf-8") as version_file:
+        for line in version_file:
+            match = line_re.match(line.strip())
+            if not match:
+                continue
+            version_info[match.group(1)] = match.group(2)
+
+    return BlenderVersion(
+        int(version_info["BLENDER_VERSION"]),
+        int(version_info["BLENDER_VERSION_PATCH"]),
+        version_info["BLENDER_VERSION_CYCLE"],
+    )
+
+
+def tarball_path(output_dir: Path, version: BlenderVersion, cli_args: Any) -> Path:
     extra = ""
     if cli_args.include_packages:
         extra = "-with-libraries"
@@ -104,7 +148,7 @@ def packages_path(current_directory: Path, cli_args: Any) -> Optional[Path]:
 def create_manifest(
-    version: make_utils.BlenderVersion,
+    version: BlenderVersion,
     outpath: Path,
     blender_srcdir: Path,
     packages_dir: Optional[Path],
@@ -126,9 +170,9 @@ def main_files_to_manifest(blender_srcdir: Path, outfile: TextIO) -> None:
 def submodules_to_manifest(
-    blender_srcdir: Path, version: make_utils.BlenderVersion, outfile: TextIO
+    blender_srcdir: Path, version: BlenderVersion, outfile: TextIO
 ) -> None:
-    skip_addon_contrib = version.is_release()
+    skip_addon_contrib = version.is_release
     assert not blender_srcdir.is_absolute()
     for line in git_command("-C", blender_srcdir, "submodule"):
@@ -156,11 +200,7 @@ def packages_to_manifest(outfile: TextIO, packages_dir: Path) -> None:
 def create_tarball(
-    version: make_utils.BlenderVersion,
-    tarball: Path,
-    manifest: Path,
-    blender_srcdir: Path,
-    packages_dir: Optional[Path],
+    version: BlenderVersion, tarball: Path, manifest: Path, blender_srcdir: Path, packages_dir: Optional[Path]
 ) -> None:
     print(f'Creating archive: "{tarball}" ...', end="", flush=True)
     command = ["tar"]
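The `BlenderVersion` dataclass in the hunk above is self-contained enough to exercise on its own. A standalone sketch matching its docstring examples (docstring abridged):

```python
import dataclasses


@dataclasses.dataclass
class BlenderVersion:
    version: int  # 293 for 2.93.1
    patch: int    # 1 for 2.93.1
    cycle: str    # 'alpha', 'beta', 'release', maybe others.

    @property
    def is_release(self) -> bool:
        return self.cycle == "release"

    def __str__(self) -> str:
        # Major/minor are packed into one int: 293 -> 2.93, 327 -> 3.27.
        as_string = f"{self.version // 100}.{self.version % 100}.{self.patch}"
        return as_string if self.is_release else f"{as_string}-{self.cycle}"


print(BlenderVersion(293, 1, "alpha"))    # 2.93.1-alpha
print(BlenderVersion(327, 0, "release"))  # 3.27.0
```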

View File

@@ -110,9 +110,6 @@ def svn_update(args, release_version):
             if not make_utils.command_missing(args.svn_command):
                 call(svn_non_interactive + ["cleanup", lib_dirpath])
             continue
-        elif dirname.startswith("."):
-            # Temporary paths such as ".mypy_cache" will report a warning, skip hidden directories.
-            continue

         svn_dirpath = os.path.join(dirpath, ".svn")
         svn_root_dirpath = os.path.join(lib_dirpath, ".svn")

View File

@@ -9,15 +9,9 @@ import re
 import shutil
 import subprocess
 import sys

-from pathlib import Path
-from typing import (
-    Sequence,
-    Optional,
-)


-def call(cmd: Sequence[str], exit_on_error: bool = True, silent: bool = False) -> int:
+def call(cmd, exit_on_error=True, silent=False):
     if not silent:
         print(" ".join(cmd))
@@ -35,7 +29,7 @@ def call(cmd: Sequence[str], exit_on_error: bool = True, silent: bool = False) -
     return retcode


-def check_output(cmd: Sequence[str], exit_on_error: bool = True) -> str:
+def check_output(cmd, exit_on_error=True):
     # Flush to ensure correct order output on Windows.
     sys.stdout.flush()
     sys.stderr.flush()
@@ -52,14 +46,14 @@ def check_output(cmd: Sequence[str], exit_on_error: bool = True) -> str:
     return output.strip()


-def git_branch_exists(git_command: str, branch: str) -> bool:
+def git_branch_exists(git_command, branch):
     return (
         call([git_command, "rev-parse", "--verify", branch], exit_on_error=False, silent=True) == 0 or
         call([git_command, "rev-parse", "--verify", "remotes/origin/" + branch], exit_on_error=False, silent=True) == 0
     )


-def git_branch(git_command: str) -> str:
+def git_branch(git_command):
     # Get current branch name.
     try:
         branch = subprocess.check_output([git_command, "rev-parse", "--abbrev-ref", "HEAD"])
@@ -70,7 +64,7 @@ def git_branch(git_command: str) -> str:
     return branch.strip().decode('utf8')


-def git_tag(git_command: str) -> Optional[str]:
+def git_tag(git_command):
     # Get current tag name.
     try:
         tag = subprocess.check_output([git_command, "describe", "--exact-match"], stderr=subprocess.STDOUT)
@@ -80,19 +74,18 @@ def git_tag(git_command: str) -> Optional[str]:
     return tag.strip().decode('utf8')


-def git_branch_release_version(branch: str, tag: str) -> Optional[str]:
-    re_match = re.search("^blender-v(.*)-release$", branch)
-    release_version = None
-    if re_match:
-        release_version = re_match.group(1)
+def git_branch_release_version(branch, tag):
+    release_version = re.search("^blender-v(.*)-release$", branch)
+    if release_version:
+        release_version = release_version.group(1)
     elif tag:
-        re_match = re.search(r"^v([0-9]*\.[0-9]*).*", tag)
-        if re_match:
-            release_version = re_match.group(1)
+        release_version = re.search(r"^v([0-9]*\.[0-9]*).*", tag)
+        if release_version:
+            release_version = release_version.group(1)

     return release_version


-def svn_libraries_base_url(release_version: Optional[str], branch: Optional[str] = None) -> str:
+def svn_libraries_base_url(release_version, branch=None):
     if release_version:
         svn_branch = "tags/blender-" + release_version + "-release"
     elif branch:
@@ -102,58 +95,9 @@ def svn_libraries_base_url(release_version: Optional[str], branch: Optional[str]
     return "https://svn.blender.org/svnroot/bf-blender/" + svn_branch + "/lib/"


-def command_missing(command: str) -> bool:
+def command_missing(command):
     # Support running with Python 2 for macOS
     if sys.version_info >= (3, 0):
         return shutil.which(command) is None
     else:
         return False
-
-
-class BlenderVersion:
-    def __init__(self, version: int, patch: int, cycle: str):
-        # 293 for 2.93.1
-        self.version = version
-        # 1 for 2.93.1
-        self.patch = patch
-        # 'alpha', 'beta', 'release', maybe others.
-        self.cycle = cycle
-
-    def is_release(self) -> bool:
-        return self.cycle == "release"
-
-    def __str__(self) -> str:
-        """Convert to version string.
-
-        >>> str(BlenderVersion(293, 1, "alpha"))
-        '2.93.1-alpha'
-        >>> str(BlenderVersion(327, 0, "release"))
-        '3.27.0'
-        """
-        version_major = self.version // 100
-        version_minor = self.version % 100
-        as_string = f"{version_major}.{version_minor}.{self.patch}"
-        if self.is_release():
-            return as_string
-        return f"{as_string}-{self.cycle}"
-
-
-def parse_blender_version() -> BlenderVersion:
-    blender_srcdir = Path(__file__).absolute().parent.parent.parent
-    version_path = blender_srcdir / "source/blender/blenkernel/BKE_blender_version.h"
-
-    version_info = {}
-    line_re = re.compile(r"^#define (BLENDER_VERSION[A-Z_]*)\s+([0-9a-z]+)$")
-
-    with version_path.open(encoding="utf-8") as version_file:
-        for line in version_file:
-            match = line_re.match(line.strip())
-            if not match:
-                continue
-            version_info[match.group(1)] = match.group(2)
-
-    return BlenderVersion(
-        int(version_info["BLENDER_VERSION"]),
-        int(version_info["BLENDER_VERSION_PATCH"]),
-        version_info["BLENDER_VERSION_CYCLE"],
-    )
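The `#define`-scraping regex used by `parse_blender_version` can be checked against a small in-memory sample. A sketch with hypothetical header contents standing in for `BKE_blender_version.h`:

```python
import re

line_re = re.compile(r"^#define (BLENDER_VERSION[A-Z_]*)\s+([0-9a-z]+)$")

# Sample lines in the style of BKE_blender_version.h (values hypothetical).
header = """\
#define BLENDER_VERSION 303
#define BLENDER_VERSION_PATCH 0
#define BLENDER_VERSION_CYCLE alpha
#define BLENDER_FILE_MIN_VERSION 300
"""

version_info = {}
for line in header.splitlines():
    match = line_re.match(line.strip())
    if match:
        version_info[match.group(1)] = match.group(2)

# Only BLENDER_VERSION* defines are captured; the FILE_MIN define is skipped.
print(version_info)
```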

View File

@@ -38,7 +38,7 @@ PROJECT_NAME = Blender
 # could be handy for archiving the generated documentation or if some version
 # control system is used.

-PROJECT_NUMBER = V3.4
+PROJECT_NUMBER = V3.3

 # Using the PROJECT_BRIEF tag one can provide an optional one line description
 # for a project that appears at the top of each page and should give viewer a

View File

@@ -139,7 +139,7 @@ https://www.blender.org''')
         l = lines.pop(0)
         if l:
-            assert l.startswith('\t')
+            assert(l.startswith('\t'))
             l = l[1:]  # Remove first white-space (tab).

         fh.write('%s\n' % man_format(l))

View File

@@ -6,10 +6,6 @@ It can be useful to perform an action when a property is changed and can be
 used to update other properties or synchronize with external data.

 All properties define update functions except for CollectionProperty.
-
-.. warning::
-
-   Remember that these callbacks may be executed in threaded context.
 """

 import bpy

View File

@@ -6,10 +6,6 @@ Getter/setter functions can be used for boolean, int, float, string and enum pro
 If these callbacks are defined the property will not be stored in the ID properties
 automatically. Instead, the `get` and `set` functions will be called when the property
 is respectively read or written from the API.
-
-.. warning::
-
-   Remember that these callbacks may be executed in threaded context.
 """

 import bpy

View File

@@ -7,15 +7,6 @@ Custom properties can be added to any subclass of an :class:`ID`,
 These properties can be animated, accessed by the user interface and python
 like Blender's existing properties.
-
-.. warning::
-
-   Access to these properties might happen in threaded context, on a per-data-block level.
-   This has to be carefully considered when using accessors or update callbacks.
-
-   Typically, these callbacks should not affect any other data that the one owned by their data-block.
-   When accessing external non-Blender data, thread safety mechanisms should be considered.
 """

 import bpy

View File

@@ -3,8 +3,8 @@ Extending the Button Context Menu
 +++++++++++++++++++++++++++++++++

 This example enables you to insert your own menu entry into the common
-right click menu that you get while hovering over a UI button (e.g. operator,
-value field, color, string, etc.)
+right click menu that you get while hovering over a value field,
+color, string, etc.

 To make the example work, you have to first select an object
 then right click on an user interface element (maybe a color in the
@@ -14,6 +14,7 @@ Executing the operator will then print all values.
 """

 import bpy
+from bpy.types import Menu


 def dump(obj, text):
@@ -46,20 +47,36 @@ class WM_OT_button_context_test(bpy.types.Operator):
         return {'FINISHED'}


-def draw_menu(self, context):
+# This class has to be exactly named like that to insert an entry in the right click menu
+class WM_MT_button_context(Menu):
+    bl_label = "Unused"
+
+    def draw(self, context):
+        pass
+
+
+def menu_func(self, context):
     layout = self.layout
     layout.separator()
     layout.operator(WM_OT_button_context_test.bl_idname)


+classes = (
+    WM_OT_button_context_test,
+    WM_MT_button_context,
+)
+
+
 def register():
-    bpy.utils.register_class(WM_OT_button_context_test)
-    bpy.types.UI_MT_button_context_menu.append(draw_menu)
+    for cls in classes:
+        bpy.utils.register_class(cls)
+    bpy.types.WM_MT_button_context.append(menu_func)


 def unregister():
-    bpy.types.UI_MT_button_context_menu.remove(draw_menu)
-    bpy.utils.unregister_class(WM_OT_button_context_test)
+    for cls in classes:
+        bpy.utils.unregister_class(cls)
+    bpy.types.WM_MT_button_context.remove(menu_func)


 if __name__ == "__main__":

View File

@@ -1,6 +1,4 @@
 """
-.. _modal_operator:
-
 Modal Execution
 +++++++++++++++

View File

@@ -14,36 +14,33 @@ from random import random
 from mathutils import Vector
 from gpu_extras.batch import batch_for_shader

-vert_out = gpu.types.GPUStageInterfaceInfo("my_interface")
-vert_out.smooth('FLOAT', "v_ArcLength")
-
-shader_info = gpu.types.GPUShaderCreateInfo()
-shader_info.push_constant('MAT4', "u_ViewProjectionMatrix")
-shader_info.push_constant('FLOAT', "u_Scale")
-shader_info.vertex_in(0, 'VEC3', "position")
-shader_info.vertex_in(1, 'FLOAT', "arcLength")
-shader_info.vertex_out(vert_out)
-shader_info.fragment_out(0, 'VEC4', "FragColor")
-
-shader_info.vertex_source(
-    "void main()"
-    "{"
-    "  v_ArcLength = arcLength;"
-    "  gl_Position = u_ViewProjectionMatrix * vec4(position, 1.0f);"
-    "}"
-)
-
-shader_info.fragment_source(
-    "void main()"
-    "{"
-    "  if (step(sin(v_ArcLength * u_Scale), 0.5) == 1) discard;"
-    "  FragColor = vec4(1.0);"
-    "}"
-)
-
-shader = gpu.shader.create_from_info(shader_info)
-del vert_out
-del shader_info
+vertex_shader = '''
+    uniform mat4 u_ViewProjectionMatrix;
+
+    in vec3 position;
+    in float arcLength;
+
+    out float v_ArcLength;
+
+    void main()
+    {
+        v_ArcLength = arcLength;
+        gl_Position = u_ViewProjectionMatrix * vec4(position, 1.0f);
+    }
+'''
+
+fragment_shader = '''
+    uniform float u_Scale;
+
+    in float v_ArcLength;
+    out vec4 FragColor;
+
+    void main()
+    {
+        if (step(sin(v_ArcLength * u_Scale), 0.5) == 1) discard;
+        FragColor = vec4(1.0);
+    }
+'''

 coords = [Vector((random(), random(), random())) * 5 for _ in range(5)]
@@ -51,6 +48,7 @@ arc_lengths = [0]
 for a, b in zip(coords[:-1], coords[1:]):
     arc_lengths.append(arc_lengths[-1] + (a - b).length)

+shader = gpu.types.GPUShader(vertex_shader, fragment_shader)
 batch = batch_for_shader(
     shader, 'LINE_STRIP',
     {"position": coords, "arcLength": arc_lengths},
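Both sides of this hunk feed the same per-vertex `arcLength` attribute, built by accumulating segment lengths along the line strip. That accumulation can be shown without Blender by swapping `mathutils.Vector` for `math.dist` (coordinates chosen so the distances are exact):

```python
import math

# Three points forming two segments of length 5 and 12.
coords = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 12.0)]

# Cumulative arc length at each vertex, as in the example's loop over
# consecutive coordinate pairs.
arc_lengths = [0.0]
for a, b in zip(coords[:-1], coords[1:]):
    arc_lengths.append(arc_lengths[-1] + math.dist(a, b))

print(arc_lengths)  # [0.0, 5.0, 17.0]
```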

View File

@@ -6,37 +6,33 @@ import bpy
 import gpu
 from gpu_extras.batch import batch_for_shader

-vert_out = gpu.types.GPUStageInterfaceInfo("my_interface")
-vert_out.smooth('VEC3', "pos")
-
-shader_info = gpu.types.GPUShaderCreateInfo()
-shader_info.push_constant('MAT4', "viewProjectionMatrix")
-shader_info.push_constant('FLOAT', "brightness")
-shader_info.vertex_in(0, 'VEC3', "position")
-shader_info.vertex_out(vert_out)
-shader_info.fragment_out(0, 'VEC4', "FragColor")
-
-shader_info.vertex_source(
-    "void main()"
-    "{"
-    "  pos = position;"
-    "  gl_Position = viewProjectionMatrix * vec4(position, 1.0f);"
-    "}"
-)
-
-shader_info.fragment_source(
-    "void main()"
-    "{"
-    "  FragColor = vec4(pos * brightness, 1.0);"
-    "}"
-)
-
-shader = gpu.shader.create_from_info(shader_info)
-del vert_out
-del shader_info
+vertex_shader = '''
+    uniform mat4 viewProjectionMatrix;
+
+    in vec3 position;
+    out vec3 pos;
+
+    void main()
+    {
+        pos = position;
+        gl_Position = viewProjectionMatrix * vec4(position, 1.0f);
+    }
+'''
+
+fragment_shader = '''
+    uniform float brightness;
+
+    in vec3 pos;
+    out vec4 FragColor;
+
+    void main()
+    {
+        FragColor = vec4(pos * brightness, 1.0);
+    }
+'''

 coords = [(1, 1, 1), (2, 0, 0), (-2, -1, 3)]
+shader = gpu.types.GPUShader(vertex_shader, fragment_shader)
 batch = batch_for_shader(shader, 'TRIS', {"position": coords})

View File

@@ -35,37 +35,35 @@ with offscreen.bind():
 # Drawing the generated texture in 3D space
 #############################################

-vert_out = gpu.types.GPUStageInterfaceInfo("my_interface")
-vert_out.smooth('VEC2', "uvInterp")
-
-shader_info = gpu.types.GPUShaderCreateInfo()
-shader_info.push_constant('MAT4', "viewProjectionMatrix")
-shader_info.push_constant('MAT4', "modelMatrix")
-shader_info.sampler(0, 'FLOAT_2D', "image")
-shader_info.vertex_in(0, 'VEC2', "position")
-shader_info.vertex_in(1, 'VEC2', "uv")
-shader_info.vertex_out(vert_out)
-shader_info.fragment_out(0, 'VEC4', "FragColor")
-
-shader_info.vertex_source(
-    "void main()"
-    "{"
-    "  uvInterp = uv;"
-    "  gl_Position = viewProjectionMatrix * modelMatrix * vec4(position, 0.0, 1.0);"
-    "}"
-)
-
-shader_info.fragment_source(
-    "void main()"
-    "{"
-    "  FragColor = texture(image, uvInterp);"
-    "}"
-)
-
-shader = gpu.shader.create_from_info(shader_info)
-del vert_out
-del shader_info
+vertex_shader = '''
+    uniform mat4 modelMatrix;
+    uniform mat4 viewProjectionMatrix;
+
+    in vec2 position;
+    in vec2 uv;
+
+    out vec2 uvInterp;
+
+    void main()
+    {
+        uvInterp = uv;
+        gl_Position = viewProjectionMatrix * modelMatrix * vec4(position, 0.0, 1.0);
+    }
+'''
+
+fragment_shader = '''
+    uniform sampler2D image;
+
+    in vec2 uvInterp;
+    out vec4 FragColor;
+
+    void main()
+    {
+        FragColor = texture(image, uvInterp);
+    }
+'''

+shader = gpu.types.GPUShader(vertex_shader, fragment_shader)
 batch = batch_for_shader(
     shader, 'TRI_FAN',
     {

View File

@@ -1,11 +1,11 @@
-sphinx==5.1.1
+sphinx==5.0.1

 # Sphinx dependencies that are important
 Jinja2==3.1.2
-Pygments==2.13.0
+Pygments==2.12.0
 docutils==0.17.1
 snowballstemmer==2.2.0
-babel==2.10.3
+babel==2.10.1
 requests==2.27.1

 # Only needed to match the theme used for the official documentation.

View File

@@ -1,9 +1,7 @@
 :tocdepth: 2

-Change Log
-**********
+Blender API Change Log
+**********************

-Changes in Blender's Python API between releases.
-
 .. note, this document is auto generated by sphinx_changelog_gen.py

View File

@@ -1,15 +0,0 @@
-.. _info_advanced-index:
-
-********
-Advanced
-********
-
-This chapter covers advanced use (topics which may not be required for typical usage).
-
-.. NOTE(@campbellbarton): Blender-as-a-Python-module is too obscure a topic to list directly on the main-page,
-   so opt for an "Advanced" page which can be expanded on as needed.
-
-.. toctree::
-   :maxdepth: 1
-
-   info_advanced_blender_as_bpy.rst

View File

@@ -1,124 +0,0 @@
-**************************
-Blender as a Python Module
-**************************
-
-Blender supports being built as a Python module,
-allowing ``import bpy`` to be added to any Python script, providing access to Blender's features.
-
-.. note::
-
-   At time of writing official builds are not available,
-   using this requires compiling Blender yourself see
-   `build instructions <https://wiki.blender.org/w/index.php?title=Building_Blender/Other/BlenderAsPyModule>`__.
-
-
-Use Cases
-=========
-
-Python developers may wish to integrate Blender scripts which don't center around Blender.
-
-Possible uses include:
-
-- Visualizing data by rendering images and animations.
-- Image processing using Blender's compositor.
-- Video editing (using Blender's sequencer).
-- 3D file conversion.
-- Development, accessing ``bpy`` from Python IDE's and debugging tools for example.
-- Automation.
-
-
-Usage
-=====
-
-For the most part using Blender as a Python module is equivalent to running a script in background-mode
-(passing the command-line arguments ``--background`` or ``-b``),
-however there are some differences to be aware of.
-
-.. Sorted alphabetically as there isn't an especially a logical order to show them.
-
-Blender's Executable Access
-   The attribute :class:`bpy.app.binary_path` defaults to an empty string.
-
-   If you wish to point this to the location of a known executable you may set the value.
-
-   This example searches for the binary, setting it when found:
-
-   .. code-block:: python
-
-      import bpy
-      import shutil
-
-      blender_bin = shutil.which("blender")
-      if blender_bin:
-         print("Found:", blender_bin)
-         bpy.app.binary_path = blender_bin
-      else:
-         print("Unable to find blender!")
-
-Blender's Internal Modules
-   There are many modules included with Blender such as :mod:`gpu` and :mod:`mathuils`.
-   It's important that these are imported after ``bpy`` or they will not be found.
-
-Command Line Arguments Unsupported
-   Functionality controlled by command line arguments (shown by calling ``blender --help`` aren't accessible).
-
-   Typically this isn't such a limitation although there are some command line arguments that don't have
-   equivalents in Blender's Python API (``--threads`` and ``--log`` for example).
-
-   .. note::
-
-      Access to these settings may be added in the future as needed.
-
-Resource Sharing (GPU)
-   It's possible other Python modules make use of the GPU in a way that prevents Blender/Cycles from accessing the GPU.
-
-Signal Handlers
-   Blender's typical signal handlers are not initialized, so there is no special handling for ``Control-C``
-   to cancel a render and a crash log is not written in the event of a crash.
-
-Startup and Preferences
-   When the ``bpy`` module loads, the file is not empty as you might expect,
-   there is a default cube, camera and light. If you wish to start from a blank file use:
-   ``bpy.ops.wm.read_factory_settings(use_empty=True)``.
-
-   The users startup and preferences are ignored to prevent your local configuration from impacting scripts behavior.
-   The Python module behaves as if ``--factory-startup`` was passed as a command line argument.
-
-   The users preferences and startup can be loaded using operators:
-
-   .. code-block:: python
-
-      import bpy
-
-      bpy.ops.wm.read_userpref()
-      bpy.ops.wm.read_homefile()
-
-
-Limitations
-===========
-
-Most constraints of Blender as an application still apply:
-
-Reloading Unsupported
-   Reloading via ``importlib.reload`` will raise an exception instead of reloading and resetting the module.
-   The operator ``bpy.ops.wm.read_factory_settings()`` can be used to reset the internal state.
-
-Single Blend File Restriction
-   Only a single ``.blend`` file can be edited at a time.
-
-   .. hint::
-
-      As with the application it's possible to start multiple instances,
-      each with their own ``bpy`` and therefor Blender state.
-      Python provides the ``multiprocessing`` module to make communicating with sub-processes more convenient.
-
-      In some cases the library API may be an alternative to starting separate processes,
-      although this API operates on reading and writing ID data-blocks and isn't
-      a complete substitute for loading ``.blend`` files, see:
-
-      - :meth:`bpy.types.BlendDataLibraries.load`
-      - :meth:`bpy.types.BlendDataLibraries.write`
-      - :meth:`bpy.types.BlendData.temp_data`
-        supports a temporary data-context to avoid manipulating the current ``.blend`` file.
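The removed page's `bpy.app.binary_path` snippet amounts to resolving an executable on `PATH` with `shutil.which`. A standalone sketch of that lookup, without `bpy` (function name and candidate list are hypothetical):

```python
import shutil


def find_blender(candidates=("blender", "blender-3.3", "blender-3.2")):
    # Return the first candidate resolvable on PATH, or None if none is found,
    # mirroring the shutil.which("blender") check in the removed example.
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None


blender_bin = find_blender()
if blender_bin:
    print("Found:", blender_bin)
else:
    print("Unable to find blender!")
```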

View File

@@ -1,6 +1,6 @@
 *******************
-API Reference Usage
+Reference API Usage
 *******************

 Blender has many interlinking data types which have an auto-generated reference API which often has the information

View File

@@ -86,8 +86,7 @@ No updates after setting values
 Sometimes you want to modify values from Python and immediately access the updated values, e.g:
 Once changing the objects :class:`bpy.types.Object.location`
 you may want to access its transformation right after from :class:`bpy.types.Object.matrix_world`,
-but this doesn't work as you might expect. There are similar issues with changes to the UI, that
-are covered in the next section.
+but this doesn't work as you might expect.

 Consider the calculations that might contribute to the object's final transformation, this includes:
@@ -111,35 +110,6 @@ Now all dependent data (child objects, modifiers, drivers, etc.)
 have been recalculated and are available to the script within the active view layer.

-No updates after changing UI context
-------------------------------------
-
-Similar to the previous issue, some changes to the UI may also not have an immediate effect. For example, setting
-:class:`bpy.types.Window.workspace` doesn't seem to cause an observable effect in the immediately following code
-(:class:`bpy.types.Window.workspace` is still the same), but the UI will in fact reflect the change. Some of the
-properties that behave that way are:
-
-- :class:`bpy.types.Window.workspace`
-- :class:`bpy.types.Window.screen`
-- :class:`bpy.types.Window.scene`
-- :class:`bpy.types.Area.type`
-- :class:`bpy.types.Area.uitype`
-
-Such changes impact the UI, and with that the context (:class:`bpy.context`) quite drastically. This can break
-Blender's context management. So Blender delays this change until after operators have run and just before the UI is
-redrawn, making sure that context can be changed safely.
-
-If you rely on executing code with an updated context this can be worked around by executing the code in a delayed
-fashion as well. Possible options include:
-
-- :ref:`Modal Operator <modal_operator>`.
-- :class:`bpy.app.handlers`.
-- :class:`bpy.app.timer`.
-
-It's also possible to depend on drawing callbacks although these should generally be avoided as failure to draw a
-hidden panel, region, cursor, etc. could cause your script to be unreliable
-
 Can I redraw during script execution?
 -------------------------------------

View File

@@ -1,8 +1,8 @@
 .. _info_overview:

-************
-API Overview
-************
+*******************
+Python API Overview
+*******************

 The purpose of this document is to explain how Python and Blender fit together,
 covering some of the functionality that may not be obvious from reading the API references

View File

@@ -241,9 +241,9 @@ def main():
comment_washed = [] comment_washed = []
comment = [] if comment is None else comment comment = [] if comment is None else comment
for i, l in enumerate(comment): for i, l in enumerate(comment):
assert ((l.strip() == "") or assert((l.strip() == "") or
(l in {"/*", " *"}) or (l in {"/*", " *"}) or
(l.startswith(("/* ", " * ")))) (l.startswith(("/* ", " * "))))
l = l[3:] l = l[3:]
if i == 0 and not l.strip(): if i == 0 and not l.strip():
@@ -270,7 +270,7 @@ def main():
            tp_sub = None
        else:
            print(arg)
-           assert 0
+           assert(0)

        tp_str = ""
@@ -315,7 +315,7 @@ def main():
            tp_str += " or any sequence of 3 floats"
        elif tp == BMO_OP_SLOT_PTR:
            tp_str = "dict"
-           assert tp_sub is not None
+           assert(tp_sub is not None)
            if tp_sub == BMO_OP_SLOT_SUBTYPE_PTR_BMESH:
                tp_str = ":class:`bmesh.types.BMesh`"
            elif tp_sub == BMO_OP_SLOT_SUBTYPE_PTR_SCENE:
@@ -330,10 +330,10 @@ def main():
                tp_str = ":class:`bpy.types.bpy_struct`"
            else:
                print("Can't find", vars_dict_reverse[tp_sub])
-               assert 0
+               assert(0)
        elif tp == BMO_OP_SLOT_ELEMENT_BUF:
-           assert tp_sub is not None
+           assert(tp_sub is not None)
            ls = []
            if tp_sub & BM_VERT:
@@ -342,7 +342,7 @@ def main():
                ls.append(":class:`bmesh.types.BMEdge`")
            if tp_sub & BM_FACE:
                ls.append(":class:`bmesh.types.BMFace`")
-           assert ls  # Must be at least one.
+           assert(ls)  # must be at least one
            if tp_sub & BMO_OP_SLOT_SUBTYPE_ELEM_IS_SINGLE:
                tp_str = "/".join(ls)
@@ -367,10 +367,10 @@ def main():
            tp_str += "unknown internal data, not compatible with python"
        else:
            print("Can't find", vars_dict_reverse[tp_sub])
-           assert 0
+           assert(0)
    else:
        print("Can't find", vars_dict_reverse[tp])
-       assert 0
+       assert(0)

    args_wash.append((name, tp_str, comment))
    return args_wash
@@ -394,7 +394,7 @@ def main():
    fw(" :return:\n\n")
    for (name, tp, comment) in args_out_wash:
-       assert name.endswith(".out")
+       assert(name.endswith(".out"))
        name = name[:-4]
        fw(" - ``%s``: %s\n\n" % (name, comment))
        fw(" **type** %s\n" % tp)
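The hunks above flip between the two spellings of Python's `assert`. Both parse identically for a single expression, since `assert` is a statement rather than a function, but the call-like spelling invites a classic bug: putting a condition *and* a message inside one pair of parentheses builds a 2-tuple, which is always truthy, so the assertion can never fail. A small illustration (names are for illustration only):

```python
def first_element(ls):
    # Statement form: condition, comma, message.
    assert ls, "must be at least one"
    return ls[0]

# The call-like spelling silently changes meaning once a message is added:
# assert(ls, "must be at least one") asserts the tuple (ls, "..."), which is
# non-empty and therefore always passes, even when ls itself is empty.
always_truthy = bool(([], "must be at least one"))
```

CPython even emits a `SyntaxWarning: assertion is always true` for the tuple form, which is one reason style cleanups drop the redundant parentheses.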

@@ -101,7 +101,7 @@ def api_dump(args):
    version, version_key = api_version()
    if version is None:
-       raise ValueError("API dumps can only be generated from within Blender.")
+       raise(ValueError("API dumps can only be generated from within Blender."))

    dump = {}
    dump_module = dump["bpy.types"] = {}
@@ -250,7 +250,7 @@ def api_changelog(args):
    version, version_key = api_version()
    if version is None and (filepath_in_from is None or filepath_in_to is None):
-       raise ValueError("API dumps files must be given when ran outside of Blender.")
+       raise(ValueError("API dumps files must be given when ran outside of Blender."))

    with open(indexpath, 'r', encoding='utf-8') as file_handle:
        index = json.load(file_handle)
@@ -258,21 +258,17 @@ def api_changelog(args):
    if filepath_in_to is None:
        filepath_in_to = index.get(version_key, None)
        if filepath_in_to is None:
-           raise ValueError("Cannot find API dump file for Blender version " + str(version) + " in index file.")
+           raise(ValueError("Cannot find API dump file for Blender version " + str(version) + " in index file."))
    print("Found to file: %r" % filepath_in_to)

    if filepath_in_from is None:
        version_from, version_from_key = api_version_previous_in_index(index, version)
        if version_from is None:
-           raise ValueError("No previous version of Blender could be found in the index.")
+           raise(ValueError("No previous version of Blender could be found in the index."))

        filepath_in_from = index.get(version_from_key, None)
        if filepath_in_from is None:
-           raise ValueError(
-               "Cannot find API dump file for previous Blender version " +
-               str(version_from) +
-               " in index file."
-           )
+           raise(ValueError("Cannot find API dump file for previous Blender version " + str(version_from) + " in index file."))
    print("Found from file: %r" % filepath_in_from)
@@ -281,7 +277,7 @@ def api_changelog(args):
    with open(os.path.join(rootpath, filepath_in_to), 'r', encoding='utf-8') as file_handle:
        dump_version, dict_to = json.load(file_handle)
-       assert tuple(dump_version) == version
+       assert(tuple(dump_version) == version)

    api_changes = []
@@ -349,10 +345,8 @@ def api_changelog(args):
    fw(""
       ":tocdepth: 2\n"
       "\n"
-      "Change Log\n"
-      "**********\n"
-      "\n"
-      "Changes in Blender's Python API between releases.\n"
+      "Blender API Change Log\n"
+      "**********************\n"
       "\n"
       ".. note, this document is auto generated by sphinx_changelog_gen.py\n"
       "\n"
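The `raise(ValueError(...))` vs `raise ValueError(...)` churn in this file is purely stylistic: `raise` is also a statement, so the outer parentheses are plain grouping around the exception expression, not a call, and both spellings behave identically. Unlike `assert` there is no tuple trap, only redundant punctuation. A minimal check (hypothetical helper name):

```python
def require_version(version):
    if version is None:
        # Equivalent to raise(ValueError(...)); the parens are redundant grouping.
        raise ValueError("API dumps can only be generated from within Blender.")
    return version

try:
    require_version(None)
except ValueError as exc:
    message = str(exc)
```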

@@ -387,35 +387,23 @@ EXAMPLE_SET_USED = set()

# RST files directory.
RST_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, "rst"))

-# Extra info, not api reference docs stored in `./rst/info_*`.
-# Pairs of (file, description), the title makes from the RST files are displayed before the description.
+# extra info, not api reference docs
+# stored in ./rst/info_*
INFO_DOCS = (
    ("info_quickstart.rst",
-     "New to Blender or scripting and want to get your feet wet?"),
+     "Quickstart: New to Blender or scripting and want to get your feet wet?"),
    ("info_overview.rst",
-     "A more complete explanation of Python integration."),
+     "API Overview: A more complete explanation of Python integration"),
    ("info_api_reference.rst",
-     "Examples of how to use the API reference docs."),
+     "API Reference Usage: examples of how to use the API reference docs"),
    ("info_best_practice.rst",
-     "Conventions to follow for writing good scripts."),
+     "Best Practice: Conventions to follow for writing good scripts"),
    ("info_tips_and_tricks.rst",
-     "Hints to help you while writing scripts for Blender."),
+     "Tips and Tricks: Hints to help you while writing scripts for Blender"),
    ("info_gotcha.rst",
-     "Some of the problems you may encounter when writing scripts."),
-    ("info_advanced.rst",
-     "Topics which may not be required for typical usage."),
-    ("change_log.rst",
-     "List of changes since last Blender release"),
+     "Gotcha's: Some of the problems you may encounter when writing scripts"),
+    ("change_log.rst", "Change Log: List of changes since last Blender release"),
)

-# Referenced indirectly.
-INFO_DOCS_OTHER = (
-    # Included by: `info_advanced.rst`.
-    "info_advanced_blender_as_bpy.rst",
-)
-
-# Hide the actual TOC, use a separate list that links to the items.
-# This is done so a short description can be included with each link.
-USE_INFO_DOCS_FANCY_INDEX = True

# only support for properties atm.
RNA_BLACKLIST = {
@@ -1143,7 +1131,6 @@ def pymodule2sphinx(basepath, module_name, module, title, module_all_extra):
    # Changes In Blender will force errors here.
    context_type_map = {
        # context_member: (RNA type, is_collection)
-       "active_action": ("Action", False),
        "active_annotation_layer": ("GPencilLayer", False),
        "active_bone": ("EditBone", False),
        "active_file": ("FileSelectEntry", False),
@@ -1482,7 +1469,7 @@ def pyrna2sphinx(basepath):
        struct_module_name = struct.module_name
        if USE_ONLY_BUILTIN_RNA_TYPES:
-           assert struct_module_name == "bpy.types"
+           assert(struct_module_name == "bpy.types")
        filepath = os.path.join(basepath, "%s.%s.rst" % (struct_module_name, struct.identifier))
        file = open(filepath, "w", encoding="utf-8")
        fw = file.write
@@ -1916,7 +1903,7 @@ except ModuleNotFoundError:
    # fw(" 'collapse_navigation': True,\n")
    fw(" 'sticky_navigation': False,\n")
    fw(" 'navigation_depth': 1,\n")
-   fw(" 'includehidden': False,\n")
+   # fw(" 'includehidden': True,\n")
    # fw(" 'titles_only': False\n")
    fw(" }\n\n")
@@ -1988,21 +1975,12 @@ def write_rst_index(basepath):
    if not EXCLUDE_INFO_DOCS:
        fw(".. toctree::\n")
-       if USE_INFO_DOCS_FANCY_INDEX:
-           fw(" :hidden:\n")
        fw(" :maxdepth: 1\n")
        fw(" :caption: Documentation\n\n")
        for info, info_desc in INFO_DOCS:
-           fw(" %s\n" % info)
+           fw(" %s <%s>\n" % (info_desc, info))
        fw("\n")
-       if USE_INFO_DOCS_FANCY_INDEX:
-           # Show a fake TOC, allowing for an extra description to be shown as well as the title.
-           fw(title_string("Documentation", "="))
-           for info, info_desc in INFO_DOCS:
-               fw("- :doc:`%s`: %s\n" % (info.removesuffix(".rst"), info_desc))
-           fw("\n")

    fw(".. toctree::\n")
    fw(" :maxdepth: 1\n")
    fw(" :caption: Application Modules\n\n")
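The `USE_INFO_DOCS_FANCY_INDEX` branch in `write_rst_index` hides the real toctree and writes a hand-made list instead, so each entry can carry a description next to its link. A condensed, standalone sketch of that pattern, with `INFO_DOCS` trimmed to two entries for brevity (requires Python 3.9+ for `str.removesuffix`):

```python
INFO_DOCS = (
    ("info_quickstart.rst", "New to Blender or scripting and want to get your feet wet?"),
    ("info_gotcha.rst", "Some of the problems you may encounter when writing scripts."),
)

def fancy_index(info_docs):
    # Hidden toctree keeps Sphinx's navigation and link resolution intact...
    lines = [".. toctree::", "   :hidden:", "   :maxdepth: 1", "   :caption: Documentation", ""]
    lines += ["   %s" % info for info, _desc in info_docs]
    # ...while a visible bullet list shows each page title with its description.
    lines += ["", "Documentation", "============="]
    lines += ["- :doc:`%s`: %s" % (info.removesuffix(".rst"), desc) for info, desc in info_docs]
    return "\n".join(lines)
```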
@@ -2335,8 +2313,6 @@ def copy_handwritten_rsts(basepath):
    if not EXCLUDE_INFO_DOCS:
        for info, _info_desc in INFO_DOCS:
            shutil.copy2(os.path.join(RST_DIR, info), basepath)
-       for info in INFO_DOCS_OTHER:
-           shutil.copy2(os.path.join(RST_DIR, info), basepath)

    # TODO: put this docs in Blender's code and use import as per modules above.
    handwritten_modules = [

extern/CMakeLists.txt

@@ -32,6 +32,14 @@ if(WITH_BINRELOC)
  add_subdirectory(binreloc)
endif()

+if(NOT WITH_SYSTEM_GLEW)
+  if(WITH_GLEW_ES)
+    add_subdirectory(glew-es)
+  else()
+    add_subdirectory(glew)
+  endif()
+endif()
+
if(WITH_LZO AND NOT WITH_SYSTEM_LZO)
  add_subdirectory(lzo)
endif()
@@ -40,7 +48,7 @@ if(WITH_LZMA)
  add_subdirectory(lzma)
endif()

-if(WITH_CYCLES OR WITH_COMPOSITOR_CPU OR WITH_OPENSUBDIV)
+if(WITH_CYCLES OR WITH_COMPOSITOR OR WITH_OPENSUBDIV)
  add_subdirectory(clew)
  if((WITH_CYCLES_DEVICE_CUDA OR WITH_CYCLES_DEVICE_OPTIX) AND WITH_CUDA_DYNLOAD)
    add_subdirectory(cuew)
@@ -88,6 +96,6 @@ if(WITH_MOD_FLUID)
  add_subdirectory(mantaflow)
endif()

-if(WITH_COMPOSITOR_CPU)
+if(WITH_COMPOSITOR)
  add_subdirectory(smaa_areatex)
endif()

@@ -363,8 +363,8 @@ int FFMPEGReader::read_packet(void* opaque, uint8_t* buf, int buf_size)
	long long size = std::min(static_cast<long long>(buf_size), reader->m_membuffer->getSize() - reader->m_membufferpos);

-	if(size <= 0)
-		return AVERROR_EOF;
+	if(size < 0)
+		return -1;

	std::memcpy(buf, ((data_t*)reader->m_membuffer->getBuffer()) + reader->m_membufferpos, size);
	reader->m_membufferpos += size;
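One side of this hunk returns `-1` only for `size < 0`, while the other treats an exhausted buffer (`size <= 0`) as `AVERROR_EOF`, which is the contract FFmpeg expects from a custom `avio_alloc_context` read callback: since FFmpeg 4.x, returning 0 bytes from `read_packet` is deprecated and end-of-stream should be reported as `AVERROR_EOF`. A self-contained sketch of that contract (`MemReader` and the `kAVErrorEOF` constant are illustrative stand-ins, not the file's real types; the real value comes from `<libavutil/error.h>`):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for FFmpeg's AVERROR_EOF (normally from <libavutil/error.h>).
static const int kAVErrorEOF = -0x20464F45;  // illustrative value only

struct MemReader {
  std::vector<uint8_t> data;
  long long pos = 0;
};

// read_packet-style callback: copy up to buf_size bytes, and signal an
// exhausted buffer with an EOF error code instead of returning 0 or -1.
static int read_packet(void *opaque, uint8_t *buf, int buf_size) {
  MemReader *r = static_cast<MemReader *>(opaque);
  long long size =
      std::min<long long>(buf_size, static_cast<long long>(r->data.size()) - r->pos);
  if (size <= 0)
    return kAVErrorEOF;  // end-of-stream: never return 0 bytes here
  std::memcpy(buf, r->data.data() + r->pos, size);
  r->pos += size;
  return static_cast<int>(size);
}
```

Returning a plain `-1` instead would be reported as a generic I/O error rather than a clean end-of-file, and returning 0 can make the demuxer retry in a loop.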

@@ -1,13 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright 2016 Blender Foundation. All rights reserved.

-# Too noisy for code we don't maintain.
-if(CMAKE_COMPILER_IS_GNUCC)
-  if(NOT "${CMAKE_CXX_COMPILER_VERSION}" VERSION_LESS "8.0")
-    add_cxx_flag("-Wno-cast-function-type")
-  endif()
-endif()
-
set(INC
  src
  src/gflags

extern/glew-es/CMakeLists.txt (new file)

@@ -0,0 +1,33 @@
# SPDX-License-Identifier: GPL-2.0-or-later
# Copyright 2013 Blender Foundation. All rights reserved.
set(INC
include
)
set(INC_SYS
)
if(UNIX)
list(APPEND INC_SYS
${X11_X11_INCLUDE_PATH}
)
endif()
set(SRC
src/glew.c
include/GL/eglew.h
include/GL/glesew.h
include/GL/glew.h
include/GL/glxew.h
include/GL/wglew.h
)
set(LIB
)
add_definitions(${GL_DEFINITIONS})
blender_add_lib(extern_glew_es "${SRC}" "${INC}" "${INC_SYS}" "${LIB}")

extern/glew-es/LICENSE.txt (new file)

@@ -0,0 +1,73 @@
The OpenGL Extension Wrangler Library
Copyright (C) 2002-2007, Milan Ikits <milan ikits[]ieee org>
Copyright (C) 2002-2007, Marcelo E. Magallon <mmagallo[]debian org>
Copyright (C) 2002, Lev Povalahev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the author may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Mesa 3-D graphics library
Version: 7.0
Copyright (C) 1999-2007 Brian Paul All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
BRIAN PAUL BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright (c) 2007 The Khronos Group Inc.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and/or associated documentation files (the
"Materials"), to deal in the Materials without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Materials, and to
permit persons to whom the Materials are furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Materials.
THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.

extern/glew-es/README.blender (new file)

@@ -0,0 +1,5 @@
Project: The OpenGL Extension Wrangler Library
URL: http://glew.sourceforge.net/
License: Check LICENSE.txt
Upstream version: 2.0.0
Local modifications: None

extern/glew-es/include/GL/glew.h (new file; diff suppressed, too large)

extern/glew-es/include/GL/glxew.h (new file; diff suppressed, too large)

extern/glew-es/include/GL/wglew.h (new file; diff suppressed, too large)

extern/glew-es/src/glew.c (new file; diff suppressed, too large)

extern/glew/CMakeLists.txt (new file)

@@ -0,0 +1,54 @@
# SPDX-License-Identifier: GPL-2.0-or-later
# Copyright 2006 Blender Foundation. All rights reserved.
# avoid noisy warnings
if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
add_c_flag(
"-Wno-strict-prototypes"
)
endif()
if(CMAKE_COMPILER_IS_GNUCC AND (NOT "${CMAKE_C_COMPILER_VERSION}" VERSION_LESS "12.1"))
add_c_flag(
"-Wno-address"
)
endif()
# MSVC's inliner is not having a happy time with glewIsSupported
# causing this to be one of the most expensive things to build
# in blender. Optimize for size rather than speed sidesteps this
# problem, more details at
# https://developercommunity.visualstudio.com/content/problem/732941/slow-compilation-of-glewc-for-visual-studio-2019-x.html
if(MSVC)
add_c_flag("/Os")
endif()
set(INC
include
)
set(INC_SYS
)
if(UNIX)
list(APPEND INC_SYS
${X11_X11_INCLUDE_PATH}
)
endif()
set(SRC
src/glew.c
include/GL/eglew.h
include/GL/glew.h
include/GL/glxew.h
include/GL/wglew.h
)
set(LIB
)
add_definitions(${GL_DEFINITIONS})
blender_add_lib(extern_glew "${SRC}" "${INC}" "${INC_SYS}" "${LIB}")

extern/glew/LICENSE.txt (new file)

@@ -0,0 +1,73 @@
The OpenGL Extension Wrangler Library
Copyright (C) 2002-2007, Milan Ikits <milan ikits[]ieee org>
Copyright (C) 2002-2007, Marcelo E. Magallon <mmagallo[]debian org>
Copyright (C) 2002, Lev Povalahev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the author may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Mesa 3-D graphics library
Version: 7.0
Copyright (C) 1999-2007 Brian Paul All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
BRIAN PAUL BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright (c) 2007 The Khronos Group Inc.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and/or associated documentation files (the
"Materials"), to deal in the Materials without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Materials, and to
permit persons to whom the Materials are furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Materials.
THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.

extern/glew/README.blender (new file)

@@ -0,0 +1,5 @@
Project: The OpenGL Extension Wrangler Library
URL: http://glew.sourceforge.net/
License: Check LICENSE.txt
Upstream version: 2.0.0
Local modifications: None

extern/glew/include/GL/eglew.h (new file; diff suppressed, too large)

extern/glew/include/GL/glew.h (new file; diff suppressed, too large)

extern/glew/include/GL/glxew.h (new file; diff suppressed, too large)

extern/glew/include/GL/wglew.h (new file; diff suppressed, too large)

extern/glew/src/glew.c (new file; diff suppressed, too large)

@@ -11,6 +11,7 @@ add_subdirectory(memutil)
 add_subdirectory(opencolorio)
 add_subdirectory(opensubdiv)
 add_subdirectory(mikktspace)
+add_subdirectory(glew-mx)
 add_subdirectory(eigen)
 add_subdirectory(sky)

@@ -36,13 +36,8 @@ if(WITH_CYCLES_NATIVE_ONLY)
  )

  if(NOT MSVC)
-    ADD_CHECK_CXX_COMPILER_FLAG(CMAKE_CXX_FLAGS _has_march_native "-march=native")
-    if(_has_march_native)
-      set(CYCLES_KERNEL_FLAGS "-march=native")
-    else()
-      set(CYCLES_KERNEL_FLAGS "")
-    endif()
-    unset(_has_march_native)
+    string(APPEND CMAKE_CXX_FLAGS " -march=native")
+    set(CYCLES_KERNEL_FLAGS "-march=native")
  else()
    if(NOT MSVC_NATIVE_ARCH_FLAGS)
      TRY_RUN(

@@ -43,8 +43,9 @@ else()
endif()

if(WITH_CYCLES_STANDALONE AND WITH_CYCLES_STANDALONE_GUI)
-  list(APPEND INC_SYS ${Epoxy_INCLUDE_DIRS} ${SDL2_INCLUDE_DIRS})
-  list(APPEND LIB ${Epoxy_LIBRARIES} ${SDL2_LIBRARIES})
+  add_definitions(${GL_DEFINITIONS})
+  list(APPEND INC_SYS ${GLEW_INCLUDE_DIR} ${SDL2_INCLUDE_DIRS})
+  list(APPEND LIB ${CYCLES_GL_LIBRARIES} ${CYCLES_GLEW_LIBRARIES} ${SDL2_LIBRARIES})
endif()

cycles_external_libraries_append(LIB)

@@ -7,8 +7,8 @@
 #include "util/log.h"
 #include "util/string.h"

+#include <GL/glew.h>
 #include <SDL.h>
-#include <epoxy/gl.h>

CCL_NAMESPACE_BEGIN

@@ -6,7 +6,7 @@
 #include "util/log.h"
 #include "util/string.h"

-#include <epoxy/gl.h>
+#include <GL/glew.h>

CCL_NAMESPACE_BEGIN

@@ -11,8 +11,8 @@
 #include "util/time.h"
 #include "util/version.h"

+#include <GL/glew.h>
 #include <SDL.h>
-#include <epoxy/gl.h>

CCL_NAMESPACE_BEGIN
@@ -294,6 +294,7 @@ void window_main_loop(const char *title,
  SDL_RaiseWindow(V.window);

  V.gl_context = SDL_GL_CreateContext(V.window);
+  glewInit();
  SDL_GL_MakeCurrent(V.window, nullptr);

  window_reshape(width, height);

@@ -3,19 +3,18 @@

set(INC
  ..
+  ../../glew-mx
  ../../guardedalloc
  ../../mikktspace
  ../../../source/blender/makesdna
  ../../../source/blender/makesrna
  ../../../source/blender/blenlib
-  ../../../source/blender/gpu
-  ../../../source/blender/render
  ${CMAKE_BINARY_DIR}/source/blender/makesrna/intern
)

set(INC_SYS
-  ${Epoxy_INCLUDE_DIRS}
  ${PYTHON_INCLUDE_DIRS}
+  ${GLEW_INCLUDE_DIR}
)

set(SRC
@@ -65,9 +64,6 @@ set(LIB
  cycles_subd
  cycles_util

-  bf_intern_mikktspace
-  ${Epoxy_LIBRARIES}
-
  ${PYTHON_LINKFLAGS}
  ${PYTHON_LIBRARIES}
)
@@ -91,6 +87,8 @@ set(ADDON_FILES
  addon/version_update.py
)

+add_definitions(${GL_DEFINITIONS})
+
if(WITH_CYCLES_DEVICE_HIP)
  add_definitions(-DWITH_HIP)
endif()
@@ -103,10 +101,6 @@ if(WITH_MOD_FLUID)
  add_definitions(-DWITH_FLUID)
endif()

-if(WITH_TBB)
-  add_definitions(-DWITH_TBB)
-endif()
-
if(WITH_OPENVDB)
  add_definitions(-DWITH_OPENVDB ${OPENVDB_DEFINITIONS})
  list(APPEND INC_SYS

@@ -81,7 +81,7 @@ enum_use_layer_samples = (
)

enum_sampling_pattern = (
-    ('SOBOL', "Sobol-Burley", "Use Sobol-Burley random sampling pattern", 0),
+    ('SOBOL', "Sobol", "Use Sobol random sampling pattern", 0),
    ('PROGRESSIVE_MULTI_JITTER', "Progressive Multi-Jitter", "Use Progressive Multi-Jitter random sampling pattern", 1),
)
@@ -381,7 +381,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
    sampling_pattern: EnumProperty(
        name="Sampling Pattern",
-        description="Random sampling pattern used by the integrator. When adaptive sampling is enabled, Progressive Multi-Jitter is always used instead of Sobol-Burley",
+        description="Random sampling pattern used by the integrator. When adaptive sampling is enabled, Progressive Multi-Jitter is always used instead of Sobol",
        items=enum_sampling_pattern,
        default='PROGRESSIVE_MULTI_JITTER',
    )
@@ -1558,7 +1558,7 @@ class CyclesPreferences(bpy.types.AddonPreferences):
        import sys
        col.label(text="Requires Intel GPU with Xe-HPG architecture", icon='BLANK1')
        if sys.platform.startswith("win"):
-            col.label(text="and Windows driver version 101.3268 or newer", icon='BLANK1')
+            col.label(text="and Windows driver version 101.1660 or newer", icon='BLANK1')
        elif sys.platform.startswith("linux"):
            col.label(text="and Linux driver version xx.xx.23570 or newer", icon='BLANK1')
        elif device_type == 'METAL':
elif device_type == 'METAL': elif device_type == 'METAL':

@@ -296,6 +296,7 @@ class CYCLES_RENDER_PT_sampling_advanced(CyclesButtonsPanel, Panel):
        row.prop(cscene, "use_animated_seed", text="", icon='TIME')

        col = layout.column(align=True)
+        col.active = not (cscene.use_adaptive_sampling and cscene.use_preview_adaptive_sampling)
        col.prop(cscene, "sampling_pattern", text="Pattern")

        col = layout.column(align=True)
@@ -304,7 +305,6 @@ class CYCLES_RENDER_PT_sampling_advanced(CyclesButtonsPanel, Panel):
        layout.separator()

        heading = layout.column(align=True, heading="Scrambling Distance")
-        heading.active = cscene.sampling_pattern != 'SOBOL'
        heading.prop(cscene, "auto_scrambling_distance", text="Automatic")
        heading.prop(cscene, "preview_scrambling_distance", text="Viewport")
        heading.prop(cscene, "scrambling_distance", text="Multiplier")

@@ -55,7 +55,7 @@ static bool ObtainCacheParticleData(
    return false;

  Transform tfm = get_transform(b_ob->matrix_world());
-  Transform itfm = transform_inverse(tfm);
+  Transform itfm = transform_quick_inverse(tfm);

  for (BL::Modifier &b_mod : b_ob->modifiers) {
    if ((b_mod.type() == b_mod.type_PARTICLE_SYSTEM) &&
@@ -707,21 +707,6 @@ static void attr_create_motion(Hair *hair, BL::Attribute &b_attribute, const flo
  }
}

-static void attr_create_uv(AttributeSet &attributes,
-                           BL::Curves &b_curves,
-                           BL::Attribute &b_attribute,
-                           const ustring name)
-{
-  BL::Float2Attribute b_float2_attribute{b_attribute};
-  Attribute *attr = attributes.add(ATTR_STD_UV, name);
-
-  float2 *data = attr->data_float2();
-  fill_generic_attribute(b_curves, data, ATTR_ELEMENT_CURVE, [&](int i) {
-    BL::Array<float, 2> v = b_float2_attribute.data[i].vector();
-    return make_float2(v[0], v[1]);
-  });
-}
-
static void attr_create_generic(Scene *scene,
                                Hair *hair,
                                BL::Curves &b_curves,
@@ -730,26 +715,12 @@ static void attr_create_generic(Scene *scene,
{
  AttributeSet &attributes = hair->attributes;
  static const ustring u_velocity("velocity");

-  const bool need_uv = hair->need_attribute(scene, ATTR_STD_UV);
-  bool have_uv = false;

  for (BL::Attribute &b_attribute : b_curves.attributes) {
    const ustring name{b_attribute.name().c_str()};
-    const BL::Attribute::domain_enum b_domain = b_attribute.domain();
-    const BL::Attribute::data_type_enum b_data_type = b_attribute.data_type();

    if (need_motion && name == u_velocity) {
      attr_create_motion(hair, b_attribute, motion_scale);
-      continue;
-    }
-
-    /* Weak, use first float2 attribute as standard UV. */
-    if (need_uv && !have_uv && b_data_type == BL::Attribute::data_type_FLOAT2 &&
-        b_domain == BL::Attribute::domain_CURVE) {
-      attr_create_uv(attributes, b_curves, b_attribute, name);
-      have_uv = true;
-      continue;
    }

    if (!hair->need_attribute(scene, name)) {
@@ -759,6 +730,9 @@ static void attr_create_generic(Scene *scene,
      continue;
    }

+    const BL::Attribute::domain_enum b_domain = b_attribute.domain();
+    const BL::Attribute::data_type_enum b_data_type = b_attribute.data_type();
+
    AttributeElement element = ATTR_ELEMENT_NONE;
    switch (b_domain) {
      case BL::Attribute::domain_POINT:
@@ -843,7 +817,7 @@ static float4 hair_point_as_float4(BL::FloatVectorAttribute b_attr_position,
                                   const int index)
{
  float4 mP = float3_to_float4(get_float3(b_attr_position.data[index].vector()));
-  mP.w = b_attr_radius ? b_attr_radius->data[index].value() : 0.005f;
+  mP.w = b_attr_radius ? b_attr_radius->data[index].value() : 0.0f;

  return mP;
}
@@ -870,84 +844,79 @@ static void export_hair_curves(Scene *scene,
{
  /* TODO: optimize so we can straight memcpy arrays from Blender? */

-  /* Add requested attributes. */
-  Attribute *attr_intercept = NULL;
-  Attribute *attr_length = NULL;
-  Attribute *attr_random = NULL;
-
-  if (hair->need_attribute(scene, ATTR_STD_CURVE_INTERCEPT)) {
-    attr_intercept = hair->attributes.add(ATTR_STD_CURVE_INTERCEPT);
-  }
-  if (hair->need_attribute(scene, ATTR_STD_CURVE_LENGTH)) {
-    attr_length = hair->attributes.add(ATTR_STD_CURVE_LENGTH);
-  }
-  if (hair->need_attribute(scene, ATTR_STD_CURVE_RANDOM)) {
-    attr_random = hair->attributes.add(ATTR_STD_CURVE_RANDOM);
-  }
-
-  /* Reserve memory. */
  const int num_keys = b_curves.points.length();
  const int num_curves = b_curves.curves.length();

-  hair->resize_curves(num_curves, num_keys);
-
-  float3 *curve_keys = hair->get_curve_keys().data();
-  float *curve_radius = hair->get_curve_radius().data();
-  int *curve_first_key = hair->get_curve_first_key().data();
-  int *curve_shader = hair->get_curve_shader().data();
+  hair->reserve_curves(num_curves, num_keys);
+
+  /* Add requested attributes. */
+  float *attr_intercept = NULL;
+  float *attr_length = NULL;
+  float *attr_random = NULL;
+
+  if (hair->need_attribute(scene, ATTR_STD_CURVE_INTERCEPT)) {
+    attr_intercept = hair->attributes.add(ATTR_STD_CURVE_INTERCEPT)->data_float();
+  }
+  if (hair->need_attribute(scene, ATTR_STD_CURVE_LENGTH)) {
+    attr_length = hair->attributes.add(ATTR_STD_CURVE_LENGTH)->data_float();
+  }
+  if (hair->need_attribute(scene, ATTR_STD_CURVE_RANDOM)) {
+    attr_random = hair->attributes.add(ATTR_STD_CURVE_RANDOM)->data_float();
+  }

  BL::FloatVectorAttribute b_attr_position = find_curves_position_attribute(b_curves);
std::optional<BL::FloatAttribute> b_attr_radius = find_curves_radius_attribute(b_curves); std::optional<BL::FloatAttribute> b_attr_radius = find_curves_radius_attribute(b_curves);
/* Export curves and points. */ /* Export curves and points. */
vector<float> points_length;
for (int i = 0; i < num_curves; i++) { for (int i = 0; i < num_curves; i++) {
const int first_point_index = b_curves.curve_offset_data[i].value(); const int first_point_index = b_curves.curve_offset_data[i].value();
const int num_points = b_curves.curve_offset_data[i + 1].value() - first_point_index; const int num_points = b_curves.curve_offset_data[i + 1].value() - first_point_index;
float3 prev_co = zero_float3(); float3 prev_co = zero_float3();
float length = 0.0f; float length = 0.0f;
if (attr_intercept) {
points_length.clear();
points_length.reserve(num_points);
}
/* Position and radius. */ /* Position and radius. */
for (int j = 0; j < num_points; j++) { for (int i = 0; i < num_points; i++) {
const int point_offset = first_point_index + j; const float3 co = get_float3(b_attr_position.data[first_point_index + i].vector());
const float3 co = get_float3(b_attr_position.data[point_offset].vector()); const float radius = b_attr_radius ? b_attr_radius->data[first_point_index + i].value() :
const float radius = b_attr_radius ? b_attr_radius->data[point_offset].value() : 0.005f; 0.005f;
hair->add_curve_key(co, radius);
curve_keys[point_offset] = co; if (attr_intercept) {
curve_radius[point_offset] = radius; if (i > 0) {
if (attr_length || attr_intercept) {
if (j > 0) {
length += len(co - prev_co); length += len(co - prev_co);
points_length.push_back(length);
} }
prev_co = co; prev_co = co;
if (attr_intercept) {
attr_intercept[point_offset] = length;
}
} }
} }
/* Normalized 0..1 attribute along curve. */ /* Normalized 0..1 attribute along curve. */
if (attr_intercept && length > 0.0f) { if (attr_intercept) {
for (int j = 1; j < num_points; j++) { for (int i = 0; i < num_points; i++) {
const int point_offset = first_point_index + j; attr_intercept->add((length == 0.0f) ? 0.0f : points_length[i] / length);
attr_intercept[point_offset] /= length;
} }
} }
/* Curve length. */
if (attr_length) { if (attr_length) {
attr_length[i] = length; attr_length->add(length);
} }
/* Random number per curve. */ /* Random number per curve. */
if (attr_random != NULL) { if (attr_random != NULL) {
attr_random[i] = hash_uint2_to_float(i, 0); attr_random->add(hash_uint2_to_float(i, 0));
} }
/* Curve. */ /* Curve. */
curve_shader[i] = 0; const int shader_index = 0;
curve_first_key[i] = first_point_index; hair->add_curve(first_point_index, shader_index);
} }
attr_create_generic(scene, hair, b_curves, need_motion, motion_scale); attr_create_generic(scene, hair, b_curves, need_motion, motion_scale);
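The `export_hair_curves()` path above derives a normalized 0..1 intercept along each curve from cumulative segment lengths, with a zero-length curve mapping every point to 0. A minimal standalone sketch of that computation, under the assumption that it mirrors the diff's logic (`Vec3` and `curve_intercept` are illustrative stand-ins, not Cycles types):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float segment_length(const Vec3 &a, const Vec3 &b)
{
  const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

/* Cumulative arc length per point, normalized to the 0..1 range.
 * A degenerate zero-length curve maps every point to 0.0, matching the
 * `(length == 0.0f) ? 0.0f : ...` guard in the diff. */
static std::vector<float> curve_intercept(const std::vector<Vec3> &points)
{
  std::vector<float> intercept(points.size(), 0.0f);
  float length = 0.0f;
  for (size_t i = 1; i < points.size(); i++) {
    length += segment_length(points[i], points[i - 1]);
    intercept[i] = length;
  }
  if (length > 0.0f) {
    for (float &t : intercept) {
      t /= length;
    }
  }
  return intercept;
}
```

The per-curve length used for `ATTR_STD_CURVE_LENGTH` is the same running total before normalization.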


@@ -7,9 +7,21 @@
#include "util/log.h" #include "util/log.h"
#include "util/opengl.h" #include "util/opengl.h"
#include "GPU_platform.h" extern "C" {
struct RenderEngine;
#include "RE_engine.h" bool RE_engine_has_render_context(struct RenderEngine *engine);
void RE_engine_render_context_enable(struct RenderEngine *engine);
void RE_engine_render_context_disable(struct RenderEngine *engine);
bool DRW_opengl_context_release();
void DRW_opengl_context_activate(bool drw_state);
void *WM_opengl_context_create();
void WM_opengl_context_activate(void *gl_context);
void WM_opengl_context_dispose(void *gl_context);
void WM_opengl_context_release(void *context);
}
CCL_NAMESPACE_BEGIN CCL_NAMESPACE_BEGIN
@@ -495,7 +507,6 @@ class DrawTileAndPBO {
DrawTile tile; DrawTile tile;
GLPixelBufferObject buffer_object; GLPixelBufferObject buffer_object;
bool need_update_texture_pixels = false;
}; };
/* -------------------------------------------------------------------- /* --------------------------------------------------------------------
@@ -545,21 +556,18 @@ struct BlenderDisplayDriver::Tiles {
} }
}; };
BlenderDisplayDriver::BlenderDisplayDriver(BL::RenderEngine &b_engine, BlenderDisplayDriver::BlenderDisplayDriver(BL::RenderEngine &b_engine, BL::Scene &b_scene)
BL::Scene &b_scene,
const bool background)
: b_engine_(b_engine), : b_engine_(b_engine),
background_(background),
display_shader_(BlenderDisplayShader::create(b_engine, b_scene)), display_shader_(BlenderDisplayShader::create(b_engine, b_scene)),
tiles_(make_unique<Tiles>()) tiles_(make_unique<Tiles>())
{ {
/* Create context while on the main thread. */ /* Create context while on the main thread. */
gpu_context_create(); gl_context_create();
} }
BlenderDisplayDriver::~BlenderDisplayDriver() BlenderDisplayDriver::~BlenderDisplayDriver()
{ {
gpu_resources_destroy(); gl_resources_destroy();
} }
/* -------------------------------------------------------------------- /* --------------------------------------------------------------------
@@ -577,8 +585,6 @@ void BlenderDisplayDriver::next_tile_begin()
/* Moving to the next tile without giving render data for the current tile is not an expected /* Moving to the next tile without giving render data for the current tile is not an expected
* situation. */ * situation. */
DCHECK(!need_clear_); DCHECK(!need_clear_);
/* Texture should have been updated from the PBO at this point. */
DCHECK(!tiles_->current_tile.need_update_texture_pixels);
tiles_->finished_tiles.tiles.emplace_back(std::move(tiles_->current_tile.tile)); tiles_->finished_tiles.tiles.emplace_back(std::move(tiles_->current_tile.tile));
} }
@@ -590,12 +596,12 @@ bool BlenderDisplayDriver::update_begin(const Params &params,
/* Note that it's the responsibility of BlenderDisplayDriver to ensure updating and drawing /* Note that it's the responsibility of BlenderDisplayDriver to ensure updating and drawing
* the texture does not happen at the same time. This is achieved indirectly. * the texture does not happen at the same time. This is achieved indirectly.
* *
* When enabling the OpenGL context, it uses an internal mutex lock DST.gpu_context_lock. * When enabling the OpenGL context, it uses an internal mutex lock DST.gl_context_lock.
* This same lock is also held when do_draw() is called, which together ensure mutual * This same lock is also held when do_draw() is called, which together ensure mutual
* exclusion. * exclusion.
* *
* This locking is not performed on the Cycles side, because that would cause lock inversion. */ * This locking is not performed on the Cycles side, because that would cause lock inversion. */
if (!gpu_context_enable()) { if (!gl_context_enable()) {
return false; return false;
} }
@@ -616,13 +622,13 @@ bool BlenderDisplayDriver::update_begin(const Params &params,
if (!tiles_->gl_resources_ensure()) { if (!tiles_->gl_resources_ensure()) {
tiles_->gl_resources_destroy(); tiles_->gl_resources_destroy();
gpu_context_disable(); gl_context_disable();
return false; return false;
} }
if (!tiles_->current_tile.gl_resources_ensure()) { if (!tiles_->current_tile.gl_resources_ensure()) {
tiles_->current_tile.gl_resources_destroy(); tiles_->current_tile.gl_resources_destroy();
gpu_context_disable(); gl_context_disable();
return false; return false;
} }
@@ -696,23 +702,13 @@ void BlenderDisplayDriver::update_end()
* One concern with this approach is that if the update happens more often than drawing then * One concern with this approach is that if the update happens more often than drawing then
* doing the unpack here occupies GPU transfer for no good reason. However, the render scheduler * doing the unpack here occupies GPU transfer for no good reason. However, the render scheduler
* takes care of ensuring updates don't happen that often. In regular applications redraw will * takes care of ensuring updates don't happen that often. In regular applications redraw will
* happen much more often than this update. * happen much more often than this update. */
* update_tile_texture_pixels(tiles_->current_tile);
* On some older GPUs on macOS, there is a driver crash when updating the texture for viewport
* renders while Blender is drawing. As a workaround update texture during draw, under assumption
* that there is no graphics interop on macOS and viewport render has a single tile. */
if (!background_ &&
GPU_type_matches_ex(GPU_DEVICE_NVIDIA, GPU_OS_MAC, GPU_DRIVER_ANY, GPU_BACKEND_ANY)) {
tiles_->current_tile.need_update_texture_pixels = true;
}
else {
update_tile_texture_pixels(tiles_->current_tile);
}
gl_upload_sync_ = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0); gl_upload_sync_ = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); glFlush();
gpu_context_disable(); gl_context_disable();
} }
/* -------------------------------------------------------------------- /* --------------------------------------------------------------------
@@ -760,12 +756,12 @@ BlenderDisplayDriver::GraphicsInterop BlenderDisplayDriver::graphics_interop_get
void BlenderDisplayDriver::graphics_interop_activate() void BlenderDisplayDriver::graphics_interop_activate()
{ {
gpu_context_enable(); gl_context_enable();
} }
void BlenderDisplayDriver::graphics_interop_deactivate() void BlenderDisplayDriver::graphics_interop_deactivate()
{ {
gpu_context_disable(); gl_context_disable();
} }
/* -------------------------------------------------------------------- /* --------------------------------------------------------------------
@@ -899,7 +895,7 @@ void BlenderDisplayDriver::flush()
 * If we don't do this, the NVIDIA driver hangs for a few seconds when ending 3D viewport * If we don't do this, the NVIDIA driver hangs for a few seconds when ending 3D viewport
* rendering, for unknown reasons. This was found with NVIDIA driver version 470.73 and a Quadro * rendering, for unknown reasons. This was found with NVIDIA driver version 470.73 and a Quadro
* RTX 6000 on Linux. */ * RTX 6000 on Linux. */
if (!gpu_context_enable()) { if (!gl_context_enable()) {
return; return;
} }
@@ -911,12 +907,17 @@ void BlenderDisplayDriver::flush()
glWaitSync((GLsync)gl_render_sync_, 0, GL_TIMEOUT_IGNORED); glWaitSync((GLsync)gl_render_sync_, 0, GL_TIMEOUT_IGNORED);
} }
gpu_context_disable(); gl_context_disable();
} }
void BlenderDisplayDriver::draw(const Params &params) void BlenderDisplayDriver::draw(const Params &params)
{ {
gpu_context_lock(); /* See do_update_begin() for why no locking is required here. */
const bool transparent = true; // TODO(sergey): Derive this from Film.
if (use_gl_context_) {
gl_context_mutex_.lock();
}
if (need_clear_) { if (need_clear_) {
/* Texture is requested to be cleared and was not yet cleared. /* Texture is requested to be cleared and was not yet cleared.
@@ -924,7 +925,9 @@ void BlenderDisplayDriver::draw(const Params &params)
* Do early return which should be equivalent of drawing all-zero texture. * Do early return which should be equivalent of drawing all-zero texture.
* Watch out for the lock though so that the clear happening during update is properly * Watch out for the lock though so that the clear happening during update is properly
* synchronized here. */ * synchronized here. */
gpu_context_unlock(); if (use_gl_context_) {
gl_context_mutex_.unlock();
}
return; return;
} }
@@ -932,8 +935,10 @@ void BlenderDisplayDriver::draw(const Params &params)
glWaitSync((GLsync)gl_upload_sync_, 0, GL_TIMEOUT_IGNORED); glWaitSync((GLsync)gl_upload_sync_, 0, GL_TIMEOUT_IGNORED);
} }
glEnable(GL_BLEND); if (transparent) {
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
glActiveTexture(GL_TEXTURE0); glActiveTexture(GL_TEXTURE0);
@@ -952,11 +957,6 @@ void BlenderDisplayDriver::draw(const Params &params)
glEnableVertexAttribArray(texcoord_attribute); glEnableVertexAttribArray(texcoord_attribute);
glEnableVertexAttribArray(position_attribute); glEnableVertexAttribArray(position_attribute);
if (tiles_->current_tile.need_update_texture_pixels) {
update_tile_texture_pixels(tiles_->current_tile);
tiles_->current_tile.need_update_texture_pixels = false;
}
draw_tile(zoom_, draw_tile(zoom_,
texcoord_attribute, texcoord_attribute,
position_attribute, position_attribute,
@@ -975,60 +975,101 @@ void BlenderDisplayDriver::draw(const Params &params)
glDeleteVertexArrays(1, &vertex_array_object); glDeleteVertexArrays(1, &vertex_array_object);
glDisable(GL_BLEND); if (transparent) {
glDisable(GL_BLEND);
}
gl_render_sync_ = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0); gl_render_sync_ = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); glFlush();
gpu_context_unlock();
VLOG_DEVICE_STATS << "Display driver number of textures: " << GLTexture::num_used; VLOG_DEVICE_STATS << "Display driver number of textures: " << GLTexture::num_used;
VLOG_DEVICE_STATS << "Display driver number of PBOs: " << GLPixelBufferObject::num_used; VLOG_DEVICE_STATS << "Display driver number of PBOs: " << GLPixelBufferObject::num_used;
}
void BlenderDisplayDriver::gpu_context_create() if (use_gl_context_) {
{ gl_context_mutex_.unlock();
if (!RE_engine_gpu_context_create(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data))) {
LOG(ERROR) << "Error creating OpenGL context.";
} }
} }
bool BlenderDisplayDriver::gpu_context_enable() void BlenderDisplayDriver::gl_context_create()
{ {
return RE_engine_gpu_context_enable(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data)); /* When rendering in viewport there is no render context available via engine.
* Check whether own context is to be created here.
*
* NOTE: If the `b_engine_`'s context is not available, we are expected to be on a main thread
* here. */
use_gl_context_ = !RE_engine_has_render_context(
reinterpret_cast<RenderEngine *>(b_engine_.ptr.data));
if (use_gl_context_) {
const bool drw_state = DRW_opengl_context_release();
gl_context_ = WM_opengl_context_create();
if (gl_context_) {
/* On Windows an old context is restored after creation, and subsequent release of context
* generates a Win32 error. Harmless for users, but annoying to have possible misleading
* error prints in the console. */
#ifndef _WIN32
WM_opengl_context_release(gl_context_);
#endif
}
else {
LOG(ERROR) << "Error creating OpenGL context.";
}
DRW_opengl_context_activate(drw_state);
}
} }
void BlenderDisplayDriver::gpu_context_disable() bool BlenderDisplayDriver::gl_context_enable()
{ {
RE_engine_gpu_context_disable(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data)); if (use_gl_context_) {
if (!gl_context_) {
return false;
}
gl_context_mutex_.lock();
WM_opengl_context_activate(gl_context_);
return true;
}
RE_engine_render_context_enable(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data));
return true;
} }
void BlenderDisplayDriver::gpu_context_destroy() void BlenderDisplayDriver::gl_context_disable()
{ {
RE_engine_gpu_context_destroy(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data)); if (use_gl_context_) {
if (gl_context_) {
WM_opengl_context_release(gl_context_);
gl_context_mutex_.unlock();
}
return;
}
RE_engine_render_context_disable(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data));
} }
void BlenderDisplayDriver::gpu_context_lock() void BlenderDisplayDriver::gl_context_dispose()
{ {
RE_engine_gpu_context_lock(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data)); if (gl_context_) {
const bool drw_state = DRW_opengl_context_release();
WM_opengl_context_activate(gl_context_);
WM_opengl_context_dispose(gl_context_);
DRW_opengl_context_activate(drw_state);
}
} }
void BlenderDisplayDriver::gpu_context_unlock() void BlenderDisplayDriver::gl_resources_destroy()
{ {
RE_engine_gpu_context_unlock(reinterpret_cast<RenderEngine *>(b_engine_.ptr.data)); gl_context_enable();
}
void BlenderDisplayDriver::gpu_resources_destroy()
{
gpu_context_enable();
tiles_->current_tile.gl_resources_destroy(); tiles_->current_tile.gl_resources_destroy();
tiles_->finished_tiles.gl_resources_destroy_and_clear(); tiles_->finished_tiles.gl_resources_destroy_and_clear();
tiles_->gl_resources_destroy(); tiles_->gl_resources_destroy();
gpu_context_disable(); gl_context_disable();
gpu_context_destroy(); gl_context_dispose();
} }
CCL_NAMESPACE_END CCL_NAMESPACE_END
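The `gl_context_*` changes above restore the pre-refactor pattern: when the engine has no render context of its own (viewport rendering), the display driver creates a private GL context and guards it with a mutex; otherwise it enables the engine's render context without extra locking. A minimal sketch of that branching, with plain bools and a `std::mutex` standing in for the real `RE_engine_*` / `WM_opengl_*` calls (`ContextGuard` is an illustrative name, not Blender API):

```cpp
#include <cassert>
#include <mutex>

/* Sketch of gl_context_enable() / gl_context_disable() as described in the
 * diff. The bool fields replace actual GL/engine state for illustration. */
struct ContextGuard {
  bool use_own_context = false;     /* Mirrors use_gl_context_. */
  void *own_context = nullptr;      /* Mirrors gl_context_. */
  std::mutex mutex;                 /* Mirrors gl_context_mutex_. */
  bool engine_context_enabled = false;

  bool enable()
  {
    if (use_own_context) {
      if (!own_context) {
        return false;               /* Context creation failed earlier. */
      }
      mutex.lock();                 /* Stands in for WM_opengl_context_activate(). */
      return true;
    }
    engine_context_enabled = true;  /* Stands in for RE_engine_render_context_enable(). */
    return true;
  }

  void disable()
  {
    if (use_own_context) {
      if (own_context) {
        mutex.unlock();             /* Stands in for WM_opengl_context_release(). */
      }
      return;
    }
    engine_context_enabled = false; /* Stands in for RE_engine_render_context_disable(). */
  }
};
```

The same mutex is what `draw()` takes when `use_gl_context_` is set, which is how update and draw are kept mutually exclusive without locking on the Cycles side.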


@@ -89,7 +89,7 @@ class BlenderDisplaySpaceShader : public BlenderDisplayShader {
/* Display driver implementation which is specific for Blender viewport integration. */ /* Display driver implementation which is specific for Blender viewport integration. */
class BlenderDisplayDriver : public DisplayDriver { class BlenderDisplayDriver : public DisplayDriver {
public: public:
BlenderDisplayDriver(BL::RenderEngine &b_engine, BL::Scene &b_scene, const bool background); BlenderDisplayDriver(BL::RenderEngine &b_engine, BL::Scene &b_scene);
~BlenderDisplayDriver(); ~BlenderDisplayDriver();
virtual void graphics_interop_activate() override; virtual void graphics_interop_activate() override;
@@ -115,18 +115,23 @@ class BlenderDisplayDriver : public DisplayDriver {
virtual void flush() override; virtual void flush() override;
/* Helper function which allocates new GPU context. */ /* Helper function which allocates new GPU context. */
void gpu_context_create(); void gl_context_create();
bool gpu_context_enable(); bool gl_context_enable();
void gpu_context_disable(); void gl_context_disable();
void gpu_context_destroy(); void gl_context_dispose();
void gpu_context_lock();
void gpu_context_unlock();
/* Destroy all GPU resources which are being used by this object. */ /* Destroy all GPU resources which are being used by this object. */
void gpu_resources_destroy(); void gl_resources_destroy();
BL::RenderEngine b_engine_; BL::RenderEngine b_engine_;
bool background_;
/* OpenGL context which is used when the render engine doesn't have its own. */
void *gl_context_ = nullptr;
/* Set when the Blender RenderEngine side context is not available and the DisplayDriver is to
 * create its own context. */
bool use_gl_context_ = false;
/* Mutex used to guard the `gl_context_`. */
thread_mutex gl_context_mutex_;
/* Content of the display is to be filled with zeroes. */ /* Content of the display is to be filled with zeroes. */
std::atomic<bool> need_clear_ = true; std::atomic<bool> need_clear_ = true;


@@ -1,8 +1,6 @@
/* SPDX-License-Identifier: Apache-2.0 /* SPDX-License-Identifier: Apache-2.0
* Copyright 2011-2022 Blender Foundation */ * Copyright 2011-2022 Blender Foundation */
#include <optional>
#include "blender/session.h" #include "blender/session.h"
#include "blender/sync.h" #include "blender/sync.h"
#include "blender/util.h" #include "blender/util.h"
@@ -24,23 +22,22 @@
#include "util/log.h" #include "util/log.h"
#include "util/math.h" #include "util/math.h"
#include "mikktspace.hh" #include "mikktspace.h"
#include "DNA_meshdata_types.h"
CCL_NAMESPACE_BEGIN CCL_NAMESPACE_BEGIN
/* Tangent Space */ /* Tangent Space */
template<bool is_subd> struct MikkMeshWrapper { struct MikkUserData {
MikkMeshWrapper(const BL::Mesh &b_mesh, MikkUserData(const BL::Mesh &b_mesh,
const char *layer_name, const char *layer_name,
const Mesh *mesh, const Mesh *mesh,
float3 *tangent, float3 *tangent,
float *tangent_sign) float *tangent_sign)
: mesh(mesh), texface(NULL), orco(NULL), tangent(tangent), tangent_sign(tangent_sign) : mesh(mesh), texface(NULL), orco(NULL), tangent(tangent), tangent_sign(tangent_sign)
{ {
const AttributeSet &attributes = is_subd ? mesh->subd_attributes : mesh->attributes; const AttributeSet &attributes = (mesh->get_num_subd_faces()) ? mesh->subd_attributes :
mesh->attributes;
Attribute *attr_vN = attributes.find(ATTR_STD_VERTEX_NORMAL); Attribute *attr_vN = attributes.find(ATTR_STD_VERTEX_NORMAL);
vertex_normal = attr_vN->data_float3(); vertex_normal = attr_vN->data_float3();
@@ -50,9 +47,7 @@ template<bool is_subd> struct MikkMeshWrapper {
if (attr_orco) { if (attr_orco) {
orco = attr_orco->data_float3(); orco = attr_orco->data_float3();
float3 orco_size;
mesh_texture_space(*(BL::Mesh *)&b_mesh, orco_loc, orco_size); mesh_texture_space(*(BL::Mesh *)&b_mesh, orco_loc, orco_size);
inv_orco_size = 1.0f / orco_size;
} }
} }
else { else {
@@ -63,126 +58,160 @@ template<bool is_subd> struct MikkMeshWrapper {
} }
} }
int GetNumFaces()
{
if constexpr (is_subd) {
return mesh->get_num_subd_faces();
}
else {
return mesh->num_triangles();
}
}
int GetNumVerticesOfFace(const int face_num)
{
if constexpr (is_subd) {
return mesh->get_subd_num_corners()[face_num];
}
else {
return 3;
}
}
int CornerIndex(const int face_num, const int vert_num)
{
if constexpr (is_subd) {
const Mesh::SubdFace &face = mesh->get_subd_face(face_num);
return face.start_corner + vert_num;
}
else {
return face_num * 3 + vert_num;
}
}
int VertexIndex(const int face_num, const int vert_num)
{
int corner = CornerIndex(face_num, vert_num);
if constexpr (is_subd) {
return mesh->get_subd_face_corners()[corner];
}
else {
return mesh->get_triangles()[corner];
}
}
mikk::float3 GetPosition(const int face_num, const int vert_num)
{
const float3 vP = mesh->get_verts()[VertexIndex(face_num, vert_num)];
return mikk::float3(vP.x, vP.y, vP.z);
}
mikk::float3 GetTexCoord(const int face_num, const int vert_num)
{
/* TODO: Check whether introducing a template boolean in order to
* turn this into a constexpr is worth it. */
if (texface != NULL) {
const int corner_index = CornerIndex(face_num, vert_num);
float2 tfuv = texface[corner_index];
return mikk::float3(tfuv.x, tfuv.y, 1.0f);
}
else if (orco != NULL) {
const int vertex_index = VertexIndex(face_num, vert_num);
const float2 uv = map_to_sphere((orco[vertex_index] + orco_loc) * inv_orco_size);
return mikk::float3(uv.x, uv.y, 1.0f);
}
else {
return mikk::float3(0.0f, 0.0f, 1.0f);
}
}
mikk::float3 GetNormal(const int face_num, const int vert_num)
{
float3 vN;
if (is_subd) {
const Mesh::SubdFace &face = mesh->get_subd_face(face_num);
if (face.smooth) {
const int vertex_index = VertexIndex(face_num, vert_num);
vN = vertex_normal[vertex_index];
}
else {
vN = face.normal(mesh);
}
}
else {
if (mesh->get_smooth()[face_num]) {
const int vertex_index = VertexIndex(face_num, vert_num);
vN = vertex_normal[vertex_index];
}
else {
const Mesh::Triangle tri = mesh->get_triangle(face_num);
vN = tri.compute_normal(&mesh->get_verts()[0]);
}
}
return mikk::float3(vN.x, vN.y, vN.z);
}
void SetTangentSpace(const int face_num, const int vert_num, mikk::float3 T, bool orientation)
{
const int corner_index = CornerIndex(face_num, vert_num);
tangent[corner_index] = make_float3(T.x, T.y, T.z);
if (tangent_sign != NULL) {
tangent_sign[corner_index] = orientation ? 1.0f : -1.0f;
}
}
const Mesh *mesh; const Mesh *mesh;
int num_faces; int num_faces;
float3 *vertex_normal; float3 *vertex_normal;
float2 *texface; float2 *texface;
float3 *orco; float3 *orco;
float3 orco_loc, inv_orco_size; float3 orco_loc, orco_size;
float3 *tangent; float3 *tangent;
float *tangent_sign; float *tangent_sign;
}; };
static int mikk_get_num_faces(const SMikkTSpaceContext *context)
{
const MikkUserData *userdata = (const MikkUserData *)context->m_pUserData;
if (userdata->mesh->get_num_subd_faces()) {
return userdata->mesh->get_num_subd_faces();
}
else {
return userdata->mesh->num_triangles();
}
}
static int mikk_get_num_verts_of_face(const SMikkTSpaceContext *context, const int face_num)
{
const MikkUserData *userdata = (const MikkUserData *)context->m_pUserData;
if (userdata->mesh->get_num_subd_faces()) {
const Mesh *mesh = userdata->mesh;
return mesh->get_subd_num_corners()[face_num];
}
else {
return 3;
}
}
static int mikk_vertex_index(const Mesh *mesh, const int face_num, const int vert_num)
{
if (mesh->get_num_subd_faces()) {
const Mesh::SubdFace &face = mesh->get_subd_face(face_num);
return mesh->get_subd_face_corners()[face.start_corner + vert_num];
}
else {
return mesh->get_triangles()[face_num * 3 + vert_num];
}
}
static int mikk_corner_index(const Mesh *mesh, const int face_num, const int vert_num)
{
if (mesh->get_num_subd_faces()) {
const Mesh::SubdFace &face = mesh->get_subd_face(face_num);
return face.start_corner + vert_num;
}
else {
return face_num * 3 + vert_num;
}
}
static void mikk_get_position(const SMikkTSpaceContext *context,
float P[3],
const int face_num,
const int vert_num)
{
const MikkUserData *userdata = (const MikkUserData *)context->m_pUserData;
const Mesh *mesh = userdata->mesh;
const int vertex_index = mikk_vertex_index(mesh, face_num, vert_num);
const float3 vP = mesh->get_verts()[vertex_index];
P[0] = vP.x;
P[1] = vP.y;
P[2] = vP.z;
}
static void mikk_get_texture_coordinate(const SMikkTSpaceContext *context,
float uv[2],
const int face_num,
const int vert_num)
{
const MikkUserData *userdata = (const MikkUserData *)context->m_pUserData;
const Mesh *mesh = userdata->mesh;
if (userdata->texface != NULL) {
const int corner_index = mikk_corner_index(mesh, face_num, vert_num);
float2 tfuv = userdata->texface[corner_index];
uv[0] = tfuv.x;
uv[1] = tfuv.y;
}
else if (userdata->orco != NULL) {
const int vertex_index = mikk_vertex_index(mesh, face_num, vert_num);
const float3 orco_loc = userdata->orco_loc;
const float3 orco_size = userdata->orco_size;
const float3 orco = (userdata->orco[vertex_index] + orco_loc) / orco_size;
const float2 tmp = map_to_sphere(orco);
uv[0] = tmp.x;
uv[1] = tmp.y;
}
else {
uv[0] = 0.0f;
uv[1] = 0.0f;
}
}
static void mikk_get_normal(const SMikkTSpaceContext *context,
float N[3],
const int face_num,
const int vert_num)
{
const MikkUserData *userdata = (const MikkUserData *)context->m_pUserData;
const Mesh *mesh = userdata->mesh;
float3 vN;
if (mesh->get_num_subd_faces()) {
const Mesh::SubdFace &face = mesh->get_subd_face(face_num);
if (face.smooth) {
const int vertex_index = mikk_vertex_index(mesh, face_num, vert_num);
vN = userdata->vertex_normal[vertex_index];
}
else {
vN = face.normal(mesh);
}
}
else {
if (mesh->get_smooth()[face_num]) {
const int vertex_index = mikk_vertex_index(mesh, face_num, vert_num);
vN = userdata->vertex_normal[vertex_index];
}
else {
const Mesh::Triangle tri = mesh->get_triangle(face_num);
vN = tri.compute_normal(&mesh->get_verts()[0]);
}
}
N[0] = vN.x;
N[1] = vN.y;
N[2] = vN.z;
}
static void mikk_set_tangent_space(const SMikkTSpaceContext *context,
const float T[],
const float sign,
const int face_num,
const int vert_num)
{
MikkUserData *userdata = (MikkUserData *)context->m_pUserData;
const Mesh *mesh = userdata->mesh;
const int corner_index = mikk_corner_index(mesh, face_num, vert_num);
userdata->tangent[corner_index] = make_float3(T[0], T[1], T[2]);
if (userdata->tangent_sign != NULL) {
userdata->tangent_sign[corner_index] = sign;
}
}
static void mikk_compute_tangents( static void mikk_compute_tangents(
const BL::Mesh &b_mesh, const char *layer_name, Mesh *mesh, bool need_sign, bool active_render) const BL::Mesh &b_mesh, const char *layer_name, Mesh *mesh, bool need_sign, bool active_render)
{ {
/* Create tangent attributes. */ /* Create tangent attributes. */
const bool is_subd = mesh->get_num_subd_faces(); AttributeSet &attributes = (mesh->get_num_subd_faces()) ? mesh->subd_attributes :
AttributeSet &attributes = is_subd ? mesh->subd_attributes : mesh->attributes; mesh->attributes;
Attribute *attr; Attribute *attr;
ustring name; ustring name;
if (layer_name != NULL) { if (layer_name != NULL) {
@@ -218,18 +247,24 @@ static void mikk_compute_tangents(
} }
tangent_sign = attr_sign->data_float(); tangent_sign = attr_sign->data_float();
} }
/* Setup userdata. */ /* Setup userdata. */
if (is_subd) { MikkUserData userdata(b_mesh, layer_name, mesh, tangent, tangent_sign);
MikkMeshWrapper<true> userdata(b_mesh, layer_name, mesh, tangent, tangent_sign); /* Setup interface. */
/* Compute tangents. */ SMikkTSpaceInterface sm_interface;
mikk::Mikktspace(userdata).genTangSpace(); memset(&sm_interface, 0, sizeof(sm_interface));
} sm_interface.m_getNumFaces = mikk_get_num_faces;
else { sm_interface.m_getNumVerticesOfFace = mikk_get_num_verts_of_face;
MikkMeshWrapper<false> userdata(b_mesh, layer_name, mesh, tangent, tangent_sign); sm_interface.m_getPosition = mikk_get_position;
/* Compute tangents. */ sm_interface.m_getTexCoord = mikk_get_texture_coordinate;
mikk::Mikktspace(userdata).genTangSpace(); sm_interface.m_getNormal = mikk_get_normal;
} sm_interface.m_setTSpaceBasic = mikk_set_tangent_space;
/* Setup context. */
SMikkTSpaceContext context;
memset(&context, 0, sizeof(context));
context.m_pUserData = &userdata;
context.m_pInterface = &sm_interface;
/* Compute tangents. */
genTangSpaceDefault(&context);
} }
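The `mikk_corner_index()` / `mikk_vertex_index()` helpers reintroduced above map a `(face, corner)` pair to flat corner and vertex indices: plain triangle meshes lay corners out contiguously as `face_num * 3 + vert_num`, while subdivision faces go through the face's `start_corner` into a corner array. A standalone sketch of the two index schemes (types and names are illustrative stand-ins for the Cycles `Mesh` accessors):

```cpp
#include <cassert>
#include <vector>

struct SubdFace {
  int start_corner;  /* First corner of this face in the corner array. */
};

static int corner_index_tri(int face_num, int vert_num)
{
  /* Triangles: corners are contiguous, three per face. */
  return face_num * 3 + vert_num;
}

static int corner_index_subd(const SubdFace &face, int vert_num)
{
  /* Subd faces: offset from the face's recorded first corner. */
  return face.start_corner + vert_num;
}

static int vertex_index_tri(const std::vector<int> &triangles, int face_num, int vert_num)
{
  /* The triangle array maps corner index -> vertex index directly. */
  return triangles[corner_index_tri(face_num, vert_num)];
}

static int vertex_index_subd(const std::vector<int> &face_corners,
                             const SubdFace &face,
                             int vert_num)
{
  /* The subd corner array maps corner index -> vertex index. */
  return face_corners[corner_index_subd(face, vert_num)];
}
```

Every MikkTSpace callback in the restored code (`mikk_get_position`, `mikk_get_normal`, `mikk_set_tangent_space`, ...) routes through one of these two mappings.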
template<typename TypeInCycles, typename GetValueAtIndex> template<typename TypeInCycles, typename GetValueAtIndex>
@@ -242,15 +277,10 @@ static void fill_generic_attribute(BL::Mesh &b_mesh,
switch (b_domain) { switch (b_domain) {
case BL::Attribute::domain_CORNER: { case BL::Attribute::domain_CORNER: {
if (subdivision) { if (subdivision) {
const int polys_num = b_mesh.polygons.length(); for (BL::MeshPolygon &p : b_mesh.polygons) {
if (polys_num == 0) { int n = p.loop_total();
return; for (int i = 0; i < n; i++) {
} *data = get_value_at_index(p.loop_start() + i);
const MPoly *polys = static_cast<const MPoly *>(b_mesh.polygons[0].ptr.data);
for (int i = 0; i < polys_num; i++) {
const MPoly &b_poly = polys[i];
for (int j = 0; j < b_poly.totloop; j++) {
*data = get_value_at_index(b_poly.loopstart + j);
data++; data++;
} }
} }
@@ -267,32 +297,27 @@ static void fill_generic_attribute(BL::Mesh &b_mesh,
       break;
     }
     case BL::Attribute::domain_EDGE: {
-      const size_t edges_num = b_mesh.edges.length();
-      if (edges_num == 0) {
-        return;
-      }
       if constexpr (std::is_same_v<TypeInCycles, uchar4>) {
         /* uchar4 edge attributes do not exist, and averaging in place
          * would not work. */
         assert(0);
       }
       else {
-        const MEdge *edges = static_cast<const MEdge *>(b_mesh.edges[0].ptr.data);
-        const size_t verts_num = b_mesh.vertices.length();
-        vector<int> count(verts_num, 0);
         /* Average edge attributes at vertices. */
-        for (int i = 0; i < edges_num; i++) {
-          TypeInCycles value = get_value_at_index(i);
-          const MEdge &b_edge = edges[i];
-          data[b_edge.v1] += value;
-          data[b_edge.v2] += value;
-          count[b_edge.v1]++;
-          count[b_edge.v2]++;
+        const size_t num_verts = b_mesh.vertices.length();
+        vector<int> count(num_verts, 0);
+        for (BL::MeshEdge &e : b_mesh.edges) {
+          BL::Array<int, 2> vertices = e.vertices();
+          TypeInCycles value = get_value_at_index(e.index());
+          data[vertices[0]] += value;
+          data[vertices[1]] += value;
+          count[vertices[0]]++;
+          count[vertices[1]]++;
         }
-        for (size_t i = 0; i < verts_num; i++) {
+        for (size_t i = 0; i < num_verts; i++) {
           if (count[i] > 1) {
             data[i] /= (float)count[i];
           }
@@ -576,12 +601,6 @@ static void attr_create_uv_map(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh)
 static void attr_create_subd_uv_map(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh, bool subdivide_uvs)
 {
-  const int polys_num = b_mesh.polygons.length();
-  if (polys_num == 0) {
-    return;
-  }
-  const MPoly *polys = static_cast<const MPoly *>(b_mesh.polygons[0].ptr.data);
-
   if (!b_mesh.uv_layers.empty()) {
     BL::Mesh::uv_layers_iterator l;
     int i = 0;
@@ -615,10 +634,10 @@ static void attr_create_subd_uv_map(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh,
         float2 *fdata = uv_attr->data_float2();
-        for (int i = 0; i < polys_num; i++) {
-          const MPoly &b_poly = polys[i];
-          for (int j = 0; j < b_poly.totloop; j++) {
-            *(fdata++) = get_float2(l->data[b_poly.loopstart + j].uv());
+        for (BL::MeshPolygon &p : b_mesh.polygons) {
+          int n = p.loop_total();
+          for (int j = 0; j < n; j++) {
+            *(fdata++) = get_float2(l->data[p.loop_start() + j].uv());
           }
         }
       }
@@ -681,8 +700,6 @@ static void attr_create_pointiness(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh, b
   if (num_verts == 0) {
     return;
   }
-  const MVert *verts = static_cast<const MVert *>(b_mesh.vertices[0].ptr.data);
-
   /* STEP 1: Find out duplicated vertices and point duplicates to a single
    * original vertex.
    */
@@ -735,12 +752,10 @@ static void attr_create_pointiness(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh, b
    */
   vector<float3> vert_normal(num_verts, zero_float3());
   /* First we accumulate all vertex normals in the original index. */
-  const float(*b_vert_normals)[3] = static_cast<const float(*)[3]>(
-      b_mesh.vertex_normals[0].ptr.data);
   for (int vert_index = 0; vert_index < num_verts; ++vert_index) {
-    const float *b_vert_normal = b_vert_normals[vert_index];
+    const float3 normal = get_float3(b_mesh.vertices[vert_index].normal());
     const int orig_index = vert_orig_index[vert_index];
-    vert_normal[orig_index] += make_float3(b_vert_normal[0], b_vert_normal[1], b_vert_normal[2]);
+    vert_normal[orig_index] += normal;
   }
   /* Then we normalize the accumulated result and flush it to all duplicates
    * as well.
@@ -753,24 +768,18 @@ static void attr_create_pointiness(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh, b
   vector<int> counter(num_verts, 0);
   vector<float> raw_data(num_verts, 0.0f);
   vector<float3> edge_accum(num_verts, zero_float3());
+  BL::Mesh::edges_iterator e;
   EdgeMap visited_edges;
+  int edge_index = 0;
   memset(&counter[0], 0, sizeof(int) * counter.size());
-  const MEdge *edges = static_cast<MEdge *>(b_mesh.edges[0].ptr.data);
-  const int edges_num = b_mesh.edges.length();
-  for (int i = 0; i < edges_num; i++) {
-    const MEdge &b_edge = edges[i];
-    const int v0 = vert_orig_index[b_edge.v1];
-    const int v1 = vert_orig_index[b_edge.v2];
+  for (b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++edge_index) {
+    const int v0 = vert_orig_index[b_mesh.edges[edge_index].vertices()[0]],
+              v1 = vert_orig_index[b_mesh.edges[edge_index].vertices()[1]];
     if (visited_edges.exists(v0, v1)) {
       continue;
     }
     visited_edges.insert(v0, v1);
-    const MVert &b_vert_0 = verts[v0];
-    const MVert &b_vert_1 = verts[v1];
-    float3 co0 = make_float3(b_vert_0.co[0], b_vert_0.co[1], b_vert_0.co[2]);
-    float3 co1 = make_float3(b_vert_1.co[0], b_vert_1.co[1], b_vert_1.co[2]);
+    float3 co0 = get_float3(b_mesh.vertices[v0].co()), co1 = get_float3(b_mesh.vertices[v1].co());
     float3 edge = normalize(co1 - co0);
     edge_accum[v0] += edge;
     edge_accum[v1] += -edge;
@@ -798,11 +807,11 @@ static void attr_create_pointiness(Scene *scene, Mesh *mesh, BL::Mesh &b_mesh, b
   float *data = attr->data_float();
   memcpy(data, &raw_data[0], sizeof(float) * raw_data.size());
   memset(&counter[0], 0, sizeof(int) * counter.size());
+  edge_index = 0;
   visited_edges.clear();
-  for (int i = 0; i < edges_num; i++) {
-    const MEdge &b_edge = edges[i];
-    const int v0 = vert_orig_index[b_edge.v1];
-    const int v1 = vert_orig_index[b_edge.v2];
+  for (b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++edge_index) {
+    const int v0 = vert_orig_index[b_mesh.edges[edge_index].vertices()[0]],
+              v1 = vert_orig_index[b_mesh.edges[edge_index].vertices()[1]];
     if (visited_edges.exists(v0, v1)) {
       continue;
     }
@@ -841,7 +850,6 @@ static void attr_create_random_per_island(Scene *scene,
     return;
   }
-  const int polys_num = b_mesh.polygons.length();
   int number_of_vertices = b_mesh.vertices.length();
   if (number_of_vertices == 0) {
     return;
@@ -849,11 +857,8 @@ static void attr_create_random_per_island(Scene *scene,
   DisjointSet vertices_sets(number_of_vertices);
-  const MEdge *edges = static_cast<MEdge *>(b_mesh.edges[0].ptr.data);
-  const int edges_num = b_mesh.edges.length();
-  for (int i = 0; i < edges_num; i++) {
-    vertices_sets.join(edges[i].v1, edges[i].v2);
+  for (BL::MeshEdge &e : b_mesh.edges) {
+    vertices_sets.join(e.vertices()[0], e.vertices()[1]);
   }
   AttributeSet &attributes = (subdivision) ? mesh->subd_attributes : mesh->attributes;
@@ -866,37 +871,14 @@ static void attr_create_random_per_island(Scene *scene,
     }
   }
   else {
-    if (polys_num != 0) {
-      const MPoly *polys = static_cast<const MPoly *>(b_mesh.polygons[0].ptr.data);
-      const MLoop *loops = static_cast<const MLoop *>(b_mesh.loops[0].ptr.data);
-      for (int i = 0; i < polys_num; i++) {
-        const MPoly &b_poly = polys[i];
-        const MLoop &b_loop = loops[b_poly.loopstart];
-        data[i] = hash_uint_to_float(vertices_sets.find(b_loop.v));
-      }
+    for (BL::MeshPolygon &p : b_mesh.polygons) {
+      data[p.index()] = hash_uint_to_float(vertices_sets.find(p.vertices()[0]));
     }
   }
 }
 
 /* Create Mesh */
-static std::optional<BL::IntAttribute> find_material_index_attribute(BL::Mesh b_mesh)
-{
-  for (BL::Attribute &b_attribute : b_mesh.attributes) {
-    if (b_attribute.domain() != BL::Attribute::domain_FACE) {
-      continue;
-    }
-    if (b_attribute.data_type() != BL::Attribute::data_type_INT) {
-      continue;
-    }
-    if (b_attribute.name() != "material_index") {
-      continue;
-    }
-    return BL::IntAttribute{b_attribute};
-  }
-  return std::nullopt;
-}
 
 static void create_mesh(Scene *scene,
                         Mesh *mesh,
                         BL::Mesh &b_mesh,
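The random-per-island hunk above relies on a disjoint-set (union-find) structure: joining the two endpoints of every edge leaves one set per connected mesh island, so hashing the set root of any vertex of a face yields a value that is constant per island. A minimal union-find sketch of that idea (this is an illustrative implementation, not Cycles' actual `DisjointSet`):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Illustrative union-find: after joining the endpoints of every edge,
// find() returns the same root for all vertices of one connected island.
struct DisjointSetSketch {
  std::vector<int> parent;

  explicit DisjointSetSketch(int n) : parent(n) {
    std::iota(parent.begin(), parent.end(), 0);  // each vertex is its own set
  }

  int find(int x) {
    while (parent[x] != x) {
      parent[x] = parent[parent[x]];  // path halving keeps trees shallow
      x = parent[x];
    }
    return x;
  }

  void join(int a, int b) {
    parent[find(a)] = find(b);
  }
};
```

With two triangles that share no vertices, the sketch yields two distinct roots, mirroring how the diff maps each polygon to a per-island random value via `hash_uint_to_float(vertices_sets.find(...))`.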
@@ -908,7 +890,6 @@ static void create_mesh(Scene *scene,
 {
   /* count vertices and faces */
   int numverts = b_mesh.vertices.length();
-  const int polys_num = b_mesh.polygons.length();
   int numfaces = (!subdivision) ? b_mesh.loop_triangles.length() : b_mesh.polygons.length();
   int numtris = 0;
   int numcorners = 0;
@@ -921,17 +902,13 @@ static void create_mesh(Scene *scene,
     return;
   }
-  const MVert *verts = static_cast<const MVert *>(b_mesh.vertices[0].ptr.data);
-
   if (!subdivision) {
     numtris = numfaces;
   }
   else {
-    const MPoly *polys = static_cast<const MPoly *>(b_mesh.polygons[0].ptr.data);
-    for (int i = 0; i < polys_num; i++) {
-      const MPoly &b_poly = polys[i];
-      numngons += (b_poly.totloop == 4) ? 0 : 1;
-      numcorners += b_poly.totloop;
+    for (BL::MeshPolygon &p : b_mesh.polygons) {
+      numngons += (p.loop_total() == 4) ? 0 : 1;
+      numcorners += p.loop_total();
     }
   }
@@ -943,23 +920,17 @@ static void create_mesh(Scene *scene,
   mesh->reserve_mesh(numverts, numtris);
 
   /* create vertex coordinates and normals */
-  for (int i = 0; i < numverts; i++) {
-    const MVert &b_vert = verts[i];
-    mesh->add_vertex(make_float3(b_vert.co[0], b_vert.co[1], b_vert.co[2]));
-  }
+  BL::Mesh::vertices_iterator v;
+  for (b_mesh.vertices.begin(v); v != b_mesh.vertices.end(); ++v)
+    mesh->add_vertex(get_float3(v->co()));
 
   AttributeSet &attributes = (subdivision) ? mesh->subd_attributes : mesh->attributes;
   Attribute *attr_N = attributes.add(ATTR_STD_VERTEX_NORMAL);
   float3 *N = attr_N->data_float3();
-  if (subdivision || !use_loop_normals) {
-    const float(*b_vert_normals)[3] = static_cast<const float(*)[3]>(
-        b_mesh.vertex_normals[0].ptr.data);
-    for (int i = 0; i < numverts; i++) {
-      const float *b_vert_normal = b_vert_normals[i];
-      N[i] = make_float3(b_vert_normal[0], b_vert_normal[1], b_vert_normal[2]);
-    }
-  }
+  for (b_mesh.vertices.begin(v); v != b_mesh.vertices.end(); ++v, ++N)
+    *N = get_float3(v->normal());
+  N = attr_N->data_float3();
 
   /* create generated coordinates from undeformed coordinates */
   const bool need_default_tangent = (subdivision == false) && (b_mesh.uv_layers.empty()) &&
@@ -974,30 +945,19 @@ static void create_mesh(Scene *scene,
     float3 *generated = attr->data_float3();
     size_t i = 0;
 
-    BL::Mesh::vertices_iterator v;
     for (b_mesh.vertices.begin(v); v != b_mesh.vertices.end(); ++v) {
       generated[i++] = get_float3(v->undeformed_co()) * size - loc;
     }
   }
 
-  std::optional<BL::IntAttribute> material_indices = find_material_index_attribute(b_mesh);
-  auto get_material_index = [&](const int poly_index) -> int {
-    if (material_indices) {
-      return clamp(material_indices->data[poly_index].value(), 0, used_shaders.size() - 1);
-    }
-    return 0;
-  };
-
   /* create faces */
-  const MPoly *polys = static_cast<const MPoly *>(b_mesh.polygons[0].ptr.data);
   if (!subdivision) {
     for (BL::MeshLoopTriangle &t : b_mesh.loop_triangles) {
-      const int poly_index = t.polygon_index();
-      const MPoly &b_poly = polys[poly_index];
+      BL::MeshPolygon p = b_mesh.polygons[t.polygon_index()];
       int3 vi = get_int3(t.vertices());
-      int shader = get_material_index(poly_index);
-      bool smooth = (b_poly.flag & ME_SMOOTH) || use_loop_normals;
+      int shader = clamp(p.material_index(), 0, used_shaders.size() - 1);
+      bool smooth = p.use_smooth() || use_loop_normals;
 
       if (use_loop_normals) {
         BL::Array<float, 9> loop_normals = t.split_normals();
@@ -1017,19 +977,15 @@ static void create_mesh(Scene *scene,
   else {
     vector<int> vi;
-    const MLoop *loops = static_cast<const MLoop *>(b_mesh.loops[0].ptr.data);
-
-    for (int i = 0; i < numfaces; i++) {
-      const MPoly &b_poly = polys[i];
-      int n = b_poly.totloop;
-      int shader = get_material_index(i);
-      bool smooth = (b_poly.flag & ME_SMOOTH) || use_loop_normals;
+    for (BL::MeshPolygon &p : b_mesh.polygons) {
+      int n = p.loop_total();
+      int shader = clamp(p.material_index(), 0, used_shaders.size() - 1);
+      bool smooth = p.use_smooth() || use_loop_normals;
       vi.resize(n);
       for (int i = 0; i < n; i++) {
         /* NOTE: Autosmooth is already taken care about. */
-        vi[i] = loops[b_poly.loopstart + i].v;
+        vi[i] = b_mesh.loops[p.loop_start() + i].vertex_index();
       }
 
       /* create subd faces */
@@ -1082,33 +1038,27 @@ static void create_subd_mesh(Scene *scene,
   create_mesh(scene, mesh, b_mesh, used_shaders, need_motion, motion_scale, true, subdivide_uvs);
 
-  const int edges_num = b_mesh.edges.length();
-  if (edges_num != 0) {
-    size_t num_creases = 0;
-    const MEdge *edges = static_cast<MEdge *>(b_mesh.edges[0].ptr.data);
-    for (int i = 0; i < edges_num; i++) {
-      const MEdge &b_edge = edges[i];
-      if (b_edge.crease != 0) {
-        num_creases++;
-      }
-    }
+  /* export creases */
+  size_t num_creases = 0;
+  for (BL::MeshEdge &e : b_mesh.edges) {
+    if (e.crease() != 0.0f) {
+      num_creases++;
+    }
+  }
 
-    mesh->reserve_subd_creases(num_creases);
+  mesh->reserve_subd_creases(num_creases);
 
-    for (int i = 0; i < edges_num; i++) {
-      const MEdge &b_edge = edges[i];
-      if (b_edge.crease != 0) {
-        mesh->add_edge_crease(b_edge.v1, b_edge.v2, float(b_edge.crease) / 255.0f);
-      }
-    }
-  }
+  for (BL::MeshEdge &e : b_mesh.edges) {
+    if (e.crease() != 0.0f) {
+      mesh->add_edge_crease(e.vertices()[0], e.vertices()[1], e.crease());
+    }
+  }
 
   for (BL::MeshVertexCreaseLayer &c : b_mesh.vertex_creases) {
     for (int i = 0; i < c.data.length(); ++i) {
       if (c.data[i].value() != 0.0f) {
        mesh->add_vertex_crease(i, c.data[i].value());
      }
     }
   }
 }
@@ -1229,12 +1179,6 @@ void BlenderSync::sync_mesh_motion(BL::Depsgraph b_depsgraph,
   /* TODO(sergey): Perform preliminary check for number of vertices. */
   if (b_mesh) {
-    const int b_verts_num = b_mesh.vertices.length();
-    if (b_verts_num == 0) {
-      free_object_to_mesh(b_data, b_ob_info, b_mesh);
-      return;
-    }
-
     /* Export deformed coordinates. */
     /* Find attributes. */
     Attribute *attr_mP = mesh->attributes.find(ATTR_STD_MOTION_VERTEX_POSITION);
@@ -1252,30 +1196,22 @@ void BlenderSync::sync_mesh_motion(BL::Depsgraph b_depsgraph,
     /* Load vertex data from mesh. */
     float3 *mP = attr_mP->data_float3() + motion_step * numverts;
     float3 *mN = (attr_mN) ? attr_mN->data_float3() + motion_step * numverts : NULL;
-    const MVert *verts = static_cast<const MVert *>(b_mesh.vertices[0].ptr.data);
 
     /* NOTE: We don't copy more that existing amount of vertices to prevent
      * possible memory corruption.
      */
-    for (int i = 0; i < std::min<size_t>(b_verts_num, numverts); i++) {
-      const MVert &b_vert = verts[i];
-      mP[i] = make_float3(b_vert.co[0], b_vert.co[1], b_vert.co[2]);
-    }
-    if (mN) {
-      const float(*b_vert_normals)[3] = static_cast<const float(*)[3]>(
-          b_mesh.vertex_normals[0].ptr.data);
-      for (int i = 0; i < std::min<size_t>(b_verts_num, numverts); i++) {
-        const float *b_vert_normal = b_vert_normals[i];
-        mN[i] = make_float3(b_vert_normal[0], b_vert_normal[1], b_vert_normal[2]);
-      }
+    BL::Mesh::vertices_iterator v;
+    int i = 0;
+    for (b_mesh.vertices.begin(v); v != b_mesh.vertices.end() && i < numverts; ++v, ++i) {
+      mP[i] = get_float3(v->co());
+      if (mN)
+        mN[i] = get_float3(v->normal());
     }
 
     if (new_attribute) {
       /* In case of new attribute, we verify if there really was any motion. */
-      if (b_verts_num != numverts ||
+      if (b_mesh.vertices.length() != numverts ||
           memcmp(mP, &mesh->get_verts()[0], sizeof(float3) * numverts) == 0) {
         /* no motion, remove attributes again */
-        if (b_verts_num != numverts) {
+        if (b_mesh.vertices.length() != numverts) {
           VLOG_WARNING << "Topology differs, disabling motion blur for object " << ob_name;
         }
         else {
@@ -1299,7 +1235,7 @@ void BlenderSync::sync_mesh_motion(BL::Depsgraph b_depsgraph,
       }
     }
     else {
-      if (b_verts_num != numverts) {
+      if (b_mesh.vertices.length() != numverts) {
        VLOG_WARNING << "Topology differs, discarding motion blur for object " << ob_name
                      << " at time " << motion_step;
         memcpy(mP, &mesh->get_verts()[0], sizeof(float3) * numverts);
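The motion-blur path above verifies a freshly written motion step by comparing it byte-for-byte against the rest-pose vertices: an identical buffer means there was no motion and the new attribute can be dropped, while a vertex-count mismatch means the topology changed and motion blur is disabled. A sketch of that check, using a plain stand-in for Cycles' `float3` (hypothetical names, not the Cycles API):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Plain stand-in for Cycles' float3; no padding, so memcmp is byte-exact.
struct Float3Sketch {
  float x, y, z;
};

// Returns true if the motion step actually differs from the rest positions.
// A size mismatch is treated as "topology differs" (caller would disable
// motion blur rather than treat it as motion).
static bool motion_step_differs(const std::vector<Float3Sketch> &step,
                                const std::vector<Float3Sketch> &rest)
{
  if (step.size() != rest.size()) {
    return false;
  }
  return std::memcmp(step.data(), rest.data(), sizeof(Float3Sketch) * rest.size()) != 0;
}
```

Comparing raw bytes is cheap and exact here because both buffers were produced from the same source values; any tolerance-based comparison would risk keeping redundant motion steps.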


@@ -66,6 +66,12 @@ bool BlenderSync::object_is_geometry(BObjectInfo &b_ob_info)
     return true;
   }
 
+  /* Other object types that are not meshes but evaluate to meshes are presented to render engines
+   * as separate instance objects. Metaballs have not been affected by that change yet. */
+  if (type == BL::Object::type_META) {
+    return true;
+  }
+
   return b_ob_data.is_a(&RNA_Mesh);
 }