UI: Region polling support #105088

Merged
Julian Eisel merged 39 commits from JulianEisel/blender:temp-region-poll into main 2023-04-05 15:30:46 +02:00
171 changed files with 22113 additions and 2209 deletions
Showing only changes of commit ca8fa2f7d6


@ -91,3 +91,7 @@ endif()
if(WITH_COMPOSITOR_CPU)
add_subdirectory(smaa_areatex)
endif()
if(WITH_VULKAN_BACKEND)
add_subdirectory(vulkan_memory_allocator)
endif()


@ -0,0 +1,24 @@
# SPDX-License-Identifier: GPL-2.0-or-later
# Copyright 2022 Blender Foundation. All rights reserved.
set(INC
.
)
set(INC_SYS
${VULKAN_INCLUDE_DIRS}
)
set(SRC
vk_mem_alloc_impl.cc
vk_mem_alloc.h
)
blender_add_lib(extern_vulkan_memory_allocator "${SRC}" "${INC}" "${INC_SYS}" "${LIB}")
if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_C_COMPILER_ID MATCHES "Clang")
target_compile_options(extern_vulkan_memory_allocator
PRIVATE "-Wno-nullability-completeness"
)
endif()


@ -0,0 +1,19 @@
Copyright (c) 2017-2022 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@ -0,0 +1,5 @@
Project: VulkanMemoryAllocator
URL: https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator
License: MIT
Upstream version: a6bfc23
Local modifications: None

extern/vulkan_memory_allocator/README.md (new vendored file, 175 lines)

@ -0,0 +1,175 @@
# Vulkan Memory Allocator
Easy-to-integrate Vulkan memory allocation library.
**Documentation:** Browse online: [Vulkan Memory Allocator](https://gpuopen-librariesandsdks.github.io/VulkanMemoryAllocator/html/) (generated from Doxygen-style comments in [include/vk_mem_alloc.h](include/vk_mem_alloc.h))
**License:** MIT. See [LICENSE.txt](LICENSE.txt)
**Changelog:** See [CHANGELOG.md](CHANGELOG.md)
**Product page:** [Vulkan Memory Allocator on GPUOpen](https://gpuopen.com/gaming-product/vulkan-memory-allocator/)
**Build status:**
- Windows: [![Build status](https://ci.appveyor.com/api/projects/status/4vlcrb0emkaio2pn/branch/master?svg=true)](https://ci.appveyor.com/project/adam-sawicki-amd/vulkanmemoryallocator/branch/master)
- Linux: [![Build Status](https://app.travis-ci.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.svg?branch=master)](https://app.travis-ci.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator)
[![Average time to resolve an issue](http://isitmaintained.com/badge/resolution/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.svg)](http://isitmaintained.com/project/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator "Average time to resolve an issue")
# Problem
Memory allocation and resource (buffer and image) creation in Vulkan is difficult compared to older graphics APIs like D3D11 or OpenGL, for several reasons:
- It requires a lot of boilerplate code, just like everything else in Vulkan, because it is a low-level and high-performance API.
- There is an additional level of indirection: `VkDeviceMemory` is allocated separately from the `VkBuffer`/`VkImage` that uses it, and the two must be bound together.
- The driver must be queried for the supported memory heaps and memory types, and different GPU vendors provide different sets of them.
- It is recommended to allocate bigger chunks of memory and assign parts of them to particular resources, as there is a limit on the maximum number of memory blocks that can be allocated.
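To make that boilerplate concrete, here is a minimal sketch of the manual path that such a library replaces (error handling omitted). The `findMemoryType` helper is a hypothetical illustration; the `vk*` calls are standard Vulkan:

```cpp
#include <vulkan/vulkan.h>

/* Hypothetical helper: pick a memory type index that satisfies the buffer's
 * requirements and the requested property flags. */
static uint32_t findMemoryType(VkPhysicalDevice physicalDevice,
                               uint32_t typeBits,
                               VkMemoryPropertyFlags required)
{
  VkPhysicalDeviceMemoryProperties props;
  vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);
  for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
    if ((typeBits & (1u << i)) && (props.memoryTypes[i].propertyFlags & required) == required) {
      return i;
    }
  }
  return UINT32_MAX; /* No suitable memory type found. */
}

static void createBufferManually(VkPhysicalDevice physicalDevice, VkDevice device)
{
  VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  bufferInfo.size = 65536;
  bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
  VkBuffer buffer;
  vkCreateBuffer(device, &bufferInfo, nullptr, &buffer);

  /* The extra indirection: the buffer comes with no memory. Query its
   * requirements, then allocate and bind memory separately. */
  VkMemoryRequirements reqs;
  vkGetBufferMemoryRequirements(device, buffer, &reqs);

  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
  allocInfo.allocationSize = reqs.size;
  allocInfo.memoryTypeIndex = findMemoryType(
      physicalDevice, reqs.memoryTypeBits, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

  /* A real application would sub-allocate from large blocks instead of calling
   * vkAllocateMemory once per buffer, since the allocation count is limited. */
  VkDeviceMemory memory;
  vkAllocateMemory(device, &allocInfo, nullptr, &memory);
  vkBindBufferMemory(device, buffer, memory, 0);
}
```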
# Features
This library helps game developers manage memory allocations and resource creation by offering higher-level functions:
1. Functions that help to choose the correct and optimal memory type based on the intended usage of the memory.
   - Required or preferred traits of the memory are expressed using a higher-level description than raw Vulkan flags.
2. Functions that allocate memory blocks, then reserve and return parts of them (`VkDeviceMemory` + offset + size) to the user.
   - The library keeps track of allocated memory blocks and of the used and unused ranges inside them, finds the best matching unused ranges for new allocations, and respects all the rules of alignment and buffer/image granularity.
3. Functions that can create an image/buffer, allocate memory for it and bind them together - all in one call.
Additional features:
- Well documented: descriptions of all functions and structures are provided, along with chapters that contain a general description and example code.
- Thread safety: The library is designed to be used in multithreaded code. Access to a single device memory block referred to by different buffers and textures (binding, mapping) is synchronized internally. Memory mapping is reference-counted.
- Configuration: Fill optional members of the `VmaAllocatorCreateInfo` structure to provide a custom CPU memory allocator, pointers to Vulkan functions, and other parameters (see the allocator sketch after this list).
- Customization and integration with custom engines: Predefine appropriate macros to provide your own implementation of all external facilities used by the library, such as assert, mutex, and atomic.
- Support for memory mapping, reference-counted internally. Support for persistently mapped memory: Just allocate with appropriate flag and access the pointer to already mapped memory.
- Support for non-coherent memory. Functions that flush/invalidate memory. `nonCoherentAtomSize` is respected automatically.
- Support for resource aliasing (overlap).
- Support for sparse binding and sparse residency: Convenience functions that allocate or free multiple memory pages at once.
- Custom memory pools: Create a pool with desired parameters (e.g. fixed or limited maximum size) and allocate memory out of it.
- Linear allocator: Create a pool with linear algorithm and use it for much faster allocations and deallocations in free-at-once, stack, double stack, or ring buffer fashion.
- Support for Vulkan 1.0, 1.1, 1.2, 1.3.
- Support for extensions (and equivalent functionality included in new Vulkan versions):
- VK_KHR_dedicated_allocation: Just enable it and it will be used automatically by the library.
- VK_KHR_buffer_device_address: Flag `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR` is automatically added to memory allocations where needed.
- VK_EXT_memory_budget: Used internally if available to query for current usage and budget. If not available, it falls back to an estimation based on memory heap sizes.
- VK_EXT_memory_priority: Set `priority` of allocations or custom pools and it will be set automatically using this extension.
- VK_AMD_device_coherent_memory
- Defragmentation of GPU and CPU memory: Let the library move data around to free some memory blocks and make your allocations better compacted.
- Statistics: Obtain brief or detailed statistics about the amount of memory used, unused, number of allocated blocks, number of allocations etc. - globally, per memory heap, and per memory type.
- Debug annotations: Associate custom `void* pUserData` and debug `char* pName` with each allocation.
- JSON dump: Obtain a string in JSON format with detailed map of internal state, including list of allocations, their string names, and gaps between them.
- Convert this JSON dump into a picture to visualize your memory. See [tools/GpuMemDumpVis](tools/GpuMemDumpVis/README.md).
- Debugging incorrect memory usage: Enable initialization of all allocated memory with a bit pattern to detect usage of uninitialized or freed memory. Enable validation of a magic number after every allocation to detect out-of-bounds memory corruption.
- Support for interoperability with OpenGL.
- Virtual allocator: Interface for using core allocation algorithm to allocate any custom data, e.g. pieces of one large buffer.
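As referenced in the configuration item above, a minimal sketch of creating the global `VmaAllocator` that the rest of this README assumes; the Vulkan handles are assumed to come from the application's usual initialization:

```cpp
#include "vk_mem_alloc.h"

/* Assumes `instance`, `physicalDevice`, and `device` were created by the
 * application's normal Vulkan initialization; error handling omitted. */
static VmaAllocator createAllocator(VkInstance instance,
                                    VkPhysicalDevice physicalDevice,
                                    VkDevice device)
{
  VmaAllocatorCreateInfo createInfo = {};
  createInfo.vulkanApiVersion = VK_API_VERSION_1_2;
  createInfo.instance = instance;
  createInfo.physicalDevice = physicalDevice;
  createInfo.device = device;

  VmaAllocator allocator = VK_NULL_HANDLE;
  vmaCreateAllocator(&createInfo, &allocator);
  return allocator;
}
```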
# Prerequisites
- Self-contained C++ library in a single header file. No external dependencies other than the standard C and C++ libraries and, of course, Vulkan. Some C++14 features are used; STL containers, RTTI, and C++ exceptions are not.
- Public interface in C, following the same conventions as the Vulkan API. Implementation in C++.
- Error handling implemented by returning `VkResult` error codes, the same way as in Vulkan.
- Interface documented using Doxygen-style comments.
- Platform-independent, but developed and tested on Windows using Visual Studio. Continuous integration is set up for Windows and Linux. Also used on Android, macOS, and other platforms.
# Example
Basic usage of this library is very simple; advanced features are optional. After you have created a global `VmaAllocator` object, the complete code needed to create a buffer may look like this:
```cpp
VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufferInfo.size = 65536;
bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocInfo = {};
allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
```
With this one function call:
1. `VkBuffer` is created.
2. A `VkDeviceMemory` block is allocated, if needed.
3. An unused region of the memory block is bound to this buffer.
`VmaAllocation` is an object that represents the memory assigned to this buffer. It can be queried for parameters like the `VkDeviceMemory` handle and offset, as shown below.
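To illustrate that last point, a short sketch of querying the allocation and then releasing both objects, continuing the snippet above; `vmaGetAllocationInfo` and `vmaDestroyBuffer` are the library calls provided for this:

```cpp
/* Query where the buffer actually lives. */
VmaAllocationInfo allocDetails;
vmaGetAllocationInfo(allocator, allocation, &allocDetails);
/* allocDetails.deviceMemory: the VkDeviceMemory block backing the allocation.
 * allocDetails.offset: offset of the allocation within that block.
 * allocDetails.size: size of the allocation in bytes. */

/* One call destroys the buffer and returns its memory to the allocator. */
vmaDestroyBuffer(allocator, buffer, allocation);
```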
# How to build
On Windows it is recommended to use [CMake UI](https://cmake.org/runningcmake/). Alternatively, you can generate a Visual Studio project using CMake from the command line: `cmake -B./build/ -DCMAKE_BUILD_TYPE=Debug -G "Visual Studio 16 2019" -A x64 ./`
On Linux:
```
mkdir build
cd build
cmake ..
make
```
The following targets are available:
| Target | Description | CMake option | Default setting |
| ------------- | ------------- | ------------- | ------------- |
| VmaSample | VMA sample application | `VMA_BUILD_SAMPLE` | `OFF` |
| VmaBuildSampleShaders | Shaders for VmaSample | `VMA_BUILD_SAMPLE_SHADERS` | `OFF` |
Please note that while the VulkanMemoryAllocator library is supported on platforms other than Windows, VmaSample is not.
These CMake options are available:
| CMake option | Description | Default setting |
| ------------- | ------------- | ------------- |
| `VMA_RECORDING_ENABLED` | Enable VMA memory recording for debugging | `OFF` |
| `VMA_USE_STL_CONTAINERS` | Use C++ STL containers instead of VMA's containers | `OFF` |
| `VMA_STATIC_VULKAN_FUNCTIONS` | Link statically with Vulkan API | `OFF` |
| `VMA_DYNAMIC_VULKAN_FUNCTIONS` | Fetch pointers to Vulkan functions internally (no static linking) | `ON` |
| `VMA_DEBUG_ALWAYS_DEDICATED_MEMORY` | Every allocation will have its own memory block | `OFF` |
| `VMA_DEBUG_INITIALIZE_ALLOCATIONS` | Automatically fill new allocations and destroyed allocations with some bit pattern | `OFF` |
| `VMA_DEBUG_GLOBAL_MUTEX` | Enable single mutex protecting all entry calls to the library | `OFF` |
| `VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT` | Never exceed [VkPhysicalDeviceLimits::maxMemoryAllocationCount](https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#limits-maxMemoryAllocationCount) and return error | `OFF` |
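Several of these options correspond to preprocessor macros honored by `vk_mem_alloc.h`, so a project that vendors the header (as this patch does with `vk_mem_alloc_impl.cc`) can also configure them directly in the implementation file. A sketch under that assumption, with purely illustrative values:

```cpp
/* Configure VMA via macros before compiling the implementation; these mirror
 * the CMake options above and the values shown here are only illustrative. */
#define VMA_STATIC_VULKAN_FUNCTIONS 0      /* Do not link Vulkan statically. */
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 1     /* Fetch Vulkan function pointers at runtime. */
#define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1 /* Fill new/freed memory with a bit pattern. */
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.h"
```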
# Binaries
The release comes with a precompiled binary executable for the "VulkanSample" application, which contains the test suite. It is compiled using Visual Studio 2019, so it requires the appropriate runtime libraries to work, including "MSVCP140.dll", "VCRUNTIME140.dll", and "VCRUNTIME140_1.dll". If the launch fails with an error message about those files being missing, please download and install the [Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads), "x64" version.
# Read more
See **[Documentation](https://gpuopen-librariesandsdks.github.io/VulkanMemoryAllocator/html/)**.
# Software using this library
- **[X-Plane](https://x-plane.com/)**
- **[Detroit: Become Human](https://gpuopen.com/learn/porting-detroit-3/)**
- **[Vulkan Samples](https://github.com/LunarG/VulkanSamples)** - official Khronos Vulkan samples. License: Apache-style.
- **[Anvil](https://github.com/GPUOpen-LibrariesAndSDKs/Anvil)** - cross-platform framework for Vulkan. License: MIT.
- **[Filament](https://github.com/google/filament)** - physically based rendering engine for Android, Windows, Linux and macOS, from Google. Apache License 2.0.
- **[Atypical Games - proprietary game engine](https://developer.samsung.com/galaxy-gamedev/gamedev-blog/infinitejet.html)**
- **[Flax Engine](https://flaxengine.com/)**
- **[Godot Engine](https://github.com/godotengine/godot/)** - multi-platform 2D and 3D game engine. License: MIT.
- **[Lightweight Java Game Library (LWJGL)](https://www.lwjgl.org/)** - includes binding of the library for Java. License: BSD.
- **[PowerVR SDK](https://github.com/powervr-graphics/Native_SDK)** - C++ cross-platform 3D graphics SDK, from Imagination. License: MIT.
- **[Skia](https://github.com/google/skia)** - complete 2D graphic library for drawing Text, Geometries, and Images, from Google.
- **[The Forge](https://github.com/ConfettiFX/The-Forge)** - cross-platform rendering framework. Apache License 2.0.
- **[VK9](https://github.com/disks86/VK9)** - Direct3D 9 compatibility layer using Vulkan. License: Zlib.
- **[vkDOOM3](https://github.com/DustinHLand/vkDOOM3)** - Vulkan port of GPL DOOM 3 BFG Edition. License: GNU GPL.
- **[vkQuake2](https://github.com/kondrak/vkQuake2)** - vanilla Quake 2 with Vulkan support. License: GNU GPL.
- **[Vulkan Best Practice for Mobile Developers](https://github.com/ARM-software/vulkan_best_practice_for_mobile_developers)** from ARM. License: MIT.
- **[RPCS3](https://github.com/RPCS3/rpcs3)** - PlayStation 3 emulator/debugger. License: GNU GPLv2.
- **[PPSSPP](https://github.com/hrydgard/ppsspp)** - PlayStation Portable emulator/debugger. License: GNU GPLv2+.
[Many other projects on GitHub](https://github.com/search?q=AMD_VULKAN_MEMORY_ALLOCATOR_H&type=Code) also use it, as do some game development studios that use Vulkan in their games.
# See also
- **[D3D12 Memory Allocator](https://github.com/GPUOpen-LibrariesAndSDKs/D3D12MemoryAllocator)** - equivalent library for Direct3D 12. License: MIT.
- **[Awesome Vulkan](https://github.com/vinjn/awesome-vulkan)** - a curated list of awesome Vulkan libraries, debuggers and resources.
- **[VulkanMemoryAllocator-Hpp](https://github.com/malte-v/VulkanMemoryAllocator-Hpp)** - C++ binding for this library. License: CC0-1.0.
- **[PyVMA](https://github.com/realitix/pyvma)** - Python wrapper for this library. Author: Jean-Sébastien B. (@realitix). License: Apache 2.0.
- **[vk-mem](https://github.com/gwihlidal/vk-mem-rs)** - Rust binding for this library. Author: Graham Wihlidal. License: Apache 2.0 or MIT.
- **[Haskell bindings](https://hackage.haskell.org/package/VulkanMemoryAllocator)**, **[GitHub](https://github.com/expipiplus1/vulkan/tree/master/VulkanMemoryAllocator)** - Haskell bindings for this library. Author: Ellie Hermaszewska (@expipiplus1). License: BSD-3-Clause.
- **[vma_sample_sdl](https://github.com/rextimmy/vma_sample_sdl)** - SDL port of the sample app of this library (with the goal of running it on multiple platforms, including macOS). Author: @rextimmy. License: MIT.
- **[vulkan-malloc](https://github.com/dylanede/vulkan-malloc)** - Vulkan memory allocation library for Rust. Based on version 1 of this library. Author: Dylan Ede (@dylanede). License: MIT / Apache 2.0.

File diff suppressed because it is too large.


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
#ifdef __APPLE__
# include <MoltenVK/vk_mvk_moltenvk.h>
#else
# include <vulkan/vulkan.h>
#endif
#define VMA_IMPLEMENTATION
#include "vk_mem_alloc.h"


@ -91,7 +91,7 @@ class AddPresetPerformance(AddPresetBase, Operator):
preset_menu = "CYCLES_PT_performance_presets"
preset_defines = [
"render = bpy.context.scene.render"
"render = bpy.context.scene.render",
"cycles = bpy.context.scene.cycles"
]


@ -154,8 +154,9 @@ def use_mnee(context):
# The MNEE kernel doesn't compile on macOS < 13.
if use_metal(context):
import platform
v, _, _ = platform.mac_ver()
if float(v) < 13.0:
version, _, _ = platform.mac_ver()
major_version = version.split(".")[0]
if int(major_version) < 13:
return False
return True


@ -478,6 +478,7 @@ static PyObject *osl_update_node_func(PyObject * /*self*/, PyObject *args)
/* Read metadata. */
bool is_bool_param = false;
bool hide_value = !param->validdefault;
ustring param_label = param->name;
for (const OSL::OSLQuery::Parameter &metadata : param->metadata) {
@ -487,6 +488,9 @@ static PyObject *osl_update_node_func(PyObject * /*self*/, PyObject *args)
if (metadata.sdefault[0] == "boolean" || metadata.sdefault[0] == "checkBox") {
is_bool_param = true;
}
else if (metadata.sdefault[0] == "null") {
hide_value = true;
}
}
else if (metadata.name == "label") {
/* Socket label. */
@ -596,6 +600,9 @@ static PyObject *osl_update_node_func(PyObject * /*self*/, PyObject *args)
if (b_sock.name() != param_label) {
b_sock.name(param_label.string());
}
if (b_sock.hide_value() != hide_value) {
b_sock.hide_value(hide_value);
}
used_sockets.insert(b_sock.ptr.data);
found_existing = true;
}
@ -635,6 +642,8 @@ static PyObject *osl_update_node_func(PyObject * /*self*/, PyObject *args)
set_boolean(b_sock.ptr, "default_value", default_boolean);
}
b_sock.hide_value(hide_value);
used_sockets.insert(b_sock.ptr.data);
}
}


@ -66,7 +66,9 @@ struct SocketType {
LINK_NORMAL = (1 << 8),
LINK_POSITION = (1 << 9),
LINK_TANGENT = (1 << 10),
DEFAULT_LINK_MASK = (1 << 4) | (1 << 5) | (1 << 6) | (1 << 7) | (1 << 8) | (1 << 9) | (1 << 10)
LINK_OSL_INITIALIZER = (1 << 11),
DEFAULT_LINK_MASK = (1 << 4) | (1 << 5) | (1 << 6) | (1 << 7) | (1 << 8) | (1 << 9) |
(1 << 10) | (1 << 11)
};
ustring name;


@ -552,6 +552,7 @@ OSLNode *OSLShaderManager::osl_node(ShaderGraph *graph,
SocketType::Type socket_type;
/* Read type and default value. */
if (param->isclosure) {
socket_type = SocketType::CLOSURE;
}
@ -606,7 +607,21 @@ OSLNode *OSLShaderManager::osl_node(ShaderGraph *graph,
node->add_output(param->name, socket_type);
}
else {
node->add_input(param->name, socket_type);
/* Detect if we should leave parameter initialization to OSL, either through
* a non-constant default or widget metadata. */
int socket_flags = 0;
if (!param->validdefault) {
socket_flags |= SocketType::LINK_OSL_INITIALIZER;
}
for (const OSL::OSLQuery::Parameter &metadata : param->metadata) {
if (metadata.type == TypeDesc::STRING) {
if (metadata.name == "widget" && metadata.sdefault[0] == "null") {
socket_flags |= SocketType::LINK_OSL_INITIALIZER;
}
}
}
node->add_input(param->name, socket_type, socket_flags);
}
}
@ -731,8 +746,12 @@ void OSLCompiler::add(ShaderNode *node, const char *name, bool isfilepath)
foreach (ShaderInput *input, node->inputs) {
if (!input->link) {
/* checks to untangle graphs */
if (node_skip_input(node, input))
if (node_skip_input(node, input)) {
continue;
}
if (input->flags() & SocketType::LINK_OSL_INITIALIZER) {
continue;
}
string param_name = compatible_name(node, input);
const SocketType &socket = input->socket_type;


@ -485,6 +485,8 @@ void Scene::update_kernel_features()
return;
}
thread_scoped_lock scene_lock(mutex);
/* These features are not tweaked as often as shaders,
* so selective handling could be done for the viewport as well. */
uint kernel_features = shader_manager->get_kernel_features(this);
@ -571,9 +573,6 @@ bool Scene::update(Progress &progress)
return false;
}
/* Load render kernels, before device update where we upload data to the GPU. */
load_kernels(progress, false);
/* Upload scene data to the GPU. */
progress.set_status("Updating Scene");
MEM_GUARDED_CALL(&progress, device_update, device, progress);
@ -613,13 +612,8 @@ static void log_kernel_features(const uint features)
<< "\n";
}
bool Scene::load_kernels(Progress &progress, bool lock_scene)
bool Scene::load_kernels(Progress &progress)
{
thread_scoped_lock scene_lock;
if (lock_scene) {
scene_lock = thread_scoped_lock(mutex);
}
update_kernel_features();
const uint kernel_features = dscene.data.kernel_features;


@ -270,6 +270,7 @@ class Scene : public NodeOwner {
void enable_update_stats();
bool load_kernels(Progress &progress);
bool update(Progress &progress);
bool has_shadow_catcher();
@ -333,7 +334,6 @@ class Scene : public NodeOwner {
uint loaded_kernel_features;
void update_kernel_features();
bool load_kernels(Progress &progress, bool lock_scene = true);
bool has_shadow_catcher_ = false;
bool shadow_catcher_modified_ = true;


@ -7257,12 +7257,12 @@ char *OSLNode::input_default_value()
return (char *)this + align_up(sizeof(OSLNode), 16) + inputs_size;
}
void OSLNode::add_input(ustring name, SocketType::Type socket_type)
void OSLNode::add_input(ustring name, SocketType::Type socket_type, const int flags)
{
char *memory = input_default_value();
size_t offset = memory - (char *)this;
const_cast<NodeType *>(type)->register_input(
name, name, socket_type, offset, memory, NULL, NULL, SocketType::LINKABLE);
name, name, socket_type, offset, memory, NULL, NULL, flags | SocketType::LINKABLE);
}
void OSLNode::add_output(ustring name, SocketType::Type socket_type)


@ -1525,7 +1525,7 @@ class OSLNode final : public ShaderNode {
ShaderNode *clone(ShaderGraph *graph) const;
char *input_default_value();
void add_input(ustring name, SocketType::Type type);
void add_input(ustring name, SocketType::Type type, const int flags = 0);
void add_output(ustring name, SocketType::Type type);
SHADER_NODE_NO_CLONE_CLASS(OSLNode)


@ -378,6 +378,18 @@ RenderWork Session::run_update_for_next_iteration()
const int width = max(1, buffer_params_.full_width / resolution);
const int height = max(1, buffer_params_.full_height / resolution);
{
/* Load render kernels, before device update where we upload data to the GPU.
* Do it outside of the scene mutex since the heavy part of the loading (i.e. kernel
* compilation) does not depend on the scene and some other functionality (like display
* driver) might be waiting on the scene mutex to synchronize display pass.
*
* The scene will lock itself for the short period if it needs to update kernel features. */
scene_lock.unlock();
scene->load_kernels(progress);
scene_lock.lock();
}
if (update_scene(width, height)) {
profiler.reset(scene->shaders.size(), scene->objects.size());
}


@ -179,13 +179,14 @@ class GHOST_SharedOpenGLResource {
}
if (m_is_initialized) {
if (m_shared.render_target
#if 1
#if 0 /* TODO: Causes an access violation since Blender 3.4 (a296b8f694d1). */
if (m_shared.render_target
# if 1
/* TODO: #wglDXUnregisterObjectNV() causes an access violation on AMD when the shared
* resource is a GL texture. Since there is currently no good alternative, just skip
* unregistering the shared resource. */
&& !m_use_gl_texture2d
#endif
# endif
) {
wglDXUnregisterObjectNV(m_shared.device, m_shared.render_target);
}
@ -199,6 +200,12 @@ class GHOST_SharedOpenGLResource {
else {
glDeleteRenderbuffers(1, &m_gl_render_target);
}
#else
glDeleteFramebuffers(1, &m_shared.fbo);
if (m_use_gl_texture2d) {
glDeleteTextures(1, &m_gl_render_target);
}
#endif
}
}

@ -1 +1 @@
Subproject commit fe221a8bc934385d9f302c46a5c7cbeacddafe3b
Subproject commit ef57e2c2c65933a68811d58b40ed62b775e9b4b0

@ -1 +1 @@
Subproject commit 5a818af95080cccf04dfa8317f0e966bff515c64
Subproject commit 946b62da3f9c93b4add8596aef836bf3a29ea27c

@ -1 +1 @@
Subproject commit c43c0b2bcf08c34d933c3b56f096c9a23c8eff68
Subproject commit 69b1305f4b74fbd7e847acc6a5566755b9803d78


@ -6211,8 +6211,14 @@ class VIEW3D_PT_shading_compositor(Panel):
def draw(self, context):
shading = context.space_data.shading
layout = self.layout
layout.prop(shading, "use_compositor")
import sys
is_macos = sys.platform == "darwin"
row = self.layout.row()
row.active = not is_macos
row.prop(shading, "use_compositor", expand=True)
if is_macos and shading.use_compositor != "DISABLED":
self.layout.label(text="Compositor not supported on MacOS.", icon="ERROR")
class VIEW3D_PT_gizmo_display(Panel):


@ -17,7 +17,7 @@
<rescap:Capability Name="runFullTrust" />
</Capabilities>
<Applications>
<Application Id="BLENDER" Executable="Blender\blender.exe" EntryPoint="Windows.FullTrustApplication">
<Application Id="BLENDER" Executable="Blender\blender-launcher.exe" EntryPoint="Windows.FullTrustApplication">
<uap:VisualElements
BackgroundColor="transparent"
DisplayName="Blender [VERSION]"
@ -49,9 +49,9 @@
</uap2:SupportedVerbs>
</uap3:FileTypeAssociation>
</uap3:Extension>
<uap3:Extension Category="windows.appExecutionAlias" Executable="Blender\blender.exe" EntryPoint="Windows.FullTrustApplication">
<uap3:Extension Category="windows.appExecutionAlias" Executable="Blender\blender-launcher.exe" EntryPoint="Windows.FullTrustApplication">
<uap3:AppExecutionAlias>
<desktop:ExecutionAlias Alias="blender.exe" />
<desktop:ExecutionAlias Alias="blender-launcher.exe" />
</uap3:AppExecutionAlias>
</uap3:Extension>
</Extensions>


@ -29,29 +29,10 @@ AssetRepresentation &AssetStorage::add_external_asset(StringRef name,
bool AssetStorage::remove_asset(AssetRepresentation &asset)
{
auto remove_if_contained_fn = [&asset](StorageT &storage) {
/* Create a "fake" unique_ptr to figure out the hash for the pointed to asset representation.
* The standard requires that this is the same for all unique_ptr's wrapping the same address.
*/
std::unique_ptr<AssetRepresentation> fake_asset_ptr{&asset};
const std::unique_ptr<AssetRepresentation> *real_asset_ptr = storage.lookup_key_ptr_as(
fake_asset_ptr);
/* Make sure the contained storage is not destructed. */
fake_asset_ptr.release();
if (!real_asset_ptr) {
return false;
}
storage.remove_contained(*real_asset_ptr);
return true;
};
if (remove_if_contained_fn(local_id_assets_)) {
if (local_id_assets_.remove_as(&asset)) {
return true;
}
return remove_if_contained_fn(external_assets_);
return external_assets_.remove_as(&asset);
}
void AssetStorage::remap_ids_and_remove_invalid(const IDRemapper &mappings)


@ -432,6 +432,15 @@ bool BKE_mesh_vertex_normals_are_dirty(const struct Mesh *mesh);
*/
bool BKE_mesh_poly_normals_are_dirty(const struct Mesh *mesh);
void BKE_mesh_calc_poly_normal(const struct MPoly *mpoly,
const struct MLoop *loopstart,
const struct MVert *mvarray,
float r_no[3]);
void BKE_mesh_calc_poly_normal_coords(const struct MPoly *mpoly,
const struct MLoop *loopstart,
const float (*vertex_coords)[3],
float r_no[3]);
/**
* Calculate face normals directly into a result array.
*
@ -588,10 +597,10 @@ void BKE_lnor_space_add_loop(MLoopNorSpaceArray *lnors_spacearr,
int ml_index,
void *bm_loop,
bool is_single);
void BKE_lnor_space_custom_data_to_normal(MLoopNorSpace *lnor_space,
void BKE_lnor_space_custom_data_to_normal(const MLoopNorSpace *lnor_space,
const short clnor_data[2],
float r_custom_lnor[3]);
void BKE_lnor_space_custom_normal_to_data(MLoopNorSpace *lnor_space,
void BKE_lnor_space_custom_normal_to_data(const MLoopNorSpace *lnor_space,
const float custom_lnor[3],
short r_clnor_data[2]);
@ -694,14 +703,6 @@ void BKE_mesh_set_custom_normals_from_verts(struct Mesh *mesh, float (*r_custom_
/* *** mesh_evaluate.cc *** */
void BKE_mesh_calc_poly_normal(const struct MPoly *mpoly,
const struct MLoop *loopstart,
const struct MVert *mvarray,
float r_no[3]);
void BKE_mesh_calc_poly_normal_coords(const struct MPoly *mpoly,
const struct MLoop *loopstart,
const float (*vertex_coords)[3],
float r_no[3]);
void BKE_mesh_calc_poly_center(const struct MPoly *mpoly,
const struct MLoop *loopstart,
const struct MVert *mvarray,


@ -171,9 +171,6 @@ typedef struct bNodeSocketType {
void (*interface_draw)(struct bContext *C, struct uiLayout *layout, struct PointerRNA *ptr);
void (*interface_draw_color)(struct bContext *C, struct PointerRNA *ptr, float *r_color);
void (*interface_register_properties)(struct bNodeTree *ntree,
struct bNodeSocket *interface_socket,
struct StructRNA *data_srna);
void (*interface_init_socket)(struct bNodeTree *ntree,
const struct bNodeSocket *interface_socket,
struct bNode *node,
@ -330,6 +327,11 @@ typedef struct bNodeType {
* responsibility of the caller. */
NodeGetCompositorShaderNodeFunction get_compositor_shader_node;
/* A message to display in the node header for unsupported realtime compositor nodes. The message
* is assumed to be static and thus requires no memory handling. This field is to be removed when
* all nodes are supported. */
const char *realtime_compositor_unsupported_message;
/* Build a multi-function for this node. */
NodeMultiFunctionBuildFunction build_multi_function;
@ -578,10 +580,6 @@ struct bNodeSocket *ntreeInsertSocketInterfaceFromSocket(struct bNodeTree *ntree
struct bNodeSocket *from_sock);
void ntreeRemoveSocketInterface(struct bNodeTree *ntree, struct bNodeSocket *sock);
struct StructRNA *ntreeInterfaceTypeGet(struct bNodeTree *ntree, bool create);
void ntreeInterfaceTypeFree(struct bNodeTree *ntree);
void ntreeInterfaceTypeUpdate(struct bNodeTree *ntree);
/** \} */
/* -------------------------------------------------------------------- */
@ -1337,12 +1335,6 @@ void BKE_nodetree_remove_layer_n(struct bNodeTree *ntree, struct Scene *scene, i
#define CMP_CHAN_RGB 1
#define CMP_CHAN_A 2
/* track position node, in custom1 */
#define CMP_TRACKPOS_ABSOLUTE 0
#define CMP_TRACKPOS_RELATIVE_START 1
#define CMP_TRACKPOS_RELATIVE_FRAME 2
#define CMP_TRACKPOS_ABSOLUTE_FRAME 3
/* Cryptomatte source. */
#define CMP_CRYPTOMATTE_SRC_RENDER 0
#define CMP_CRYPTOMATTE_SRC_IMAGE 1


@ -46,6 +46,30 @@ class bNodeTreeRuntime : NonCopyable, NonMovable {
*/
uint8_t runtime_flag = 0;
/** Flag to prevent re-entrant update calls. */
short is_updating = 0;
/** Generic temporary flag for recursion check (DFS/BFS). */
short done = 0;
/** Execution data.
*
* XXX It would be preferable to completely move this data out of the underlying node tree,
* so node tree execution could finally run independent of the tree itself.
* This would allow node trees to be merely linked by other data (materials, textures, etc.),
* as ID data is supposed to.
* Execution data is generated from the tree once at execution start and can then be used
* as long as necessary, even while the tree is being modified.
*/
struct bNodeTreeExec *execdata = nullptr;
/* Callbacks. */
void (*progress)(void *, float progress) = nullptr;
/** \warning may be called by different threads */
void (*stats_draw)(void *, const char *str) = nullptr;
bool (*test_break)(void *) = nullptr;
void (*update_draw)(void *) = nullptr;
void *tbh = nullptr, *prh = nullptr, *sdh = nullptr, *udh = nullptr;
/** Information about how inputs and outputs of the node group interact with fields. */
std::unique_ptr<nodes::FieldInferencingInterface> field_inferencing_interface;
@ -104,6 +128,19 @@ class bNodeSocketRuntime : NonCopyable, NonMovable {
/** #eNodeTreeChangedFlag. */
uint32_t changed_flag = 0;
/**
* The location of the sockets, in the view-space of the node editor.
* \note Only calculated when drawing.
*/
float locx = 0;
float locy = 0;
/* Runtime-only cache of the number of input links, for multi-input sockets. */
short total_inputs = 0;
/** Cached data from execution. */
void *cache = nullptr;
/** Only valid when #topology_cache_is_dirty is false. */
Vector<bNodeLink *> directly_linked_links;
Vector<bNodeSocket *> directly_linked_sockets;


@ -397,6 +397,8 @@ void BKE_pbvh_bounding_box(const PBVH *pbvh, float min[3], float max[3]);
*/
unsigned int **BKE_pbvh_grid_hidden(const PBVH *pbvh);
void BKE_pbvh_sync_visibility_from_verts(PBVH *pbvh, struct Mesh *me);
/**
* Returns the number of visible quads in the nodes' grids.
*/
@ -442,6 +444,7 @@ bool BKE_pbvh_bmesh_update_topology(PBVH *pbvh,
void BKE_pbvh_node_mark_update(PBVHNode *node);
void BKE_pbvh_node_mark_update_mask(PBVHNode *node);
void BKE_pbvh_node_mark_update_color(PBVHNode *node);
void BKE_pbvh_node_mark_update_face_sets(PBVHNode *node);
void BKE_pbvh_node_mark_update_visibility(PBVHNode *node);
void BKE_pbvh_node_mark_rebuild_draw(PBVHNode *node);
void BKE_pbvh_node_mark_redraw(PBVHNode *node);
@ -474,6 +477,11 @@ void BKE_pbvh_node_get_loops(PBVH *pbvh,
const int **r_loop_indices,
const struct MLoop **r_loops);
/* Get number of faces in the mesh; for PBVH_GRIDS the
* number of base mesh faces is returned.
*/
int BKE_pbvh_num_faces(const PBVH *pbvh);
void BKE_pbvh_node_get_BB(PBVHNode *node, float bb_min[3], float bb_max[3]);
void BKE_pbvh_node_get_original_BB(PBVHNode *node, float bb_min[3], float bb_max[3]);
@ -699,6 +707,7 @@ typedef struct PBVHFaceIter {
int prim_index_;
const struct SubdivCCG *subdiv_ccg_;
const struct BMesh *bm;
CCGKey subdiv_key_;
int last_face_index_;
} PBVHFaceIter;


@ -469,11 +469,12 @@ inline bool remove(void *owner, const AttributeIDRef &attribute_id)
return provider->try_delete(owner);
}
}
bool success = false;
for (const DynamicAttributesProvider *provider : providers.dynamic_attribute_providers()) {
success = provider->try_delete(owner, attribute_id) || success;
if (provider->try_delete(owner, attribute_id)) {
return true;
}
}
return success;
return false;
}
template<const ComponentAttributeProviders &providers>


@ -236,6 +236,10 @@ void BKE_bpath_missing_files_check(Main *bmain, ReportList *reports)
.flag = BKE_BPATH_FOREACH_PATH_ABSOLUTE | BKE_BPATH_FOREACH_PATH_SKIP_PACKED |
BKE_BPATH_FOREACH_PATH_RESOLVE_TOKEN | BKE_BPATH_TRAVERSE_SKIP_WEAK_REFERENCES,
.user_data = reports});
if (BLI_listbase_is_empty(&reports->list)) {
BKE_reportf(reports, RPT_INFO, "No missing files");
}
}
/** \} */


@ -654,7 +654,7 @@ Span<float3> CurvesGeometry::evaluated_positions() const
case CURVE_TYPE_NURBS: {
curves::nurbs::interpolate_to_evaluated(this->runtime->nurbs_basis_cache[curve_index],
nurbs_orders[curve_index],
nurbs_weights.slice(points),
nurbs_weights.slice_safe(points),
positions.slice(points),
evaluated_positions.slice(evaluated_points));
break;
@ -812,7 +812,7 @@ void CurvesGeometry::interpolate_to_evaluated(const int curve_index,
case CURVE_TYPE_NURBS:
curves::nurbs::interpolate_to_evaluated(this->runtime->nurbs_basis_cache[curve_index],
this->nurbs_orders()[curve_index],
this->nurbs_weights().slice(points),
this->nurbs_weights().slice_safe(points),
src,
dst);
return;
@ -853,7 +853,7 @@ void CurvesGeometry::interpolate_to_evaluated(const GSpan src, GMutableSpan dst)
case CURVE_TYPE_NURBS:
curves::nurbs::interpolate_to_evaluated(this->runtime->nurbs_basis_cache[curve_index],
nurbs_orders[curve_index],
nurbs_weights.slice(points),
nurbs_weights.slice_safe(points),
src.slice(points),
dst.slice(evaluated_points));
continue;
@ -1020,6 +1020,7 @@ bool CurvesGeometry::bounds_min_max(float3 &min, float3 &max) const
if (this->attributes().contains("radius")) {
const VArraySpan<float> radii = this->attributes().lookup<float>("radius");
Array<float> evaluated_radii(this->evaluated_points_num());
this->ensure_can_interpolate_to_evaluated();
this->interpolate_to_evaluated(radii, evaluated_radii.as_mutable_span());
r_bounds = *bounds::min_max_with_radii(positions, evaluated_radii.as_span());
}


@ -161,7 +161,8 @@ const GeometryComponent *GeometrySet::get_component_for_read(
bool GeometrySet::has(const GeometryComponentType component_type) const
{
return components_[component_type].has_value();
const GeometryComponentPtr &component = components_[component_type];
return component.has_value() && !component->is_empty();
}
void GeometrySet::remove(const GeometryComponentType component_type)


@ -38,113 +38,6 @@ using blender::VArray;
/** \name Polygon Calculations
* \{ */
/*
* COMPUTE POLY NORMAL
*
* Computes the normal of a planar
* polygon See Graphics Gems for
* computing newell normal.
*/
static void mesh_calc_ngon_normal(const MPoly *mpoly,
const MLoop *loopstart,
const MVert *mvert,
float normal[3])
{
const int nverts = mpoly->totloop;
const float *v_prev = mvert[loopstart[nverts - 1].v].co;
const float *v_curr;
zero_v3(normal);
/* Newell's Method */
for (int i = 0; i < nverts; i++) {
v_curr = mvert[loopstart[i].v].co;
add_newell_cross_v3_v3v3(normal, v_prev, v_curr);
v_prev = v_curr;
}
if (UNLIKELY(normalize_v3(normal) == 0.0f)) {
normal[2] = 1.0f; /* other axis set to 0.0 */
}
}
void BKE_mesh_calc_poly_normal(const MPoly *mpoly,
const MLoop *loopstart,
const MVert *mvarray,
float r_no[3])
{
if (mpoly->totloop > 4) {
mesh_calc_ngon_normal(mpoly, loopstart, mvarray, r_no);
}
else if (mpoly->totloop == 3) {
normal_tri_v3(
r_no, mvarray[loopstart[0].v].co, mvarray[loopstart[1].v].co, mvarray[loopstart[2].v].co);
}
else if (mpoly->totloop == 4) {
normal_quad_v3(r_no,
mvarray[loopstart[0].v].co,
mvarray[loopstart[1].v].co,
mvarray[loopstart[2].v].co,
mvarray[loopstart[3].v].co);
}
else { /* horrible, two sided face! */
r_no[0] = 0.0;
r_no[1] = 0.0;
r_no[2] = 1.0;
}
}
/* duplicate of function above _but_ takes coords rather than mverts */
static void mesh_calc_ngon_normal_coords(const MPoly *mpoly,
const MLoop *loopstart,
const float (*vertex_coords)[3],
float r_normal[3])
{
const int nverts = mpoly->totloop;
const float *v_prev = vertex_coords[loopstart[nverts - 1].v];
const float *v_curr;
zero_v3(r_normal);
/* Newell's Method */
for (int i = 0; i < nverts; i++) {
v_curr = vertex_coords[loopstart[i].v];
add_newell_cross_v3_v3v3(r_normal, v_prev, v_curr);
v_prev = v_curr;
}
if (UNLIKELY(normalize_v3(r_normal) == 0.0f)) {
r_normal[2] = 1.0f; /* other axis set to 0.0 */
}
}
void BKE_mesh_calc_poly_normal_coords(const MPoly *mpoly,
const MLoop *loopstart,
const float (*vertex_coords)[3],
float r_no[3])
{
if (mpoly->totloop > 4) {
mesh_calc_ngon_normal_coords(mpoly, loopstart, vertex_coords, r_no);
}
else if (mpoly->totloop == 3) {
normal_tri_v3(r_no,
vertex_coords[loopstart[0].v],
vertex_coords[loopstart[1].v],
vertex_coords[loopstart[2].v]);
}
else if (mpoly->totloop == 4) {
normal_quad_v3(r_no,
vertex_coords[loopstart[0].v],
vertex_coords[loopstart[1].v],
vertex_coords[loopstart[2].v],
vertex_coords[loopstart[3].v]);
}
else { /* horrible, two sided face! */
r_no[0] = 0.0;
r_no[1] = 0.0;
r_no[2] = 1.0;
}
}
static void mesh_calc_ngon_center(const MPoly *mpoly,
const MLoop *loopstart,
const MVert *mvert,


@ -153,6 +153,113 @@ bool BKE_mesh_poly_normals_are_dirty(const Mesh *mesh)
/** \name Mesh Normal Calculation (Polygons)
* \{ */
/*
* COMPUTE POLY NORMAL
*
* Computes the normal of a planar
* polygon See Graphics Gems for
* computing newell normal.
*/
static void mesh_calc_ngon_normal(const MPoly *mpoly,
const MLoop *loopstart,
const MVert *mvert,
float normal[3])
{
const int nverts = mpoly->totloop;
const float *v_prev = mvert[loopstart[nverts - 1].v].co;
const float *v_curr;
zero_v3(normal);
/* Newell's Method */
for (int i = 0; i < nverts; i++) {
v_curr = mvert[loopstart[i].v].co;
add_newell_cross_v3_v3v3(normal, v_prev, v_curr);
v_prev = v_curr;
}
if (UNLIKELY(normalize_v3(normal) == 0.0f)) {
normal[2] = 1.0f; /* other axis set to 0.0 */
}
}
void BKE_mesh_calc_poly_normal(const MPoly *mpoly,
const MLoop *loopstart,
const MVert *mvarray,
float r_no[3])
{
if (mpoly->totloop > 4) {
mesh_calc_ngon_normal(mpoly, loopstart, mvarray, r_no);
}
else if (mpoly->totloop == 3) {
normal_tri_v3(
r_no, mvarray[loopstart[0].v].co, mvarray[loopstart[1].v].co, mvarray[loopstart[2].v].co);
}
else if (mpoly->totloop == 4) {
normal_quad_v3(r_no,
mvarray[loopstart[0].v].co,
mvarray[loopstart[1].v].co,
mvarray[loopstart[2].v].co,
mvarray[loopstart[3].v].co);
}
else { /* horrible, two sided face! */
r_no[0] = 0.0;
r_no[1] = 0.0;
r_no[2] = 1.0;
}
}
/* duplicate of function above _but_ takes coords rather than mverts */
static void mesh_calc_ngon_normal_coords(const MPoly *mpoly,
const MLoop *loopstart,
const float (*vertex_coords)[3],
float r_normal[3])
{
const int nverts = mpoly->totloop;
const float *v_prev = vertex_coords[loopstart[nverts - 1].v];
const float *v_curr;
zero_v3(r_normal);
/* Newell's Method */
for (int i = 0; i < nverts; i++) {
v_curr = vertex_coords[loopstart[i].v];
add_newell_cross_v3_v3v3(r_normal, v_prev, v_curr);
v_prev = v_curr;
}
if (UNLIKELY(normalize_v3(r_normal) == 0.0f)) {
r_normal[2] = 1.0f; /* other axis set to 0.0 */
}
}
void BKE_mesh_calc_poly_normal_coords(const MPoly *mpoly,
const MLoop *loopstart,
const float (*vertex_coords)[3],
float r_no[3])
{
if (mpoly->totloop > 4) {
mesh_calc_ngon_normal_coords(mpoly, loopstart, vertex_coords, r_no);
}
else if (mpoly->totloop == 3) {
normal_tri_v3(r_no,
vertex_coords[loopstart[0].v],
vertex_coords[loopstart[1].v],
vertex_coords[loopstart[2].v]);
}
else if (mpoly->totloop == 4) {
normal_quad_v3(r_no,
vertex_coords[loopstart[0].v],
vertex_coords[loopstart[1].v],
vertex_coords[loopstart[2].v],
vertex_coords[loopstart[3].v]);
}
else { /* horrible, two sided face! */
r_no[0] = 0.0;
r_no[1] = 0.0;
r_no[2] = 1.0;
}
}
struct MeshCalcNormalsData_Poly {
const MVert *mvert;
const MLoop *mloop;
@ -620,7 +727,7 @@ MINLINE short unit_float_to_short(const float val)
return short(floorf(val * float(SHRT_MAX) + 0.5f));
}
void BKE_lnor_space_custom_data_to_normal(MLoopNorSpace *lnor_space,
void BKE_lnor_space_custom_data_to_normal(const MLoopNorSpace *lnor_space,
const short clnor_data[2],
float r_custom_lnor[3])
{
@ -654,7 +761,7 @@ void BKE_lnor_space_custom_data_to_normal(MLoopNorSpace *lnor_space,
}
}
void BKE_lnor_space_custom_normal_to_data(MLoopNorSpace *lnor_space,
void BKE_lnor_space_custom_normal_to_data(const MLoopNorSpace *lnor_space,
const float custom_lnor[3],
short r_clnor_data[2])
{


@ -138,7 +138,7 @@ static void ntree_copy_data(Main * /*bmain*/, ID *id_dst, const ID *id_src, cons
ntree_dst->runtime = MEM_new<bNodeTreeRuntime>(__func__);
/* in case a running nodetree is copied */
ntree_dst->execdata = nullptr;
ntree_dst->runtime->execdata = nullptr;
BLI_listbase_clear(&ntree_dst->nodes);
BLI_listbase_clear(&ntree_dst->links);
@ -203,8 +203,6 @@ static void ntree_copy_data(Main * /*bmain*/, ID *id_dst, const ID *id_src, cons
new_node->parent = node_map.lookup(new_node->parent);
}
}
/* node tree will generate its own interface type */
ntree_dst->interface_type = nullptr;
if (ntree_src->runtime->field_inferencing_interface) {
ntree_dst->runtime->field_inferencing_interface = std::make_unique<FieldInferencingInterface>(
@ -227,14 +225,14 @@ static void ntree_free_data(ID *id)
* This should be removed when old tree types no longer require it.
* Currently the execution data for texture nodes remains in the tree
* after execution, until the node tree is updated or freed. */
if (ntree->execdata) {
if (ntree->runtime->execdata) {
switch (ntree->type) {
case NTREE_SHADER:
ntreeShaderEndExecTree(ntree->execdata);
ntreeShaderEndExecTree(ntree->runtime->execdata);
break;
case NTREE_TEXTURE:
ntreeTexEndExecTree(ntree->execdata);
ntree->execdata = nullptr;
ntreeTexEndExecTree(ntree->runtime->execdata);
ntree->runtime->execdata = nullptr;
break;
}
}
@ -242,9 +240,6 @@ static void ntree_free_data(ID *id)
/* XXX not nice, but needed to free localized node groups properly */
free_localized_node_groups(ntree);
/* Unregister associated RNA types. */
ntreeInterfaceTypeFree(ntree);
BLI_freelistN(&ntree->links);
LISTBASE_FOREACH_MUTABLE (bNode *, node, &ntree->nodes) {
@ -620,11 +615,8 @@ static void ntree_blend_write(BlendWriter *writer, ID *id, const void *id_addres
bNodeTree *ntree = (bNodeTree *)id;
/* Clean up, important in undo case to reduce false detection of changed datablocks. */
ntree->is_updating = false;
ntree->typeinfo = nullptr;
ntree->interface_type = nullptr;
ntree->progress = nullptr;
ntree->execdata = nullptr;
ntree->runtime->execdata = nullptr;
BLO_write_id_struct(writer, bNodeTree, id_address, &ntree->id);
@ -641,8 +633,6 @@ static void direct_link_node_socket(BlendDataReader *reader, bNodeSocket *sock)
BLO_read_data_address(reader, &sock->storage);
BLO_read_data_address(reader, &sock->default_value);
BLO_read_data_address(reader, &sock->default_attribute_name);
sock->total_inputs = 0; /* Clear runtime data set before drawing. */
sock->cache = nullptr;
sock->runtime = MEM_new<bNodeSocketRuntime>(__func__);
}
@ -676,12 +666,8 @@ void ntreeBlendReadData(BlendDataReader *reader, ID *owner_id, bNodeTree *ntree)
ntree->owner_id = owner_id;
/* NOTE: writing and reading goes in sync, for speed. */
ntree->is_updating = false;
ntree->typeinfo = nullptr;
ntree->interface_type = nullptr;
ntree->progress = nullptr;
ntree->execdata = nullptr;
ntree->runtime = MEM_new<bNodeTreeRuntime>(__func__);
BKE_ntree_update_tag_missing_runtime_data(ntree);
@ -2262,7 +2248,7 @@ static void node_socket_copy(bNodeSocket *sock_dst, const bNodeSocket *sock_src,
sock_dst->stack_index = 0;
/* XXX some compositor nodes (e.g. image, render layers) still store
* some persistent buffer data here, need to clear this to avoid dangling pointers. */
sock_dst->cache = nullptr;
sock_dst->runtime->cache = nullptr;
}
namespace blender::bke {
@ -2958,9 +2944,9 @@ static void node_free_node(bNodeTree *ntree, bNode *node)
}
/* texture node has bad habit of keeping exec data around */
if (ntree->type == NTREE_TEXTURE && ntree->execdata) {
ntreeTexEndExecTree(ntree->execdata);
ntree->execdata = nullptr;
if (ntree->type == NTREE_TEXTURE && ntree->runtime->execdata) {
ntreeTexEndExecTree(ntree->runtime->execdata);
ntree->runtime->execdata = nullptr;
}
}
@ -3444,127 +3430,6 @@ void ntreeRemoveSocketInterface(bNodeTree *ntree, bNodeSocket *sock)
BKE_ntree_update_tag_interface(ntree);
}
/* generates a valid RNA identifier from the node tree name */
static void ntree_interface_identifier_base(bNodeTree *ntree, char *base)
{
/* generate a valid RNA identifier */
BLI_sprintf(base, "NodeTreeInterface_%s", ntree->id.name + 2);
RNA_identifier_sanitize(base, false);
}
/* check if the identifier is already in use */
static bool ntree_interface_unique_identifier_check(void * /*data*/, const char *identifier)
{
return (RNA_struct_find(identifier) != nullptr);
}
/* generates the actual unique identifier and ui name and description */
static void ntree_interface_identifier(bNodeTree *ntree,
const char *base,
char *identifier,
int maxlen,
char *name,
char *description)
{
/* There is a possibility that different node tree names get mapped to the same identifier
* after sanitation (e.g. "SomeGroup_A", "SomeGroup.A" both get sanitized to "SomeGroup_A").
* On top of the sanitized id string add a number suffix if necessary to avoid duplicates.
*/
identifier[0] = '\0';
BLI_uniquename_cb(
ntree_interface_unique_identifier_check, nullptr, base, '_', identifier, maxlen);
BLI_sprintf(name, "Node Tree %s Interface", ntree->id.name + 2);
BLI_sprintf(description, "Interface properties of node group %s", ntree->id.name + 2);
}
static void ntree_interface_type_create(bNodeTree *ntree)
{
/* strings are generated from base string + ID name, sizes are sufficient */
char base[MAX_ID_NAME + 64], identifier[MAX_ID_NAME + 64], name[MAX_ID_NAME + 64],
description[MAX_ID_NAME + 64];
/* generate a valid RNA identifier */
ntree_interface_identifier_base(ntree, base);
ntree_interface_identifier(ntree, base, identifier, sizeof(identifier), name, description);
/* register a subtype of PropertyGroup */
StructRNA *srna = RNA_def_struct_ptr(&BLENDER_RNA, identifier, &RNA_PropertyGroup);
RNA_def_struct_ui_text(srna, name, description);
RNA_def_struct_duplicate_pointers(&BLENDER_RNA, srna);
/* associate the RNA type with the node tree */
ntree->interface_type = srna;
RNA_struct_blender_type_set(srna, ntree);
/* add socket properties */
LISTBASE_FOREACH (bNodeSocket *, sock, &ntree->inputs) {
bNodeSocketType *stype = sock->typeinfo;
if (stype && stype->interface_register_properties) {
stype->interface_register_properties(ntree, sock, srna);
}
}
LISTBASE_FOREACH (bNodeSocket *, sock, &ntree->outputs) {
bNodeSocketType *stype = sock->typeinfo;
if (stype && stype->interface_register_properties) {
stype->interface_register_properties(ntree, sock, srna);
}
}
}
StructRNA *ntreeInterfaceTypeGet(bNodeTree *ntree, bool create)
{
if (ntree->interface_type) {
/* strings are generated from base string + ID name, sizes are sufficient */
char base[MAX_ID_NAME + 64], identifier[MAX_ID_NAME + 64], name[MAX_ID_NAME + 64],
description[MAX_ID_NAME + 64];
/* A bit of a hack: when changing the ID name, update the RNA type identifier too,
* so that the names match. This is not strictly necessary to keep it working,
* but better for identifying associated NodeTree blocks and RNA types.
*/
StructRNA *srna = ntree->interface_type;
ntree_interface_identifier_base(ntree, base);
/* RNA identifier may have a number suffix, but should start with the idbase string */
if (!STREQLEN(RNA_struct_identifier(srna), base, sizeof(base))) {
/* generate new unique RNA identifier from the ID name */
ntree_interface_identifier(ntree, base, identifier, sizeof(identifier), name, description);
/* rename the RNA type */
RNA_def_struct_free_pointers(&BLENDER_RNA, srna);
RNA_def_struct_identifier(&BLENDER_RNA, srna, identifier);
RNA_def_struct_ui_text(srna, name, description);
RNA_def_struct_duplicate_pointers(&BLENDER_RNA, srna);
}
}
else if (create) {
ntree_interface_type_create(ntree);
}
return ntree->interface_type;
}
void ntreeInterfaceTypeFree(bNodeTree *ntree)
{
if (ntree->interface_type) {
RNA_struct_free(&BLENDER_RNA, ntree->interface_type);
ntree->interface_type = nullptr;
}
}
void ntreeInterfaceTypeUpdate(bNodeTree *ntree)
{
/* XXX it would be sufficient to just recreate all properties
* instead of re-registering the whole struct type,
* but there is currently no good way to do this in the RNA functions.
* Overhead should be negligible.
*/
ntreeInterfaceTypeFree(ntree);
ntree_interface_type_create(ntree);
}
/* ************ find stuff *************** */
bNode *ntreeFindType(const bNodeTree *ntree, int type)


@ -1009,10 +1009,6 @@ class NodeTreeMainUpdater {
result.interface_changed = true;
}
if (result.interface_changed) {
ntreeInterfaceTypeUpdate(&ntree);
}
return result;
}


@ -344,8 +344,10 @@ static void make_recursive_duplis(const DupliContext *ctx,
ctx->instance_stack->append(ob);
rctx.gen->make_duplis(&rctx);
ctx->instance_stack->remove_last();
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
if (rctx.gen->type != GEOMETRY_SET_DUPLI_GENERATOR_TYPE) {
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
}
}
}
}
@ -391,8 +393,10 @@ static void make_child_duplis(const DupliContext *ctx,
ob->flag |= OB_DONE; /* Doesn't render. */
}
make_child_duplis_cb(&pctx, userdata, ob);
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
if (pctx.gen->type != GEOMETRY_SET_DUPLI_GENERATOR_TYPE) {
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
}
}
}
}
@ -419,8 +423,10 @@ static void make_child_duplis(const DupliContext *ctx,
}
make_child_duplis_cb(&pctx, userdata, ob);
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
if (pctx.gen->type != GEOMETRY_SET_DUPLI_GENERATOR_TYPE) {
if (!ctx->dupli_gen_type_stack->is_empty()) {
ctx->dupli_gen_type_stack->remove_last();
}
}
}
}


@ -746,6 +746,7 @@ void BKE_pbvh_build_mesh(PBVH *pbvh,
pbvh->vdata = vdata;
pbvh->ldata = ldata;
pbvh->pdata = pdata;
pbvh->faces_num = mesh->totpoly;
pbvh->face_sets_color_seed = mesh->face_sets_color_seed;
pbvh->face_sets_color_default = mesh->face_sets_color_default;
@ -833,6 +834,7 @@ void BKE_pbvh_build_grids(PBVH *pbvh,
pbvh->gridkey = *key;
pbvh->grid_hidden = grid_hidden;
pbvh->subdiv_ccg = subdiv_ccg;
pbvh->faces_num = me->totpoly;
/* Find maximum number of grids per face. */
int max_grids = 1;
@ -2018,6 +2020,11 @@ void BKE_pbvh_node_mark_update_color(PBVHNode *node)
node->flag |= PBVH_UpdateColor | PBVH_UpdateDrawBuffers | PBVH_UpdateRedraw;
}
void BKE_pbvh_node_mark_update_face_sets(PBVHNode *node)
{
node->flag |= PBVH_UpdateDrawBuffers | PBVH_UpdateRedraw;
}
void BKE_pbvh_mark_rebuild_pixels(PBVH *pbvh)
{
for (int n = 0; n < pbvh->totnode; n++) {
@ -2122,6 +2129,20 @@ void BKE_pbvh_node_get_loops(PBVH *pbvh,
}
}
int BKE_pbvh_num_faces(const PBVH *pbvh)
{
switch (pbvh->header.type) {
case PBVH_GRIDS:
case PBVH_FACES:
return pbvh->faces_num;
case PBVH_BMESH:
return pbvh->header.bm->totface;
}
BLI_assert_unreachable();
return 0;
}
void BKE_pbvh_node_get_verts(PBVH *pbvh,
PBVHNode *node,
const int **r_vert_indices,
@ -3624,16 +3645,17 @@ static void pbvh_face_iter_step(PBVHFaceIter *fd, bool do_step)
pbvh_face_iter_verts_reserve(fd, mp->totloop);
const MLoop *ml = fd->mloop_ + mp->loopstart;
const int grid_area = fd->subdiv_key_.grid_area;
for (int i = 0; i < mp->totloop; i++, ml++) {
if (fd->pbvh_type_ == PBVH_GRIDS) {
/* Grid corners. */
fd->verts[i].i = mp->loopstart + i;
fd->verts[i].i = (mp->loopstart + i) * grid_area + grid_area - 1;
}
else {
fd->verts[i].i = ml->v;
}
}
break;
}
}
@ -3656,6 +3678,7 @@ void BKE_pbvh_face_iter_init(PBVH *pbvh, PBVHNode *node, PBVHFaceIter *fd)
switch (BKE_pbvh_type(pbvh)) {
case PBVH_GRIDS:
fd->subdiv_ccg_ = pbvh->subdiv_ccg;
fd->subdiv_key_ = pbvh->gridkey;
ATTR_FALLTHROUGH;
case PBVH_FACES:
fd->mpoly_ = pbvh->mpoly;
@ -3702,3 +3725,92 @@ bool BKE_pbvh_face_iter_done(PBVHFaceIter *fd)
return true;
}
}
void BKE_pbvh_sync_visibility_from_verts(PBVH *pbvh, Mesh *mesh)
{
switch (pbvh->header.type) {
case PBVH_FACES: {
BKE_mesh_flush_hidden_from_verts(mesh);
BKE_pbvh_update_hide_attributes_from_mesh(pbvh);
break;
}
case PBVH_BMESH: {
BMIter iter;
BMVert *v;
BMEdge *e;
BMFace *f;
BM_ITER_MESH (f, &iter, pbvh->header.bm, BM_FACES_OF_MESH) {
BM_elem_flag_disable(f, BM_ELEM_HIDDEN);
}
BM_ITER_MESH (e, &iter, pbvh->header.bm, BM_EDGES_OF_MESH) {
BM_elem_flag_disable(e, BM_ELEM_HIDDEN);
}
BM_ITER_MESH (v, &iter, pbvh->header.bm, BM_VERTS_OF_MESH) {
if (!BM_elem_flag_test(v, BM_ELEM_HIDDEN)) {
continue;
}
BMIter iter_l;
BMLoop *l;
BM_ITER_ELEM (l, &iter_l, v, BM_LOOPS_OF_VERT) {
BM_elem_flag_enable(l->e, BM_ELEM_HIDDEN);
BM_elem_flag_enable(l->f, BM_ELEM_HIDDEN);
}
}
break;
}
case PBVH_GRIDS: {
const MPoly *mp = BKE_mesh_polys(mesh);
const MLoop *mloop = BKE_mesh_loops(mesh);
CCGKey key = pbvh->gridkey;
bool *hide_poly = (bool *)CustomData_get_layer_named(
&mesh->pdata, CD_PROP_BOOL, ".hide_poly");
bool delete_hide_poly = true;
for (int face_index = 0; face_index < mesh->totpoly; face_index++, mp++) {
const MLoop *ml = mloop + mp->loopstart;
bool hidden = false;
for (int loop_index = 0; !hidden && loop_index < mp->totloop; loop_index++, ml++) {
int grid_index = mp->loopstart + loop_index;
if (pbvh->grid_hidden[grid_index] &&
BLI_BITMAP_TEST(pbvh->grid_hidden[grid_index], key.grid_area - 1)) {
hidden = true;
break;
}
}
if (hidden && !hide_poly) {
hide_poly = (bool *)CustomData_get_layer_named(&mesh->pdata, CD_PROP_BOOL, ".hide_poly");
if (!hide_poly) {
CustomData_add_layer_named(
&mesh->pdata, CD_PROP_BOOL, CD_CONSTRUCT, NULL, mesh->totpoly, ".hide_poly");
hide_poly = (bool *)CustomData_get_layer_named(
&mesh->pdata, CD_PROP_BOOL, ".hide_poly");
}
}
if (hide_poly) {
delete_hide_poly = delete_hide_poly && !hidden;
hide_poly[face_index] = hidden;
}
}
if (delete_hide_poly) {
CustomData_free_layer_named(&mesh->pdata, ".hide_poly", mesh->totpoly);
}
BKE_mesh_flush_hidden_from_polys(mesh);
BKE_pbvh_update_hide_attributes_from_mesh(pbvh);
break;
}
}
}


@ -148,6 +148,7 @@ struct PBVH {
int *prim_indices;
int totprim;
int totvert;
int faces_num; /* Do not use directly, use BKE_pbvh_num_faces. */
int leaf_limit;


@ -48,6 +48,7 @@
#include "BKE_lib_query.h"
#include "BKE_material.h"
#include "BKE_node.h"
#include "BKE_node_runtime.hh"
#include "BKE_scene.h"
#include "BKE_texture.h"
@ -85,8 +86,8 @@ static void texture_copy_data(Main *bmain, ID *id_dst, const ID *id_src, const i
texture_dst->coba = static_cast<ColorBand *>(MEM_dupallocN(texture_dst->coba));
}
if (texture_src->nodetree) {
if (texture_src->nodetree->execdata) {
ntreeTexEndExecTree(texture_src->nodetree->execdata);
if (texture_src->nodetree->runtime->execdata) {
ntreeTexEndExecTree(texture_src->nodetree->runtime->execdata);
}
if (is_localized) {

View File

@ -236,6 +236,9 @@ class IndexMask {
IndexMask slice(int64_t start, int64_t size) const;
IndexMask slice(IndexRange slice) const;
IndexMask slice_safe(int64_t start, int64_t size) const;
IndexMask slice_safe(IndexRange slice) const;
/**
* Create a sub-mask that is also shifted to the beginning.
* The shifting to the beginning allows code to work with smaller indices,

View File

@ -86,32 +86,16 @@ template<typename T, int Size> struct vec_base : public vec_struct_base<T, Size>
vec_base() = default;
explicit vec_base(uint value)
explicit vec_base(T value)
{
for (int i = 0; i < Size; i++) {
(*this)[i] = T(value);
(*this)[i] = value;
}
}
explicit vec_base(int value)
template<typename U, BLI_ENABLE_IF((std::is_convertible_v<U, T>))>
explicit vec_base(U value) : vec_base(T(value))
{
for (int i = 0; i < Size; i++) {
(*this)[i] = T(value);
}
}
explicit vec_base(float value)
{
for (int i = 0; i < Size; i++) {
(*this)[i] = T(value);
}
}
explicit vec_base(double value)
{
for (int i = 0; i < Size; i++) {
(*this)[i] = T(value);
}
}
/* Workaround issue with template BLI_ENABLE_IF((Size == 2)) not working. */
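The four per-type scalar constructors (uint, int, float, double) collapse into one SFINAE-constrained template, so any value convertible to T broadcasts to all components through a single code path. A standalone approximation of the idea (vec_demo is hypothetical, not the real vec_base):

#include <type_traits>

template<typename T, int Size> struct vec_demo {
  T data[Size];
  vec_demo() = default;
  /* One constrained template replaces the explicit per-type overloads. */
  template<typename U, std::enable_if_t<std::is_convertible_v<U, T>, int> = 0>
  explicit vec_demo(U value)
  {
    for (int i = 0; i < Size; i++) {
      data[i] = T(value);
    }
  }
};

/* All of these broadcast the scalar to every component. */
static vec_demo<float, 3> a(1);   /* int -> float */
static vec_demo<float, 3> b(2.5); /* double -> float */
static vec_demo<int, 2> c(7u);    /* uint -> int */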

View File

@ -670,11 +670,6 @@ void minmax_v4v4_v4(float min[4], float max[4], const float vec[4]);
void minmax_v3v3_v3(float min[3], float max[3], const float vec[3]);
void minmax_v2v2_v2(float min[2], float max[2], const float vec[2]);
void minmax_v3v3_v3_array(float r_min[3],
float r_max[3],
const float (*vec_arr)[3],
int var_arr_num);
/** ensure \a v1 is \a dist from \a v2 */
void dist_ensure_v3_v3fl(float v1[3], const float v2[3], float dist);
void dist_ensure_v2_v2fl(float v1[2], const float v2[2], float dist);

View File

@ -141,8 +141,8 @@ template<typename T> class Span {
{
BLI_assert(start >= 0);
BLI_assert(size >= 0);
const int64_t new_size = std::max<int64_t>(0, std::min(size, size_ - start));
return Span(data_ + start, new_size);
BLI_assert(start + size <= size_ || size == 0);
return Span(data_ + start, size);
}
constexpr Span slice(IndexRange range) const
@ -150,6 +150,23 @@ template<typename T> class Span {
return this->slice(range.start(), range.size());
}
/**
* Returns a contiguous part of the array. This invokes undefined behavior when the start or size
* is negative. Clamps the size of the new span so it fits in the current one.
*/
constexpr Span slice_safe(const int64_t start, const int64_t size) const
{
BLI_assert(start >= 0);
BLI_assert(size >= 0);
const int64_t new_size = std::max<int64_t>(0, std::min(size, size_ - start));
return Span(data_ + start, new_size);
}
constexpr Span slice_safe(IndexRange range) const
{
return this->slice_safe(range.start(), range.size());
}
/**
* Returns a new Span with n elements removed from the beginning. This invokes undefined
* behavior when n is negative.
@ -580,8 +597,8 @@ template<typename T> class MutableSpan {
{
BLI_assert(start >= 0);
BLI_assert(size >= 0);
const int64_t new_size = std::max<int64_t>(0, std::min(size, size_ - start));
return MutableSpan(data_ + start, new_size);
BLI_assert(start + size <= size_ || size == 0);
return MutableSpan(data_ + start, size);
}
constexpr MutableSpan slice(IndexRange range) const
@ -589,6 +606,23 @@ template<typename T> class MutableSpan {
return this->slice(range.start(), range.size());
}
/**
* Returns a contiguous part of the array. This invokes undefined behavior when the start or size
* is negative. Clamps the size of the new span so it fits in the current one.
*/
constexpr MutableSpan slice_safe(const int64_t start, const int64_t size) const
{
BLI_assert(start >= 0);
BLI_assert(size >= 0);
const int64_t new_size = std::max<int64_t>(0, std::min(size, size_ - start));
return MutableSpan(data_ + start, new_size);
}
constexpr MutableSpan slice_safe(IndexRange range) const
{
return this->slice_safe(range.start(), range.size());
}
/**
* Returns a new MutableSpan with n elements removed from the beginning. This invokes
* undefined behavior when n is negative.
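With this split, slice() asserts that the requested range is fully in bounds, while the new slice_safe() keeps the old clamping behavior. A quick sketch of the difference (illustrative values, mirroring the updated span_test further down):

blender::Vector<int> v = {1, 2, 3, 4, 5};

blender::Span<int>(v).slice(1, 3);        /* {2, 3, 4}: fully in bounds, OK. */
blender::Span<int>(v).slice_safe(3, 100); /* Clamped to {4, 5}: size becomes 2. */
/* Span<int>(v).slice(3, 100) would now trip the BLI_assert instead of clamping. */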

View File

@ -15,6 +15,16 @@ IndexMask IndexMask::slice(IndexRange slice) const
return IndexMask(indices_.slice(slice));
}
IndexMask IndexMask::slice_safe(int64_t start, int64_t size) const
{
return this->slice_safe(IndexRange(start, size));
}
IndexMask IndexMask::slice_safe(IndexRange slice) const
{
return IndexMask(indices_.slice_safe(slice));
}
IndexMask IndexMask::slice_and_offset(const IndexRange slice, Vector<int64_t> &r_new_indices) const
{
const int slice_size = slice.size();

View File

@ -904,16 +904,6 @@ void minmax_v2v2_v2(float min[2], float max[2], const float vec[2])
}
}
void minmax_v3v3_v3_array(float r_min[3],
float r_max[3],
const float (*vec_arr)[3],
int var_arr_num)
{
while (var_arr_num--) {
minmax_v3v3_v3(r_min, r_max, *vec_arr++);
}
}
void dist_ensure_v3_v3fl(float v1[3], const float v2[3], const float dist)
{
if (!equals_v3v3(v2, v1)) {

View File

@ -142,8 +142,8 @@ TEST(span, SliceRange)
TEST(span, SliceLargeN)
{
Vector<int> a = {1, 2, 3, 4, 5};
Span<int> slice1 = Span<int>(a).slice(3, 100);
MutableSpan<int> slice2 = MutableSpan<int>(a).slice(3, 100);
Span<int> slice1 = Span<int>(a).slice_safe(3, 100);
MutableSpan<int> slice2 = MutableSpan<int>(a).slice_safe(3, 100);
EXPECT_EQ(slice1.size(), 2);
EXPECT_EQ(slice2.size(), 2);
EXPECT_EQ(slice1[0], 4);

View File

@ -2272,18 +2272,6 @@ void blo_do_versions_250(FileData *fd, Library *lib, Main *bmain)
}
FOREACH_NODETREE_END;
}
{
/* Initialize group tree nodetypes.
* These are used to distinguish tree types and
* associate them with specific node types for polling.
*/
bNodeTree *ntree;
/* all node trees in bmain->nodetree are considered groups */
for (ntree = bmain->nodetrees.first; ntree; ntree = ntree->id.next) {
ntree->nodetype = NODE_GROUP;
}
}
}
if (!MAIN_VERSION_ATLEAST(bmain, 259, 4)) {

View File

@ -69,7 +69,7 @@ static void version_motion_tracking_legacy_camera_object(MovieClip &movieclip)
tracking.act_plane_track_legacy = nullptr;
}
void version_movieclips_legacy_camera_object(Main *bmain)
static void version_movieclips_legacy_camera_object(Main *bmain)
{
LISTBASE_FOREACH (MovieClip *, movieclip, &bmain->movieclips) {
version_motion_tracking_legacy_camera_object(*movieclip);

View File

@ -690,7 +690,6 @@ void BM_loop_interp_from_face(
float *w = BLI_array_alloca(w, f_src->len);
float axis_mat[3][3]; /* use normal to transform into 2d xy coords */
float co[2];
int i;
/* Convert the 3d coords into 2d for projection. */
float axis_dominant[3];
@ -708,7 +707,7 @@ void BM_loop_interp_from_face(
}
axis_dominant_v3_to_m3(axis_mat, axis_dominant);
i = 0;
int i = 0;
l_iter = l_first = BM_FACE_FIRST_LOOP(f_src);
do {
mul_v2_m3v3(cos_2d[i], axis_mat, l_iter->v->co);
@ -742,13 +741,12 @@ void BM_vert_interp_from_face(BMesh *bm, BMVert *v_dst, const BMFace *f_src)
float *w = BLI_array_alloca(w, f_src->len);
float axis_mat[3][3]; /* use normal to transform into 2d xy coords */
float co[2];
int i;
/* convert the 3d coords into 2d for projection */
BLI_assert(BM_face_is_normal_valid(f_src));
axis_dominant_v3_to_m3(axis_mat, f_src->no);
i = 0;
int i = 0;
l_iter = l_first = BM_FACE_FIRST_LOOP(f_src);
do {
mul_v2_m3v3(cos_2d[i], axis_mat, l_iter->v->co);
@ -838,12 +836,9 @@ static void update_data_blocks(BMesh *bm, CustomData *olddata, CustomData *data)
void BM_data_layer_add(BMesh *bm, CustomData *data, int type)
{
CustomData olddata;
olddata = *data;
CustomData olddata = *data;
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
/* the pool is now owned by olddata and must not be shared */
/* The pool is now owned by `olddata` and must not be shared. */
data->pool = NULL;
CustomData_add_layer(data, type, CD_SET_DEFAULT, NULL, 0);
@ -856,12 +851,9 @@ void BM_data_layer_add(BMesh *bm, CustomData *data, int type)
void BM_data_layer_add_named(BMesh *bm, CustomData *data, int type, const char *name)
{
CustomData olddata;
olddata = *data;
CustomData olddata = *data;
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
/* the pool is now owned by olddata and must not be shared */
/* The pool is now owned by `olddata` and must not be shared. */
data->pool = NULL;
CustomData_add_layer_named(data, type, CD_SET_DEFAULT, NULL, 0, name);
@ -874,19 +866,15 @@ void BM_data_layer_add_named(BMesh *bm, CustomData *data, int type, const char *
void BM_data_layer_free(BMesh *bm, CustomData *data, int type)
{
CustomData olddata;
bool has_layer;
olddata = *data;
CustomData olddata = *data;
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
/* the pool is now owned by olddata and must not be shared */
/* The pool is now owned by `olddata` and must not be shared. */
data->pool = NULL;
has_layer = CustomData_free_layer_active(data, type, 0);
const bool had_layer = CustomData_free_layer_active(data, type, 0);
/* Assert because it's expensive to realloc - better not to if the layer isn't present. */
BLI_assert(has_layer != false);
UNUSED_VARS_NDEBUG(has_layer);
BLI_assert(had_layer != false);
UNUSED_VARS_NDEBUG(had_layer);
update_data_blocks(bm, &olddata, data);
if (olddata.layers) {
@ -898,38 +886,38 @@ bool BM_data_layer_free_named(BMesh *bm, CustomData *data, const char *name)
{
CustomData olddata = *data;
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
/* the pool is now owned by olddata and must not be shared */
/* The pool is now owned by `olddata` and must not be shared. */
data->pool = NULL;
const bool has_layer = CustomData_free_layer_named(data, name, 0);
const bool had_layer = CustomData_free_layer_named(data, name, 0);
if (has_layer) {
if (had_layer) {
update_data_blocks(bm, &olddata, data);
}
else {
/* Move pool ownership back to BMesh CustomData, no block reallocation. */
data->pool = olddata.pool;
}
if (olddata.layers) {
MEM_freeN(olddata.layers);
}
return has_layer;
return had_layer;
}
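Each of these BM_data_layer_* helpers follows the same ownership dance around the CustomData memory pool; the shared pattern, roughly (a sketch quoting the existing calls, not new API):

CustomData olddata = *data;    /* Shallow copy: `olddata` keeps the old pool alive. */
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
data->pool = NULL;             /* Detach so the layer edit sets up a fresh pool. */
/* ... CustomData_add_layer() or CustomData_free_layer*() on `data` ... */
update_data_blocks(bm, &olddata, data); /* Migrate element data: old pool -> new pool. */
if (olddata.layers) {
  MEM_freeN(olddata.layers);   /* Free only the duplicated layer array. */
}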
void BM_data_layer_free_n(BMesh *bm, CustomData *data, int type, int n)
{
CustomData olddata;
bool has_layer;
olddata = *data;
CustomData olddata = *data;
olddata.layers = (olddata.layers) ? MEM_dupallocN(olddata.layers) : NULL;
/* the pool is now owned by olddata and must not be shared */
/* The pool is now owned by `olddata` and must not be shared. */
data->pool = NULL;
has_layer = CustomData_free_layer(data, type, 0, CustomData_get_layer_index_n(data, type, n));
const bool had_layer = CustomData_free_layer(
data, type, 0, CustomData_get_layer_index_n(data, type, n));
/* Assert because it's expensive to realloc - better not to if the layer isn't present. */
BLI_assert(has_layer != false);
UNUSED_VARS_NDEBUG(has_layer);
BLI_assert(had_layer != false);
UNUSED_VARS_NDEBUG(had_layer);
update_data_blocks(bm, &olddata, data);
if (olddata.layers) {

View File

@ -294,7 +294,7 @@ void ExecutionGroup::execute(ExecutionSystem *graph)
if (width_ == 0 || height_ == 0) {
return;
} /** \note Break out... no pixels to calculate. */
if (bTree->test_break && bTree->test_break(bTree->tbh)) {
if (bTree->runtime->test_break && bTree->runtime->test_break(bTree->runtime->tbh)) {
return;
} /** \note Early break out for blur and preview nodes. */
if (chunks_len_ == 0) {
@ -335,8 +335,8 @@ void ExecutionGroup::execute(ExecutionSystem *graph)
start_evaluated = true;
number_evaluated++;
if (bTree->update_draw) {
bTree->update_draw(bTree->udh);
if (bTree->runtime->update_draw) {
bTree->runtime->update_draw(bTree->runtime->udh);
}
break;
}
@ -356,7 +356,7 @@ void ExecutionGroup::execute(ExecutionSystem *graph)
WorkScheduler::finish();
if (bTree->test_break && bTree->test_break(bTree->tbh)) {
if (bTree->runtime->test_break && bTree->runtime->test_break(bTree->runtime->tbh)) {
breaked = true;
}
}
@ -414,12 +414,12 @@ void ExecutionGroup::finalize_chunk_execution(int chunk_number, MemoryBuffer **m
/* Status report is only performed for top level Execution Groups. */
float progress = chunks_finished_;
progress /= chunks_len_;
bTree_->progress(bTree_->prh, progress);
bTree_->runtime->progress(bTree_->runtime->prh, progress);
char buf[128];
BLI_snprintf(
buf, sizeof(buf), TIP_("Compositing | Tile %u-%u"), chunks_finished_, chunks_len_);
bTree_->stats_draw(bTree_->sdh, buf);
bTree_->runtime->stats_draw(bTree_->runtime->sdh, buf);
}
}

View File

@ -163,7 +163,7 @@ void ExecutionSystem::execute_work(const rcti &work_rect,
bool ExecutionSystem::is_breaked() const
{
const bNodeTree *btree = context_.get_bnodetree();
return btree->test_break(btree->tbh);
return btree->runtime->test_break(btree->runtime->tbh);
}
} // namespace blender::compositor

View File

@ -32,7 +32,8 @@ FullFrameExecutionModel::FullFrameExecutionModel(CompositorContext &context,
void FullFrameExecutionModel::execute(ExecutionSystem &exec_system)
{
const bNodeTree *node_tree = this->context_.get_bnodetree();
node_tree->stats_draw(node_tree->sdh, TIP_("Compositing | Initializing execution"));
node_tree->runtime->stats_draw(node_tree->runtime->sdh,
TIP_("Compositing | Initializing execution"));
DebugInfo::graphviz(&exec_system, "compositor_prior_rendering");
@ -272,7 +273,7 @@ void FullFrameExecutionModel::update_progress_bar()
const bNodeTree *tree = context_.get_bnodetree();
if (tree) {
const float progress = num_operations_finished_ / float(operations_.size());
tree->progress(tree->prh, progress);
tree->runtime->progress(tree->runtime->prh, progress);
char buf[128];
BLI_snprintf(buf,
@ -280,7 +281,7 @@ void FullFrameExecutionModel::update_progress_bar()
TIP_("Compositing | Operation %i-%li"),
num_operations_finished_ + 1,
operations_.size());
tree->stats_draw(tree->sdh, buf);
tree->runtime->stats_draw(tree->runtime->sdh, buf);
}
}

View File

@ -17,6 +17,8 @@
#include "COM_MemoryBuffer.h"
#include "COM_MetaData.h"
#include "BKE_node_runtime.hh"
#include "clew.h"
#include "DNA_node_types.h"
@ -555,13 +557,13 @@ class NodeOperation {
inline bool is_braked() const
{
return btree_->test_break(btree_->tbh);
return btree_->runtime->test_break(btree_->runtime->tbh);
}
inline void update_draw()
{
if (btree_->update_draw) {
btree_->update_draw(btree_->udh);
if (btree_->runtime->update_draw) {
btree_->runtime->update_draw(btree_->runtime->udh);
}
}

View File

@ -21,7 +21,8 @@ TiledExecutionModel::TiledExecutionModel(CompositorContext &context,
: ExecutionModel(context, operations), groups_(groups)
{
const bNodeTree *node_tree = context.get_bnodetree();
node_tree->stats_draw(node_tree->sdh, TIP_("Compositing | Determining resolution"));
node_tree->runtime->stats_draw(node_tree->runtime->sdh,
TIP_("Compositing | Determining resolution"));
uint resolution[2];
for (ExecutionGroup *group : groups_) {
@ -100,7 +101,8 @@ void TiledExecutionModel::execute(ExecutionSystem &exec_system)
{
const bNodeTree *editingtree = this->context_.get_bnodetree();
editingtree->stats_draw(editingtree->sdh, TIP_("Compositing | Initializing execution"));
editingtree->runtime->stats_draw(editingtree->runtime->sdh,
TIP_("Compositing | Initializing execution"));
update_read_buffer_offset(operations_);
@ -118,7 +120,8 @@ void TiledExecutionModel::execute(ExecutionSystem &exec_system)
WorkScheduler::finish();
WorkScheduler::stop();
editingtree->stats_draw(editingtree->sdh, TIP_("Compositing | De-initializing execution"));
editingtree->runtime->stats_draw(editingtree->runtime->sdh,
TIP_("Compositing | De-initializing execution"));
for (NodeOperation *operation : operations_) {
operation->deinit_execution();

View File

@ -6,6 +6,7 @@
#include "BLT_translation.h"
#include "BKE_node.h"
#include "BKE_node_runtime.hh"
#include "BKE_scene.h"
#include "COM_ExecutionSystem.h"
@ -41,8 +42,8 @@ static void compositor_init_node_previews(const RenderData *render_data, bNodeTr
static void compositor_reset_node_tree_status(bNodeTree *node_tree)
{
node_tree->progress(node_tree->prh, 0.0);
node_tree->stats_draw(node_tree->sdh, IFACE_("Compositing"));
node_tree->runtime->progress(node_tree->runtime->prh, 0.0);
node_tree->runtime->stats_draw(node_tree->runtime->sdh, IFACE_("Compositing"));
}
void COM_execute(RenderData *render_data,
@ -61,7 +62,7 @@ void COM_execute(RenderData *render_data,
BLI_mutex_lock(&g_compositor.mutex);
if (node_tree->test_break(node_tree->tbh)) {
if (node_tree->runtime->test_break(node_tree->runtime->tbh)) {
/* During editing multiple compositor executions can be triggered.
* Make sure this is the most recent one. */
BLI_mutex_unlock(&g_compositor.mutex);
@ -82,7 +83,7 @@ void COM_execute(RenderData *render_data,
render_data, scene, node_tree, rendering, true, view_name);
fast_pass.execute();
if (node_tree->test_break(node_tree->tbh)) {
if (node_tree->runtime->test_break(node_tree->runtime->tbh)) {
BLI_mutex_unlock(&g_compositor.mutex);
return;
}

View File

@ -30,7 +30,7 @@ static TrackPositionOperation *create_motion_operation(NodeConverter &converter,
operation->set_track_name(trackpos_data->track_name);
operation->set_framenumber(frame_number);
operation->set_axis(axis);
operation->set_position(CMP_TRACKPOS_ABSOLUTE);
operation->set_position(CMP_NODE_TRACK_POSITION_ABSOLUTE);
operation->set_relative_frame(frame_number + delta);
operation->set_speed_output(true);
converter.add_operation(operation);
@ -49,7 +49,7 @@ void TrackPositionNode::convert_to_operations(NodeConverter &converter,
NodeOutput *output_speed = this->get_output_socket(2);
int frame_number;
if (editor_node->custom1 == CMP_TRACKPOS_ABSOLUTE_FRAME) {
if (editor_node->custom1 == CMP_NODE_TRACK_POSITION_ABSOLUTE_FRAME) {
frame_number = editor_node->custom2;
}
else {
@ -62,7 +62,7 @@ void TrackPositionNode::convert_to_operations(NodeConverter &converter,
operationX->set_track_name(trackpos_data->track_name);
operationX->set_framenumber(frame_number);
operationX->set_axis(0);
operationX->set_position(editor_node->custom1);
operationX->set_position(static_cast<CMPNodeTrackPositionMode>(editor_node->custom1));
operationX->set_relative_frame(editor_node->custom2);
converter.add_operation(operationX);
converter.map_output_socket(outputX, operationX->get_output_socket());
@ -73,7 +73,7 @@ void TrackPositionNode::convert_to_operations(NodeConverter &converter,
operationY->set_track_name(trackpos_data->track_name);
operationY->set_framenumber(frame_number);
operationY->set_axis(1);
operationY->set_position(editor_node->custom1);
operationY->set_position(static_cast<CMPNodeTrackPositionMode>(editor_node->custom1));
operationY->set_relative_frame(editor_node->custom2);
converter.add_operation(operationY);
converter.map_output_socket(outputY, operationY->get_output_socket());

View File

@ -192,7 +192,7 @@ static void write_buffer_rect(rcti *rect,
}
offset += size;
if (tree->test_break && tree->test_break(tree->tbh)) {
if (tree->runtime->test_break && tree->runtime->test_break(tree->runtime->tbh)) {
breaked = true;
}
}

View File

@ -50,8 +50,8 @@ void TextureBaseOperation::deinit_execution()
BKE_image_pool_free(pool_);
pool_ = nullptr;
if (texture_ != nullptr && texture_->use_nodes && texture_->nodetree != nullptr &&
texture_->nodetree->execdata != nullptr) {
ntreeTexEndExecTree(texture_->nodetree->execdata);
texture_->nodetree->runtime->execdata != nullptr) {
ntreeTexEndExecTree(texture_->nodetree->runtime->execdata);
}
NodeOperation::deinit_execution();
}

View File

@ -19,7 +19,7 @@ TrackPositionOperation::TrackPositionOperation()
tracking_object_name_[0] = 0;
track_name_[0] = 0;
axis_ = 0;
position_ = CMP_TRACKPOS_ABSOLUTE;
position_ = CMP_NODE_TRACK_POSITION_ABSOLUTE;
relative_frame_ = 0;
speed_output_ = false;
flags_.is_set_operation = true;
@ -80,7 +80,7 @@ void TrackPositionOperation::calc_track_position()
swap_v2_v2(relative_pos_, marker_pos_);
}
}
else if (position_ == CMP_TRACKPOS_RELATIVE_START) {
else if (position_ == CMP_NODE_TRACK_POSITION_RELATIVE_START) {
int i;
for (i = 0; i < track->markersnr; i++) {
@ -93,7 +93,7 @@ void TrackPositionOperation::calc_track_position()
}
}
}
else if (position_ == CMP_TRACKPOS_RELATIVE_FRAME) {
else if (position_ == CMP_NODE_TRACK_POSITION_RELATIVE_FRAME) {
int relative_clip_framenr = BKE_movieclip_remap_scene_to_clip_frame(movie_clip_,
relative_frame_);

View File

@ -25,7 +25,7 @@ class TrackPositionOperation : public ConstantOperation {
char tracking_object_name_[64];
char track_name_[64];
int axis_;
int position_;
CMPNodeTrackPositionMode position_;
int relative_frame_;
bool speed_output_;
@ -63,7 +63,7 @@ class TrackPositionOperation : public ConstantOperation {
{
axis_ = value;
}
void set_position(int value)
void set_position(CMPNodeTrackPositionMode value)
{
position_ = value;
}

View File

@ -163,7 +163,14 @@ static void compositor_engine_free(void *instance_data)
static void compositor_engine_draw(void *data)
{
const COMPOSITOR_Data *compositor_data = static_cast<COMPOSITOR_Data *>(data);
COMPOSITOR_Data *compositor_data = static_cast<COMPOSITOR_Data *>(data);
#if defined(__APPLE__)
blender::StringRef("Viewport compositor not supported on MacOS")
.copy(compositor_data->info, GPU_INFO_SIZE);
return;
#endif
compositor_data->instance_data->draw();
}

View File

@ -70,6 +70,7 @@ enum {
SHADER_BUFFER_TRIS_MULTIPLE_MATERIALS,
SHADER_BUFFER_NORMALS_ACCUMULATE,
SHADER_BUFFER_NORMALS_FINALIZE,
SHADER_BUFFER_CUSTOM_NORMALS_FINALIZE,
SHADER_PATCH_EVALUATION,
SHADER_PATCH_EVALUATION_FVAR,
SHADER_PATCH_EVALUATION_FACE_DOTS,
@ -88,6 +89,10 @@ enum {
static GPUShader *g_subdiv_shaders[NUM_SHADERS];
#define SHADER_CUSTOM_DATA_INTERP_MAX_DIMENSIONS 4
static GPUShader
*g_subdiv_custom_data_shaders[SHADER_CUSTOM_DATA_INTERP_MAX_DIMENSIONS][GPU_COMP_MAX];
static const char *get_shader_code(int shader_type)
{
switch (shader_type) {
@ -208,7 +213,12 @@ static GPUShader *get_patch_evaluation_shader(int shader_type)
const char *compute_code = get_shader_code(shader_type);
const char *defines = nullptr;
if (shader_type == SHADER_PATCH_EVALUATION_FVAR) {
if (shader_type == SHADER_PATCH_EVALUATION) {
defines =
"#define OSD_PATCH_BASIS_GLSL\n"
"#define OPENSUBDIV_GLSL_COMPUTE_USE_1ST_DERIVATIVES\n";
}
else if (shader_type == SHADER_PATCH_EVALUATION_FVAR) {
defines =
"#define OSD_PATCH_BASIS_GLSL\n"
"#define OPENSUBDIV_GLSL_COMPUTE_USE_1ST_DERIVATIVES\n"
@ -234,9 +244,7 @@ static GPUShader *get_patch_evaluation_shader(int shader_type)
"#define ORCO_EVALUATION\n";
}
else {
defines =
"#define OSD_PATCH_BASIS_GLSL\n"
"#define OPENSUBDIV_GLSL_COMPUTE_USE_1ST_DERIVATIVES\n";
BLI_assert_unreachable();
}
/* Merge OpenSubdiv library code with our own library code. */
@ -258,7 +266,7 @@ static GPUShader *get_patch_evaluation_shader(int shader_type)
return g_subdiv_shaders[shader_type];
}
static GPUShader *get_subdiv_shader(int shader_type, const char *defines)
static GPUShader *get_subdiv_shader(int shader_type)
{
if (ELEM(shader_type,
SHADER_PATCH_EVALUATION,
@ -267,14 +275,86 @@ static GPUShader *get_subdiv_shader(int shader_type, const char *defines)
SHADER_PATCH_EVALUATION_ORCO)) {
return get_patch_evaluation_shader(shader_type);
}
BLI_assert(!ELEM(shader_type,
SHADER_COMP_CUSTOM_DATA_INTERP_1D,
SHADER_COMP_CUSTOM_DATA_INTERP_2D,
SHADER_COMP_CUSTOM_DATA_INTERP_3D,
SHADER_COMP_CUSTOM_DATA_INTERP_4D));
if (g_subdiv_shaders[shader_type] == nullptr) {
const char *compute_code = get_shader_code(shader_type);
const char *defines = nullptr;
if (ELEM(shader_type,
SHADER_BUFFER_LINES,
SHADER_BUFFER_LNOR,
SHADER_BUFFER_TRIS_MULTIPLE_MATERIALS,
SHADER_BUFFER_UV_STRETCH_AREA)) {
defines = "#define SUBDIV_POLYGON_OFFSET\n";
}
else if (shader_type == SHADER_BUFFER_TRIS) {
defines =
"#define SUBDIV_POLYGON_OFFSET\n"
"#define SINGLE_MATERIAL\n";
}
else if (shader_type == SHADER_BUFFER_LINES_LOOSE) {
defines = "#define LINES_LOOSE\n";
}
else if (shader_type == SHADER_BUFFER_EDGE_FAC) {
/* No separate shader for the AMD driver case as we assume that the GPU will not change
* during the execution of the program. */
defines = GPU_crappy_amd_driver() ? "#define GPU_AMD_DRIVER_BYTE_BUG\n" : nullptr;
}
else if (shader_type == SHADER_BUFFER_CUSTOM_NORMALS_FINALIZE) {
defines = "#define CUSTOM_NORMALS\n";
}
g_subdiv_shaders[shader_type] = GPU_shader_create_compute(
compute_code, datatoc_common_subdiv_lib_glsl, defines, get_shader_name(shader_type));
}
return g_subdiv_shaders[shader_type];
}
static GPUShader *get_subdiv_custom_data_shader(int comp_type, int dimensions)
{
BLI_assert(dimensions >= 1 && dimensions <= SHADER_CUSTOM_DATA_INTERP_MAX_DIMENSIONS);
if (comp_type == GPU_COMP_U16) {
BLI_assert(dimensions == 4);
}
GPUShader *&shader = g_subdiv_custom_data_shaders[dimensions - 1][comp_type];
if (shader == nullptr) {
const char *compute_code = get_shader_code(SHADER_COMP_CUSTOM_DATA_INTERP_1D + dimensions - 1);
int shader_type = SHADER_COMP_CUSTOM_DATA_INTERP_1D + dimensions - 1;
std::string defines = "#define SUBDIV_POLYGON_OFFSET\n";
defines += "#define DIMENSIONS " + std::to_string(dimensions) + "\n";
switch (comp_type) {
case GPU_COMP_U16:
defines += "#define GPU_COMP_U16\n";
break;
case GPU_COMP_I32:
defines += "#define GPU_COMP_I32\n";
break;
case GPU_COMP_F32:
/* float is the default */
break;
default:
BLI_assert_unreachable();
break;
}
shader = GPU_shader_create_compute(compute_code,
datatoc_common_subdiv_lib_glsl,
defines.c_str(),
get_shader_name(shader_type));
}
return shader;
}
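The cache above is keyed by (dimensions - 1, component type), so each requested combination compiles once and is then reused. Hypothetical calls mirroring the callers further down (the function is file-static, so these are illustrative only):

/* 2D float data, e.g. UV layers: */
GPUShader *uv_shader = get_subdiv_custom_data_shader(GPU_COMP_F32, 2);
/* 4D u16 data, e.g. byte colors widened to preserve linear-space precision: */
GPUShader *col_shader = get_subdiv_custom_data_shader(GPU_COMP_U16, 4);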
/* -------------------------------------------------------------------- */
/** Vertex formats used for data transfer from OpenSubdiv, and for data processing on our side.
* \{ */
@ -1475,49 +1555,16 @@ void draw_subdiv_extract_uvs(const DRWSubdivCache *cache,
void draw_subdiv_interp_custom_data(const DRWSubdivCache *cache,
GPUVertBuf *src_data,
GPUVertBuf *dst_data,
int comp_type, /*GPUVertCompType*/
int dimensions,
int dst_offset,
bool compress_to_u16)
int dst_offset)
{
GPUShader *shader = nullptr;
if (!draw_subdiv_cache_need_polygon_data(cache)) {
/* Happens on meshes with only loose geometry. */
return;
}
if (dimensions == 1) {
shader = get_subdiv_shader(SHADER_COMP_CUSTOM_DATA_INTERP_1D,
"#define SUBDIV_POLYGON_OFFSET\n"
"#define DIMENSIONS 1\n");
}
else if (dimensions == 2) {
shader = get_subdiv_shader(SHADER_COMP_CUSTOM_DATA_INTERP_2D,
"#define SUBDIV_POLYGON_OFFSET\n"
"#define DIMENSIONS 2\n");
}
else if (dimensions == 3) {
shader = get_subdiv_shader(SHADER_COMP_CUSTOM_DATA_INTERP_3D,
"#define SUBDIV_POLYGON_OFFSET\n"
"#define DIMENSIONS 3\n");
}
else if (dimensions == 4) {
if (compress_to_u16) {
shader = get_subdiv_shader(SHADER_COMP_CUSTOM_DATA_INTERP_4D,
"#define SUBDIV_POLYGON_OFFSET\n"
"#define DIMENSIONS 4\n"
"#define GPU_FETCH_U16_TO_FLOAT\n");
}
else {
shader = get_subdiv_shader(SHADER_COMP_CUSTOM_DATA_INTERP_4D,
"#define SUBDIV_POLYGON_OFFSET\n"
"#define DIMENSIONS 4\n");
}
}
else {
/* Crash if dimensions are not supported. */
}
GPUShader *shader = get_subdiv_custom_data_shader(comp_type, dimensions);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1545,7 +1592,7 @@ void draw_subdiv_build_sculpt_data_buffer(const DRWSubdivCache *cache,
GPUVertBuf *face_set_vbo,
GPUVertBuf *sculpt_data)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_SCULPT_DATA, nullptr);
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_SCULPT_DATA);
GPU_shader_bind(shader);
/* Mask VBO is always at binding point 0. */
@ -1574,7 +1621,7 @@ void draw_subdiv_accumulate_normals(const DRWSubdivCache *cache,
GPUVertBuf *vertex_loop_map,
GPUVertBuf *vertex_normals)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_NORMALS_ACCUMULATE, nullptr);
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_NORMALS_ACCUMULATE);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1602,7 +1649,7 @@ void draw_subdiv_finalize_normals(const DRWSubdivCache *cache,
GPUVertBuf *subdiv_loop_subdiv_vert_index,
GPUVertBuf *pos_nor)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_NORMALS_FINALIZE, nullptr);
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_NORMALS_FINALIZE);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1626,7 +1673,7 @@ void draw_subdiv_finalize_custom_normals(const DRWSubdivCache *cache,
GPUVertBuf *src_custom_normals,
GPUVertBuf *pos_nor)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_NORMALS_FINALIZE, "#define CUSTOM_NORMALS");
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_CUSTOM_NORMALS_FINALIZE);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1658,15 +1705,8 @@ void draw_subdiv_build_tris_buffer(const DRWSubdivCache *cache,
const bool do_single_material = material_count <= 1;
const char *defines = "#define SUBDIV_POLYGON_OFFSET\n";
if (do_single_material) {
defines =
"#define SUBDIV_POLYGON_OFFSET\n"
"#define SINGLE_MATERIAL\n";
}
GPUShader *shader = get_subdiv_shader(
do_single_material ? SHADER_BUFFER_TRIS : SHADER_BUFFER_TRIS_MULTIPLE_MATERIALS, defines);
do_single_material ? SHADER_BUFFER_TRIS : SHADER_BUFFER_TRIS_MULTIPLE_MATERIALS);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1768,7 +1808,7 @@ void draw_subdiv_build_fdots_buffers(const DRWSubdivCache *cache,
void draw_subdiv_build_lines_buffer(const DRWSubdivCache *cache, GPUIndexBuf *lines_indices)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LINES, "#define SUBDIV_POLYGON_OFFSET\n");
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LINES);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1792,7 +1832,7 @@ void draw_subdiv_build_lines_loose_buffer(const DRWSubdivCache *cache,
GPUVertBuf *lines_flags,
uint num_loose_edges)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LINES_LOOSE, "#define LINES_LOOSE\n");
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LINES_LOOSE);
GPU_shader_bind(shader);
GPU_indexbuf_bind_as_ssbo(lines_indices, 3);
@ -1812,10 +1852,7 @@ void draw_subdiv_build_edge_fac_buffer(const DRWSubdivCache *cache,
GPUVertBuf *edge_idx,
GPUVertBuf *edge_fac)
{
/* No separate shader for the AMD driver case as we assume that the GPU will not change during
* the execution of the program. */
const char *defines = GPU_crappy_amd_driver() ? "#define GPU_AMD_DRIVER_BYTE_BUG\n" : nullptr;
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_EDGE_FAC, defines);
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_EDGE_FAC);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1842,7 +1879,7 @@ void draw_subdiv_build_lnor_buffer(const DRWSubdivCache *cache,
return;
}
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LNOR, "#define SUBDIV_POLYGON_OFFSET\n");
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_LNOR);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1870,8 +1907,7 @@ void draw_subdiv_build_edituv_stretch_area_buffer(const DRWSubdivCache *cache,
GPUVertBuf *coarse_data,
GPUVertBuf *subdiv_data)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_UV_STRETCH_AREA,
"#define SUBDIV_POLYGON_OFFSET\n");
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_UV_STRETCH_AREA);
GPU_shader_bind(shader);
int binding_point = 0;
@ -1899,7 +1935,7 @@ void draw_subdiv_build_edituv_stretch_angle_buffer(const DRWSubdivCache *cache,
int uvs_offset,
GPUVertBuf *stretch_angles)
{
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_UV_STRETCH_ANGLE, nullptr);
GPUShader *shader = get_subdiv_shader(SHADER_BUFFER_UV_STRETCH_ANGLE);
GPU_shader_bind(shader);
int binding_point = 0;

View File

@ -420,7 +420,7 @@ std::string DrawMulti::serialize(std::string line_prefix) const
intptr_t offset = grp.start;
if (grp.back_proto_len > 0) {
for (DrawPrototype &proto : prototypes.slice({offset, grp.back_proto_len})) {
for (DrawPrototype &proto : prototypes.slice_safe({offset, grp.back_proto_len})) {
BLI_assert(proto.group_id == group_index);
ResourceHandle handle(proto.resource_handle);
BLI_assert(handle.has_inverted_handedness());
@ -432,7 +432,7 @@ std::string DrawMulti::serialize(std::string line_prefix) const
}
if (grp.front_proto_len > 0) {
for (DrawPrototype &proto : prototypes.slice({offset, grp.front_proto_len})) {
for (DrawPrototype &proto : prototypes.slice_safe({offset, grp.front_proto_len})) {
BLI_assert(proto.group_id == group_index);
ResourceHandle handle(proto.resource_handle);
BLI_assert(!handle.has_inverted_handedness());

View File

@ -45,6 +45,7 @@
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
#include "DNA_userdef_types.h"
#include "DNA_view3d_types.h"
#include "DNA_world_types.h"
#include "ED_gpencil.h"
@ -1247,7 +1248,7 @@ static bool is_compositor_enabled(void)
return false;
}
if (!(DST.draw_ctx.v3d->shading.flag & V3D_SHADING_COMPOSITOR)) {
if (DST.draw_ctx.v3d->shading.use_compositor == V3D_SHADING_USE_COMPOSITOR_DISABLED) {
return false;
}
@ -1263,6 +1264,11 @@ static bool is_compositor_enabled(void)
return false;
}
if (DST.draw_ctx.v3d->shading.use_compositor == V3D_SHADING_USE_COMPOSITOR_CAMERA &&
DST.draw_ctx.rv3d->persp != RV3D_CAMOB) {
return false;
}
return true;
}

View File

@ -248,9 +248,9 @@ void draw_subdiv_extract_pos_nor(const DRWSubdivCache *cache,
void draw_subdiv_interp_custom_data(const DRWSubdivCache *cache,
struct GPUVertBuf *src_data,
struct GPUVertBuf *dst_data,
int comp_type, /*GPUVertCompType*/
int dimensions,
int dst_offset,
bool compress_to_u16);
int dst_offset);
void draw_subdiv_extract_uvs(const DRWSubdivCache *cache,
struct GPUVertBuf *uvs,

View File

@ -58,13 +58,14 @@ static void extract_lines_iter_poly_mesh(const MeshRenderData *mr,
GPUIndexBufBuilder *elb = static_cast<GPUIndexBufBuilder *>(data);
/* Using poly & loop iterator would complicate accessing the adjacent loop. */
const MLoop *mloop = mr->mloop;
if (mr->use_hide || (mr->e_origindex != nullptr)) {
const int *e_origindex = (mr->edit_bmesh) ? mr->e_origindex : nullptr;
if (mr->use_hide || (e_origindex != nullptr)) {
const int ml_index_last = mp->loopstart + (mp->totloop - 1);
int ml_index = ml_index_last, ml_index_next = mp->loopstart;
do {
const MLoop *ml = &mloop[ml_index];
if (!((mr->use_hide && mr->hide_edge && mr->hide_edge[ml->e]) ||
((mr->e_origindex) && (mr->e_origindex[ml->e] == ORIGINDEX_NONE)))) {
((e_origindex) && (e_origindex[ml->e] == ORIGINDEX_NONE)))) {
GPU_indexbuf_set_line_verts(elb, ml->e, ml_index, ml_index_next);
}
else {
@ -108,8 +109,9 @@ static void extract_lines_iter_ledge_mesh(const MeshRenderData *mr,
GPUIndexBufBuilder *elb = static_cast<GPUIndexBufBuilder *>(data);
const int l_index_offset = mr->edge_len + ledge_index;
const int e_index = mr->ledges[ledge_index];
const int *e_origindex = (mr->edit_bmesh) ? mr->e_origindex : nullptr;
if (!((mr->use_hide && mr->hide_edge && mr->hide_edge[med - mr->medge]) ||
((mr->e_origindex) && (mr->e_origindex[e_index] == ORIGINDEX_NONE)))) {
((e_origindex) && (e_origindex[e_index] == ORIGINDEX_NONE)))) {
const int l_index = mr->loop_len + ledge_index * 2;
GPU_indexbuf_set_line_verts(elb, l_index_offset, l_index, l_index + 1);
}
@ -181,6 +183,7 @@ static void extract_lines_loose_geom_subdiv(const DRWSubdivCache *subdiv_cache,
switch (mr->extract_type) {
case MR_EXTRACT_MESH: {
const int *e_origindex = (mr->edit_bmesh) ? mr->e_origindex : nullptr;
if (mr->e_origindex == nullptr) {
const bool *hide_edge = mr->hide_edge;
if (hide_edge) {
@ -205,7 +208,7 @@ static void extract_lines_loose_geom_subdiv(const DRWSubdivCache *subdiv_cache,
for (DRWSubdivLooseEdge edge : loose_edges) {
int e = edge.coarse_edge_index;
if (mr->e_origindex && mr->e_origindex[e] != ORIGINDEX_NONE) {
if (e_origindex && e_origindex[e] != ORIGINDEX_NONE) {
*flags_data++ = hide_edge[edge.coarse_edge_index];
}
else {

View File

@ -114,9 +114,9 @@ static uint gpu_component_size_for_attribute_type(eCustomDataType type)
static GPUVertFetchMode get_fetch_mode_for_type(eCustomDataType type)
{
switch (type) {
case CD_PROP_INT8:
case CD_PROP_INT32:
return GPU_FETCH_INT_TO_FLOAT;
case CD_PROP_COLOR:
case CD_PROP_BYTE_COLOR:
return GPU_FETCH_INT_TO_FLOAT_UNIT;
default:
@ -127,10 +127,12 @@ static GPUVertFetchMode get_fetch_mode_for_type(eCustomDataType type)
static GPUVertCompType get_comp_type_for_type(eCustomDataType type)
{
switch (type) {
case CD_PROP_INT8:
case CD_PROP_INT32:
return GPU_COMP_I32;
case CD_PROP_COLOR:
case CD_PROP_BYTE_COLOR:
/* This should be u8,
* but u16 is required to store the color in linear space without precision loss. */
return GPU_COMP_U16;
default:
return GPU_COMP_F32;
@ -279,16 +281,10 @@ static void extract_attr_generic(const MeshRenderData *mr,
}
}
static void extract_attr_init(
const MeshRenderData *mr, MeshBatchCache *cache, void *buf, void * /*tls_data*/, int index)
static void extract_attr(const MeshRenderData *mr,
GPUVertBuf *vbo,
const DRW_AttributeRequest &request)
{
const DRW_Attributes *attrs_used = &cache->attr_used;
const DRW_AttributeRequest &request = attrs_used->requests[index];
GPUVertBuf *vbo = static_cast<GPUVertBuf *>(buf);
init_vbo_for_attribute(*mr, vbo, request, false, uint32_t(mr->loop_len));
/* TODO(@kevindietrich): float3 is used for scalar attributes as the implicit conversion done by
* OpenGL to vec4 for a scalar `s` will produce a `vec4(s, 0, 0, 1)`. However, following the
* Blender convention, it should be `vec4(s, s, s, 1)`. This could be resolved using a similar
@ -298,10 +294,10 @@ static void extract_attr_init(
extract_attr_generic<bool, float3>(mr, vbo, request);
break;
case CD_PROP_INT8:
extract_attr_generic<int8_t, float3>(mr, vbo, request);
extract_attr_generic<int8_t, int3>(mr, vbo, request);
break;
case CD_PROP_INT32:
extract_attr_generic<int32_t, float3>(mr, vbo, request);
extract_attr_generic<int32_t, int3>(mr, vbo, request);
break;
case CD_PROP_FLOAT:
extract_attr_generic<float, float3>(mr, vbo, request);
@ -313,7 +309,7 @@ static void extract_attr_init(
extract_attr_generic<float3>(mr, vbo, request);
break;
case CD_PROP_COLOR:
extract_attr_generic<MPropCol, gpuMeshCol>(mr, vbo, request);
extract_attr_generic<float4>(mr, vbo, request);
break;
case CD_PROP_BYTE_COLOR:
extract_attr_generic<ColorGeometry4b, gpuMeshCol>(mr, vbo, request);
@ -323,6 +319,19 @@ static void extract_attr_init(
}
}
static void extract_attr_init(
const MeshRenderData *mr, MeshBatchCache *cache, void *buf, void * /*tls_data*/, int index)
{
const DRW_Attributes *attrs_used = &cache->attr_used;
const DRW_AttributeRequest &request = attrs_used->requests[index];
GPUVertBuf *vbo = static_cast<GPUVertBuf *>(buf);
init_vbo_for_attribute(*mr, vbo, request, false, uint32_t(mr->loop_len));
extract_attr(mr, vbo, request);
}
static void extract_attr_init_subdiv(const DRWSubdivCache *subdiv_cache,
const MeshRenderData *mr,
MeshBatchCache *cache,
@ -335,55 +344,26 @@ static void extract_attr_init_subdiv(const DRWSubdivCache *subdiv_cache,
Mesh *coarse_mesh = subdiv_cache->mesh;
GPUVertCompType comp_type = get_comp_type_for_type(request.cd_type);
GPUVertFetchMode fetch_mode = get_fetch_mode_for_type(request.cd_type);
const uint32_t dimensions = gpu_component_size_for_attribute_type(request.cd_type);
/* Prepare VBO for coarse data. The compute shader only expects floats. */
GPUVertBuf *src_data = GPU_vertbuf_calloc();
GPUVertFormat coarse_format = {0};
GPU_vertformat_attr_add(&coarse_format, "data", GPU_COMP_F32, dimensions, GPU_FETCH_FLOAT);
GPU_vertformat_attr_add(&coarse_format, "data", comp_type, dimensions, fetch_mode);
GPU_vertbuf_init_with_format_ex(src_data, &coarse_format, GPU_USAGE_STATIC);
GPU_vertbuf_data_alloc(src_data, uint32_t(coarse_mesh->totloop));
switch (request.cd_type) {
case CD_PROP_BOOL:
extract_attr_generic<bool, float3>(mr, src_data, request);
break;
case CD_PROP_INT8:
extract_attr_generic<int8_t, float3>(mr, src_data, request);
break;
case CD_PROP_INT32:
extract_attr_generic<int32_t, float3>(mr, src_data, request);
break;
case CD_PROP_FLOAT:
extract_attr_generic<float, float3>(mr, src_data, request);
break;
case CD_PROP_FLOAT2:
extract_attr_generic<float2>(mr, src_data, request);
break;
case CD_PROP_FLOAT3:
extract_attr_generic<float3>(mr, src_data, request);
break;
case CD_PROP_COLOR:
extract_attr_generic<MPropCol, gpuMeshCol>(mr, src_data, request);
break;
case CD_PROP_BYTE_COLOR:
extract_attr_generic<ColorGeometry4b, gpuMeshCol>(mr, src_data, request);
break;
default:
BLI_assert_unreachable();
}
extract_attr(mr, src_data, request);
GPUVertBuf *dst_buffer = static_cast<GPUVertBuf *>(buffer);
init_vbo_for_attribute(*mr, dst_buffer, request, true, subdiv_cache->num_subdiv_loops);
/* Ensure data is uploaded properly. */
GPU_vertbuf_tag_dirty(src_data);
draw_subdiv_interp_custom_data(subdiv_cache,
src_data,
dst_buffer,
int(dimensions),
0,
ELEM(request.cd_type, CD_PROP_COLOR, CD_PROP_BYTE_COLOR));
draw_subdiv_interp_custom_data(
subdiv_cache, src_data, dst_buffer, comp_type, int(dimensions), 0);
GPU_vertbuf_discard(src_data);
}

View File

@ -251,7 +251,7 @@ static void extract_pos_nor_init_subdiv(const DRWSubdivCache *subdiv_cache,
dst_custom_normals, get_custom_normals_format(), subdiv_cache->num_subdiv_loops);
draw_subdiv_interp_custom_data(
subdiv_cache, src_custom_normals, dst_custom_normals, 3, 0, false);
subdiv_cache, src_custom_normals, dst_custom_normals, GPU_COMP_F32, 3, 0);
draw_subdiv_finalize_custom_normals(subdiv_cache, dst_custom_normals, vbo);

View File

@ -156,7 +156,7 @@ static void extract_sculpt_data_init_subdiv(const DRWSubdivCache *subdiv_cache,
GPU_vertbuf_init_build_on_device(
subdiv_mask_vbo, &mask_format, subdiv_cache->num_subdiv_loops);
draw_subdiv_interp_custom_data(subdiv_cache, mask_vbo, subdiv_mask_vbo, 1, 0, false);
draw_subdiv_interp_custom_data(subdiv_cache, mask_vbo, subdiv_mask_vbo, GPU_COMP_F32, 1, 0);
}
/* Then, gather face sets. */

View File

@ -303,7 +303,8 @@ static void extract_tan_init_subdiv(const DRWSubdivCache *subdiv_cache,
GPU_vertbuf_tag_dirty(coarse_vbo);
/* Include stride in offset. */
const int dst_offset = int(subdiv_cache->num_subdiv_loops) * 4 * pack_layer_index++;
draw_subdiv_interp_custom_data(subdiv_cache, coarse_vbo, dst_buffer, 4, dst_offset, false);
draw_subdiv_interp_custom_data(
subdiv_cache, coarse_vbo, dst_buffer, GPU_COMP_F32, 4, dst_offset);
}
if (use_orco_tan) {
float(*tan_data)[4] = (float(*)[4])GPU_vertbuf_get_data(coarse_vbo);
@ -318,7 +319,8 @@ static void extract_tan_init_subdiv(const DRWSubdivCache *subdiv_cache,
GPU_vertbuf_tag_dirty(coarse_vbo);
/* Include stride in offset. */
const int dst_offset = int(subdiv_cache->num_subdiv_loops) * 4 * pack_layer_index++;
draw_subdiv_interp_custom_data(subdiv_cache, coarse_vbo, dst_buffer, 4, dst_offset, false);
draw_subdiv_interp_custom_data(
subdiv_cache, coarse_vbo, dst_buffer, GPU_COMP_F32, 4, dst_offset);
}
CustomData_free(&loop_data, mr->loop_len);

View File

@ -187,7 +187,7 @@ static void extract_weights_init_subdiv(const DRWSubdivCache *subdiv_cache,
}
}
draw_subdiv_interp_custom_data(subdiv_cache, coarse_weights, vbo, 1, 0, false);
draw_subdiv_interp_custom_data(subdiv_cache, coarse_weights, vbo, GPU_COMP_F32, 1, 0);
GPU_vertbuf_discard(coarse_weights);
}

View File

@ -3,8 +3,10 @@
layout(std430, binding = 1) readonly restrict buffer sourceBuffer
{
#ifdef GPU_FETCH_U16_TO_FLOAT
#if defined(GPU_COMP_U16)
uint src_data[];
#elif defined(GPU_COMP_I32)
int src_data[];
#else
float src_data[];
#endif
@ -27,8 +29,10 @@ layout(std430, binding = 4) readonly restrict buffer extraCoarseFaceData
layout(std430, binding = 5) writeonly restrict buffer destBuffer
{
#ifdef GPU_FETCH_U16_TO_FLOAT
#if defined(GPU_COMP_U16)
uint dst_data[];
#elif defined(GPU_COMP_I32)
int dst_data[];
#else
float dst_data[];
#endif
@ -48,7 +52,7 @@ void clear(inout Vertex v)
Vertex read_vertex(uint index)
{
Vertex result;
#ifdef GPU_FETCH_U16_TO_FLOAT
#if defined(GPU_COMP_U16)
uint base_index = index * 2;
if (DIMENSIONS == 4) {
uint xy = src_data[base_index];
@ -68,6 +72,11 @@ Vertex read_vertex(uint index)
/* This case is unsupported for now. */
clear(result);
}
#elif defined(GPU_COMP_I32)
uint base_index = index * DIMENSIONS;
for (int i = 0; i < DIMENSIONS; i++) {
result.vertex_data[i] = float(src_data[base_index + i]);
}
#else
uint base_index = index * DIMENSIONS;
for (int i = 0; i < DIMENSIONS; i++) {
@ -79,7 +88,7 @@ Vertex read_vertex(uint index)
void write_vertex(uint index, Vertex v)
{
#ifdef GPU_FETCH_U16_TO_FLOAT
#if defined(GPU_COMP_U16)
uint base_index = dst_offset + index * 2;
if (DIMENSIONS == 4) {
uint x = uint(v.vertex_data[0] * 65535.0);
@ -97,6 +106,11 @@ void write_vertex(uint index, Vertex v)
/* This case is unsupported for now. */
dst_data[base_index] = 0;
}
#elif defined(GPU_COMP_I32)
uint base_index = dst_offset + index * DIMENSIONS;
for (int i = 0; i < DIMENSIONS; i++) {
dst_data[base_index + i] = int(round(v.vertex_data[i]));
}
#else
uint base_index = dst_offset + index * DIMENSIONS;
for (int i = 0; i < DIMENSIONS; i++) {
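In the GPU_COMP_U16 path above, four normalized 16-bit components travel packed two per 32-bit word. A self-contained C++ sketch of the same (un)packing arithmetic (hypothetical helpers mirroring the GLSL):

#include <cstdint>

/* Pack two floats in [0, 1] into one 32-bit word, low half first. */
static uint32_t pack2_u16(const float a, const float b)
{
  return uint32_t(a * 65535.0f) | (uint32_t(b * 65535.0f) << 16);
}

/* Recover the two normalized floats. */
static void unpack2_u16(const uint32_t v, float &a, float &b)
{
  a = float(v & 0xFFFFu) / 65535.0f;
  b = float(v >> 16) / 65535.0f;
}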

View File

@ -1605,6 +1605,14 @@ static void icon_preview_startjob_all_sizes(void *customdata,
continue;
}
/* Workaround: Skip preview renders for linked IDs. Preview rendering can be slow and even
* freeze the UI (e.g. on Eevee shader compilation). And since the result will never be stored
* in a file, it's done every time the file is reloaded, so this becomes a frequent annoyance.
*/
if (!use_solid_render_mode && ip->id && ID_IS_LINKED(ip->id)) {
continue;
}
#ifndef NDEBUG
{
int size_index = icon_previewimg_size_index_get(cur_size, prv);

View File

@ -3106,7 +3106,9 @@ static int keyframe_jump_exec(bContext *C, wmOperator *op)
while ((ak != NULL) && (done == false)) {
if (scene->r.cfra != (int)ak->cfra) {
/* this changes the frame, so set the frame and we're done */
scene->r.cfra = (int)ak->cfra;
const int whole_frame = (int)ak->cfra;
scene->r.cfra = whole_frame;
scene->r.subframe = ak->cfra - whole_frame;
done = true;
}
else {
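The change above preserves the fractional part of the key time instead of truncating it. A worked example with illustrative values:

/* For a key stored at ak->cfra = 12.25f: */
const int whole_frame = (int)12.25f; /* == 12 */
/* scene->r.cfra     = 12                                        */
/* scene->r.subframe = 12.25f - 12 = 0.25f (previously discarded) */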

View File

@ -42,7 +42,7 @@ set(SRC
curves_sculpt_smooth.cc
curves_sculpt_snake_hook.cc
paint_canvas.cc
paint_cursor.c
paint_cursor.cc
paint_curve.c
paint_curve_undo.c
paint_hide.c
@ -50,7 +50,7 @@ set(SRC
paint_image_2d.c
paint_image_2d_curve_mask.cc
paint_image_ops_paint.cc
paint_image_proj.c
paint_image_proj.cc
paint_mask.c
paint_ops.c
paint_stroke.c

View File

@ -27,6 +27,7 @@
#include "BKE_context.h"
#include "BKE_curve.h"
#include "BKE_image.h"
#include "BKE_node_runtime.hh"
#include "BKE_object.h"
#include "BKE_paint.h"
@ -64,21 +65,21 @@
* There is also some ugliness with sculpt-specific code.
*/
typedef struct TexSnapshot {
struct TexSnapshot {
GPUTexture *overlay_texture;
int winx;
int winy;
int old_size;
float old_zoom;
bool old_col;
} TexSnapshot;
};
typedef struct CursorSnapshot {
struct CursorSnapshot {
GPUTexture *overlay_texture;
int size;
int zoom;
int curve_preset;
} CursorSnapshot;
};
static TexSnapshot primary_snap = {0};
static TexSnapshot secondary_snap = {0};
@ -140,7 +141,7 @@ static void load_tex_task_cb_ex(void *__restrict userdata,
const int j,
const TaskParallelTLS *__restrict tls)
{
LoadTexData *data = userdata;
LoadTexData *data = static_cast<LoadTexData *>(userdata);
Brush *br = data->br;
ViewContext *vc = data->vc;
@ -154,14 +155,14 @@ static void load_tex_task_cb_ex(void *__restrict userdata,
const float radius = data->radius;
bool convert_to_linear = false;
struct ColorSpace *colorspace = NULL;
struct ColorSpace *colorspace = nullptr;
const int thread_id = BLI_task_parallel_thread_id(tls);
if (mtex->tex && mtex->tex->type == TEX_IMAGE && mtex->tex->ima) {
ImBuf *tex_ibuf = BKE_image_pool_acquire_ibuf(mtex->tex->ima, &mtex->tex->iuser, pool);
/* For consistency, sampling always returns color in linear space. */
if (tex_ibuf && tex_ibuf->rect_float == NULL) {
if (tex_ibuf && tex_ibuf->rect_float == nullptr) {
convert_to_linear = true;
colorspace = tex_ibuf->rect_colorspace;
}
@ -239,7 +240,7 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
MTex *mtex = (primary) ? &br->mtex : &br->mask_mtex;
ePaintOverlayControlFlags overlay_flags = BKE_paint_get_overlay_flags();
uchar *buffer = NULL;
uchar *buffer = nullptr;
int size;
bool refresh;
@ -254,7 +255,7 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
init = (target->overlay_texture != 0);
if (refresh) {
struct ImagePool *pool = NULL;
struct ImagePool *pool = nullptr;
/* Stencil is rotated later. */
const float rotation = (mtex->brush_map_mode != MTEX_MAP_MODE_STENCIL) ? -mtex->rot : 0.0f;
const float radius = BKE_brush_size_get(vc->scene, br) * zoom;
@ -286,7 +287,7 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
if (target->old_size != size || target->old_col != col) {
if (target->overlay_texture) {
GPU_texture_free(target->overlay_texture);
target->overlay_texture = NULL;
target->overlay_texture = nullptr;
}
init = false;
@ -294,10 +295,10 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
target->old_col = col;
}
if (col) {
buffer = MEM_mallocN(sizeof(uchar) * size * size * 4, "load_tex");
buffer = static_cast<uchar *>(MEM_mallocN(sizeof(uchar) * size * size * 4, "load_tex"));
}
else {
buffer = MEM_mallocN(sizeof(uchar) * size * size, "load_tex");
buffer = static_cast<uchar *>(MEM_mallocN(sizeof(uchar) * size * size, "load_tex"));
}
pool = BKE_image_pool_new();
@ -307,24 +308,23 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
ntreeTexBeginExecTree(mtex->tex->nodetree);
}
LoadTexData data = {
.br = br,
.vc = vc,
.mtex = mtex,
.buffer = buffer,
.col = col,
.pool = pool,
.size = size,
.rotation = rotation,
.radius = radius,
};
LoadTexData data{};
data.br = br;
data.vc = vc;
data.mtex = mtex;
data.buffer = buffer;
data.col = col;
data.pool = pool;
data.size = size;
data.rotation = rotation;
data.radius = radius;
TaskParallelSettings settings;
BLI_parallel_range_settings_defaults(&settings);
BLI_task_parallel_range(0, size, &data, load_tex_task_cb_ex, &settings);
if (mtex->tex && mtex->tex->nodetree) {
ntreeTexEndExecTree(mtex->tex->nodetree->execdata);
ntreeTexEndExecTree(mtex->tex->nodetree->runtime->execdata);
}
if (pool) {
@ -334,7 +334,7 @@ static int load_tex(Brush *br, ViewContext *vc, float zoom, bool col, bool prima
if (!target->overlay_texture) {
eGPUTextureFormat format = col ? GPU_RGBA8 : GPU_R8;
target->overlay_texture = GPU_texture_create_2d(
"paint_cursor_overlay", size, size, 1, format, NULL);
"paint_cursor_overlay", size, size, 1, format, nullptr);
GPU_texture_update(target->overlay_texture, GPU_DATA_UBYTE, buffer);
if (!col) {
@ -363,7 +363,7 @@ static void load_tex_cursor_task_cb(void *__restrict userdata,
const int j,
const TaskParallelTLS *__restrict UNUSED(tls))
{
LoadTexData *data = userdata;
LoadTexData *data = static_cast<LoadTexData *>(userdata);
Brush *br = data->br;
uchar *buffer = data->buffer;
@ -396,7 +396,7 @@ static int load_tex_cursor(Brush *br, ViewContext *vc, float zoom)
bool init;
ePaintOverlayControlFlags overlay_flags = BKE_paint_get_overlay_flags();
uchar *buffer = NULL;
uchar *buffer = nullptr;
int size;
const bool refresh = !cursor_snap.overlay_texture ||
@ -430,22 +430,21 @@ static int load_tex_cursor(Brush *br, ViewContext *vc, float zoom)
if (cursor_snap.size != size) {
if (cursor_snap.overlay_texture) {
GPU_texture_free(cursor_snap.overlay_texture);
cursor_snap.overlay_texture = NULL;
cursor_snap.overlay_texture = nullptr;
}
init = false;
cursor_snap.size = size;
}
buffer = MEM_mallocN(sizeof(uchar) * size * size, "load_tex");
buffer = static_cast<uchar *>(MEM_mallocN(sizeof(uchar) * size * size, "load_tex"));
BKE_curvemapping_init(br->curve);
LoadTexData data = {
.br = br,
.buffer = buffer,
.size = size,
};
LoadTexData data{};
data.br = br;
data.buffer = buffer;
data.size = size;
TaskParallelSettings settings;
BLI_parallel_range_settings_defaults(&settings);
@ -453,7 +452,7 @@ static int load_tex_cursor(Brush *br, ViewContext *vc, float zoom)
if (!cursor_snap.overlay_texture) {
cursor_snap.overlay_texture = GPU_texture_create_2d(
"cursor_snap_overaly", size, size, 1, GPU_R8, NULL);
"cursor_snap_overaly", size, size, 1, GPU_R8, nullptr);
GPU_texture_update(cursor_snap.overlay_texture, GPU_DATA_UBYTE, buffer);
GPU_texture_swizzle_set(cursor_snap.overlay_texture, "rrrr");
@ -1096,7 +1095,7 @@ static void cursor_draw_point_with_symmetry(const uint gpuattr,
if (i == 0 || (symm & i && (symm != 5 || i != 3) && (symm != 6 || !ELEM(i, 3, 5)))) {
/* Axis Symmetry. */
flip_v3_v3(location, true_location, (char)i);
flip_v3_v3(location, true_location, ePaintSymmetryFlags(i));
cursor_draw_point_screen_space(gpuattr, region, location, ob->object_to_world, 3);
/* Tiling. */
@ -1106,7 +1105,7 @@ static void cursor_draw_point_with_symmetry(const uint gpuattr,
for (char raxis = 0; raxis < 3; raxis++) {
for (int r = 1; r < sd->radial_symm[raxis]; r++) {
float angle = 2 * M_PI * r / sd->radial_symm[(int)raxis];
flip_v3_v3(location, true_location, (char)i);
flip_v3_v3(location, true_location, ePaintSymmetryFlags(i));
unit_m4(symm_rot_mat);
rotate_m4(symm_rot_mat, raxis + 'X', angle);
mul_m4_v3(symm_rot_mat, location);
@ -1263,11 +1262,11 @@ static bool paint_cursor_context_init(bContext *C,
pcontext->scene = CTX_data_scene(C);
pcontext->ups = &pcontext->scene->toolsettings->unified_paint_settings;
pcontext->paint = BKE_paint_get_active_from_context(C);
if (pcontext->paint == NULL) {
if (pcontext->paint == nullptr) {
return false;
}
pcontext->brush = BKE_paint_brush(pcontext->paint);
if (pcontext->brush == NULL) {
if (pcontext->brush == nullptr) {
return false;
}
pcontext->mode = BKE_paintmode_get_active_from_context(C);
@ -1306,7 +1305,7 @@ static bool paint_cursor_context_init(bContext *C,
pcontext->outline_alpha = pcontext->brush->add_col[3];
Object *active_object = pcontext->vc.obact;
pcontext->ss = active_object ? active_object->sculpt : NULL;
pcontext->ss = active_object ? active_object->sculpt : nullptr;
if (pcontext->ss && pcontext->ss->draw_faded_cursor) {
pcontext->outline_alpha = 0.3f;
@ -1351,7 +1350,7 @@ static void paint_cursor_update_pixel_radius(PaintCursorContext *pcontext)
static void paint_cursor_sculpt_session_update_and_init(PaintCursorContext *pcontext)
{
BLI_assert(pcontext->ss != NULL);
BLI_assert(pcontext->ss != nullptr);
BLI_assert(pcontext->mode == PAINT_MODE_SCULPT);
bContext *C = pcontext->C;
@ -1363,8 +1362,8 @@ static void paint_cursor_sculpt_session_update_and_init(PaintCursorContext *pcon
SculptCursorGeometryInfo gi;
const float mval_fl[2] = {
pcontext->x - pcontext->region->winrct.xmin,
pcontext->y - pcontext->region->winrct.ymin,
float(pcontext->x - pcontext->region->winrct.xmin),
float(pcontext->y - pcontext->region->winrct.ymin),
};
/* This updates the active vertex, which is needed for most of the Sculpt/Vertex Colors tools to
@ -1391,7 +1390,7 @@ static void paint_cursor_sculpt_session_update_and_init(PaintCursorContext *pcon
paint_cursor_update_unprojected_radius(ups, brush, vc, pcontext->scene_space_location);
}
pcontext->is_multires = ss->pbvh != NULL && BKE_pbvh_type(ss->pbvh) == PBVH_GRIDS;
pcontext->is_multires = ss->pbvh != nullptr && BKE_pbvh_type(ss->pbvh) == PBVH_GRIDS;
pcontext->sd = CTX_data_tool_settings(pcontext->C)->sculpt;
}
@ -1613,7 +1612,7 @@ static void paint_cursor_draw_3d_view_brush_cursor_inactive(PaintCursorContext *
if (is_brush_tool && brush->sculpt_tool == SCULPT_TOOL_POSE) {
/* Just after switching to the Pose Brush, the active vertex can be the same and the
* cursor won't be tagged to update, so always initialize the preview chain if it is
* null before drawing it. */
* nullptr before drawing it. */
SculptSession *ss = pcontext->ss;
if (update_previews || !ss->pose_ik_chain_preview) {
BKE_sculpt_update_object_for_edit(
@ -1656,9 +1655,9 @@ static void paint_cursor_draw_3d_view_brush_cursor_inactive(PaintCursorContext *
pcontext->scene,
pcontext->region,
CTX_wm_view3d(pcontext->C),
NULL,
NULL,
NULL);
nullptr,
nullptr,
nullptr);
GPU_matrix_push();
GPU_matrix_mul(pcontext->vc.obact->object_to_world);
@ -1721,7 +1720,7 @@ static void paint_cursor_draw_3d_view_brush_cursor_inactive(PaintCursorContext *
static void paint_cursor_cursor_draw_3d_view_brush_cursor_active(PaintCursorContext *pcontext)
{
BLI_assert(pcontext->ss != NULL);
BLI_assert(pcontext->ss != nullptr);
BLI_assert(pcontext->mode == PAINT_MODE_SCULPT);
SculptSession *ss = pcontext->ss;
@ -1748,9 +1747,9 @@ static void paint_cursor_cursor_draw_3d_view_brush_cursor_active(PaintCursorCont
pcontext->scene,
pcontext->region,
CTX_wm_view3d(pcontext->C),
NULL,
NULL,
NULL);
nullptr,
nullptr,
nullptr);
GPU_matrix_push();
GPU_matrix_mul(pcontext->vc.obact->object_to_world);
@ -1940,7 +1939,7 @@ void ED_paint_cursor_start(Paint *p, bool (*poll)(bContext *C))
{
if (p && !p->paint_cursor) {
p->paint_cursor = WM_paint_cursor_activate(
SPACE_TYPE_ANY, RGN_TYPE_ANY, poll, paint_draw_cursor, NULL);
SPACE_TYPE_ANY, RGN_TYPE_ANY, poll, paint_draw_cursor, nullptr);
}
/* Invalidate the paint cursors. */

View File

@ -34,6 +34,7 @@
#include "BKE_main.h"
#include "BKE_material.h"
#include "BKE_mesh.h"
#include "BKE_node_runtime.hh"
#include "BKE_paint.h"
#include "NOD_texture.h"
@ -395,11 +396,11 @@ void paint_brush_exit_tex(Brush *brush)
if (brush) {
MTex *mtex = &brush->mtex;
if (mtex->tex && mtex->tex->nodetree) {
ntreeTexEndExecTree(mtex->tex->nodetree->execdata);
ntreeTexEndExecTree(mtex->tex->nodetree->runtime->execdata);
}
mtex = &brush->mask_mtex;
if (mtex->tex && mtex->tex->nodetree) {
ntreeTexEndExecTree(mtex->tex->nodetree->execdata);
ntreeTexEndExecTree(mtex->tex->nodetree->runtime->execdata);
}
}
}

View File

@ -663,10 +663,11 @@ static bool sculpt_gesture_is_effected_lasso(SculptGestureContext *sgcontext, co
return BLI_BITMAP_TEST_BOOL(lasso->mask_px, scr_co_s[1] * lasso->width + scr_co_s[0]);
}
static bool sculpt_gesture_is_vertex_effected(SculptGestureContext *sgcontext, PBVHVertexIter *vd)
static bool sculpt_gesture_is_vertex_effected(SculptGestureContext *sgcontext, PBVHVertRef vertex)
{
float vertex_normal[3];
SCULPT_vertex_normal_get(sgcontext->ss, vd->vertex, vertex_normal);
const float *co = SCULPT_vertex_co_get(sgcontext->ss, vertex);
SCULPT_vertex_normal_get(sgcontext->ss, vertex, vertex_normal);
float dot = dot_v3v3(sgcontext->view_normal, vertex_normal);
const bool is_effected_front_face = !(sgcontext->front_faces_only && dot < 0.0f);
@ -676,20 +677,31 @@ static bool sculpt_gesture_is_vertex_effected(SculptGestureContext *sgcontext, P
switch (sgcontext->shape_type) {
case SCULPT_GESTURE_SHAPE_BOX:
return isect_point_planes_v3(sgcontext->clip_planes, 4, vd->co);
return isect_point_planes_v3(sgcontext->clip_planes, 4, co);
case SCULPT_GESTURE_SHAPE_LASSO:
return sculpt_gesture_is_effected_lasso(sgcontext, vd->co);
return sculpt_gesture_is_effected_lasso(sgcontext, co);
case SCULPT_GESTURE_SHAPE_LINE:
if (sgcontext->line.use_side_planes) {
return plane_point_side_v3(sgcontext->line.plane, vd->co) > 0.0f &&
plane_point_side_v3(sgcontext->line.side_plane[0], vd->co) > 0.0f &&
plane_point_side_v3(sgcontext->line.side_plane[1], vd->co) > 0.0f;
return plane_point_side_v3(sgcontext->line.plane, co) > 0.0f &&
plane_point_side_v3(sgcontext->line.side_plane[0], co) > 0.0f &&
plane_point_side_v3(sgcontext->line.side_plane[1], co) > 0.0f;
}
return plane_point_side_v3(sgcontext->line.plane, vd->co) > 0.0f;
return plane_point_side_v3(sgcontext->line.plane, co) > 0.0f;
}
return false;
}
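/* A face is considered effected when any of its vertices passes the per-vertex test above. */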
static bool sculpt_gesture_is_face_effected(SculptGestureContext *sgcontext, PBVHFaceIter *fd)
{
for (int i = 0; i < fd->verts_num; i++) {
if (sculpt_gesture_is_vertex_effected(sgcontext, fd->verts[i])) {
return true;
}
}
return false;
}
static void sculpt_gesture_apply(bContext *C, SculptGestureContext *sgcontext, wmOperator *op)
{
SculptGestureOperation *operation = sgcontext->operation;
@ -728,9 +740,6 @@ static void sculpt_gesture_face_set_begin(bContext *C, SculptGestureContext *sgc
{
Depsgraph *depsgraph = CTX_data_depsgraph_pointer(C);
BKE_sculpt_update_object_for_edit(depsgraph, sgcontext->vc.obact, true, false, false);
/* Face Sets modifications do a single undo push. */
SCULPT_undo_push_node(sgcontext->vc.obact, NULL, SCULPT_UNDO_FACE_SETS);
}
static void face_set_gesture_apply_task_cb(void *__restrict userdata,
@ -741,16 +750,18 @@ static void face_set_gesture_apply_task_cb(void *__restrict userdata,
SculptGestureFaceSetOperation *face_set_operation = (SculptGestureFaceSetOperation *)
sgcontext->operation;
PBVHNode *node = sgcontext->nodes[i];
PBVHVertexIter vd;
bool any_updated = false;
BKE_pbvh_vertex_iter_begin (sgcontext->ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
if (sculpt_gesture_is_vertex_effected(sgcontext, &vd)) {
SCULPT_vertex_face_set_set(sgcontext->ss, vd.vertex, face_set_operation->new_face_set_id);
SCULPT_undo_push_node(sgcontext->vc.obact, node, SCULPT_UNDO_FACE_SETS);
PBVHFaceIter fd;
BKE_pbvh_face_iter_begin (sgcontext->ss->pbvh, node, fd) {
if (sculpt_gesture_is_face_effected(sgcontext, &fd)) {
SCULPT_face_set_set(sgcontext->ss, fd.face, face_set_operation->new_face_set_id);
any_updated = true;
}
}
BKE_pbvh_vertex_iter_end;
BKE_pbvh_face_iter_end(fd);
if (any_updated) {
BKE_pbvh_node_mark_update_visibility(node);
@ -821,7 +832,7 @@ static void mask_gesture_apply_task_cb(void *__restrict userdata,
bool redraw = false;
BKE_pbvh_vertex_iter_begin (sgcontext->ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
if (sculpt_gesture_is_vertex_effected(sgcontext, &vd)) {
if (sculpt_gesture_is_vertex_effected(sgcontext, vd.vertex)) {
float prevmask = vd.mask ? *vd.mask : 0.0f;
if (!any_masked) {
any_masked = true;
@ -1442,7 +1453,7 @@ static void project_line_gesture_apply_task_cb(void *__restrict userdata,
SCULPT_undo_push_node(sgcontext->vc.obact, node, SCULPT_UNDO_COORDS);
BKE_pbvh_vertex_iter_begin (sgcontext->ss->pbvh, node, vd, PBVH_ITER_UNIQUE) {
if (!sculpt_gesture_is_vertex_effected(sgcontext, &vd)) {
if (!sculpt_gesture_is_vertex_effected(sgcontext, vd.vertex)) {
continue;
}

View File

@ -43,6 +43,7 @@
#include "BKE_mesh_mapping.h"
#include "BKE_modifier.h"
#include "BKE_multires.h"
#include "BKE_node_runtime.hh"
#include "BKE_object.h"
#include "BKE_paint.h"
#include "BKE_pbvh.h"
@ -1407,6 +1408,39 @@ void SCULPT_orig_vert_data_update(SculptOrigVertData *orig_data, PBVHVertexIter
}
}
void SCULPT_orig_face_data_unode_init(SculptOrigFaceData *data, Object *ob, SculptUndoNode *unode)
{
SculptSession *ss = ob->sculpt;
BMesh *bm = ss->bm;
memset(data, 0, sizeof(*data));
data->unode = unode;
if (bm) {
data->bm_log = ss->bm_log;
}
else {
data->face_sets = unode->face_sets;
}
}
void SCULPT_orig_face_data_init(SculptOrigFaceData *data,
Object *ob,
PBVHNode *node,
SculptUndoType type)
{
SculptUndoNode *unode;
unode = SCULPT_undo_push_node(ob, node, type);
SCULPT_orig_face_data_unode_init(data, ob, unode);
}
void SCULPT_orig_face_data_update(SculptOrigFaceData *orig_data, PBVHFaceIter *iter)
{
if (orig_data->unode->type == SCULPT_UNDO_FACE_SETS) {
orig_data->face_set = orig_data->face_sets ? orig_data->face_sets[iter->i] : false;
}
}
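
For reference, a minimal sketch (not part of this commit) of how the three helpers above compose when restoring face sets for one PBVH node; `restore_node_face_sets` is a hypothetical name, and the iterator usage mirrors the restore loop changed elsewhere in this diff:

```cpp
static void restore_node_face_sets(Object *ob, SculptSession *ss, PBVHNode *node)
{
  /* Pushes an undo node of the requested type and wires up the original face data. */
  SculptOrigFaceData orig_face_data;
  SCULPT_orig_face_data_init(&orig_face_data, ob, node, SCULPT_UNDO_FACE_SETS);

  PBVHFaceIter fd;
  BKE_pbvh_face_iter_begin (ss->pbvh, node, fd) {
    SCULPT_orig_face_data_update(&orig_face_data, &fd);
    if (fd.face_set) {
      *fd.face_set = orig_face_data.face_set;
    }
  }
  BKE_pbvh_face_iter_end(fd);
}
```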
static void sculpt_rake_data_update(SculptRakeData *srd, const float co[3])
{
float rake_dist = len_v3v3(srd->follow_co, co);
@ -1458,6 +1492,9 @@ static void paint_mesh_restore_co_task_cb(void *__restrict userdata,
case SCULPT_TOOL_SMEAR:
type = SCULPT_UNDO_COLOR;
break;
case SCULPT_TOOL_DRAW_FACE_SETS:
type = ss->cache->alt_smooth ? SCULPT_UNDO_COORDS : SCULPT_UNDO_FACE_SETS;
break;
default:
type = SCULPT_UNDO_COORDS;
break;
@ -1481,6 +1518,9 @@ static void paint_mesh_restore_co_task_cb(void *__restrict userdata,
case SCULPT_UNDO_COLOR:
BKE_pbvh_node_mark_update_color(data->nodes[n]);
break;
case SCULPT_UNDO_FACE_SETS:
BKE_pbvh_node_mark_update_face_sets(data->nodes[n]);
break;
case SCULPT_UNDO_COORDS:
BKE_pbvh_node_mark_update(data->nodes[n]);
break;
@ -1489,30 +1529,50 @@ static void paint_mesh_restore_co_task_cb(void *__restrict userdata,
}
PBVHVertexIter vd;
SculptOrigVertData orig_data;
SculptOrigVertData orig_vert_data;
SculptOrigFaceData orig_face_data;
SCULPT_orig_vert_data_unode_init(&orig_data, data->ob, unode);
if (type != SCULPT_UNDO_FACE_SETS) {
SCULPT_orig_vert_data_unode_init(&orig_vert_data, data->ob, unode);
}
else {
SCULPT_orig_face_data_unode_init(&orig_face_data, data->ob, unode);
}
if (unode->type == SCULPT_UNDO_FACE_SETS) {
PBVHFaceIter fd;
BKE_pbvh_face_iter_begin (ss->pbvh, data->nodes[n], fd) {
SCULPT_orig_face_data_update(&orig_face_data, &fd);
if (fd.face_set) {
*fd.face_set = orig_face_data.face_set;
}
}
BKE_pbvh_face_iter_end(fd);
return;
}
BKE_pbvh_vertex_iter_begin (ss->pbvh, data->nodes[n], vd, PBVH_ITER_UNIQUE) {
SCULPT_orig_vert_data_update(&orig_data, &vd);
SCULPT_orig_vert_data_update(&orig_vert_data, &vd);
if (orig_data.unode->type == SCULPT_UNDO_COORDS) {
copy_v3_v3(vd.co, orig_data.co);
if (orig_vert_data.unode->type == SCULPT_UNDO_COORDS) {
copy_v3_v3(vd.co, orig_vert_data.co);
if (vd.no) {
copy_v3_v3(vd.no, orig_data.no);
copy_v3_v3(vd.no, orig_vert_data.no);
}
else {
copy_v3_v3(vd.fno, orig_data.no);
copy_v3_v3(vd.fno, orig_vert_data.no);
}
if (vd.mvert) {
BKE_pbvh_vert_tag_update_normal(ss->pbvh, vd.vertex);
}
}
else if (orig_data.unode->type == SCULPT_UNDO_MASK) {
*vd.mask = orig_data.mask;
else if (orig_vert_data.unode->type == SCULPT_UNDO_MASK) {
*vd.mask = orig_vert_data.mask;
}
else if (orig_data.unode->type == SCULPT_UNDO_COLOR) {
SCULPT_vertex_color_set(ss, vd.vertex, orig_data.col);
else if (orig_vert_data.unode->type == SCULPT_UNDO_COLOR) {
SCULPT_vertex_color_set(ss, vd.vertex, orig_vert_data.col);
}
}
BKE_pbvh_vertex_iter_end;
@ -3299,13 +3359,16 @@ static void do_brush_action_task_cb(void *__restrict userdata,
bool need_coords = ss->cache->supports_gravity;
/* Face Sets modifications are undone with per-node face-set pushes. */
if (data->brush->sculpt_tool == SCULPT_TOOL_DRAW_FACE_SETS) {
BKE_pbvh_node_mark_redraw(data->nodes[n]);
BKE_pbvh_node_mark_update_face_sets(data->nodes[n]);
/* The Draw Face Sets brush in smooth mode moves the vertices. */
if (ss->cache->alt_smooth) {
need_coords = true;
}
else {
SCULPT_undo_push_node(data->ob, data->nodes[n], SCULPT_UNDO_FACE_SETS);
}
}
else if (data->brush->sculpt_tool == SCULPT_TOOL_MASK) {
SCULPT_undo_push_node(data->ob, data->nodes[n], SCULPT_UNDO_MASK);
@ -3383,14 +3446,6 @@ static void do_brush_action(Sculpt *sd,
* and the number of nodes under the brush influence. */
if (brush->sculpt_tool == SCULPT_TOOL_DRAW_FACE_SETS &&
SCULPT_stroke_is_first_brush_step(ss->cache) && !ss->cache->alt_smooth) {
/* Dynamic-topology does not support Face Sets data, so it can't store/restore it from undo. */
/* TODO(pablodp606): This check should be done in the undo code and not here, but the rest of
* the sculpt code is not checking for unsupported undo types that may return a null node. */
if (BKE_pbvh_type(ss->pbvh) != PBVH_BMESH) {
SCULPT_undo_push_node(ob, nullptr, SCULPT_UNDO_FACE_SETS);
}
if (ss->cache->invert) {
/* When inverting the brush, pick the paint face mask ID from the mesh. */
ss->cache->paint_face_set = SCULPT_active_face_set_get(ss);
@ -5194,13 +5249,6 @@ static void sculpt_restore_mesh(Sculpt *sd, Object *ob)
BKE_brush_use_size_pressure(brush)) ||
(brush->flag & BRUSH_DRAG_DOT)) {
SculptUndoNode *unode = SCULPT_undo_get_first_node();
if (unode && unode->type == SCULPT_UNDO_FACE_SETS) {
for (int i = 0; i < ss->totfaces; i++) {
ss->face_sets[i] = unode->face_sets[i];
}
}
paint_mesh_restore_co(sd, ob);
if (ss->cache) {
@ -5568,7 +5616,7 @@ static void sculpt_brush_exit_tex(Sculpt *sd)
MTex *mtex = &brush->mtex;
if (mtex->tex && mtex->tex->nodetree) {
ntreeTexEndExecTree(mtex->tex->nodetree->execdata);
ntreeTexEndExecTree(mtex->tex->nodetree->runtime->execdata);
}
}
@ -6184,4 +6232,30 @@ void SCULPT_stroke_id_ensure(Object *ob)
}
}
int SCULPT_face_set_get(const SculptSession *ss, PBVHFaceRef face)
{
switch (BKE_pbvh_type(ss->pbvh)) {
case PBVH_BMESH:
return 0;
case PBVH_FACES:
case PBVH_GRIDS:
return ss->face_sets[face.i];
}
BLI_assert_unreachable();
return 0;
}
void SCULPT_face_set_set(SculptSession *ss, PBVHFaceRef face, int fset)
{
switch (BKE_pbvh_type(ss->pbvh)) {
case PBVH_BMESH:
break;
case PBVH_FACES:
case PBVH_GRIDS:
ss->face_sets[face.i] = fset;
}
}
/** \} */
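
A minimal usage sketch for the new accessors (hypothetical helper, assuming the PBVH face iterator used throughout this commit); note both accessors are effectively no-ops for `PBVH_BMESH`, since dynamic topology does not store face sets:

```cpp
/* Assign `face_set` to every face in `node`; illustration only. */
static void fill_node_face_set(SculptSession *ss, PBVHNode *node, const int face_set)
{
  PBVHFaceIter fd;
  BKE_pbvh_face_iter_begin (ss->pbvh, node, fd) {
    SCULPT_face_set_set(ss, fd.face, face_set);
  }
  BKE_pbvh_face_iter_end(fd);
}
```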

View File

@ -545,7 +545,11 @@ float SCULPT_automasking_factor_get(AutomaskingCache *automasking,
}
if (automasking->settings.flags & BRUSH_AUTOMASKING_BOUNDARY_FACE_SETS) {
if (!SCULPT_vertex_has_unique_face_set(ss, vert)) {
bool ignore = ss->cache && ss->cache->brush &&
ss->cache->brush->sculpt_tool == SCULPT_TOOL_DRAW_FACE_SETS &&
SCULPT_vertex_face_set_get(ss, vert) == ss->cache->paint_face_set;
if (!ignore && !SCULPT_vertex_has_unique_face_set(ss, vert)) {
return 0.0f;
}
}

View File

@ -2074,7 +2074,9 @@ static void sculpt_expand_undo_push(Object *ob, ExpandCache *expand_cache)
}
break;
case SCULPT_EXPAND_TARGET_FACE_SETS:
SCULPT_undo_push_node(ob, nodes[0], SCULPT_UNDO_FACE_SETS);
for (int i = 0; i < totnode; i++) {
SCULPT_undo_push_node(ob, nodes[i], SCULPT_UNDO_FACE_SETS);
}
break;
case SCULPT_EXPAND_TARGET_COLORS:
for (int i = 0; i < totnode; i++) {

View File

@ -125,6 +125,7 @@ static void do_draw_face_sets_brush_task_cb_ex(void *__restrict userdata,
SCULPT_automasking_node_begin(
data->ob, ss, ss->cache->automasking, &automask_data, data->nodes[n]);
bool changed = false;
BKE_pbvh_vertex_iter_begin (ss->pbvh, data->nodes[n], vd, PBVH_ITER_UNIQUE) {
SCULPT_automasking_node_update(ss, &automask_data, &vd);
@ -156,6 +157,7 @@ static void do_draw_face_sets_brush_task_cb_ex(void *__restrict userdata,
if (fade > 0.05f) {
ss->face_sets[vert_map->indices[j]] = ss->cache->paint_face_set;
changed = true;
}
}
}
@ -176,10 +178,15 @@ static void do_draw_face_sets_brush_task_cb_ex(void *__restrict userdata,
if (fade > 0.05f) {
SCULPT_vertex_face_set_set(ss, vd.vertex, ss->cache->paint_face_set);
changed = true;
}
}
}
BKE_pbvh_vertex_iter_end;
if (changed) {
SCULPT_undo_push_node(data->ob, data->nodes[n], SCULPT_UNDO_FACE_SETS);
}
}
static void do_relax_face_sets_brush_task_cb_ex(void *__restrict userdata,
@ -344,7 +351,9 @@ static int sculpt_face_set_create_exec(bContext *C, wmOperator *op)
}
SCULPT_undo_push_begin(ob, op);
SCULPT_undo_push_node(ob, nodes[0], SCULPT_UNDO_FACE_SETS);
for (const int i : blender::IndexRange(totnode)) {
SCULPT_undo_push_node(ob, nodes[i], SCULPT_UNDO_FACE_SETS);
}
const int next_face_set = SCULPT_face_set_next_available_get(ss);
@ -637,7 +646,9 @@ static int sculpt_face_set_init_exec(bContext *C, wmOperator *op)
}
SCULPT_undo_push_begin(ob, op);
SCULPT_undo_push_node(ob, nodes[0], SCULPT_UNDO_FACE_SETS);
for (const int i : blender::IndexRange(totnode)) {
SCULPT_undo_push_node(ob, nodes[i], SCULPT_UNDO_FACE_SETS);
}
const float threshold = RNA_float_get(op->ptr, "threshold");
@ -1366,7 +1377,9 @@ static void sculpt_face_set_edit_modify_face_sets(Object *ob,
return;
}
SCULPT_undo_push_begin(ob, op);
SCULPT_undo_push_node(ob, nodes[0], SCULPT_UNDO_FACE_SETS);
for (const int i : blender::IndexRange(totnode)) {
SCULPT_undo_push_node(ob, nodes[i], SCULPT_UNDO_FACE_SETS);
}
sculpt_face_set_apply_edit(ob, abs(active_face_set), mode, modify_hidden);
SCULPT_undo_push_end(ob);
face_set_edit_do_post_visibility_updates(ob, nodes, totnode);

View File

@ -98,6 +98,13 @@ typedef struct {
const float *col;
} SculptOrigVertData;
typedef struct SculptOrigFaceData {
struct SculptUndoNode *unode;
struct BMLog *bm_log;
const int *face_sets;
int face_set;
} SculptOrigFaceData;
/* Flood Fill. */
typedef struct {
GSQueue *queue;
@ -201,6 +208,9 @@ typedef struct SculptUndoNode {
/* Sculpt Face Sets */
int *face_sets;
PBVHFaceRef *faces;
int faces_num;
size_t undo_size;
} SculptUndoNode;
@ -1035,6 +1045,9 @@ int SCULPT_active_face_set_get(SculptSession *ss);
int SCULPT_vertex_face_set_get(SculptSession *ss, PBVHVertRef vertex);
void SCULPT_vertex_face_set_set(SculptSession *ss, PBVHVertRef vertex, int face_set);
int SCULPT_face_set_get(const SculptSession *ss, PBVHFaceRef face);
void SCULPT_face_set_set(SculptSession *ss, PBVHFaceRef face, int fset);
bool SCULPT_vertex_has_face_set(SculptSession *ss, PBVHVertRef vertex, int face_set);
bool SCULPT_vertex_has_unique_face_set(SculptSession *ss, PBVHVertRef vertex);
@ -1067,6 +1080,25 @@ void SCULPT_orig_vert_data_update(SculptOrigVertData *orig_data, PBVHVertexIter
void SCULPT_orig_vert_data_unode_init(SculptOrigVertData *data,
Object *ob,
struct SculptUndoNode *unode);
/**
* Initialize a #SculptOrigFaceData for accessing original face data;
* handles #BMesh, #Mesh, and multi-resolution.
*/
void SCULPT_orig_face_data_init(SculptOrigFaceData *data,
Object *ob,
PBVHNode *node,
SculptUndoType type);
/**
* Update a #SculptOrigFaceData for a particular face from the PBVH face iterator.
*/
void SCULPT_orig_face_data_update(SculptOrigFaceData *orig_data, PBVHFaceIter *iter);
/**
* Initialize a #SculptOrigFaceData for accessing original face data;
* handles #BMesh, #Mesh, and multi-resolution.
*/
void SCULPT_orig_face_data_unode_init(SculptOrigFaceData *data,
Object *ob,
struct SculptUndoNode *unode);
/** \} */
/* -------------------------------------------------------------------- */

View File

@ -86,6 +86,8 @@ static void sculpt_expand_task_cb(void *__restrict userdata,
PBVHVertRef active_vertex = SCULPT_active_vertex_get(ss);
int active_vertex_i = BKE_pbvh_vertex_to_index(ss->pbvh, active_vertex);
bool face_sets_changed = false;
BKE_pbvh_vertex_iter_begin (ss->pbvh, node, vd, PBVH_ITER_ALL) {
int vi = vd.index;
float final_mask = *vd.mask;
@ -111,6 +113,7 @@ static void sculpt_expand_task_cb(void *__restrict userdata,
if (data->mask_expand_create_face_set) {
if (final_mask == 1.0f) {
SCULPT_vertex_face_set_set(ss, vd.vertex, ss->filter_cache->new_face_set);
face_sets_changed = true;
}
BKE_pbvh_node_mark_redraw(node);
}
@ -131,6 +134,10 @@ static void sculpt_expand_task_cb(void *__restrict userdata,
}
}
BKE_pbvh_vertex_iter_end;
if (face_sets_changed) {
SCULPT_undo_push_node(data->ob, node, SCULPT_UNDO_FACE_SETS);
}
}
static int sculpt_mask_expand_modal(bContext *C, wmOperator *op, const wmEvent *event)
@ -353,9 +360,9 @@ static int sculpt_mask_expand_invoke(bContext *C, wmOperator *op, const wmEvent
SCULPT_undo_push_begin(ob, op);
if (create_face_set) {
SCULPT_undo_push_node(ob, ss->filter_cache->nodes[0], SCULPT_UNDO_FACE_SETS);
for (int i = 0; i < ss->filter_cache->totnode; i++) {
BKE_pbvh_node_mark_redraw(ss->filter_cache->nodes[i]);
SCULPT_undo_push_node(ob, ss->filter_cache->nodes[i], SCULPT_UNDO_FACE_SETS);
}
}
else {

View File

@ -292,6 +292,7 @@ struct PartialUpdateData {
bool *modified_hidden_verts;
bool *modified_mask_verts;
bool *modified_color_verts;
bool *modified_face_set_faces;
};
/**
@ -350,6 +351,16 @@ static void update_cb_partial(PBVHNode *node, void *userdata)
}
}
}
if (data->modified_face_set_faces) {
PBVHFaceIter fd;
BKE_pbvh_face_iter_begin (data->pbvh, node, fd) {
if (data->modified_face_set_faces[fd.index]) {
BKE_pbvh_node_mark_update_face_sets(node);
break;
}
}
BKE_pbvh_face_iter_end(fd);
}
}
static bool test_swap_v3_v3(float a[3], float b[3])
@ -600,24 +611,32 @@ static bool sculpt_undo_restore_mask(bContext *C, SculptUndoNode *unode, bool *m
return true;
}
static bool sculpt_undo_restore_face_sets(bContext *C, SculptUndoNode *unode)
static bool sculpt_undo_restore_face_sets(bContext *C,
SculptUndoNode *unode,
bool *modified_face_set_faces)
{
const Scene *scene = CTX_data_scene(C);
ViewLayer *view_layer = CTX_data_view_layer(C);
BKE_view_layer_synced_ensure(scene, view_layer);
Object *ob = BKE_view_layer_active_object_get(view_layer);
Mesh *me = BKE_object_get_original_mesh(ob);
SculptSession *ss = ob->sculpt;
int *face_sets = CustomData_get_layer_named(&me->pdata, CD_PROP_INT32, ".sculpt_face_set");
if (!face_sets) {
face_sets = CustomData_add_layer_named(
&me->pdata, CD_PROP_INT32, CD_CONSTRUCT, NULL, me->totpoly, ".sculpt_face_set");
ss->face_sets = BKE_sculpt_face_sets_ensure(me);
BKE_pbvh_face_sets_set(ss->pbvh, ss->face_sets);
bool modified = false;
for (int i = 0; i < unode->faces_num; i++) {
int face_index = unode->faces[i].i;
SWAP(int, unode->face_sets[i], ss->face_sets[face_index]);
modified_face_set_faces[face_index] = unode->face_sets[i] != ss->face_sets[face_index];
modified |= modified_face_set_faces[face_index];
}
for (int i = 0; i < me->totpoly; i++) {
SWAP(int, face_sets[i], unode->face_sets[i]);
}
return false;
return modified;
}
static void sculpt_undo_bmesh_restore_generic_task_cb(
@ -864,6 +883,7 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
SubdivCCG *subdiv_ccg = ss->subdiv_ccg;
SculptUndoNode *unode;
bool update = false, rebuild = false, update_mask = false, update_visibility = false;
bool update_face_sets = false;
bool need_mask = false;
bool need_refine_subdiv = false;
bool clear_automask_cache = false;
@ -892,34 +912,6 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
DEG_id_tag_update(&ob->id, ID_RECALC_SHADING);
if (lb->first) {
unode = lb->first;
if (unode->type == SCULPT_UNDO_FACE_SETS) {
sculpt_undo_restore_face_sets(C, unode);
rebuild = true;
BKE_pbvh_search_callback(ss->pbvh, NULL, NULL, update_cb, &rebuild);
BKE_sculpt_update_object_for_edit(depsgraph, ob, true, need_mask, false);
SCULPT_visibility_sync_all_from_faces(ob);
BKE_pbvh_update_vertex_data(ss->pbvh, PBVH_UpdateVisibility);
if (BKE_pbvh_type(ss->pbvh) == PBVH_FACES) {
BKE_mesh_flush_hidden_from_verts(ob->data);
}
DEG_id_tag_update(&ob->id, ID_RECALC_SHADING);
if (!BKE_sculptsession_use_pbvh_draw(ob, rv3d)) {
DEG_id_tag_update(&ob->id, ID_RECALC_GEOMETRY);
}
unode->applied = true;
return;
}
}
if (lb->first != NULL) {
/* Only do early object update for edits if first node needs this.
* Undo steps like geometry does not need object to be updated before they run and will
@ -934,12 +926,13 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
}
}
/* The PBVH already keeps track of which vertices need updated normals, but it doesn't keep track
* of other updated. In order to tell the corresponding PBVH nodes to update, keep track of which
* elements were updated for specific layers. */
/* The PBVH already keeps track of which vertices need updated normals, but it doesn't keep
* track of other updates. In order to tell the corresponding PBVH nodes to update, keep track
* of which elements were updated for specific layers. */
bool *modified_hidden_verts = NULL;
bool *modified_mask_verts = NULL;
bool *modified_color_verts = NULL;
bool *modified_face_set_faces = NULL;
char *undo_modified_grids = NULL;
bool use_multires_undo = false;
@ -990,6 +983,14 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
}
break;
case SCULPT_UNDO_FACE_SETS:
if (modified_face_set_faces == NULL) {
modified_face_set_faces = MEM_calloc_arrayN(
BKE_pbvh_num_faces(ss->pbvh), sizeof(bool), __func__);
}
if (sculpt_undo_restore_face_sets(C, unode, modified_face_set_faces)) {
update = true;
update_face_sets = true;
}
break;
case SCULPT_UNDO_COLOR:
if (modified_color_verts == NULL) {
@ -1049,6 +1050,7 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
.modified_hidden_verts = modified_hidden_verts,
.modified_mask_verts = modified_mask_verts,
.modified_color_verts = modified_color_verts,
.modified_face_set_faces = modified_face_set_faces,
};
BKE_pbvh_search_callback(ss->pbvh, NULL, NULL, update_cb_partial, &data);
BKE_pbvh_update_bounds(ss->pbvh, PBVH_UpdateBB | PBVH_UpdateOriginalBB | PBVH_UpdateRedraw);
@ -1056,8 +1058,19 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
if (update_mask) {
BKE_pbvh_update_vertex_data(ss->pbvh, PBVH_UpdateMask);
}
if (update_face_sets) {
DEG_id_tag_update(&ob->id, ID_RECALC_SHADING);
BKE_pbvh_update_vertex_data(ss->pbvh, PBVH_RebuildDrawBuffers);
}
if (update_visibility) {
if (ELEM(BKE_pbvh_type(ss->pbvh), PBVH_FACES, PBVH_GRIDS)) {
Mesh *me = (Mesh *)ob->data;
BKE_pbvh_sync_visibility_from_verts(ss->pbvh, me);
BKE_pbvh_update_hide_attributes_from_mesh(ss->pbvh);
}
BKE_pbvh_update_visibility(ss->pbvh);
}
@ -1080,11 +1093,6 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
BKE_sculptsession_free_deformMats(ss);
}
if (BKE_pbvh_type(ss->pbvh) == PBVH_FACES && update_visibility) {
Mesh *mesh = ob->data;
BKE_mesh_flush_hidden_from_verts(mesh);
}
if (tag_update) {
DEG_id_tag_update(&ob->id, ID_RECALC_GEOMETRY);
}
@ -1096,6 +1104,7 @@ static void sculpt_undo_restore_list(bContext *C, Depsgraph *depsgraph, ListBase
MEM_SAFE_FREE(modified_hidden_verts);
MEM_SAFE_FREE(modified_mask_verts);
MEM_SAFE_FREE(modified_color_verts);
MEM_SAFE_FREE(modified_face_set_faces);
MEM_SAFE_FREE(undo_modified_grids);
}
@ -1119,6 +1128,9 @@ static void sculpt_undo_free_list(ListBase *lb)
if (unode->index) {
MEM_freeN(unode->index);
}
if (unode->faces) {
MEM_freeN(unode->faces);
}
if (unode->loop_index) {
MEM_freeN(unode->loop_index);
}
@ -1268,6 +1280,23 @@ static SculptUndoNode *sculpt_undo_find_or_alloc_node_type(Object *object, Sculp
return sculpt_undo_alloc_node_type(object, type);
}
static void sculpt_undo_store_faces(SculptSession *ss, SculptUndoNode *unode)
{
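/* First pass: count the faces in this node so the array can be allocated up front. */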
unode->faces_num = 0;
PBVHFaceIter fd;
BKE_pbvh_face_iter_begin (ss->pbvh, unode->node, fd) {
unode->faces_num++;
}
BKE_pbvh_face_iter_end(fd);
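/* Second pass: store a reference to every face; `fd.i` is the running iterator index. */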
unode->faces = MEM_malloc_arrayN(sizeof(*unode->faces), unode->faces_num, __func__);
BKE_pbvh_face_iter_begin (ss->pbvh, unode->node, fd) {
unode->faces[fd.i] = fd.face;
}
BKE_pbvh_face_iter_end(fd);
}
static SculptUndoNode *sculpt_undo_alloc_node(Object *ob, PBVHNode *node, SculptUndoType type)
{
UndoSculpt *usculpt = sculpt_undo_get_nodes();
@ -1290,6 +1319,7 @@ static SculptUndoNode *sculpt_undo_alloc_node(Object *ob, PBVHNode *node, Sculpt
}
bool need_loops = type == SCULPT_UNDO_COLOR;
const bool need_faces = type == SCULPT_UNDO_FACE_SETS;
if (need_loops) {
int totloop;
@ -1304,6 +1334,12 @@ static SculptUndoNode *sculpt_undo_alloc_node(Object *ob, PBVHNode *node, Sculpt
usculpt->undo_size += alloc_size;
}
if (need_faces) {
sculpt_undo_store_faces(ss, unode);
const size_t alloc_size = sizeof(*unode->faces) * (size_t)unode->faces_num;
usculpt->undo_size += alloc_size;
}
switch (type) {
case SCULPT_UNDO_COORDS: {
size_t alloc_size = sizeof(*unode->co) * (size_t)allvert;
@ -1354,9 +1390,14 @@ static SculptUndoNode *sculpt_undo_alloc_node(Object *ob, PBVHNode *node, Sculpt
case SCULPT_UNDO_DYNTOPO_END:
case SCULPT_UNDO_DYNTOPO_SYMMETRIZE:
BLI_assert_msg(0, "Dynamic topology should've already been handled");
case SCULPT_UNDO_GEOMETRY:
case SCULPT_UNDO_FACE_SETS:
break;
case SCULPT_UNDO_GEOMETRY:
break;
case SCULPT_UNDO_FACE_SETS: {
const size_t alloc_size = sizeof(*unode->face_sets) * (size_t)unode->faces_num;
usculpt->undo_size += alloc_size;
break;
}
}
if (maxgrid) {
@ -1486,32 +1527,15 @@ static SculptUndoNode *sculpt_undo_geometry_push(Object *object, SculptUndoType
return unode;
}
static SculptUndoNode *sculpt_undo_face_sets_push(Object *ob, SculptUndoType type)
static void sculpt_undo_store_face_sets(SculptSession *ss, SculptUndoNode *unode)
{
UndoSculpt *usculpt = sculpt_undo_get_nodes();
SculptUndoNode *unode = MEM_callocN(sizeof(*unode), __func__);
unode->face_sets = MEM_malloc_arrayN(sizeof(*unode->face_sets), unode->faces_num, __func__);
BLI_strncpy(unode->idname, ob->id.name, sizeof(unode->idname));
unode->type = type;
unode->applied = true;
Mesh *me = BKE_object_get_original_mesh(ob);
unode->face_sets = MEM_callocN(me->totpoly * sizeof(int), "sculpt face sets");
const int *face_sets = CustomData_get_layer_named(&me->pdata, CD_PROP_INT32, ".sculpt_face_set");
if (face_sets) {
for (int i = 0; i < me->totpoly; i++) {
unode->face_sets[i] = face_sets[i];
}
PBVHFaceIter fd;
BKE_pbvh_face_iter_begin (ss->pbvh, unode->node, fd) {
unode->face_sets[fd.i] = fd.face_set ? *fd.face_set : SCULPT_FACE_SET_NONE;
}
else {
memset(unode->face_sets, SCULPT_FACE_SET_NONE, sizeof(int) * me->totpoly);
}
BLI_addtail(&usculpt->nodes, unode);
return unode;
BKE_pbvh_face_iter_end(fd);
}
static SculptUndoNode *sculpt_undo_bmesh_push(Object *ob, PBVHNode *node, SculptUndoType type)
@ -1614,11 +1638,6 @@ SculptUndoNode *SCULPT_undo_push_node(Object *ob, PBVHNode *node, SculptUndoType
BLI_thread_unlock(LOCK_CUSTOM1);
return unode;
}
if (type == SCULPT_UNDO_FACE_SETS) {
unode = sculpt_undo_face_sets_push(ob, type);
BLI_thread_unlock(LOCK_CUSTOM1);
return unode;
}
if ((unode = SCULPT_undo_get_node(node, type))) {
BLI_thread_unlock(LOCK_CUSTOM1);
return unode;
@ -1676,7 +1695,9 @@ SculptUndoNode *SCULPT_undo_push_node(Object *ob, PBVHNode *node, SculptUndoType
case SCULPT_UNDO_DYNTOPO_SYMMETRIZE:
BLI_assert_msg(0, "Dynamic topology should've already been handled");
case SCULPT_UNDO_GEOMETRY:
break;
case SCULPT_UNDO_FACE_SETS:
sculpt_undo_store_face_sets(ss, unode);
break;
}

View File

@ -1585,10 +1585,10 @@ static float2 socket_link_connection_location(const bNode &node,
const bNodeSocket &socket,
const bNodeLink &link)
{
const float2 socket_location(socket.locx, socket.locy);
const float2 socket_location(socket.runtime->locx, socket.runtime->locy);
if (socket.is_multi_input() && socket.is_input() && !(node.flag & NODE_HIDDEN)) {
return node_link_calculate_multi_input_position(
socket_location, link.multi_input_socket_index, socket.total_inputs);
socket_location, link.multi_input_socket_index, socket.runtime->total_inputs);
}
return socket_location;
}

View File

@ -107,6 +107,10 @@ struct TreeDrawContext {
* currently drawn node tree can be retrieved from the log below.
*/
geo_log::GeoTreeLog *geo_tree_log = nullptr;
/**
* True if there is an active realtime compositor using the node tree, false otherwise.
*/
bool used_by_realtime_compositor = false;
};
float ED_node_grid_size()
@ -417,8 +421,8 @@ static void node_update_basis(const bContext &C,
buty = min_ii(buty, dy - NODE_DY);
/* Round the socket location to stop it from jiggling. */
socket->locx = round(loc.x + NODE_WIDTH(node));
socket->locy = round(dy - NODE_DYS);
socket->runtime->locx = round(loc.x + NODE_WIDTH(node));
socket->runtime->locy = round(dy - NODE_DYS);
dy = buty;
if (socket->next) {
@ -511,8 +515,9 @@ static void node_update_basis(const bContext &C,
* to account for the increased height of the taller sockets. */
float multi_input_socket_offset = 0.0f;
if (socket->flag & SOCK_MULTI_INPUT) {
if (socket->total_inputs > 2) {
multi_input_socket_offset = (socket->total_inputs - 2) * NODE_MULTI_INPUT_LINK_GAP;
if (socket->runtime->total_inputs > 2) {
multi_input_socket_offset = (socket->runtime->total_inputs - 2) *
NODE_MULTI_INPUT_LINK_GAP;
}
}
dy -= multi_input_socket_offset * 0.5f;
@ -548,9 +553,9 @@ static void node_update_basis(const bContext &C,
/* Ensure minimum socket height in case layout is empty. */
buty = min_ii(buty, dy - NODE_DY);
socket->locx = loc.x;
socket->runtime->locx = loc.x;
/* Round the socket vertical position to stop it from jiggling. */
socket->locy = round(dy - NODE_DYS);
socket->runtime->locy = round(dy - NODE_DYS);
dy = buty - multi_input_socket_offset * 0.5;
if (socket->next) {
@ -620,8 +625,8 @@ static void node_update_hidden(bNode &node, uiBlock &block)
LISTBASE_FOREACH (bNodeSocket *, socket, &node.outputs) {
if (!nodeSocketIsHidden(socket)) {
/* Round the socket location to stop it from jiggling. */
socket->locx = round(node.runtime->totr.xmax - hiddenrad + sinf(rad) * hiddenrad);
socket->locy = round(node.runtime->totr.ymin + hiddenrad + cosf(rad) * hiddenrad);
socket->runtime->locx = round(node.runtime->totr.xmax - hiddenrad + sinf(rad) * hiddenrad);
socket->runtime->locy = round(node.runtime->totr.ymin + hiddenrad + cosf(rad) * hiddenrad);
rad += drad;
}
}
@ -632,8 +637,8 @@ static void node_update_hidden(bNode &node, uiBlock &block)
LISTBASE_FOREACH (bNodeSocket *, socket, &node.inputs) {
if (!nodeSocketIsHidden(socket)) {
/* Round the socket location to stop it from jiggling. */
socket->locx = round(node.runtime->totr.xmin + hiddenrad + sinf(rad) * hiddenrad);
socket->locy = round(node.runtime->totr.ymin + hiddenrad + cosf(rad) * hiddenrad);
socket->runtime->locx = round(node.runtime->totr.xmin + hiddenrad + sinf(rad) * hiddenrad);
socket->runtime->locy = round(node.runtime->totr.ymin + hiddenrad + cosf(rad) * hiddenrad);
rad += drad;
}
}
@ -1172,7 +1177,7 @@ static void node_socket_draw_nested(const bContext &C,
const float size,
const bool selected)
{
const float2 location(sock.locx, sock.locy);
const float2 location(sock.runtime->locx, sock.runtime->locy);
float color[4];
float outline_color[4];
@ -1577,7 +1582,7 @@ static void node_draw_sockets(const View2D &v2d,
node_socket_color_get(C, ntree, node_ptr, *socket, color);
node_socket_outline_color_get(socket->flag & SELECT, socket->type, outline_color);
const float2 location(socket->locx, socket->locy);
const float2 location(socket->runtime->locx, socket->runtime->locy);
node_socket_draw_multi_input(color, outline_color, width, height, location);
}
}
@ -1652,12 +1657,42 @@ static char *node_errors_tooltip_fn(bContext * /*C*/, void *argN, const char * /
#define NODE_HEADER_ICON_SIZE (0.8f * U.widget_unit)
static void node_add_unsupported_compositor_operation_error_message_button(bNode &node,
uiBlock &block,
const rctf &rect,
float &icon_offset)
{
icon_offset -= NODE_HEADER_ICON_SIZE;
UI_block_emboss_set(&block, UI_EMBOSS_NONE);
uiDefIconBut(&block,
UI_BTYPE_BUT,
0,
ICON_ERROR,
icon_offset,
rect.ymax - NODE_DY,
NODE_HEADER_ICON_SIZE,
UI_UNIT_Y,
nullptr,
0,
0,
0,
0,
TIP_(node.typeinfo->realtime_compositor_unsupported_message));
UI_block_emboss_set(&block, UI_EMBOSS);
}
static void node_add_error_message_button(TreeDrawContext &tree_draw_ctx,
bNode &node,
uiBlock &block,
const rctf &rect,
float &icon_offset)
{
if (tree_draw_ctx.used_by_realtime_compositor &&
node.typeinfo->realtime_compositor_unsupported_message) {
node_add_unsupported_compositor_operation_error_message_button(node, block, rect, icon_offset);
return;
}
Span<geo_log::NodeWarning> warnings;
if (tree_draw_ctx.geo_tree_log) {
geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.name);
@ -2618,7 +2653,7 @@ static void count_multi_input_socket_links(bNodeTree &ntree, SpaceNode &snode)
LISTBASE_FOREACH (bNode *, node, &ntree.nodes) {
LISTBASE_FOREACH (bNodeSocket *, socket, &node->inputs) {
if (socket->flag & SOCK_MULTI_INPUT) {
socket->total_inputs = counts.lookup_default(socket, 0);
socket->runtime->total_inputs = counts.lookup_default(socket, 0);
}
}
}
@ -2682,12 +2717,12 @@ static void reroute_node_prepare_for_draw(bNode &node)
/* reroute node has exactly one input and one output, both in the same place */
bNodeSocket *socket = (bNodeSocket *)node.outputs.first;
socket->locx = loc.x;
socket->locy = loc.y;
socket->runtime->locx = loc.x;
socket->runtime->locy = loc.y;
socket = (bNodeSocket *)node.inputs.first;
socket->locx = loc.x;
socket->locy = loc.y;
socket->runtime->locx = loc.x;
socket->runtime->locy = loc.y;
const float size = 8.0f;
node.width = size * 2;
@ -3058,6 +3093,48 @@ static void snode_setup_v2d(SpaceNode &snode, ARegion &region, const float2 &cen
// XXX snode->curfont = uiSetCurFont_ext(snode->aspect);
}
/* Similar to is_compositor_enabled() in draw_manager.c but checks all 3D views. */
static bool realtime_compositor_is_in_use(const bContext &context)
{
if (!U.experimental.use_realtime_compositor) {
return false;
}
const Scene *scene = CTX_data_scene(&context);
if (!scene->use_nodes) {
return false;
}
if (!scene->nodetree) {
return false;
}
const Main *main = CTX_data_main(&context);
LISTBASE_FOREACH (bScreen *, screen, &main->screens) {
LISTBASE_FOREACH (ScrArea *, area, &screen->areabase) {
LISTBASE_FOREACH (SpaceLink *, space, &area->spacedata) {
if (space->spacetype != SPACE_VIEW3D) {
continue;
}
const View3D &view_3d = *reinterpret_cast<const View3D *>(space);
if (view_3d.shading.use_compositor == V3D_SHADING_USE_COMPOSITOR_DISABLED) {
continue;
}
if (!(view_3d.shading.type >= OB_MATERIAL)) {
continue;
}
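/* Only material preview and rendered shading modes use the realtime compositor. */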
return true;
}
}
}
return false;
}
static void draw_nodetree(const bContext &C,
ARegion &region,
bNodeTree &ntree,
@ -3081,6 +3158,9 @@ static void draw_nodetree(const bContext &C,
tree_draw_ctx.active_geometry_nodes_viewer = viewer_path::find_geometry_nodes_viewer(
workspace->viewer_path, *snode);
}
else if (ntree.type == NTREE_COMPOSIT) {
tree_draw_ctx.used_by_realtime_compositor = realtime_compositor_is_in_use(C);
}
node_update_nodetree(C, tree_draw_ctx, ntree, nodes, blocks);
node_draw_nodetree(C, tree_draw_ctx, region, *snode, ntree, nodes, blocks, parent_key);

View File

@ -99,7 +99,8 @@ float node_socket_calculate_height(const bNodeSocket &socket)
{
float sock_height = NODE_SOCKSIZE * NODE_SOCKSIZE_DRAW_MULIPLIER;
if (socket.flag & SOCK_MULTI_INPUT) {
sock_height += max_ii(NODE_MULTI_INPUT_LINK_GAP * 0.5f * socket.total_inputs, NODE_SOCKSIZE);
sock_height += max_ii(NODE_MULTI_INPUT_LINK_GAP * 0.5f * socket.runtime->total_inputs,
NODE_SOCKSIZE);
}
return sock_height;
}
@ -267,14 +268,14 @@ static void compo_startjob(void *cjv,
cj->do_update = do_update;
cj->progress = progress;
ntree->test_break = compo_breakjob;
ntree->tbh = cj;
ntree->stats_draw = compo_statsdrawjob;
ntree->sdh = cj;
ntree->progress = compo_progressjob;
ntree->prh = cj;
ntree->update_draw = compo_redrawjob;
ntree->udh = cj;
ntree->runtime->test_break = compo_breakjob;
ntree->runtime->tbh = cj;
ntree->runtime->stats_draw = compo_statsdrawjob;
ntree->runtime->sdh = cj;
ntree->runtime->progress = compo_progressjob;
ntree->runtime->prh = cj;
ntree->runtime->update_draw = compo_redrawjob;
ntree->runtime->udh = cj;
// XXX BIF_store_spare();
/* 1 is do_previews */
@ -292,9 +293,9 @@ static void compo_startjob(void *cjv,
}
}
ntree->test_break = nullptr;
ntree->stats_draw = nullptr;
ntree->progress = nullptr;
ntree->runtime->test_break = nullptr;
ntree->runtime->stats_draw = nullptr;
ntree->runtime->progress = nullptr;
}
static void compo_canceljob(void *cjv)
@ -1194,7 +1195,7 @@ void node_set_hidden_sockets(SpaceNode *snode, bNode *node, int set)
static bool cursor_isect_multi_input_socket(const float2 &cursor, const bNodeSocket &socket)
{
const float node_socket_height = node_socket_calculate_height(socket);
const float2 location(socket.locx, socket.locy);
const float2 location(socket.runtime->locx, socket.runtime->locy);
/* `.xmax = socket->runtime->locx + NODE_SOCKSIZE * 5.5f`
* would be the same behavior as for regular sockets.
* But keep it smaller because for multi-input socket you
@ -1244,7 +1245,7 @@ bool node_find_indicated_socket(SpaceNode &snode,
if (in_out & SOCK_IN) {
LISTBASE_FOREACH (bNodeSocket *, sock, &node->inputs) {
if (!nodeSocketIsHidden(sock)) {
const float2 location(sock->locx, sock->locy);
const float2 location(sock->runtime->locx, sock->runtime->locy);
if (sock->flag & SOCK_MULTI_INPUT && !(node->flag & NODE_HIDDEN)) {
if (cursor_isect_multi_input_socket(cursor, *sock)) {
if (!socket_is_occluded(location, *node, snode)) {
@ -1267,7 +1268,7 @@ bool node_find_indicated_socket(SpaceNode &snode,
if (in_out & SOCK_OUT) {
LISTBASE_FOREACH (bNodeSocket *, sock, &node->outputs) {
if (!nodeSocketIsHidden(sock)) {
const float2 location(sock->locx, sock->locy);
const float2 location(sock->runtime->locx, sock->runtime->locy);
if (BLI_rctf_isect_pt(&rect, location.x, location.y)) {
if (!socket_is_occluded(location, *node, snode)) {
*nodep = node;
@ -1295,8 +1296,8 @@ float node_link_dim_factor(const View2D &v2d, const bNodeLink &link)
return 1.0f;
}
const float2 from(link.fromsock->locx, link.fromsock->locy);
const float2 to(link.tosock->locx, link.tosock->locy);
const float2 from(link.fromsock->runtime->locx, link.fromsock->runtime->locy);
const float2 to(link.tosock->runtime->locx, link.tosock->runtime->locy);
const float min_endpoint_distance = std::min(
std::max(BLI_rctf_length_x(&v2d.cur, from.x), BLI_rctf_length_y(&v2d.cur, from.y)),
@ -2796,13 +2797,13 @@ static bool node_shader_script_update_text_recursive(RenderEngine *engine,
{
bool found = false;
ntree->done = true;
ntree->runtime->done = true;
/* update each script that is using this text datablock */
LISTBASE_FOREACH (bNode *, node, &ntree->nodes) {
if (node->type == NODE_GROUP) {
bNodeTree *ngroup = (bNodeTree *)node->id;
if (ngroup && !ngroup->done) {
if (ngroup && !ngroup->runtime->done) {
found |= node_shader_script_update_text_recursive(engine, type, ngroup, text);
}
}
@ -2854,14 +2855,14 @@ static int node_shader_script_update_exec(bContext *C, wmOperator *op)
/* clear flags for recursion check */
FOREACH_NODETREE_BEGIN (bmain, ntree, id) {
if (ntree->type == NTREE_SHADER) {
ntree->done = false;
ntree->runtime->done = false;
}
}
FOREACH_NODETREE_END;
FOREACH_NODETREE_BEGIN (bmain, ntree, id) {
if (ntree->type == NTREE_SHADER) {
if (!ntree->done) {
if (!ntree->runtime->done) {
found |= node_shader_script_update_text_recursive(engine, type, ntree, text);
}
}

View File

@ -300,12 +300,12 @@ static void sort_multi_input_socket_links_with_drag(bNode &node,
if (!socket->is_multi_input()) {
continue;
}
const float2 &socket_location = {socket->locx, socket->locy};
const float2 &socket_location = {socket->runtime->locx, socket->runtime->locy};
Vector<LinkAndPosition, 8> links;
for (bNodeLink *link : socket->directly_linked_links()) {
const float2 location = node_link_calculate_multi_input_position(
socket_location, link->multi_input_socket_index, link->tosock->total_inputs);
socket_location, link->multi_input_socket_index, link->tosock->runtime->total_inputs);
links.append({link, location});
};
@ -645,8 +645,8 @@ static int view_socket(const bContext &C,
}
if (viewer_node == nullptr) {
const int viewer_type = get_default_viewer_type(&C);
const float2 location{bsocket_to_view.locx / UI_DPI_FAC + 100,
bsocket_to_view.locy / UI_DPI_FAC};
const float2 location{bsocket_to_view.runtime->locx / UI_DPI_FAC + 100,
bsocket_to_view.runtime->locy / UI_DPI_FAC};
viewer_node = add_static_node(C, viewer_type, location);
}
@ -903,7 +903,7 @@ static void node_link_exit(bContext &C, wmOperator &op, const bool apply_links)
bNodeLinkDrag *nldrag = (bNodeLinkDrag *)op.customdata;
/* avoid updates while applying links */
ntree.is_updating = true;
ntree.runtime->is_updating = true;
for (bNodeLink *link : nldrag->links) {
link->flag &= ~NODE_LINK_DRAGGED;
@ -929,7 +929,7 @@ static void node_link_exit(bContext &C, wmOperator &op, const bool apply_links)
nodeRemLink(&ntree, link);
}
}
ntree.is_updating = false;
ntree.runtime->is_updating = false;
ED_node_tree_propagate_change(&C, bmain, &ntree);

View File

@ -1761,7 +1761,7 @@ void ED_view3d_draw_offscreen_simple(Depsgraph *depsgraph,
GPUOffScreen *ofs,
GPUViewport *viewport)
{
View3D v3d = {nullptr};
View3D v3d = blender::dna::shallow_zero_initialize();
ARegion ar = {nullptr};
RegionView3D rv3d = {{{0}}};
@ -2011,7 +2011,7 @@ ImBuf *ED_view3d_draw_offscreen_imbuf_simple(Depsgraph *depsgraph,
GPUOffScreen *ofs,
char err_out[256])
{
View3D v3d = {nullptr};
View3D v3d = blender::dna::shallow_zero_initialize();
ARegion region = {nullptr};
RegionView3D rv3d = {{{0}}};

View File

@ -612,16 +612,26 @@ static bool bm_face_is_snap_target(BMFace *f, void *UNUSED(user_data))
static eSnapFlag snap_flag_from_spacetype(TransInfo *t)
{
ToolSettings *ts = t->settings;
if (t->spacetype == SPACE_NODE) {
return eSnapFlag(ts->snap_flag_node);
switch (t->spacetype) {
case SPACE_VIEW3D:
return eSnapFlag(ts->snap_flag);
case SPACE_NODE:
return eSnapFlag(ts->snap_flag_node);
case SPACE_IMAGE:
return eSnapFlag(ts->snap_uv_flag);
case SPACE_SEQ:
return eSnapFlag(ts->snap_flag_seq);
case SPACE_GRAPH:
case SPACE_ACTION:
case SPACE_NLA:
/* These editors have their own "Auto-Snap" activation option.
* See #getAnimEdit_SnapMode. */
return eSnapFlag(0);
default:
BLI_assert(false);
break;
}
if (t->spacetype == SPACE_IMAGE) {
return eSnapFlag(ts->snap_uv_flag);
}
if (t->spacetype == SPACE_SEQ) {
return eSnapFlag(ts->snap_flag_seq);
}
return eSnapFlag(ts->snap_flag);
return eSnapFlag(0);
}
static eSnapMode snap_mode_from_spacetype(TransInfo *t)

View File

@ -232,7 +232,7 @@ void execute_materialized(TypeSequence<ParamTags...> /* param_tags */,
/* Outer loop over all chunks. */
for (int64_t chunk_start = 0; chunk_start < mask_size; chunk_start += MaxChunkSize) {
const IndexMask sliced_mask = mask.slice(chunk_start, MaxChunkSize);
const IndexMask sliced_mask = mask.slice_safe(chunk_start, MaxChunkSize);
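/* `slice_safe` clamps to the mask bounds, so the final chunk may be shorter than `MaxChunkSize`. */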
const int64_t chunk_size = sliced_mask.size();
const bool sliced_mask_is_range = sliced_mask.is_range();

View File

@ -1,5 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_array_utils.hh"
#include "BLI_index_mask_ops.hh"
#include "BLI_map.hh"
#include "BLI_multi_value_map.hh"
@ -468,11 +469,7 @@ Vector<GVArray> evaluate_fields(ResourceScope &scope,
}
/* Still have to copy over the data in the destination provided by the caller. */
if (dst_varray.is_span()) {
/* Materialize into a span. */
threading::parallel_for(mask.index_range(), 2048, [&](const IndexRange range) {
computed_varray.materialize_to_uninitialized(mask.slice(range),
dst_varray.get_internal_span().data());
});
array_utils::copy(computed_varray, mask, dst_varray.get_internal_span());
}
else {
/* Slower materialize into a different structure. */

View File

@ -254,7 +254,7 @@ static CurvesGeometry resample_to_uniform(const CurvesGeometry &src_curves,
bke::CurvesFieldContext field_context{src_curves, ATTR_DOMAIN_CURVE};
fn::FieldEvaluator evaluator{field_context, src_curves.curves_num()};
evaluator.set_selection(selection_field);
evaluator.add_with_destination(count_field, dst_offsets);
evaluator.add_with_destination(count_field, dst_offsets.drop_back(1));
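/* `dst_offsets` has one element per curve plus a trailing total, so the per-curve counts must not write to that last element. */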
evaluator.evaluate();
const IndexMask selection = evaluator.get_evaluated_selection_as_mask();
const Vector<IndexRange> unselected_ranges = selection.extract_ranges_invert(

View File

@ -265,12 +265,6 @@ if(WITH_OPENGL)
list(APPEND SRC ${OPENGL_SRC})
endif()
if(WITH_VULKAN_BACKEND)
list(APPEND SRC ${VULKAN_SRC})
add_definitions(-DWITH_VULKAN_BACKEND)
endif()
if(WITH_METAL_BACKEND)
list(APPEND SRC ${METAL_SRC})
endif()
@ -279,6 +273,24 @@ set(LIB
${Epoxy_LIBRARIES}
)
if(WITH_VULKAN_BACKEND)
list(APPEND SRC ${VULKAN_SRC})
add_definitions(-DWITH_VULKAN_BACKEND)
list(APPEND INC_SYS
${VULKAN_INCLUDE_DIRS}
)
list(APPEND INC
../../../extern/vulkan_memory_allocator/
)
list(APPEND LIB
extern_vulkan_memory_allocator
)
endif()
set(MSL_SRC
shaders/metal/mtl_shader_defines.msl
shaders/metal/mtl_shader_common.msl

View File

@ -37,6 +37,8 @@ typedef enum {
GPU_COMP_I10,
/* Warning! adjust GPUVertAttr if changing. */
GPU_COMP_MAX
} GPUVertCompType;
typedef enum {

View File

@ -65,9 +65,9 @@ void VKBackend::compute_dispatch_indirect(StorageBuf * /*indirect_buf*/)
{
}
Context *VKBackend::context_alloc(void * /*ghost_window*/, void * /*ghost_context*/)
Context *VKBackend::context_alloc(void *ghost_window, void *ghost_context)
{
return new VKContext();
return new VKContext(ghost_window, ghost_context);
}
Batch *VKBackend::batch_alloc()

View File

@ -7,8 +7,38 @@
#include "vk_context.hh"
#include "GHOST_C-api.h"
namespace blender::gpu {
VKContext::VKContext(void *ghost_window, void *ghost_context)
{
ghost_window_ = ghost_window;
if (ghost_window) {
ghost_context = GHOST_GetDrawingContext((GHOST_WindowHandle)ghost_window);
}
GHOST_GetVulkanHandles((GHOST_ContextHandle)ghost_context,
&instance_,
&physical_device_,
&device_,
&graphic_queue_familly_);
/* Initialize the memory allocator. */
VmaAllocatorCreateInfo info = {};
/* Should use same vulkan version as GHOST. */
info.vulkanApiVersion = VK_API_VERSION_1_2;
info.physicalDevice = physical_device_;
info.device = device_;
info.instance = instance_;
vmaCreateAllocator(&info, &mem_allocator_);
}
VKContext::~VKContext()
{
vmaDestroyAllocator(mem_allocator_);
}
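
For context, a minimal sketch (not part of this commit) of how the allocator created in the constructor would typically back a buffer through the standard VMA API; the size and usage flags below are arbitrary:

```cpp
/* Hypothetical example: back a storage buffer with VMA-managed memory. */
VkBufferCreateInfo buffer_info = {};
buffer_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
buffer_info.size = 65536;
buffer_info.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VmaAllocationCreateInfo alloc_info = {};
alloc_info.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buffer;
VmaAllocation allocation;
if (vmaCreateBuffer(mem_allocator_, &buffer_info, &alloc_info, &buffer, &allocation, nullptr) ==
    VK_SUCCESS) {
  /* ... use the buffer ... */
  vmaDestroyBuffer(mem_allocator_, buffer, allocation);
}
```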
void VKContext::activate()
{
}

Some files were not shown because too many files have changed in this diff.