Mesh: Replace auto smooth with node group #108014

Merged
Hans Goudey merged 149 commits from HooglyBoogly/blender:refactor-mesh-corner-normals-lazy into main 2023-10-20 16:54:20 +02:00
42 changed files with 923 additions and 489 deletions
Showing only changes of commit 3fe03c01ac


@ -23,14 +23,16 @@ PERFORMANCE OF THIS SOFTWARE.
** Cuda Wrangler; version cbf465b -- https://github.com/CudaWrangler/cuew
** Draco; version 1.3.6 -- https://google.github.io/draco/
** Embree; version 3.13.4 -- https://github.com/embree/embree
** Intel® Open Path Guiding Library; version v0.3.1-beta --
** Intel(R) oneAPI DPC++ compiler; version 20221019 --
https://github.com/intel/llvm#oneapi-dpc-compiler
** Intel® Open Path Guiding Library; version v0.4.1-beta --
http://www.openpgl.org/
** Mantaflow; version 0.13 -- http://mantaflow.com/
** oneAPI Threading Building Block; version 2020_U3 --
https://software.intel.com/en-us/oneapi/onetbb
** OpenCL Wrangler; version 27a6867 -- https://github.com/OpenCLWrangler/clew
** OpenImageDenoise; version 1.4.3 -- https://www.openimagedenoise.org/
** OpenSSL; version 1.1.1 -- https://www.openssl.org/
** OpenSSL; version 1.1.1q -- https://www.openssl.org/
** OpenXR SDK; version 1.0.17 -- https://khronos.org/openxr
** RangeTree; version 40ebed8aa209 -- https://github.com/ideasman42/rangetree-c
** SDL Extension Wrangler; version 15edf8e --
@ -242,6 +244,8 @@ limitations under the License.
Copyright 2018 The Draco Authors
* For Embree see also this required NOTICE:
Copyright 2009-2020 Intel Corporation
* For Intel(R) oneAPI DPC++ compiler see also this required NOTICE:
Copyright (C) 2021 Intel Corporation
* For Intel® Open Path Guiding Library see also this required NOTICE:
Copyright 2020 Intel Corporation.
* For Mantaflow see also this required NOTICE:
@ -273,7 +277,7 @@ limitations under the License.
Copyright (c) 2016, Alliance for Open Media. All rights reserved.
** NASM; version 2.15.02 -- https://www.nasm.us/
Contributions since 2008-12-15 are Copyright Intel Corporation.
** OpenJPEG; version 2.4.0 -- https://github.com/uclouvain/openjpeg
** OpenJPEG; version 2.5.0 -- https://github.com/uclouvain/openjpeg
Copyright (c) 2002-2014, Universite catholique de Louvain (UCL), Belgium
Copyright (c) 2002-2014, Professor Benoit Macq
Copyright (c) 2003-2014, Antonin Descampe
@ -330,7 +334,7 @@ Copyright Intel Corporation
Copyright (c) 2005-2021, NumPy Developers.
** Ogg; version 1.3.5 -- https://www.xiph.org/ogg/
COPYRIGHT (C) 1994-2019 by the Xiph.Org Foundation https://www.xiph.org/
** Open Shading Language; version 1.11.17.0 --
** Open Shading Language; version 1.12.6.2 --
https://github.com/imageworks/OpenShadingLanguage
Copyright Contributors to the Open Shading Language project.
** OpenColorIO; version 2.1.1 --
@ -339,7 +343,7 @@ Copyright Contributors to the OpenColorIO Project.
** OpenEXR; version 3.1.5 --
https://github.com/AcademySoftwareFoundation/openexr
Copyright Contributors to the OpenEXR Project. All rights reserved.
** OpenImageIO; version 2.3.13.0 -- http://www.openimageio.org
** OpenImageIO; version 2.3.20.0 -- http://www.openimageio.org
Copyright (c) 2008-present by Contributors to the OpenImageIO project. All
Rights Reserved.
** Pystring; version 1.1.3 -- https://github.com/imageworks/pystring
@ -1183,7 +1187,7 @@ Copyright (C) 2003-2021 x264 project
** miniLZO; version 2.08 -- http://www.oberhumer.com/opensource/lzo/
LZO and miniLZO are Copyright (C) 1996-2014 Markus Franz Xaver Oberhumer
All Rights Reserved.
** The FreeType Project; version 2.11.1 --
** The FreeType Project; version 2.12.1 --
https://sourceforge.net/projects/freetype
Copyright (C) 1996-2020 by David Turner, Robert Wilhelm, and Werner Lemberg.
** X Drag and Drop; version 2000-08-08 --
@ -2186,8 +2190,10 @@ of this License. But first, please read <https://www.gnu.org/licenses
------
** FFmpeg; version 5.0 -- http://ffmpeg.org/
** FFmpeg; version 5.1.2 -- http://ffmpeg.org/
** Libsndfile; version 1.1.0 -- http://libsndfile.github.io/libsndfile/
Copyright (C) 2011-2016 Erik de Castro Lopo <erikd@mega-nerd.com>
GNU LESSER GENERAL PUBLIC LICENSE
@ -2675,171 +2681,6 @@ That's all there is to it!
------
** Libsndfile; version 1.0.28 -- http://www.mega-nerd.com/libsndfile/
Copyright (C) 2011-2016 Erik de Castro Lopo <erikd@mega-nerd.com>
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms
and conditions of version 3 of the GNU General Public License, supplemented by
the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License, other
than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided by
the Library, but which is not otherwise based on the Library. Defining a
subclass of a class defined by the Library is deemed a mode of using an
interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library with
which the Combined Work was made is also called the "Linked Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code for
portions of the Combined Work that, considered in isolation, are based on
the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the object
code and/or source code for the Application, including any data and
utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License without
being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a facility
refers to a function or data to be supplied by an Application that uses the
facility (other than as an argument passed when the facility is invoked),
then you may convey a copy of the modified version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the function or
data, the facility still operates, and performs whatever part of its
purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of this
License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from a
header file that is part of the Library. You may convey such object code
under terms of your choice, provided that, if the incorporated material is
not limited to numerical parameters, data structure layouts and accessors,
or small macros, inline functions and templates (ten or fewer lines in
length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are covered by
this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that, taken
together, effectively do not restrict modification of the portions of the
Library contained in the Combined Work and reverse engineering for debugging
such modifications, if you also do each of the following:
a) Give prominent notice with each copy of the Combined Work that the
Library is used in it and that the Library and its use are covered by
this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this
license document.
c) For a Combined Work that displays copyright notices during execution,
include the copyright notice for the Library among these notices, as well
as a reference directing the user to the copies of the GNU GPL and this
license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form suitable
for, and under terms that permit, the user to recombine or relink the
Application with a modified version of the Linked Version to produce a
modified Combined Work, in the manner specified by section 6 of the
GNU GPL for conveying Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time a copy
of the Library already present on the user's computer system, and (b)
will operate properly with a modified version of the Library that is
interface-compatible with the Linked Version.
e) Provide Installation Information, but only if you would otherwise be
required to provide such information under section 6 of the GNU GPL, and
only to the extent that such information is necessary to install and
execute a modified version of the Combined Work produced by recombining
or relinking the Application with a modified version of the Linked
Version. (If you use option 4d0, the Installation Information must
accompany the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL for
conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the Library side
by side in a single library together with other library facilities that are
not Applications and are not covered by this License, and convey such a
combined library under terms of your choice, if you do both of the
following:
a) Accompany the combined library with a copy of the same work based on
the Library, uncombined with any other library facilities, conveyed under
the terms of this License.
b) Give prominent notice with the combined library that part of it is a
work based on the Library, and explaining where to find the accompanying
uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions of the
GNU Lesser General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you
received it specifies that a certain numbered version of the GNU Lesser
General Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that published
version or of any later version published by the Free Software Foundation.
If the Library as you received it does not specify a version number of the
GNU Lesser General Public License, you may choose any version of the GNU
Lesser General Public License ever published by the Free Software
Foundation.
If the Library as you received it specifies that a proxy can decide whether
future versions of the GNU Lesser General Public License shall apply, that
proxy's public statement of acceptance of any version is permanent
authorization for you to choose that version for the Library.
------
** LIBPNG; version 1.6.37 -- http://prdownloads.sourceforge.net/libpng
* Copyright (c) 1995-2019 The PNG Reference Library Authors.
* Copyright (c) 2018-2019 Cosmin Truta.
@ -2984,21 +2825,33 @@ Copyright (c) 2009, 2010, 2013-2016 by the Brotli Authors.
** Epoxy; version 1.5.10 -- https://github.com/anholt/libepoxy
Copyright © 2013-2014 Intel Corporation.
Copyright © 2013 The Khronos Group Inc.
** Expat; version 2.4.4 -- https://github.com/libexpat/libexpat/
** Expat; version 2.5.0 -- https://github.com/libexpat/libexpat/
Copyright (c) 1998-2000 Thai Open Source Software Center Ltd and Clark Cooper
Copyright (c) 2001-2019 Expat maintainers
** Intel(R) Graphics Memory Management Library; version 22.1.2 --
** Intel(R) Graphics Compute Runtime; version 22.38.24278 --
https://github.com/intel/compute-runtime
Copyright (C) 2021 Intel Corporation
** Intel(R) Graphics Memory Management Library; version 22.1.8 --
https://github.com/intel/gmmlib
Copyright (c) 2017 Intel Corporation.
Copyright (c) 2016 Gabi Melman.
Copyright 2008, Google Inc. All rights reserved.
** JSON for Modern C++; version 3.10.2 -- https://github.com/nlohmann/json/
Copyright (c) 2013-2021 Niels Lohmann
** Libxml2; version 2.9.10 -- http://xmlsoft.org/
** libdecor; version 0.1.0 -- https://gitlab.freedesktop.org/libdecor/libdecor
Copyright © 2010 Intel Corporation
Copyright © 2011 Benjamin Franzke
Copyright © 2018-2021 Jonas Ådahl
Copyright © 2019 Christian Rauch
Copyright (c) 2006, 2008 Junio C Hamano
Copyright © 2017-2018 Red Hat Inc.
Copyright © 2012 Collabora, Ltd.
Copyright © 2008 Kristian Høgsberg
** Libxml2; version 2.10.3 -- http://xmlsoft.org/
Copyright (C) 1998-2012 Daniel Veillard. All Rights Reserved.
** Mesa 3D; version 21.1.5 -- https://www.mesa3d.org/
Copyright (C) 1999-2007 Brian Paul All Rights Reserved.
** oneAPI Level Zero; version v1.7.15 --
** oneAPI Level Zero; version v1.8.5 --
https://github.com/oneapi-src/level-zero
Copyright (C) 2019-2021 Intel Corporation
** OPENCollada; version 1.6.68 -- https://github.com/KhronosGroup/OpenCOLLADA
@ -3046,9 +2899,6 @@ SOFTWARE.
------
** NanoVDB; version dc37d8a631922e7bef46712947dc19b755f3e841 --
https://github.com/AcademySoftwareFoundation/openvdb
Copyright Contributors to the OpenVDB Project
** OpenVDB; version 9.0.0 -- http://www.openvdb.org/
Copyright Contributors to the OpenVDB Project
@ -3401,7 +3251,7 @@ Copyright (c) 2013-14 Mikko Mononen memon@inside.org
Copyright (C) 1997-2020 Sam Lantinga <slouken@libsdl.org>
** TinyXML; version 2.6.2 -- https://sourceforge.net/projects/tinyxml/
Lee Thomason, Yves Berquin, Andrew Ellerton.
** zlib; version 1.2.12 -- https://zlib.net
** zlib; version 1.2.13 -- https://zlib.net
Copyright (C) 1995-2017 Jean-loup Gailly
zlib License Copyright (c) <year> <copyright holders>
@ -3667,7 +3517,32 @@ disclaims all warranties with regard to this software.
------
** Python; version 3.10.2 -- https://www.python.org
** Wayland; version 1.21.0 -- https://gitlab.freedesktop.org/wayland/wayland
Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
Copyright © 2011 Kristian Høgsberg
Copyright © 2011 Benjamin Franzke
Copyright © 2010-2012 Intel Corporation
Copyright © 2012 Collabora, Ltd.
Copyright © 2015 Giulio Camuffo
Copyright © 2016 Klarälvdalens Datakonsult AB, a KDAB Group company,
info@kdab.com
Copyright © 2012 Jason Ekstrand
Copyright (c) 2014 Red Hat, Inc.
Copyright © 2013 Marek Chalupa
Copyright © 2014 Jonas Ådahl
Copyright © 2016 Yong Bakos
Copyright © 2017 Samsung Electronics Co., Ltd
Copyright © 2002 Keith Packard
Copyright 1999 SuSE, Inc.
Copyright © 2012 Philipp Brüschweiler
Copyright (c) 2020 Simon Ser
Copyright (c) 2006, 2008 Junio C Hamano
MIT Expat
------
** Python; version 3.10.8 -- https://www.python.org
Copyright (c) 2001-2021 Python Software Foundation. All rights reserved.
A. HISTORY OF THE SOFTWARE
@ -4023,6 +3898,38 @@ ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
------
** The OpenGL Extension Wrangler Library; version 2.0.0 --
http://glew.sourceforge.net/
Copyright (C) 2008-2015, Nigel Stewart <nigels[]users sourceforge net>
Copyright (C) 2002-2008, Milan Ikits <milan ikits[]ieee org>
Copyright (C) 2002-2008, Marcelo E. Magallon <mmagallo[]debian org>
Copyright (C) 2002, Lev Povalahev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the author may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Mesa 3-D graphics library
Version: 7.0

@ -1 +1 @@
Subproject commit fdfd24de034d4bba4fb67731d0aae81dc4940239
Subproject commit 0b0052bd53ad8249ed07dfb87705c338af698bde


@ -8,6 +8,8 @@
#include "BLI_compute_context.hh"
struct bNode;
namespace blender::bke {
class ModifierComputeContext : public ComputeContext {
@ -32,12 +34,20 @@ class NodeGroupComputeContext : public ComputeContext {
private:
static constexpr const char *s_static_type = "NODE_GROUP";
std::string node_name_;
int32_t node_id_;
#ifdef DEBUG
std::string debug_node_name_;
#endif
public:
NodeGroupComputeContext(const ComputeContext *parent, std::string node_name);
NodeGroupComputeContext(const ComputeContext *parent, int32_t node_id);
NodeGroupComputeContext(const ComputeContext *parent, const bNode &node);
StringRefNull node_name() const;
int32_t node_id() const
{
return node_id_;
}
private:
void print_current_in_line(std::ostream &stream) const override;
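For context, a hedged sketch of how the identifier-based constructors above might be used; the parent context and group node are placeholders, not part of this patch.

  /* Sketch only: the context is now keyed by the node's stable integer identifier,
   * so renaming the group node does not change the resulting hash. */
  static int32_t context_node_id_example(const blender::bke::ModifierComputeContext &parent,
                                         const bNode &group_node)
  {
    const blender::bke::NodeGroupComputeContext ctx(&parent, group_node);
    return ctx.node_id(); /* Equals group_node.identifier. */
  }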


@ -668,6 +668,7 @@ void nodeUnlinkNode(struct bNodeTree *ntree, struct bNode *node);
* Find the first available, non-duplicate name for a given node.
*/
void nodeUniqueName(struct bNodeTree *ntree, struct bNode *node);
void nodeUniqueID(struct bNodeTree *ntree, struct bNode *node);
/**
* Delete node, associated animation data and ID user count.
@ -687,16 +688,17 @@ namespace blender::bke {
/**
* \note keeps socket list order identical, for copying links.
* \note `unique_name` should usually be true, unless the \a dst_tree is temporary,
* or the names can already be assumed valid.
* \param use_unique: If true, make sure the node's identifier and name are unique in the new
* tree. Must be *true* if the \a dst_tree had nodes that weren't in the source node's tree.
* Must be *false* when simply copying a node tree, so that identifiers don't change.
*/
bNode *node_copy_with_mapping(bNodeTree *dst_tree,
const bNode &node_src,
int flag,
bool unique_name,
bool use_unique,
Map<const bNodeSocket *, bNodeSocket *> &new_socket_map);
bNode *node_copy(bNodeTree *dst_tree, const bNode &src_node, int flag, bool unique_name);
bNode *node_copy(bNodeTree *dst_tree, const bNode &src_node, int flag, bool use_unique);
} // namespace blender::bke


@ -8,6 +8,7 @@
#include "BLI_multi_value_map.hh"
#include "BLI_utility_mixins.hh"
#include "BLI_vector.hh"
#include "BLI_vector_set.hh"
#include "DNA_node_types.h"
@ -24,6 +25,36 @@ class NodeDeclaration;
struct GeometryNodesLazyFunctionGraphInfo;
} // namespace blender::nodes
namespace blender {
struct NodeIDHash {
uint64_t operator()(const bNode *node) const
{
return node->identifier;
}
uint64_t operator()(const int32_t id) const
{
return id;
}
};
struct NodeIDEquality {
bool operator()(const bNode *a, const bNode *b) const
{
return a->identifier == b->identifier;
}
bool operator()(const bNode *a, const int32_t b) const
{
return a->identifier == b;
}
bool operator()(const int32_t a, const bNode *b) const
{
return this->operator()(b, a);
}
};
} // namespace blender
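As a hedged aside, the heterogeneous lookup these functors enable relies on the node hash and the identifier hash agreeing; a minimal sketch, not part of the patch:

  /* Looking a bNode pointer up by an int32_t key only works when hashing the node and
   * hashing its identifier give the same value, and the mixed equality overloads agree. */
  inline bool node_id_lookup_invariant(const bNode *node)
  {
    return blender::NodeIDHash{}(node) == blender::NodeIDHash{}(node->identifier) &&
           blender::NodeIDEquality{}(node, node->identifier);
  }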
namespace blender::bke {
class bNodeTreeRuntime : NonCopyable, NonMovable {
@ -46,6 +77,13 @@ class bNodeTreeRuntime : NonCopyable, NonMovable {
*/
uint8_t runtime_flag = 0;
/**
* Storage of nodes based on their identifier. Also used as a contiguous array of nodes to
* allow simpler and more cache friendly iteration. Supports lookup by integer or by node.
* Unlike other caches, this is maintained eagerly while changing the tree.
*/
VectorSet<bNode *, DefaultProbingStrategy, NodeIDHash, NodeIDEquality> nodes_by_id;
/** Execution data.
*
* XXX It would be preferable to completely move this data out of the underlying node tree,
@ -91,7 +129,6 @@ class bNodeTreeRuntime : NonCopyable, NonMovable {
mutable std::atomic<int> allow_use_dirty_topology_cache = 0;
/** Only valid when #topology_cache_is_dirty is false. */
Vector<bNode *> nodes;
Vector<bNodeLink *> links;
Vector<bNodeSocket *> sockets;
Vector<bNodeSocket *> input_sockets;
@ -292,6 +329,28 @@ inline bool topology_cache_is_available(const bNodeSocket &socket)
/** \name #bNodeTree Inline Methods
* \{ */
inline blender::Span<const bNode *> bNodeTree::all_nodes() const
{
return this->runtime->nodes_by_id.as_span();
}
inline blender::Span<bNode *> bNodeTree::all_nodes()
{
return this->runtime->nodes_by_id;
}
inline bNode *bNodeTree::node_by_id(const int32_t identifier)
{
bNode *const *node = this->runtime->nodes_by_id.lookup_key_ptr_as(identifier);
return node ? *node : nullptr;
}
inline const bNode *bNodeTree::node_by_id(const int32_t identifier) const
{
const bNode *const *node = this->runtime->nodes_by_id.lookup_key_ptr_as(identifier);
return node ? *node : nullptr;
}
inline blender::Span<bNode *> bNodeTree::nodes_by_type(const blender::StringRefNull type_idname)
{
BLI_assert(blender::bke::node_tree_runtime::topology_cache_is_available(*this));
@ -329,18 +388,6 @@ inline blender::Span<bNode *> bNodeTree::toposort_right_to_left()
return this->runtime->toposort_right_to_left;
}
inline blender::Span<const bNode *> bNodeTree::all_nodes() const
{
BLI_assert(blender::bke::node_tree_runtime::topology_cache_is_available(*this));
return this->runtime->nodes;
}
inline blender::Span<bNode *> bNodeTree::all_nodes()
{
BLI_assert(blender::bke::node_tree_runtime::topology_cache_is_available(*this));
return this->runtime->nodes;
}
inline blender::Span<const bNode *> bNodeTree::group_nodes() const
{
BLI_assert(blender::bke::node_tree_runtime::topology_cache_is_available(*this));


@ -1,5 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "DNA_node_types.h"
#include "BKE_compute_contexts.hh"
namespace blender::bke {
@ -17,22 +19,30 @@ void ModifierComputeContext::print_current_in_line(std::ostream &stream) const
stream << "Modifier: " << modifier_name_;
}
NodeGroupComputeContext::NodeGroupComputeContext(const ComputeContext *parent,
std::string node_name)
: ComputeContext(s_static_type, parent), node_name_(std::move(node_name))
NodeGroupComputeContext::NodeGroupComputeContext(const ComputeContext *parent, const int node_id)
: ComputeContext(s_static_type, parent), node_id_(node_id)
{
hash_.mix_in(s_static_type, strlen(s_static_type));
hash_.mix_in(node_name_.data(), node_name_.size());
hash_.mix_in(&node_id_, sizeof(int32_t));
}
StringRefNull NodeGroupComputeContext::node_name() const
NodeGroupComputeContext::NodeGroupComputeContext(const ComputeContext *parent, const bNode &node)
: NodeGroupComputeContext(parent, node.identifier)
{
return node_name_;
#ifdef DEBUG
debug_node_name_ = node.name;
#endif
}
void NodeGroupComputeContext::print_current_in_line(std::ostream &stream) const
{
stream << "Node: " << node_name_;
#ifdef DEBUG
if (!debug_node_name_.empty()) {
stream << "Node: " << debug_node_name_;
return;
}
#endif
stream << "Node ID: " << node_id_;
}
} // namespace blender::bke


@ -36,6 +36,7 @@
#include "BLI_listbase.h"
#include "BLI_map.hh"
#include "BLI_path_util.h"
#include "BLI_rand.hh"
#include "BLI_set.hh"
#include "BLI_stack.hh"
#include "BLI_string.h"
@ -84,6 +85,8 @@
#include "BLO_read_write.h"
#include "PIL_time.h"
#define NODE_DEFAULT_MAX_WIDTH 700
using blender::Array;
@ -136,37 +139,39 @@ static void ntree_copy_data(Main * /*bmain*/, ID *id_dst, const ID *id_src, cons
const int flag_subdata = flag | LIB_ID_CREATE_NO_USER_REFCOUNT;
ntree_dst->runtime = MEM_new<bNodeTreeRuntime>(__func__);
bNodeTreeRuntime &dst_runtime = *ntree_dst->runtime;
/* in case a running nodetree is copied */
ntree_dst->runtime->execdata = nullptr;
BLI_listbase_clear(&ntree_dst->nodes);
BLI_listbase_clear(&ntree_dst->links);
Map<const bNode *, bNode *> node_map;
Map<const bNodeSocket *, bNodeSocket *> socket_map;
dst_runtime.nodes_by_id.reserve(ntree_src->all_nodes().size());
BLI_listbase_clear(&ntree_dst->nodes);
LISTBASE_FOREACH (const bNode *, src_node, &ntree_src->nodes) {
/* Don't find a unique name for every node, since they should have valid names already. */
bNode *new_node = blender::bke::node_copy_with_mapping(
ntree_dst, *src_node, flag_subdata, false, socket_map);
node_map.add(src_node, new_node);
dst_runtime.nodes_by_id.add_new(new_node);
}
/* copy links */
BLI_listbase_clear(&ntree_dst->links);
LISTBASE_FOREACH (const bNodeLink *, src_link, &ntree_src->links) {
bNodeLink *dst_link = (bNodeLink *)MEM_dupallocN(src_link);
dst_link->fromnode = node_map.lookup(src_link->fromnode);
dst_link->fromnode = dst_runtime.nodes_by_id.lookup_key_as(src_link->fromnode->identifier);
dst_link->fromsock = socket_map.lookup(src_link->fromsock);
dst_link->tonode = node_map.lookup(src_link->tonode);
dst_link->tonode = dst_runtime.nodes_by_id.lookup_key_as(src_link->tonode->identifier);
dst_link->tosock = socket_map.lookup(src_link->tosock);
BLI_assert(dst_link->tosock);
dst_link->tosock->link = dst_link;
BLI_addtail(&ntree_dst->links, dst_link);
}
/* update node->parent pointers */
LISTBASE_FOREACH (bNode *, new_node, &ntree_dst->nodes) {
if (new_node->parent) {
new_node->parent = dst_runtime.nodes_by_id.lookup_key_as(new_node->parent->identifier);
}
}
/* copy interface sockets */
BLI_listbase_clear(&ntree_dst->inputs);
LISTBASE_FOREACH (const bNodeSocket *, src_socket, &ntree_src->inputs) {
@ -197,15 +202,8 @@ static void ntree_copy_data(Main * /*bmain*/, ID *id_dst, const ID *id_src, cons
ntree_dst->previews = nullptr;
}
/* update node->parent pointers */
LISTBASE_FOREACH (bNode *, new_node, &ntree_dst->nodes) {
if (new_node->parent) {
new_node->parent = node_map.lookup(new_node->parent);
}
}
if (ntree_src->runtime->field_inferencing_interface) {
ntree_dst->runtime->field_inferencing_interface = std::make_unique<FieldInferencingInterface>(
dst_runtime.field_inferencing_interface = std::make_unique<FieldInferencingInterface>(
*ntree_src->runtime->field_inferencing_interface);
}
@ -679,6 +677,15 @@ void ntreeBlendReadData(BlendDataReader *reader, ID *owner_id, bNodeTree *ntree)
node->runtime = MEM_new<bNodeRuntime>(__func__);
node->typeinfo = nullptr;
/* Create the `nodes_by_id` cache eagerly so it can be expected to be valid. Because
* we create it here we also have to check for zero identifiers from previous versions. */
if (ntree->runtime->nodes_by_id.contains_as(node->identifier)) {
nodeUniqueID(ntree, node);
}
else {
ntree->runtime->nodes_by_id.add_new(node);
}
BLO_read_list(reader, &node->inputs);
BLO_read_list(reader, &node->outputs);
@ -764,6 +771,7 @@ void ntreeBlendReadData(BlendDataReader *reader, ID *owner_id, bNodeTree *ntree)
}
}
BLO_read_list(reader, &ntree->links);
BLI_assert(ntree->all_nodes().size() == BLI_listbase_count(&ntree->nodes));
/* and we connect the rest */
LISTBASE_FOREACH (bNode *, node, &ntree->nodes) {
@ -2176,11 +2184,28 @@ void nodeUniqueName(bNodeTree *ntree, bNode *node)
&ntree->nodes, node, DATA_("Node"), '.', offsetof(bNode, name), sizeof(node->name));
}
void nodeUniqueID(bNodeTree *ntree, bNode *node)
{
/* Use a pointer cast to avoid overflow warnings. */
const double time = PIL_check_seconds_timer() * 1000000.0;
blender::RandomNumberGenerator id_rng{*reinterpret_cast<const uint32_t *>(&time)};
/* In the unlikely case that the random ID is already used, choose a new one until it is unique. */
int32_t new_id = id_rng.get_int32();
while (ntree->runtime->nodes_by_id.contains_as(new_id)) {
new_id = id_rng.get_int32();
}
node->identifier = new_id;
ntree->runtime->nodes_by_id.add_new(node);
}
bNode *nodeAddNode(const bContext *C, bNodeTree *ntree, const char *idname)
{
bNode *node = MEM_cnew<bNode>("new node");
node->runtime = MEM_new<bNodeRuntime>(__func__);
BLI_addtail(&ntree->nodes, node);
nodeUniqueID(ntree, node);
BLI_strncpy(node->idname, idname, sizeof(node->idname));
node_set_typeinfo(C, ntree, node, nodeTypeFind(idname));
@ -2241,7 +2266,7 @@ namespace blender::bke {
bNode *node_copy_with_mapping(bNodeTree *dst_tree,
const bNode &node_src,
const int flag,
const bool unique_name,
const bool use_unique,
Map<const bNodeSocket *, bNodeSocket *> &socket_map)
{
bNode *node_dst = (bNode *)MEM_mallocN(sizeof(bNode), __func__);
@ -2251,8 +2276,9 @@ bNode *node_copy_with_mapping(bNodeTree *dst_tree,
/* Can be called for nodes outside a node tree (e.g. clipboard). */
if (dst_tree) {
if (unique_name) {
if (use_unique) {
nodeUniqueName(dst_tree, node_dst);
nodeUniqueID(dst_tree, node_dst);
}
BLI_addtail(&dst_tree->nodes, node_dst);
}
@ -2314,13 +2340,10 @@ bNode *node_copy_with_mapping(bNodeTree *dst_tree,
return node_dst;
}
bNode *node_copy(bNodeTree *dst_tree,
const bNode &src_node,
const int flag,
const bool unique_name)
bNode *node_copy(bNodeTree *dst_tree, const bNode &src_node, const int flag, const bool use_unique)
{
Map<const bNodeSocket *, bNodeSocket *> socket_map;
return node_copy_with_mapping(dst_tree, src_node, flag, unique_name, socket_map);
return node_copy_with_mapping(dst_tree, src_node, flag, use_unique, socket_map);
}
} // namespace blender::bke
@ -2910,8 +2933,20 @@ static void node_unlink_attached(bNodeTree *ntree, bNode *parent)
}
}
/* Free the node itself. ID user refcounting is up the caller,
* that does not happen here. */
static void rebuild_nodes_vector(bNodeTree &node_tree)
{
/* Rebuild nodes #VectorSet which must have the same order as the list. */
node_tree.runtime->nodes_by_id.clear();
LISTBASE_FOREACH (bNode *, node, &node_tree.nodes) {
node_tree.runtime->nodes_by_id.add_new(node);
}
}
/**
* Free the node itself.
*
* \note: ID user refcounting and changing the `nodes_by_id` vector are up to the caller.
*/
static void node_free_node(bNodeTree *ntree, bNode *node)
{
/* since it is called while free database, node->id is undefined */
@ -2919,6 +2954,8 @@ static void node_free_node(bNodeTree *ntree, bNode *node)
/* can be called for nodes outside a node tree (e.g. clipboard) */
if (ntree) {
BLI_remlink(&ntree->nodes, node);
/* Rebuild nodes #VectorSet which must have the same order as the list. */
rebuild_nodes_vector(*ntree);
/* texture node has bad habit of keeping exec data around */
if (ntree->type == NTREE_TEXTURE && ntree->runtime->execdata) {
@ -2976,6 +3013,7 @@ void ntreeFreeLocalNode(bNodeTree *ntree, bNode *node)
node_unlink_attached(ntree, node);
node_free_node(ntree, node);
rebuild_nodes_vector(*ntree);
}
void nodeRemoveNode(Main *bmain, bNodeTree *ntree, bNode *node, bool do_id_user)
@ -3035,6 +3073,7 @@ void nodeRemoveNode(Main *bmain, bNodeTree *ntree, bNode *node, bool do_id_user)
/* Free node itself. */
node_free_node(ntree, node);
rebuild_nodes_vector(*ntree);
}
static void node_socket_interface_free(bNodeTree * /*ntree*/,


@ -45,15 +45,16 @@ static void double_checked_lock_with_task_isolation(std::mutex &mutex,
static void update_node_vector(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
tree_runtime.nodes.clear();
const Span<bNode *> nodes = tree_runtime.nodes_by_id;
tree_runtime.group_nodes.clear();
tree_runtime.has_undefined_nodes_or_sockets = false;
LISTBASE_FOREACH (bNode *, node, &ntree.nodes) {
node->runtime->index_in_tree = tree_runtime.nodes.append_and_get_index(node);
node->runtime->owner_tree = const_cast<bNodeTree *>(&ntree);
tree_runtime.has_undefined_nodes_or_sockets |= node->typeinfo == &NodeTypeUndefined;
if (node->is_group()) {
tree_runtime.group_nodes.append(node);
for (const int i : nodes.index_range()) {
bNode &node = *nodes[i];
node.runtime->index_in_tree = i;
node.runtime->owner_tree = const_cast<bNodeTree *>(&ntree);
tree_runtime.has_undefined_nodes_or_sockets |= node.typeinfo == &NodeTypeUndefined;
if (node.is_group()) {
tree_runtime.group_nodes.append(&node);
}
}
}
@ -73,7 +74,7 @@ static void update_socket_vectors_and_owner_node(const bNodeTree &ntree)
tree_runtime.sockets.clear();
tree_runtime.input_sockets.clear();
tree_runtime.output_sockets.clear();
for (bNode *node : tree_runtime.nodes) {
for (bNode *node : tree_runtime.nodes_by_id) {
bNodeRuntime &node_runtime = *node->runtime;
node_runtime.inputs.clear();
node_runtime.outputs.clear();
@ -99,7 +100,7 @@ static void update_socket_vectors_and_owner_node(const bNodeTree &ntree)
static void update_internal_link_inputs(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
for (bNode *node : tree_runtime.nodes) {
for (bNode *node : tree_runtime.nodes_by_id) {
for (bNodeSocket *socket : node->runtime->outputs) {
socket->runtime->internal_link_input = nullptr;
}
@ -112,7 +113,7 @@ static void update_internal_link_inputs(const bNodeTree &ntree)
static void update_directly_linked_links_and_sockets(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
for (bNode *node : tree_runtime.nodes) {
for (bNode *node : tree_runtime.nodes_by_id) {
for (bNodeSocket *socket : node->runtime->inputs) {
socket->runtime->directly_linked_links.clear();
socket->runtime->directly_linked_sockets.clear();
@ -207,9 +208,10 @@ static void find_logical_origins_for_socket_recursive(
static void update_logical_origins(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
threading::parallel_for(tree_runtime.nodes.index_range(), 128, [&](const IndexRange range) {
Span<bNode *> nodes = tree_runtime.nodes_by_id;
threading::parallel_for(nodes.index_range(), 128, [&](const IndexRange range) {
for (const int i : range) {
bNode &node = *tree_runtime.nodes[i];
bNode &node = *nodes[i];
for (bNodeSocket *socket : node.runtime->inputs) {
Vector<bNodeSocket *, 16> sockets_in_current_chain;
socket->runtime->logically_linked_sockets.clear();
@ -229,7 +231,7 @@ static void update_nodes_by_type(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
tree_runtime.nodes_by_type.clear();
for (bNode *node : tree_runtime.nodes) {
for (bNode *node : tree_runtime.nodes_by_id) {
tree_runtime.nodes_by_type.add(node->typeinfo, node);
}
}
@ -237,8 +239,9 @@ static void update_nodes_by_type(const bNodeTree &ntree)
static void update_sockets_by_identifier(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
threading::parallel_for(tree_runtime.nodes.index_range(), 128, [&](const IndexRange range) {
for (bNode *node : tree_runtime.nodes.as_span().slice(range)) {
Span<bNode *> nodes = tree_runtime.nodes_by_id;
threading::parallel_for(nodes.index_range(), 128, [&](const IndexRange range) {
for (bNode *node : nodes.slice(range)) {
node->runtime->inputs_by_identifier.clear();
node->runtime->outputs_by_identifier.clear();
for (bNodeSocket *socket : node->runtime->inputs) {
@ -337,11 +340,11 @@ static void update_toposort(const bNodeTree &ntree,
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
r_sorted_nodes.clear();
r_sorted_nodes.reserve(tree_runtime.nodes.size());
r_sorted_nodes.reserve(tree_runtime.nodes_by_id.size());
r_cycle_detected = false;
Array<ToposortNodeState> node_states(tree_runtime.nodes.size());
for (bNode *node : tree_runtime.nodes) {
Array<ToposortNodeState> node_states(tree_runtime.nodes_by_id.size());
for (bNode *node : tree_runtime.nodes_by_id) {
if (node_states[node->runtime->index_in_tree].is_done) {
/* Ignore nodes that are done already. */
continue;
@ -355,9 +358,9 @@ static void update_toposort(const bNodeTree &ntree,
toposort_from_start_node(direction, *node, node_states, r_sorted_nodes, r_cycle_detected);
}
if (r_sorted_nodes.size() < tree_runtime.nodes.size()) {
if (r_sorted_nodes.size() < tree_runtime.nodes_by_id.size()) {
r_cycle_detected = true;
for (bNode *node : tree_runtime.nodes) {
for (bNode *node : tree_runtime.nodes_by_id) {
if (node_states[node->runtime->index_in_tree].is_done) {
/* Ignore nodes that are done already. */
continue;
@ -367,13 +370,13 @@ static void update_toposort(const bNodeTree &ntree,
}
}
BLI_assert(tree_runtime.nodes.size() == r_sorted_nodes.size());
BLI_assert(tree_runtime.nodes_by_id.size() == r_sorted_nodes.size());
}
static void update_root_frames(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
Span<bNode *> nodes = tree_runtime.nodes;
Span<bNode *> nodes = tree_runtime.nodes_by_id;
tree_runtime.root_frames.clear();
@ -387,7 +390,7 @@ static void update_root_frames(const bNodeTree &ntree)
static void update_direct_frames_childrens(const bNodeTree &ntree)
{
bNodeTreeRuntime &tree_runtime = *ntree.runtime;
Span<bNode *> nodes = tree_runtime.nodes;
Span<bNode *> nodes = tree_runtime.nodes_by_id;
for (bNode *node : nodes) {
node->runtime->direct_children_in_frame.clear();
@ -432,7 +435,7 @@ static void ensure_topology_cache(const bNodeTree &ntree)
update_internal_link_inputs(ntree);
update_directly_linked_links_and_sockets(ntree);
threading::parallel_invoke(
tree_runtime.nodes.size() > 32,
tree_runtime.nodes_by_id.size() > 32,
[&]() { update_logical_origins(ntree); },
[&]() { update_nodes_by_type(ntree); },
[&]() { update_sockets_by_identifier(ntree); },


@ -1009,6 +1009,15 @@ class NodeTreeMainUpdater {
result.interface_changed = true;
}
#ifdef DEBUG
/* Check the uniqueness of node identifiers. */
Set<int32_t> node_identifiers;
LISTBASE_FOREACH (bNode *, node, &ntree.nodes) {
BLI_assert(node->identifier >= 0);
node_identifiers.add_new(node->identifier);
}
#endif
return result;
}


@ -761,8 +761,7 @@ static void scene_foreach_layer_collection(LibraryForeachIDData *data, ListBase
(lc->collection->id.flag & LIB_EMBEDDED_DATA) != 0) ?
IDWALK_CB_EMBEDDED :
IDWALK_CB_NOP;
BKE_LIB_FOREACHID_PROCESS_IDSUPER(
data, lc->collection, cb_flag | IDWALK_CB_DIRECT_WEAK_LINK);
BKE_LIB_FOREACHID_PROCESS_IDSUPER(data, lc->collection, cb_flag | IDWALK_CB_DIRECT_WEAK_LINK);
scene_foreach_layer_collection(data, &lc->layer_collections);
}
}
@ -835,11 +834,10 @@ static void scene_foreach_id(ID *id, LibraryForeachIDData *data)
BKE_LIB_FOREACHID_PROCESS_IDSUPER(data, view_layer->mat_override, IDWALK_CB_USER);
BKE_view_layer_synced_ensure(scene, view_layer);
LISTBASE_FOREACH (Base *, base, BKE_view_layer_object_bases_get(view_layer)) {
BKE_LIB_FOREACHID_PROCESS_IDSUPER(data,
base->object,
IDWALK_CB_NOP |
IDWALK_CB_OVERRIDE_LIBRARY_NOT_OVERRIDABLE |
IDWALK_CB_DIRECT_WEAK_LINK);
BKE_LIB_FOREACHID_PROCESS_IDSUPER(
data,
base->object,
IDWALK_CB_NOP | IDWALK_CB_OVERRIDE_LIBRARY_NOT_OVERRIDABLE | IDWALK_CB_DIRECT_WEAK_LINK);
}
BKE_LIB_FOREACHID_PROCESS_FUNCTION_CALL(


@ -215,6 +215,7 @@ ViewerPathElem *BKE_viewer_path_elem_copy(const ViewerPathElem *src)
case VIEWER_PATH_ELEM_TYPE_NODE: {
const auto *old_elem = reinterpret_cast<const NodeViewerPathElem *>(src);
auto *new_elem = reinterpret_cast<NodeViewerPathElem *>(dst);
new_elem->node_id = old_elem->node_id;
if (old_elem->node_name != nullptr) {
new_elem->node_name = BLI_strdup(old_elem->node_name);
}
@ -243,7 +244,7 @@ bool BKE_viewer_path_elem_equal(const ViewerPathElem *a, const ViewerPathElem *b
case VIEWER_PATH_ELEM_TYPE_NODE: {
const auto *a_elem = reinterpret_cast<const NodeViewerPathElem *>(a);
const auto *b_elem = reinterpret_cast<const NodeViewerPathElem *>(b);
return StringRef(a_elem->node_name) == StringRef(b_elem->node_name);
return a_elem->node_id == b_elem->node_id;
}
}
return false;


@ -0,0 +1,146 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
#include <atomic>
#include "BLI_array.hh"
namespace blender {
/**
* Same as `DisjointSet`, but thread-safe (at a slightly higher cost for the single-threaded case).
*
* The implementation is based on the following paper:
* "Wait-free Parallel Algorithms for the Union-Find Problem"
* by Richard J. Anderson and Heather Woll.
*
* It's also inspired by this implementation: https://github.com/wjakob/dset.
*/
class AtomicDisjointSet {
private:
/* Relaxed memory order can generally be used with this algorithm. */
static constexpr auto relaxed = std::memory_order_relaxed;
struct Item {
int parent;
int rank;
};
/**
* An #Item per element. It's important that the entire item is in a single atomic, so that it
* can be updated atomically. */
mutable Array<std::atomic<Item>> items_;
public:
/**
* Create a new disjoint set with the given size. Initially, every element is in a separate set.
*/
AtomicDisjointSet(const int size);
/**
* Join the sets containing elements x and y. Nothing happens when they were in the same set
* before.
*/
void join(int x, int y)
{
while (true) {
x = this->find_root(x);
y = this->find_root(y);
if (x == y) {
/* They are in the same set already. */
return;
}
Item x_item = items_[x].load(relaxed);
Item y_item = items_[y].load(relaxed);
if (
/* Implement union by rank heuristic. */
x_item.rank > y_item.rank
/* If the rank is the same, make a consistent decision. */
|| (x_item.rank == y_item.rank && x < y)) {
std::swap(x_item, y_item);
std::swap(x, y);
}
/* Update parent of item x. */
const Item x_item_new{y, x_item.rank};
if (!items_[x].compare_exchange_strong(x_item, x_item_new, relaxed)) {
/* Another thread has updated item x, start again. */
continue;
}
if (x_item.rank == y_item.rank) {
/* Increase rank of item y. This may fail when another thread has updated item y in the
* meantime. That may lead to worse behavior with the union by rank heuristic, but seems to
* be ok in practice. */
const Item y_item_new{y, y_item.rank + 1};
items_[y].compare_exchange_weak(y_item, y_item_new, relaxed);
}
}
}
/**
* Return true when x and y are in the same set.
*/
bool in_same_set(int x, int y) const
{
while (true) {
x = this->find_root(x);
y = this->find_root(y);
if (x == y) {
return true;
}
if (items_[x].load(relaxed).parent == x) {
return false;
}
}
}
/**
* Find the element that represents the set containing x currently.
*/
int find_root(int x) const
{
while (true) {
const Item item = items_[x].load(relaxed);
if (x == item.parent) {
return x;
}
const int new_parent = items_[item.parent].load(relaxed).parent;
if (item.parent != new_parent) {
/* This halves the path for faster future lookups. This may fail, but that does not change
* correctness. */
Item expected = item;
const Item desired{new_parent, item.rank};
items_[x].compare_exchange_weak(expected, desired, relaxed);
}
x = new_parent;
}
}
/**
* True when x represents a set.
*/
bool is_root(const int x) const
{
const Item item = items_[x].load(relaxed);
return item.parent == x;
}
/**
* Get a reduced identifier for each element. This is deterministic and does not depend on the
* order of joins. The identifiers are ordered by their first occurrence. Consequently,
* `result[0]` is always zero (unless there are no elements).
*/
void calc_reduced_ids(MutableSpan<int> result) const;
/**
* Count the number of disjoint sets.
*/
int count_sets() const;
};
} // namespace blender
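A minimal usage sketch of the class above, assuming the usual blenlib includes; the element count and joins are arbitrary and not taken from the patch:

  #include "BLI_atomic_disjoint_set.hh"

  static int count_connected_sets_example()
  {
    blender::AtomicDisjointSet set(4); /* Elements 0..3, each initially in its own set. */
    set.join(0, 2);
    set.join(2, 3);
    /* join() and in_same_set() may be called concurrently from multiple threads. */
    const bool connected = set.in_same_set(0, 3); /* true */
    (void)connected;
    return set.count_sets(); /* 2: {0, 2, 3} and {1}. */
  }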


@ -102,6 +102,11 @@ template<typename T, BLI_ENABLE_IF((is_math_float_type<T>))> inline T fract(cons
return a - std::floor(a);
}
template<typename T> inline T sqrt(const T &a)
{
return std::sqrt(a);
}
template<typename T> inline T cos(const T &a)
{
return std::cos(a);
@ -132,9 +137,9 @@ template<typename T> inline T atan(const T &a)
return std::atan(a);
}
template<typename T> inline T atan2(const T &a, const T &b)
template<typename T> inline T atan2(const T &y, const T &x)
{
return std::atan2(a, b);
return std::atan2(y, x);
}
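A hedged sketch of the clarified argument order, assuming these templates live in the same namespace as the surrounding math functions:

  /* atan2 takes (y, x) and returns the angle of the vector (x, y) measured from the
   * positive X axis. */
  inline float angle_of_positive_y_axis_example()
  {
    return atan2(1.0f, 0.0f); /* Roughly pi / 2. */
  }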
template<typename T,


@ -414,6 +414,45 @@ template<typename T> inline int dominant_axis(const vec_base<T, 3> &a)
return ((b.x > b.y) ? ((b.x > b.z) ? 0 : 2) : ((b.y > b.z) ? 1 : 2));
}
/**
* Calculates a perpendicular vector to \a v.
* \note Returned vector can be in any perpendicular direction.
* \note Returned vector might not be the same length as \a v.
*/
template<typename T> inline vec_base<T, 3> orthogonal(const vec_base<T, 3> &v)
{
const int axis = dominant_axis(v);
switch (axis) {
case 0:
return {-v.y - v.z, v.x, v.x};
case 1:
return {v.y, -v.x - v.z, v.y};
case 2:
return {v.z, v.z, -v.x - v.y};
}
return v;
}
/**
* Calculates a perpendicular vector to \a v.
* \note Returned vector can be in any perpendicular direction.
*/
template<typename T> inline vec_base<T, 2> orthogonal(const vec_base<T, 2> &v)
{
return {-v.y, v.x};
}
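For illustration, a hedged sketch of the documented property, assuming dot() from the same header; not part of the patch:

  /* The result of orthogonal() is perpendicular to the input, but is generally neither
   * normalized nor the same length as the input. */
  inline bool orthogonal_is_perpendicular_example()
  {
    const vec_base<float, 3> v{0.2f, -1.5f, 0.7f};
    const vec_base<float, 3> perp = orthogonal(v);
    return std::abs(dot(v, perp)) < 1e-6f; /* Zero up to floating point error. */
  }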
template<typename T, int Size>
inline bool compare(const vec_base<T, Size> &a, const vec_base<T, Size> &b, const T limit)
{
for (int i = 0; i < Size; i++) {
if (std::abs(a[i] - b[i]) > limit) {
return false;
}
}
return true;
}
/** Intersections. */
template<typename T> struct isect_result {


@ -50,6 +50,7 @@ set(SRC
intern/array_utils.c
intern/array_utils.cc
intern/astar.c
intern/atomic_disjoint_set.cc
intern/bitmap.c
intern/bitmap_draw_2d.c
intern/boxpack_2d.c
@ -172,6 +173,7 @@ set(SRC
BLI_asan.h
BLI_assert.h
BLI_astar.h
BLI_atomic_disjoint_set.hh
BLI_bit_vector.hh
BLI_bitmap.h
BLI_bitmap_draw_2d.h


@ -0,0 +1,108 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#include "BLI_atomic_disjoint_set.hh"
#include "BLI_enumerable_thread_specific.hh"
#include "BLI_map.hh"
#include "BLI_sort.hh"
#include "BLI_task.hh"
namespace blender {
AtomicDisjointSet::AtomicDisjointSet(const int size) : items_(size)
{
threading::parallel_for(IndexRange(size), 4096, [&](const IndexRange range) {
for (const int i : range) {
items_[i].store(Item{i, 0}, relaxed);
}
});
}
static void update_first_occurence(Map<int, int> &map, const int root, const int index)
{
map.add_or_modify(
root,
[&](int *first_occurence) { *first_occurence = index; },
[&](int *first_occurence) {
if (index < *first_occurence) {
*first_occurence = index;
}
});
}
void AtomicDisjointSet::calc_reduced_ids(MutableSpan<int> result) const
{
BLI_assert(result.size() == items_.size());
const int size = result.size();
/* Find the root for each element. With multi-threading, this root is not deterministic, so
* some post-processing has to be done to make it deterministic. */
threading::EnumerableThreadSpecific<Map<int, int>> first_occurence_by_root_per_thread;
threading::parallel_for(IndexRange(size), 1024, [&](const IndexRange range) {
Map<int, int> &first_occurence_by_root = first_occurence_by_root_per_thread.local();
for (const int i : range) {
const int root = this->find_root(i);
result[i] = root;
update_first_occurence(first_occurence_by_root, root, i);
}
});
/* Build a map that contains the first element index that has a certain root. */
Map<int, int> &combined_map = first_occurence_by_root_per_thread.local();
for (const Map<int, int> &other_map : first_occurence_by_root_per_thread) {
if (&combined_map == &other_map) {
continue;
}
for (const auto item : other_map.items()) {
update_first_occurence(combined_map, item.key, item.value);
}
}
struct RootOccurence {
int root;
int first_occurence;
};
/* Sort roots by first occurence. This removes the non-determinism above. */
Vector<RootOccurence, 16> root_occurences;
root_occurences.reserve(combined_map.size());
for (const auto item : combined_map.items()) {
root_occurences.append({item.key, item.value});
}
parallel_sort(root_occurences.begin(),
root_occurences.end(),
[](const RootOccurence &a, const RootOccurence &b) {
return a.first_occurence < b.first_occurence;
});
/* Remap original root values with deterministic values. */
Map<int, int> id_by_root;
id_by_root.reserve(root_occurences.size());
for (const int i : root_occurences.index_range()) {
id_by_root.add_new(root_occurences[i].root, i);
}
threading::parallel_for(IndexRange(size), 1024, [&](const IndexRange range) {
for (const int i : range) {
result[i] = id_by_root.lookup(result[i]);
}
});
}
int AtomicDisjointSet::count_sets() const
{
return threading::parallel_reduce<int>(
items_.index_range(),
1024,
0,
[&](const IndexRange range, int count) {
for (const int i : range) {
if (this->is_root(i)) {
count++;
}
}
return count;
},
[](const int a, const int b) { return a + b; });
}
} // namespace blender
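As a concrete, hedged illustration of the determinism described above; the element count and joins are arbitrary:

  /* With 5 elements and joins (4, 2) and (0, 3), the sets are {0, 3}, {1} and {2, 4}.
   * Reduced ids are assigned in order of first occurrence, so the result is the same
   * no matter which thread found which root. */
  blender::AtomicDisjointSet set(5);
  set.join(4, 2);
  set.join(0, 3);
  blender::Array<int> ids(5);
  set.calc_reduced_ids(ids.as_mutable_span());
  /* ids == {0, 1, 2, 0, 2}, set.count_sets() == 3. */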


@ -35,8 +35,8 @@ Object *parse_object_only(const ViewerPath &viewer_path);
struct ViewerPathForGeometryNodesViewer {
Object *object;
blender::StringRefNull modifier_name;
blender::Vector<blender::StringRefNull> group_node_names;
blender::StringRefNull viewer_node_name;
blender::Vector<int32_t> group_node_ids;
int32_t viewer_node_id;
};
/**


@ -746,8 +746,10 @@ static int apply_objects_internal(bContext *C,
if (ob->type == OB_FONT) {
if (apply_rot || apply_loc) {
BKE_reportf(
reports, RPT_ERROR, "Font's can only have scale applied: \"%s\"", ob->id.name + 2);
BKE_reportf(reports,
RPT_ERROR,
"Text objects can only have scale applied: \"%s\"",
ob->id.name + 2);
changed = false;
}
}


@ -2942,7 +2942,7 @@ static char *current_relpath_append(const FileListReadJob *job_params, const cha
return BLI_strdup(filename);
}
BLI_assert(relbase[strlen(relbase) - 1] == SEP);
BLI_assert(ELEM(relbase[strlen(relbase) - 1], SEP, ALTSEP));
BLI_assert(BLI_path_is_rel(relbase));
char relpath[FILE_MAX_LIBEXTRA];


@ -187,7 +187,8 @@ static void node_socket_add_tooltip_in_node_editor(TreeDrawContext * /*tree_draw
const bNodeSocket *sock,
uiLayout *layout);
static bool compare_nodes(const bNode *a, const bNode *b)
/** Return true when \a a should be behind \a b and false otherwise. */
static bool compare_node_depth(const bNode *a, const bNode *b)
{
/* These tell if either the node or any of the parent nodes is selected.
* A selected parent means an unselected node is also in foreground! */
@ -200,7 +201,7 @@ static bool compare_nodes(const bNode *a, const bNode *b)
for (bNode *parent = a->parent; parent; parent = parent->parent) {
/* If B is an ancestor, it is always behind A. */
if (parent == b) {
return true;
return false;
}
/* Any selected ancestor moves the node forward. */
if (parent->flag & NODE_ACTIVE) {
@ -213,7 +214,7 @@ static bool compare_nodes(const bNode *a, const bNode *b)
for (bNode *parent = b->parent; parent; parent = parent->parent) {
/* If A is an ancestor, it is always behind B. */
if (parent == a) {
return false;
return true;
}
/* Any selected ancestor moves the node forward. */
if (parent->flag & NODE_ACTIVE) {
@ -226,17 +227,23 @@ static bool compare_nodes(const bNode *a, const bNode *b)
/* One of the nodes is in the background and the other not. */
if ((a->flag & NODE_BACKGROUND) && !(b->flag & NODE_BACKGROUND)) {
return false;
}
if (!(a->flag & NODE_BACKGROUND) && (b->flag & NODE_BACKGROUND)) {
return true;
}
if ((b->flag & NODE_BACKGROUND) && !(a->flag & NODE_BACKGROUND)) {
return false;
}
/* One has a higher selection state (active > selected > nothing). */
if (!b_active && a_active) {
if (a_active && !b_active) {
return false;
}
if (b_active && !a_active) {
return true;
}
if (!b_select && (a_active || a_select)) {
return false;
}
if (!a_select && (b_active || b_select)) {
return true;
}
@ -245,57 +252,22 @@ static bool compare_nodes(const bNode *a, const bNode *b)
void node_sort(bNodeTree &ntree)
{
/* Merge sort is the algorithm of choice here. */
int totnodes = BLI_listbase_count(&ntree.nodes);
Array<bNode *> sort_nodes = ntree.all_nodes();
std::stable_sort(sort_nodes.begin(), sort_nodes.end(), compare_node_depth);
int k = 1;
while (k < totnodes) {
bNode *first_a = (bNode *)ntree.nodes.first;
bNode *first_b = first_a;
/* If nothing was changed, exit early. Otherwise the node tree's runtime
* node vector needs to be rebuilt, since it cannot be reordered in place. */
if (sort_nodes == ntree.all_nodes()) {
return;
}
do {
/* Set up first_b pointer. */
for (int b = 0; b < k && first_b; b++) {
first_b = first_b->next;
}
/* All batches merged? */
if (first_b == nullptr) {
break;
}
BKE_ntree_update_tag_node_reordered(&ntree);
/* Merge batches. */
bNode *node_a = first_a;
bNode *node_b = first_b;
int a = 0;
int b = 0;
while (a < k && b < k && node_b) {
if (compare_nodes(node_a, node_b) == 0) {
node_a = node_a->next;
a++;
}
else {
bNode *tmp = node_b;
node_b = node_b->next;
b++;
BLI_remlink(&ntree.nodes, tmp);
BLI_insertlinkbefore(&ntree.nodes, node_a, tmp);
BKE_ntree_update_tag_node_reordered(&ntree);
}
}
/* Set up first pointers for next batch. */
first_b = node_b;
for (; b < k; b++) {
/* All nodes sorted? */
if (first_b == nullptr) {
break;
}
first_b = first_b->next;
}
first_a = first_b;
} while (first_b);
k = k << 1;
ntree.runtime->nodes_by_id.clear();
BLI_listbase_clear(&ntree.nodes);
for (const int i : sort_nodes.index_range()) {
BLI_addtail(&ntree.nodes, sort_nodes[i]);
ntree.runtime->nodes_by_id.add_new(sort_nodes[i]);
}
}
@ -1691,7 +1663,7 @@ static void node_add_error_message_button(TreeDrawContext &tree_draw_ctx,
Span<geo_log::NodeWarning> warnings;
if (tree_draw_ctx.geo_tree_log) {
geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.name);
geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.identifier);
if (node_log != nullptr) {
warnings = node_log->warnings;
}
@ -1751,7 +1723,8 @@ static std::optional<std::chrono::nanoseconds> node_get_execution_time(
}
}
else {
if (const geo_log::GeoNodeLog *node_log = tree_log->nodes.lookup_ptr_as(tnode->name)) {
if (const geo_log::GeoNodeLog *node_log = tree_log->nodes.lookup_ptr_as(
tnode->identifier)) {
found_node = true;
run_time += node_log->run_time;
}
@ -1762,7 +1735,7 @@ static std::optional<std::chrono::nanoseconds> node_get_execution_time(
}
return std::nullopt;
}
if (const geo_log::GeoNodeLog *node_log = tree_log->nodes.lookup_ptr(node.name)) {
if (const geo_log::GeoNodeLog *node_log = tree_log->nodes.lookup_ptr(node.identifier)) {
return node_log->run_time;
}
return std::nullopt;
@ -1903,7 +1876,7 @@ static std::optional<NodeExtraInfoRow> node_get_accessed_attributes_row(
}
}
tree_draw_ctx.geo_tree_log->ensure_used_named_attributes();
geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.name);
geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.identifier);
if (node_log == nullptr) {
return std::nullopt;
}
@ -1944,15 +1917,17 @@ static Vector<NodeExtraInfoRow> node_get_extra_info(TreeDrawContext &tree_draw_c
}
}
if (snode.edittree->type == NTREE_GEOMETRY && tree_draw_ctx.geo_tree_log != nullptr) {
tree_draw_ctx.geo_tree_log->ensure_debug_messages();
const geo_log::GeoNodeLog *node_log = tree_draw_ctx.geo_tree_log->nodes.lookup_ptr(node.name);
if (node_log != nullptr) {
for (const StringRef message : node_log->debug_messages) {
NodeExtraInfoRow row;
row.text = message;
row.icon = ICON_INFO;
rows.append(std::move(row));
if (snode.edittree->type == NTREE_GEOMETRY) {
if (geo_log::GeoTreeLog *tree_log = tree_draw_ctx.geo_tree_log) {
tree_log->ensure_debug_messages();
const geo_log::GeoNodeLog *node_log = tree_log->nodes.lookup_ptr(node.identifier);
if (node_log != nullptr) {
for (const StringRef message : node_log->debug_messages) {
NodeExtraInfoRow row;
row.text = message;
row.icon = ICON_INFO;
rows.append(std::move(row));
}
}
}
}


@ -40,7 +40,7 @@ using blender::nodes::geo_eval_log::GeometryAttributeInfo;
namespace blender::ed::space_node {
struct AttributeSearchData {
char node_name[MAX_NAME];
int32_t node_id;
char socket_identifier[MAX_NAME];
};
@ -62,7 +62,7 @@ static Vector<const GeometryAttributeInfo *> get_attribute_info_from_context(
BLI_assert_unreachable();
return {};
}
bNode *node = nodeFindNodebyName(node_tree, data.node_name);
const bNode *node = node_tree->node_by_id(data.node_id);
if (node == nullptr) {
BLI_assert_unreachable();
return {};
@ -84,7 +84,7 @@ static Vector<const GeometryAttributeInfo *> get_attribute_info_from_context(
}
return attributes;
}
GeoNodeLog *node_log = tree_log->nodes.lookup_ptr(node->name);
GeoNodeLog *node_log = tree_log->nodes.lookup_ptr(node->identifier);
if (node_log == nullptr) {
return {};
}
@ -173,7 +173,7 @@ static void attribute_search_exec_fn(bContext *C, void *data_v, void *item_v)
return;
}
AttributeSearchData *data = static_cast<AttributeSearchData *>(data_v);
bNode *node = nodeFindNodebyName(node_tree, data->node_name);
bNode *node = node_tree->node_by_id(data->node_id);
if (node == nullptr) {
BLI_assert_unreachable();
return;
@ -243,7 +243,7 @@ void node_geometry_add_attribute_search_button(const bContext & /*C*/,
const bNodeSocket &socket = *static_cast<const bNodeSocket *>(socket_ptr.data);
AttributeSearchData *data = MEM_new<AttributeSearchData>(__func__);
BLI_strncpy(data->node_name, node.name, sizeof(data->node_name));
data->node_id = node.identifier;
BLI_strncpy(data->socket_identifier, socket.identifier, sizeof(data->socket_identifier));
UI_but_func_search_set_results_are_suggestions(but, true);

View File

@ -249,11 +249,11 @@ static bool node_group_ungroup(Main *bmain, bNodeTree *ntree, bNode *gnode)
/* migrate node */
BLI_remlink(&wgroup->nodes, node);
BLI_addtail(&ntree->nodes, node);
BKE_ntree_update_tag_node_new(ntree, node);
/* ensure unique node name in the node tree */
nodeUniqueID(ntree, node);
nodeUniqueName(ntree, node);
BKE_ntree_update_tag_node_new(ntree, node);
if (wgroup->adt) {
PointerRNA ptr;
RNA_pointer_create(&ntree->id, &RNA_Node, node, &ptr);
@ -494,8 +494,7 @@ static bool node_group_separate_selected(
/* migrate node */
BLI_remlink(&ngroup.nodes, newnode);
BLI_addtail(&ntree.nodes, newnode);
/* ensure unique node name in the node tree */
nodeUniqueID(&ntree, newnode);
nodeUniqueName(&ntree, newnode);
if (!newnode->parent) {
@ -871,11 +870,11 @@ static void node_group_make_insert_selected(const bContext &C, bNodeTree &ntree,
/* change node-collection membership */
BLI_remlink(&ntree.nodes, node);
BLI_addtail(&ngroup->nodes, node);
nodeUniqueID(ngroup, node);
nodeUniqueName(ngroup, node);
BKE_ntree_update_tag_node_removed(&ntree);
BKE_ntree_update_tag_node_new(ngroup, node);
/* ensure unique node name in the ngroup */
nodeUniqueName(ngroup, node);
}
}
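For reference, the node-migration pattern after this change assigns the identifier before making the name unique in the destination tree. A minimal sketch based on the hunks above (`src` and `dst` are placeholder tree pointers, not names from the patch):

/* Sketch only: move `node` from `src` to `dst`. */
BLI_remlink(&src->nodes, node);
BLI_addtail(&dst->nodes, node);
nodeUniqueID(dst, node);   /* Ensure the identifier is unique within `dst`. */
nodeUniqueName(dst, node); /* Names must stay unique as well. */
BKE_ntree_update_tag_node_removed(src);
BKE_ntree_update_tag_node_new(dst, node);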

View File

@ -196,7 +196,8 @@ void ED_node_set_active_viewer_key(SpaceNode *snode)
if (snode->nodetree && path) {
/* A change in the active viewer may change which output node is used by the
* compositor, so we need to be notified about such changes. */
if (snode->nodetree->active_viewer_key.value != path->parent_key.value) {
if (snode->nodetree->active_viewer_key.value != path->parent_key.value &&
snode->nodetree->type == NTREE_COMPOSIT) {
DEG_id_tag_update(&snode->nodetree->id, ID_RECALC_NTREE_OUTPUT);
WM_main_add_notifier(NC_NODE, nullptr);
}

View File

@ -56,13 +56,18 @@ static void viewer_path_for_geometry_node(const SpaceNode &snode,
BLI_addtail(&r_dst.path, modifier_elem);
Vector<const bNodeTreePath *, 16> tree_path = snode.treepath;
for (const bNodeTreePath *tree_path_elem : tree_path.as_span().drop_front(1)) {
for (const int i : tree_path.index_range().drop_back(1)) {
/* The tree path contains the name of the node but not its ID. */
const bNode *node = nodeFindNodebyName(tree_path[i]->nodetree, tree_path[i + 1]->node_name);
/* The name in the tree path should match a group node in the tree. */
BLI_assert(node != nullptr);
NodeViewerPathElem *node_elem = BKE_viewer_path_elem_new_node();
node_elem->node_name = BLI_strdup(tree_path_elem->node_name);
BLI_addtail(&r_dst.path, node_elem);
node_elem->node_id = node->identifier;
node_elem->node_name = BLI_strdup(node->name);
}
NodeViewerPathElem *viewer_node_elem = BKE_viewer_path_elem_new_node();
viewer_node_elem->node_id = node.identifier;
viewer_node_elem->node_name = BLI_strdup(node.name);
BLI_addtail(&r_dst.path, viewer_node_elem);
}
@ -171,19 +176,16 @@ std::optional<ViewerPathForGeometryNodesViewer> parse_geometry_nodes_viewer(
return std::nullopt;
}
remaining_elems = remaining_elems.drop_front(1);
Vector<StringRefNull> node_names;
Vector<int32_t> node_ids;
for (const ViewerPathElem *elem : remaining_elems) {
if (elem->type != VIEWER_PATH_ELEM_TYPE_NODE) {
return std::nullopt;
}
const char *node_name = reinterpret_cast<const NodeViewerPathElem *>(elem)->node_name;
if (node_name == nullptr) {
return std::nullopt;
}
node_names.append(node_name);
const int32_t node_id = reinterpret_cast<const NodeViewerPathElem *>(elem)->node_id;
node_ids.append(node_id);
}
const StringRefNull viewer_node_name = node_names.pop_last();
return ViewerPathForGeometryNodesViewer{root_ob, modifier_name, node_names, viewer_node_name};
const int32_t viewer_node_id = node_ids.pop_last();
return ViewerPathForGeometryNodesViewer{root_ob, modifier_name, node_ids, viewer_node_id};
}
bool exists_geometry_nodes_viewer(const ViewerPathForGeometryNodesViewer &parsed_viewer_path)
@ -207,10 +209,10 @@ bool exists_geometry_nodes_viewer(const ViewerPathForGeometryNodesViewer &parsed
}
const bNodeTree *ngroup = modifier->node_group;
ngroup->ensure_topology_cache();
for (const StringRefNull group_node_name : parsed_viewer_path.group_node_names) {
for (const int32_t group_node_id : parsed_viewer_path.group_node_ids) {
const bNode *group_node = nullptr;
for (const bNode *node : ngroup->group_nodes()) {
if (node->name != group_node_name) {
if (node->identifier != group_node_id) {
continue;
}
group_node = node;
@ -226,7 +228,7 @@ bool exists_geometry_nodes_viewer(const ViewerPathForGeometryNodesViewer &parsed
}
const bNode *viewer_node = nullptr;
for (const bNode *node : ngroup->nodes_by_type("GeometryNodeViewer")) {
if (node->name != parsed_viewer_path.viewer_node_name) {
if (node->identifier != parsed_viewer_path.viewer_node_id) {
continue;
}
viewer_node = node;
@ -238,6 +240,25 @@ bool exists_geometry_nodes_viewer(const ViewerPathForGeometryNodesViewer &parsed
return true;
}
static bool viewer_path_matches_node_editor_path(
const SpaceNode &snode, const ViewerPathForGeometryNodesViewer &parsed_viewer_path)
{
Vector<const bNodeTreePath *, 16> tree_path = snode.treepath;
if (tree_path.size() != parsed_viewer_path.group_node_ids.size() + 1) {
return false;
}
for (const int i : parsed_viewer_path.group_node_ids.index_range()) {
const bNode *node = tree_path[i]->nodetree->node_by_id(parsed_viewer_path.group_node_ids[i]);
if (!node) {
return false;
}
if (!STREQ(node->name, tree_path[i + 1]->node_name)) {
return false;
}
}
return true;
}
bool is_active_geometry_nodes_viewer(const bContext &C,
const ViewerPathForGeometryNodesViewer &parsed_viewer_path)
{
@ -297,29 +318,12 @@ bool is_active_geometry_nodes_viewer(const bContext &C,
if (snode.nodetree != modifier->node_group) {
continue;
}
Vector<const bNodeTreePath *, 16> tree_path = snode.treepath;
if (tree_path.size() != parsed_viewer_path.group_node_names.size() + 1) {
continue;
}
bool valid_path = true;
for (const int i : parsed_viewer_path.group_node_names.index_range()) {
if (parsed_viewer_path.group_node_names[i] != tree_path[i + 1]->node_name) {
valid_path = false;
break;
}
}
if (!valid_path) {
if (!viewer_path_matches_node_editor_path(snode, parsed_viewer_path)) {
continue;
}
const bNodeTree *ngroup = snode.edittree;
ngroup->ensure_topology_cache();
const bNode *viewer_node = nullptr;
for (const bNode *node : ngroup->nodes_by_type("GeometryNodeViewer")) {
if (node->name != parsed_viewer_path.viewer_node_name) {
continue;
}
viewer_node = node;
}
const bNode *viewer_node = ngroup->node_by_id(parsed_viewer_path.viewer_node_id);
if (viewer_node == nullptr) {
continue;
}
@ -342,13 +346,7 @@ bNode *find_geometry_nodes_viewer(const ViewerPath &viewer_path, SpaceNode &snod
}
snode.edittree->ensure_topology_cache();
bNode *possible_viewer = nullptr;
for (bNode *node : snode.edittree->nodes_by_type("GeometryNodeViewer")) {
if (node->name == parsed_viewer_path->viewer_node_name) {
possible_viewer = node;
break;
}
}
bNode *possible_viewer = snode.edittree->node_by_id(parsed_viewer_path->viewer_node_id);
if (possible_viewer == nullptr) {
return nullptr;
}

View File

@ -191,8 +191,10 @@ set(VULKAN_SRC
vulkan/vk_batch.cc
vulkan/vk_context.cc
vulkan/vk_drawlist.cc
vulkan/vk_fence.cc
vulkan/vk_framebuffer.cc
vulkan/vk_index_buffer.cc
vulkan/vk_pixel_buffer.cc
vulkan/vk_query.cc
vulkan/vk_shader.cc
vulkan/vk_storage_buffer.cc
@ -204,8 +206,10 @@ set(VULKAN_SRC
vulkan/vk_batch.hh
vulkan/vk_context.hh
vulkan/vk_drawlist.hh
vulkan/vk_fence.hh
vulkan/vk_framebuffer.hh
vulkan/vk_index_buffer.hh
vulkan/vk_pixel_buffer.hh
vulkan/vk_query.hh
vulkan/vk_shader.hh
vulkan/vk_storage_buffer.hh

View File

@ -10,8 +10,10 @@
#include "vk_batch.hh"
#include "vk_context.hh"
#include "vk_drawlist.hh"
#include "vk_fence.hh"
#include "vk_framebuffer.hh"
#include "vk_index_buffer.hh"
#include "vk_pixel_buffer.hh"
#include "vk_query.hh"
#include "vk_shader.hh"
#include "vk_storage_buffer.hh"
@ -80,6 +82,11 @@ DrawList *VKBackend::drawlist_alloc(int /*list_length*/)
return new VKDrawList();
}
Fence *VKBackend::fence_alloc()
{
return new VKFence();
}
FrameBuffer *VKBackend::framebuffer_alloc(const char *name)
{
return new VKFrameBuffer(name);
@ -90,6 +97,11 @@ IndexBuf *VKBackend::indexbuf_alloc()
return new VKIndexBuffer();
}
PixelBuffer *VKBackend::pixelbuf_alloc(uint size)
{
return new VKPixelBuffer(size);
}
QueryPool *VKBackend::querypool_alloc()
{
return new VKQueryPool();

View File

@ -33,8 +33,10 @@ class VKBackend : public GPUBackend {
Batch *batch_alloc() override;
DrawList *drawlist_alloc(int list_length) override;
Fence *fence_alloc() override;
FrameBuffer *framebuffer_alloc(const char *name) override;
IndexBuf *indexbuf_alloc() override;
PixelBuffer *pixelbuf_alloc(uint size) override;
QueryPool *querypool_alloc() override;
Shader *shader_alloc(const char *name) override;
Texture *texture_alloc(const char *name) override;

View File

@ -0,0 +1,20 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup gpu
*/
#include "vk_fence.hh"
namespace blender::gpu {
void VKFence::signal()
{
}
void VKFence::wait()
{
}
} // namespace blender::gpu

View File

@ -0,0 +1,20 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup gpu
*/
#pragma once
#include "gpu_state_private.hh"
namespace blender::gpu {
class VKFence : public Fence {
public:
void signal() override;
void wait() override;
};
} // namespace blender::gpu

View File

@ -0,0 +1,35 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup gpu
*/
#include "vk_pixel_buffer.hh"
namespace blender::gpu {
VKPixelBuffer::VKPixelBuffer(int64_t size) : PixelBuffer(size)
{
}
void *VKPixelBuffer::map()
{
return nullptr;
}
void VKPixelBuffer::unmap()
{
}
int64_t VKPixelBuffer::get_native_handle()
{
return -1;
}
uint VKPixelBuffer::get_size()
{
return size_;
}
} // namespace blender::gpu

View File

@ -0,0 +1,23 @@
/* SPDX-License-Identifier: GPL-2.0-or-later
* Copyright 2022 Blender Foundation. All rights reserved. */
/** \file
* \ingroup gpu
*/
#pragma once
#include "gpu_texture_private.hh"
namespace blender::gpu {
class VKPixelBuffer : public PixelBuffer {
public:
VKPixelBuffer(int64_t size);
void *map() override;
void unmap() override;
int64_t get_native_handle() override;
uint get_size() override;
};
} // namespace blender::gpu

View File

@ -46,6 +46,13 @@ void VKTexture::update_sub(int /*mip*/,
{
}
void VKTexture::update_sub(int /*offset*/[3],
int /*extent*/[3],
eGPUDataFormat /*format*/,
GPUPixelBuffer * /*pixbuf*/)
{
}
/* TODO(fclem): Legacy. Should be removed at some point. */
uint VKTexture::gl_bindcode_get() const
{

View File

@ -26,6 +26,10 @@ class VKTexture : public Texture {
void *read(int mip, eGPUDataFormat format) override;
void update_sub(
int mip, int offset[3], int extent[3], eGPUDataFormat format, const void *data) override;
void update_sub(int offset[3],
int extent[3],
eGPUDataFormat format,
GPUPixelBuffer *pixbuf) override;
/* TODO(fclem): Legacy. Should be removed at some point. */
uint gl_bindcode_get() const override;

View File

@ -317,7 +317,14 @@ typedef struct bNode {
/** Additional offset from loc. */
float offsetx, offsety;
char _pad0[4];
/**
* A value that uniquely identifies a node in a node tree even when the name changes.
* This also allows referencing nodes more efficiently than with strings.
*
* Must be set whenever a node is added to a tree, except when an entire tree is copied.
* Must always be positive.
*/
int32_t identifier;
/** Custom user-defined label, MAX_NAME. */
char label[64];
@ -548,6 +555,15 @@ typedef struct bNodeTree {
bNodeTreeRuntimeHandle *runtime;
#ifdef __cplusplus
/** A span containing all nodes in the node tree. */
blender::Span<bNode *> all_nodes();
blender::Span<const bNode *> all_nodes() const;
/** Retrieve a node based on its persistent integer identifier. */
struct bNode *node_by_id(int32_t identifier);
const struct bNode *node_by_id(int32_t identifier) const;
/**
* Update a run-time cache for the node tree based on its current state. This makes many methods
* available that allow efficient lookup of topology information (like neighboring sockets).
@ -557,9 +573,6 @@ typedef struct bNodeTree {
/* The following methods are only available when #bNodeTree.ensure_topology_cache has been
* called. */
/** A span containing all nodes in the node tree. */
blender::Span<bNode *> all_nodes();
blender::Span<const bNode *> all_nodes() const;
/** A span containing all group nodes in the node tree. */
blender::Span<bNode *> group_nodes();
blender::Span<const bNode *> group_nodes() const;
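A rough usage sketch for the new accessor (hypothetical caller code, not part of the patch; `stored_node_id` is a placeholder): identifiers survive renames, so stored references are resolved and validated at lookup time.

const bNode *node = ntree->node_by_id(stored_node_id);
if (node == nullptr) {
  /* The node no longer exists in the tree; code storing identifiers must handle removal. */
  return;
}
/* `node->name` may have changed since the identifier was stored; `node->identifier` is stable. */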

View File

@ -31,6 +31,14 @@ typedef struct ModifierViewerPathElem {
typedef struct NodeViewerPathElem {
ViewerPathElem base;
int32_t node_id;
char _pad1[4];
/**
* The name of the node identified by #node_id. Not used to look up nodes, only for display
* in the UI. Still stored here to avoid looking up the name for every redraw.
*/
char *node_name;
} NodeViewerPathElem;
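Elsewhere in this commit such an element is filled roughly like this (sketch; `viewer_path` and `node` are placeholder variables): the identifier is the lookup key, while the name is duplicated purely for display.

NodeViewerPathElem *elem = BKE_viewer_path_elem_new_node();
elem->node_id = node->identifier;         /* Used to find the node again later. */
elem->node_name = BLI_strdup(node->name); /* UI label only, never used for lookups. */
BLI_addtail(&viewer_path.path, elem);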

View File

@ -861,14 +861,8 @@ static void find_side_effect_nodes_for_viewer_path(
const bNodeTree *group = nmd.node_group;
Stack<const bNode *> group_node_stack;
for (const StringRefNull group_node_name : parsed_path->group_node_names) {
const bNode *found_node = nullptr;
for (const bNode *node : group->group_nodes()) {
if (node->name == group_node_name) {
found_node = node;
break;
}
}
for (const int32_t group_node_id : parsed_path->group_node_ids) {
const bNode *found_node = group->node_by_id(group_node_id);
if (found_node == nullptr) {
return;
}
@ -880,16 +874,10 @@ static void find_side_effect_nodes_for_viewer_path(
}
group_node_stack.push(found_node);
group = reinterpret_cast<bNodeTree *>(found_node->id);
compute_context_builder.push<blender::bke::NodeGroupComputeContext>(group_node_name);
compute_context_builder.push<blender::bke::NodeGroupComputeContext>(*found_node);
}
const bNode *found_viewer_node = nullptr;
for (const bNode *viewer_node : group->nodes_by_type("GeometryNodeViewer")) {
if (viewer_node->name == parsed_path->viewer_node_name) {
found_viewer_node = viewer_node;
break;
}
}
const bNode *found_viewer_node = group->node_by_id(parsed_path->viewer_node_id);
if (found_viewer_node == nullptr) {
return;
}

View File

@ -169,36 +169,36 @@ using TimePoint = Clock::time_point;
class GeoTreeLogger {
public:
std::optional<ComputeContextHash> parent_hash;
std::optional<std::string> group_node_name;
std::optional<int32_t> group_node_id;
Vector<ComputeContextHash> children_hashes;
LinearAllocator<> *allocator = nullptr;
struct WarningWithNode {
StringRefNull node_name;
int32_t node_id;
NodeWarning warning;
};
struct SocketValueLog {
StringRefNull node_name;
int32_t node_id;
StringRefNull socket_identifier;
destruct_ptr<ValueLog> value;
};
struct NodeExecutionTime {
StringRefNull node_name;
int32_t node_id;
TimePoint start;
TimePoint end;
};
struct ViewerNodeLogWithNode {
StringRefNull node_name;
int32_t node_id;
destruct_ptr<ViewerNodeLog> viewer_log;
};
struct AttributeUsageWithNode {
StringRefNull node_name;
int32_t node_id;
StringRefNull attribute_name;
NamedAttributeUsage usage;
};
struct DebugMessage {
StringRefNull node_name;
int32_t node_id;
StringRefNull message;
};
@ -269,8 +269,8 @@ class GeoTreeLog {
bool reduced_debug_messages_ = false;
public:
Map<StringRefNull, GeoNodeLog> nodes;
Map<StringRefNull, ViewerNodeLog *, 0> viewer_node_logs;
Map<int32_t, GeoNodeLog> nodes;
Map<int32_t, ViewerNodeLog *, 0> viewer_node_logs;
Vector<NodeWarning> all_warnings;
std::chrono::nanoseconds run_time_sum{0};
Vector<const GeometryAttributeInfo *> existing_attributes;
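With the maps keyed by the integer identifier, per-node lookups in the UI and logging code take this shape (sketch; `tree_log` and `node` as in the surrounding hunks):

if (const geo_log::GeoNodeLog *node_log = tree_log.nodes.lookup_ptr(node.identifier)) {
  /* Warnings, run times, socket values and debug messages for this node are available here,
   * e.g. node_log->warnings and node_log->run_time. */
}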

View File

@ -5,7 +5,8 @@
#include "BKE_mesh.h"
#include "BLI_disjoint_set.hh"
#include "BLI_atomic_disjoint_set.hh"
#include "BLI_task.hh"
#include "node_geometry_util.hh"
@ -35,17 +36,15 @@ class IslandFieldInput final : public bke::MeshFieldInput {
{
const Span<MEdge> edges = mesh.edges();
DisjointSet<int> islands(mesh.totvert);
for (const int i : edges.index_range()) {
islands.join(edges[i].v1, edges[i].v2);
}
AtomicDisjointSet islands(mesh.totvert);
threading::parallel_for(edges.index_range(), 1024, [&](const IndexRange range) {
for (const MEdge &edge : edges.slice(range)) {
islands.join(edge.v1, edge.v2);
}
});
Array<int> output(mesh.totvert);
VectorSet<int> ordered_roots;
for (const int i : IndexRange(mesh.totvert)) {
const int root = islands.find_root(i);
output[i] = ordered_roots.index_of_or_add(root);
}
islands.calc_reduced_ids(output);
return mesh.attributes().adapt_domain<int>(
VArray<int>::ForContainer(std::move(output)), ATTR_DOMAIN_POINT, domain);
@ -81,18 +80,15 @@ class IslandCountFieldInput final : public bke::MeshFieldInput {
{
const Span<MEdge> edges = mesh.edges();
DisjointSet<int> islands(mesh.totvert);
for (const int i : edges.index_range()) {
islands.join(edges[i].v1, edges[i].v2);
}
AtomicDisjointSet islands(mesh.totvert);
threading::parallel_for(edges.index_range(), 1024, [&](const IndexRange range) {
for (const MEdge &edge : edges.slice(range)) {
islands.join(edge.v1, edge.v2);
}
});
Set<int> island_list;
for (const int i_vert : IndexRange(mesh.totvert)) {
const int root = islands.find_root(i_vert);
island_list.add(root);
}
return VArray<int>::ForSingle(island_list.size(), mesh.attributes().domain_size(domain));
const int islands_num = islands.count_sets();
return VArray<int>::ForSingle(islands_num, mesh.attributes().domain_size(domain));
}
uint64_t hash() const override
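Both field inputs now share the same parallel union-find pattern; a condensed sketch of it, assuming the `AtomicDisjointSet` interface used in the hunks above:

AtomicDisjointSet islands(mesh.totvert);
threading::parallel_for(edges.index_range(), 1024, [&](const IndexRange range) {
  for (const MEdge &edge : edges.slice(range)) {
    islands.join(edge.v1, edge.v2); /* Thread-safe union of the edge's two vertices. */
  }
});
/* Reduce to contiguous per-vertex island indices... */
Array<int> island_ids(mesh.totvert);
islands.calc_reduced_ids(island_ids);
/* ...or only count the islands. */
const int islands_num = islands.count_sets();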

View File

@ -135,8 +135,7 @@ class LazyFunctionForGeometryNode : public LazyFunction {
if (geo_eval_log::GeoModifierLog *modifier_log = user_data->modifier_data->eval_log) {
geo_eval_log::GeoTreeLogger &tree_logger = modifier_log->get_local_tree_logger(
*user_data->compute_context);
tree_logger.node_execution_times.append(
{tree_logger.allocator->copy_string(node_.name), start_time, end_time});
tree_logger.node_execution_times.append({node_.identifier, start_time, end_time});
}
}
};
@ -663,7 +662,8 @@ class LazyFunctionForGroupNode : public LazyFunction {
}
/* The compute context changes when entering a node group. */
bke::NodeGroupComputeContext compute_context{user_data->compute_context, group_node_.name};
bke::NodeGroupComputeContext compute_context{user_data->compute_context,
group_node_.identifier};
GeoNodesLFUserData group_user_data = *user_data;
group_user_data.compute_context = &compute_context;
@ -1399,8 +1399,7 @@ GeometryNodesLazyFunctionGraphInfo::~GeometryNodesLazyFunctionGraphInfo()
if (!bsockets.is_empty()) {
const bNodeSocket &bsocket = *bsockets[0];
const bNode &bnode = bsocket.owner_node();
tree_logger.debug_messages.append(
{tree_logger.allocator->copy_string(bnode.name), thread_id_str});
tree_logger.debug_messages.append({bnode.identifier, thread_id_str});
return true;
}
}

View File

@ -151,9 +151,8 @@ void GeoTreeLogger::log_value(const bNode &node, const bNodeSocket &socket, cons
auto store_logged_value = [&](destruct_ptr<ValueLog> value_log) {
auto &socket_values = socket.in_out == SOCK_IN ? this->input_socket_values :
this->output_socket_values;
socket_values.append({this->allocator->copy_string(node.name),
this->allocator->copy_string(socket.identifier),
std::move(value_log)});
socket_values.append(
{node.identifier, this->allocator->copy_string(socket.identifier), std::move(value_log)});
};
auto log_generic_value = [&](const CPPType &type, const void *value) {
@ -195,7 +194,7 @@ void GeoTreeLogger::log_viewer_node(const bNode &viewer_node, GeometrySet geomet
destruct_ptr<ViewerNodeLog> log = this->allocator->construct<ViewerNodeLog>();
log->geometry = std::move(geometry);
log->geometry.ensure_owns_direct_data();
this->viewer_node_logs.append({this->allocator->copy_string(viewer_node.name), std::move(log)});
this->viewer_node_logs.append({viewer_node.identifier, std::move(log)});
}
void GeoTreeLog::ensure_node_warnings()
@ -205,17 +204,16 @@ void GeoTreeLog::ensure_node_warnings()
}
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::WarningWithNode &warnings : tree_logger->node_warnings) {
this->nodes.lookup_or_add_default(warnings.node_name).warnings.append(warnings.warning);
this->nodes.lookup_or_add_default(warnings.node_id).warnings.append(warnings.warning);
this->all_warnings.append(warnings.warning);
}
}
for (const ComputeContextHash &child_hash : children_hashes_) {
GeoTreeLog &child_log = modifier_log_->get_tree_log(child_hash);
child_log.ensure_node_warnings();
const std::optional<std::string> &group_node_name =
child_log.tree_loggers_[0]->group_node_name;
if (group_node_name.has_value()) {
this->nodes.lookup_or_add_default(*group_node_name).warnings.extend(child_log.all_warnings);
const std::optional<int32_t> &group_node_id = child_log.tree_loggers_[0]->group_node_id;
if (group_node_id.has_value()) {
this->nodes.lookup_or_add_default(*group_node_id).warnings.extend(child_log.all_warnings);
}
this->all_warnings.extend(child_log.all_warnings);
}
@ -230,17 +228,16 @@ void GeoTreeLog::ensure_node_run_time()
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::NodeExecutionTime &timings : tree_logger->node_execution_times) {
const std::chrono::nanoseconds duration = timings.end - timings.start;
this->nodes.lookup_or_add_default_as(timings.node_name).run_time += duration;
this->nodes.lookup_or_add_default_as(timings.node_id).run_time += duration;
this->run_time_sum += duration;
}
}
for (const ComputeContextHash &child_hash : children_hashes_) {
GeoTreeLog &child_log = modifier_log_->get_tree_log(child_hash);
child_log.ensure_node_run_time();
const std::optional<std::string> &group_node_name =
child_log.tree_loggers_[0]->group_node_name;
if (group_node_name.has_value()) {
this->nodes.lookup_or_add_default(*group_node_name).run_time += child_log.run_time_sum;
const std::optional<int32_t> &group_node_id = child_log.tree_loggers_[0]->group_node_id;
if (group_node_id.has_value()) {
this->nodes.lookup_or_add_default(*group_node_id).run_time += child_log.run_time_sum;
}
this->run_time_sum += child_log.run_time_sum;
}
@ -254,11 +251,11 @@ void GeoTreeLog::ensure_socket_values()
}
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::SocketValueLog &value_log_data : tree_logger->input_socket_values) {
this->nodes.lookup_or_add_as(value_log_data.node_name)
this->nodes.lookup_or_add_as(value_log_data.node_id)
.input_values_.add(value_log_data.socket_identifier, value_log_data.value.get());
}
for (const GeoTreeLogger::SocketValueLog &value_log_data : tree_logger->output_socket_values) {
this->nodes.lookup_or_add_as(value_log_data.node_name)
this->nodes.lookup_or_add_as(value_log_data.node_id)
.output_values_.add(value_log_data.socket_identifier, value_log_data.value.get());
}
}
@ -272,7 +269,7 @@ void GeoTreeLog::ensure_viewer_node_logs()
}
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::ViewerNodeLogWithNode &viewer_log : tree_logger->viewer_node_logs) {
this->viewer_node_logs.add(viewer_log.node_name, viewer_log.viewer_log.get());
this->viewer_node_logs.add(viewer_log.node_id, viewer_log.viewer_log.get());
}
}
reduced_viewer_node_logs_ = true;
@ -316,26 +313,25 @@ void GeoTreeLog::ensure_used_named_attributes()
return;
}
auto add_attribute = [&](const StringRefNull node_name,
auto add_attribute = [&](const int32_t node_id,
const StringRefNull attribute_name,
const NamedAttributeUsage &usage) {
this->nodes.lookup_or_add_default(node_name).used_named_attributes.lookup_or_add(
attribute_name, usage) |= usage;
this->nodes.lookup_or_add_default(node_id).used_named_attributes.lookup_or_add(attribute_name,
usage) |= usage;
this->used_named_attributes.lookup_or_add_as(attribute_name, usage) |= usage;
};
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::AttributeUsageWithNode &item : tree_logger->used_named_attributes) {
add_attribute(item.node_name, item.attribute_name, item.usage);
add_attribute(item.node_id, item.attribute_name, item.usage);
}
}
for (const ComputeContextHash &child_hash : children_hashes_) {
GeoTreeLog &child_log = modifier_log_->get_tree_log(child_hash);
child_log.ensure_used_named_attributes();
if (const std::optional<std::string> &group_node_name =
child_log.tree_loggers_[0]->group_node_name) {
if (const std::optional<int32_t> &group_node_id = child_log.tree_loggers_[0]->group_node_id) {
for (const auto &item : child_log.used_named_attributes.items()) {
add_attribute(*group_node_name, item.key, item.value);
add_attribute(*group_node_id, item.key, item.value);
}
}
}
@ -349,7 +345,7 @@ void GeoTreeLog::ensure_debug_messages()
}
for (GeoTreeLogger *tree_logger : tree_loggers_) {
for (const GeoTreeLogger::DebugMessage &debug_message : tree_logger->debug_messages) {
this->nodes.lookup_or_add_as(debug_message.node_name)
this->nodes.lookup_or_add_as(debug_message.node_id)
.debug_messages.append(debug_message.message);
}
}
@ -378,7 +374,7 @@ ValueLog *GeoTreeLog::find_socket_value_log(const bNodeSocket &query_socket)
while (!sockets_to_check.is_empty()) {
const bNodeSocket &socket = *sockets_to_check.pop();
const bNode &node = socket.owner_node();
if (GeoNodeLog *node_log = this->nodes.lookup_ptr(node.name)) {
if (GeoNodeLog *node_log = this->nodes.lookup_ptr(node.identifier)) {
ValueLog *value_log = socket.is_input() ?
node_log->input_values_.lookup_default(socket.identifier,
nullptr) :
@ -453,7 +449,7 @@ GeoTreeLogger &GeoModifierLog::get_local_tree_logger(const ComputeContext &compu
}
if (const bke::NodeGroupComputeContext *node_group_compute_context =
dynamic_cast<const bke::NodeGroupComputeContext *>(&compute_context)) {
tree_logger.group_node_name.emplace(node_group_compute_context->node_name());
tree_logger.group_node_id.emplace(node_group_compute_context->node_id());
}
return tree_logger;
}
@ -538,8 +534,10 @@ GeoTreeLog *GeoModifierLog::get_tree_log_for_node_editor(const SpaceNode &snode)
ComputeContextBuilder compute_context_builder;
compute_context_builder.push<bke::ModifierComputeContext>(
object_and_modifier->nmd->modifier.name);
for (const bNodeTreePath *path_item : tree_path.as_span().drop_front(1)) {
compute_context_builder.push<bke::NodeGroupComputeContext>(path_item->node_name);
for (const int i : tree_path.index_range().drop_back(1)) {
/* The tree path contains the name of the node but not its ID. */
const bNode *node = nodeFindNodebyName(tree_path[i]->nodetree, tree_path[i + 1]->node_name);
compute_context_builder.push<bke::NodeGroupComputeContext>(*node);
}
return &modifier_log->get_tree_log(compute_context_builder.hash());
}
@ -571,15 +569,15 @@ const ViewerNodeLog *GeoModifierLog::find_viewer_node_log_for_path(const ViewerP
ComputeContextBuilder compute_context_builder;
compute_context_builder.push<bke::ModifierComputeContext>(parsed_path->modifier_name);
for (const StringRef group_node_name : parsed_path->group_node_names) {
compute_context_builder.push<bke::NodeGroupComputeContext>(group_node_name);
for (const int32_t group_node_id : parsed_path->group_node_ids) {
compute_context_builder.push<bke::NodeGroupComputeContext>(group_node_id);
}
const ComputeContextHash context_hash = compute_context_builder.hash();
nodes::geo_eval_log::GeoTreeLog &tree_log = modifier_log->get_tree_log(context_hash);
tree_log.ensure_viewer_node_logs();
const ViewerNodeLog *viewer_log = tree_log.viewer_node_logs.lookup_default(
parsed_path->viewer_node_name, nullptr);
parsed_path->viewer_node_id, nullptr);
return viewer_log;
}

View File

@ -17,8 +17,8 @@ void GeoNodeExecParams::error_message_add(const NodeWarningType type,
const StringRef message) const
{
if (geo_eval_log::GeoTreeLogger *tree_logger = this->get_local_tree_logger()) {
tree_logger->node_warnings.append({tree_logger->allocator->copy_string(node_.name),
{type, tree_logger->allocator->copy_string(message)}});
tree_logger->node_warnings.append(
{node_.identifier, {type, tree_logger->allocator->copy_string(message)}});
}
}
@ -26,9 +26,8 @@ void GeoNodeExecParams::used_named_attribute(const StringRef attribute_name,
const NamedAttributeUsage usage)
{
if (geo_eval_log::GeoTreeLogger *tree_logger = this->get_local_tree_logger()) {
tree_logger->used_named_attributes.append({tree_logger->allocator->copy_string(node_.name),
tree_logger->allocator->copy_string(attribute_name),
usage});
tree_logger->used_named_attributes.append(
{node_.identifier, tree_logger->allocator->copy_string(attribute_name), usage});
}
}

View File

@ -578,8 +578,13 @@ static bNode *ntree_shader_copy_branch(bNodeTree *ntree,
LISTBASE_FOREACH (bNode *, node, &ntree->nodes) {
if (node->runtime->tmp_flag >= 0) {
int id = node->runtime->tmp_flag;
/* Avoid creating unique names in the new tree, since it is very slow. The names on the new
* nodes will be invalid. But identifiers must be created for the `bNodeTree::all_nodes()`
* vector, though they won't match the original. */
nodes_copy[id] = blender::bke::node_copy(
ntree, *node, LIB_ID_CREATE_NO_USER_REFCOUNT | LIB_ID_CREATE_NO_MAIN, false);
nodeUniqueID(ntree, nodes_copy[id]);
nodes_copy[id]->runtime->tmp_flag = -2; /* Copy */
/* Make sure to clear all sockets links as they are invalid. */
LISTBASE_FOREACH (bNodeSocket *, sock, &nodes_copy[id]->inputs) {