Straightforward port. I took the opportunity to remove some C vector
functions (e.g. `copy_v2_v2`).
This makes some changes to DRWView to accommodate the alignment
requirements of the `float4x4` type.
This can improve performance in some circumstances when there are
vectorized and/or unrolled loops. I especially noticed that this helps
a lot while working on D16970 (got a 10-20% speedup there by avoiding
running into the non-vectorized fallback loop too often).
In most cases it is currently not used, so always having it there
causes unnecessary overhead. In my test file this change results in
a 2% performance improvement.
Previously, `ParamsBuilder` lazily allocated an array for an
output when it was unused but the called multi-function
wanted to access it anyway. Now, whether the multi-function supports
leaving an output unused is part of the signature. This way, the
allocation can happen earlier, when the parameters are built.
The benefit is that this makes all methods of `MFParams`
thread-safe again, removing the need for a mutex.
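A hypothetical sketch of the idea (the names are made up, not the actual `ParamsBuilder` API): because the signature already declares that an output may stay unused, the dummy buffer can be allocated while the parameters are built, so later accesses need no locking.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

/* Made-up sketch, not the real API. */
struct OutputParamInfo {
  bool supports_unused = false; /* Declared as part of the signature now. */
};

class ParamsBuilderSketch {
 public:
  explicit ParamsBuilderSketch(const int64_t array_size) : array_size_(array_size) {}

  /* The caller does not need this output, but the multi-function may still
   * write to it. The dummy buffer is allocated here, while parameters are
   * built, instead of lazily behind a mutex during execution. */
  void add_ignored_output(const OutputParamInfo &info)
  {
    assert(info.supports_unused);
    dummy_buffers_.emplace_back(static_cast<size_t>(array_size_) * sizeof(float));
  }

 private:
  int64_t array_size_;
  std::vector<std::vector<std::byte>> dummy_buffers_;
};
```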
During geometry nodes evaluation some sockets can be determined
to be unused, for example based on the condition input in a switch node.
Once a socket is determined to be unused, that information has to be
propagated backwards through the tree to free any memory that may
have been reserved for those sockets already. This was already happening
before this commit, but in a less ideal way.
Determining that sockets are unused early is good because it helps with
memory reuse and avoids copy-on-write copies caused by shared data.
Now, nodes that are scheduled because an output became unused have
priority over nodes scheduled for other reasons.
This moves all multi-function related code in the `functions` module
into a new `multi_function` namespace. This is similar to how there
is a `lazy_function` namespace.
The main benefit of this is that many type names that were prefixed
with `MF` (for "multi-function") can be simplified.
There is also a common shorthand for the `multi_function` namespace: `mf`.
This is also similar to lazy-functions where the shortened namespace
is called `lf`.
* `depends_on_context` has not been used for a long time already.
* `param_data_indices` is not used since rB42b88c008861b6.
* The remaining data is moved to a single `Vector` to avoid
having to do two allocations when the signature becomes
larger than what fits into the inline buffer.
This avoids a move of the signature after building it. Previously, the value had
to be moved out of `MFSignatureBuilder` in the `build` method.
This also makes the naming a bit less confusing; previously,
both `MFSignature` and `MFSignatureBuilder` were sometimes referred
to as "signature".
* New `build_mf` namespace for the multi-function builders.
* The type name of the created multi-functions is now "private",
i.e. the caller has to use `auto`. This has the benefit that the
implementation can change more freely without affecting
the caller.
* `CustomMF` does not use `std::function` internally anymore.
This reduces some overhead during code generation and at
run-time.
* `CustomMF` now supports single-mutable parameters.
The use of `std::variant` allows combining the four vectors
into one, which more closely matches the intent and avoids
a workaround that was used before.
Note that this uses `std::get_if` instead of `std::get` because
`std::get` is only available since macOS 10.14.
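For illustration, a generic sketch of the pattern (the element types here are made up, not the actual ones): a single vector of `std::variant` replaces several parallel vectors, and elements are accessed with `std::get_if`.

```cpp
#include <cstdio>
#include <variant>
#include <vector>

/* Illustrative element kinds; the real code stores different data. */
struct SingleInput { int index; };
struct SingleMutable { int index; };
struct SingleOutput { int index; };
struct VectorInput { int index; };

using ParamVariant = std::variant<SingleInput, SingleMutable, SingleOutput, VectorInput>;

static void process(const std::vector<ParamVariant> &params)
{
  for (const ParamVariant &param : params) {
    /* std::get_if is used instead of std::get, because std::get can throw
     * std::bad_variant_access, which is only available since macOS 10.14. */
    if (const SingleInput *input = std::get_if<SingleInput>(&param)) {
      std::printf("single input at %d\n", input->index);
    }
    else if (const SingleOutput *output = std::get_if<SingleOutput>(&param)) {
      std::printf("single output at %d\n", output->index);
    }
    /* ... other kinds handled similarly ... */
  }
}
```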
Previously, the lifetimes of anonymous attributes were determined by
reference counts, which were non-deterministic when multiple threads
were used. Now the lifetimes of anonymous attributes are handled
more explicitly and deterministically. This is a prerequisite for any kind
of caching, because caching the output of nodes that do things
non-deterministically and have "invisible inputs" (reference counts)
doesn't really work.
For more details on how deterministic lifetimes are achieved, see D16858.
No functional changes are expected. Small performance changes are expected
as well (within a few percent; larger regressions should be reported as
bugs).
Differential Revision: https://developer.blender.org/D16858
Previously, this happened when the "node task" first ran, which might
not actually execute the node if there were missing inputs. Deferring the
allocation of storage and default inputs allows for better memory reuse
later (currently the memory is not reused).
This is part of D16858. Iterating over all field inputs allows us to extract
all anonymous attributes used by a field relatively easily, which is necessary
for that patch.
This could potentially be used for better field tooltips for nested fields,
but that needs further investigation.
This was an oversight in rBdba2d828462ae22de5.
The evaluator uses multiple threads to initialize node states
even while it is still in single-threaded mode.
`get_main_or_local_allocator` did not return the right allocator
in this case.
The geometry nodes evaluator supports "lazy threading", i.e. it starts out
single-threaded. But when it determines that multi-threading can be
beneficial, it switches to multi-threaded mode.
Now it only creates an enumerable-thread-specific if it is actually using
multiple threads. This results in a 6% speedup in my test file with many
node groups and math nodes.
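A rough sketch of the idea (the evaluator's real types differ; TBB's `enumerable_thread_specific` is used here only for illustration): the thread-specific container is wrapped in `std::optional` and only constructed once the evaluator actually switches to multi-threaded mode.

```cpp
#include <optional>
#include <tbb/enumerable_thread_specific.h>

struct LocalData {
  /* Per-thread scratch allocators, buffers, etc. */
};

struct EvaluatorStateSketch {
  bool use_multi_threading = false;
  LocalData main_local_data;
  std::optional<tbb::enumerable_thread_specific<LocalData>> all_local_data;

  void enable_multi_threading()
  {
    /* Called once, before work is distributed to other threads. */
    all_local_data.emplace();
    use_multi_threading = true;
  }

  LocalData &local_data()
  {
    if (!use_multi_threading) {
      /* Single-threaded mode: the thread-specific container is never created. */
      return main_local_data;
    }
    return all_local_data->local();
  }
};
```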
Lazy-function graphs are now evaluated properly even if they contain
cycles. Note that cycles are only ok if there is no data dependency cycle.
For example, a node might output something that is fed back into itself.
As long as the output can be computed without the input that it feeds into,
everything is ok.
The code that builds the graph is responsible for making sure that there
are no actual data dependencies.
Nodes that were not connected to any output could still impact performance.
While they were never executed, sometimes their inputs could keep references
to geometries that other nodes want to modify. That caused unnecessary geometry
copies, because a geometry can only be modified if it is not shared.
Now, inputs that will never be used are tagged accordingly and they will never
have references to geometries that others might want to modify.
* Support bidirectional type lookups. E.g. finding the base type of a
field was supported, but not the other way around. This also removes
the todo in `get_vector_type`. To achieve this, types have to be
registered up-front.
* Separate `CPPType` from other "type traits". For example,
`ValueOrFieldCPPType` adds additional behavior on top of `CPPType`.
Previously it was a subclass; now it just contains a reference to the
`CPPType` it corresponds to. This follows the composition-over-inheritance
idea. This makes it easier to have self-contained "type traits" without
having to put everything into `CPPType` (see the sketch below).
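A rough sketch of the composition-over-inheritance change (member names are illustrative; the real classes contain much more):

```cpp
class CPPType {
  /* Type-erased size, alignment, construct/copy/destruct operations, ... */
};

/* Before: the trait extended CPPType directly. */
// class ValueOrFieldCPPType : public CPPType { ... };

/* After: the trait is a separate object that references the CPPType it
 * corresponds to, so field-specific behavior stays out of CPPType itself. */
class ValueOrFieldCPPType {
 public:
  const CPPType &self;       /* Type of the ValueOrField<T> container. */
  const CPPType &value_type; /* Type of the wrapped value T. */

  ValueOrFieldCPPType(const CPPType &self, const CPPType &value_type)
      : self(self), value_type(value_type)
  {
  }

  /* Field-specific operations (construct from value, extract field, ...)
   * live here instead of in CPPType. */
};
```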
Differential Revision: https://developer.blender.org/D16479
This is the conventional way of dealing with unused arguments in C++,
since it works on all compilers.
Regex find and replace: `UNUSED\((\w+)\)` -> `/*$1*/`
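For illustration, a made-up function definition before and after the replacement:

```cpp
/* Before: the unused argument names were wrapped in the UNUSED() macro. */
static void my_callback(const int UNUSED(value), float *UNUSED(buffer))
{
}

/* After the regex replacement: the names are commented out instead,
 * which is plain C++ and works on all compilers. */
static void my_callback(const int /*value*/, float * /*buffer*/)
{
}
```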
In large node setups the threading overhead was sometimes very significant.
That's especially true when most nodes do very little work.
This commit improves the scheduling by not using multi-threading in many
cases unless it's likely that it will be worth it. For more details see the comments
in `BLI_lazy_threading.hh`.
Differential Revision: https://developer.blender.org/D15976
This refactors the geometry nodes evaluation system. No changes for the
user are expected. At a high level the goals are:
* Support using geometry nodes outside of the geometry nodes modifier.
* Support using the evaluator infrastructure for other purposes like field evaluation.
* Support more nodes, especially when many of them are disabled behind switch nodes.
* Support doing preprocessing on node groups.
For more details see T98492.
There are fairly detailed comments in the code, but here is a high level overview
for how it works now:
* There is a new "lazy-function" system. It is similar in spirit to the multi-function
system but with different goals. Instead of optimizing throughput for highly
parallelizable work, this system is designed to compute only the data that is actually
necessary. What data is necessary can be determined dynamically during evaluation.
Many lazy-functions can be composed in a graph to form a new lazy-function, which can
again be used in a graph etc.
* Each geometry node group is converted into a lazy-function graph prior to evaluation.
To evaluate geometry nodes, one then just has to evaluate that graph. Node groups are
no longer inlined into their parents.
The next step for the evaluation system is to reduce the use of threads in some situations
to avoid overhead. Many small node groups don't benefit from multi-threading at all.
This is much easier to do now because not everything has to be inlined in one huge
node tree anymore.
Differential Revision: https://developer.blender.org/D15914
This potentially overallocates buffers so that they are usable
for more data types, which allows buffers to be reused more
easily. That leads to fewer separate allocations and improved
cache usage (in one of my test files the number of separate
allocations went down from 1826 to 1555).
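As an illustration of the overallocation idea (types and helpers here are made up, not the actual implementation): a buffer sized and aligned for the largest of several candidate types can later be reused for values of any of them.

```cpp
#include <algorithm>
#include <cstddef>

/* Compute a size/alignment that fits all candidate types, so a single
 * allocation can be reused regardless of which of them it ends up storing. */
template<typename... Ts> struct BufferForAnyOf {
  static constexpr size_t size = std::max({sizeof(Ts)...});
  static constexpr size_t alignment = std::max({alignof(Ts)...});
  alignas(alignment) std::byte data[size];
};

/* A buffer that fits float, int and a 3D vector can be handed from one
 * evaluation to the next, whichever of these types it currently holds. */
struct float3 { float x, y, z; };
using ReusableValueBuffer = BufferForAnyOf<float, int, float3>;
```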
This makes calculation of selected indices slightly faster when the
input is a virtual array (the direct output of various nodes like
Face Area, etc). The utility can be helpful for other areas that
need to find selected indices besides field evaluation.
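For illustration, a single-threaded sketch of the task with a made-up virtual-array interface (the real utility differs): gather the indices at which a boolean selection is true, with a cheap fast path when the virtual array is a single repeated value.

```cpp
#include <cstdint>
#include <vector>

class BoolVirtualArraySketch {
 public:
  virtual ~BoolVirtualArraySketch() = default;
  virtual int64_t size() const = 0;
  virtual bool get(int64_t index) const = 0;
  /* True when every element has the same value (e.g. a constant selection). */
  virtual bool is_single() const { return false; }
};

static std::vector<int64_t> find_selected_indices(const BoolVirtualArraySketch &selection)
{
  std::vector<int64_t> indices;
  if (selection.is_single()) {
    /* Either everything or nothing is selected; no per-element virtual calls. */
    if (selection.size() > 0 && selection.get(0)) {
      indices.resize(static_cast<size_t>(selection.size()));
      for (int64_t i = 0; i < selection.size(); i++) {
        indices[static_cast<size_t>(i)] = i;
      }
    }
    return indices;
  }
  for (int64_t i = 0; i < selection.size(); i++) {
    if (selection.get(i)) {
      indices.push_back(i);
    }
  }
  return indices;
}
```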
With the Face Area node used as a selection on 4 million faces,
the runtime goes from 3.51 ms to 3.39 ms, just a slight speedup.
Differential Revision: https://developer.blender.org/D15127
This improves performance of the procedure executor on secondary metrics
(i.e. not for the main use case when many elements are processed together,
but for the use case when a single element is processed at a time).
In my benchmark I'm measuring a 50-60% improvement:
* Procedure with a single function (executed many times): `5.8s -> 2.7s`.
* Procedure with 1000 functions (executed many times): `2.4s -> 1.0s`.
The speedup is achieved in multiple ways:
* Store an `Array` of variable states, instead of a map. The array is indexed
with indices stored in each variable. This also avoids separately allocating
variable states.
* Move less data around in the scheduler and use a `Stack` instead of `Map`.
`Map` was used before because it allows for some optimizations that might
be more important in the future, but they don't matter right now (e.g. joining
execution paths that diverged earlier).
* Avoid memory allocations by giving the `LinearAllocator` some memory
from the stack (see the sketch below).
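A rough sketch of the last point, with a simplified bump allocator standing in for `LinearAllocator` (names and details are illustrative): providing a stack-resident buffer means small allocations never touch the heap.

```cpp
#include <cstddef>
#include <cstdlib>

/* Simplified for illustration; the real LinearAllocator handles alignment,
 * multiple fallback buffers, and ownership of heap allocations. */
class BumpAllocatorSketch {
 public:
  void provide_buffer(void *buffer, size_t size)
  {
    buffer_ = static_cast<std::byte *>(buffer);
    capacity_ = size;
    offset_ = 0;
  }

  void *allocate(size_t size)
  {
    if (offset_ + size <= capacity_) {
      void *ptr = buffer_ + offset_;
      offset_ += size;
      return ptr; /* Served from the stack-provided buffer: no heap allocation. */
    }
    return std::malloc(size); /* Fallback (not tracked in this sketch). */
  }

 private:
  std::byte *buffer_ = nullptr;
  size_t capacity_ = 0;
  size_t offset_ = 0;
};

static void execute_procedure_sketch()
{
  /* Memory that lives on the stack of the caller. */
  alignas(std::max_align_t) std::byte local_buffer[4096];
  BumpAllocatorSketch allocator;
  allocator.provide_buffer(local_buffer, sizeof(local_buffer));
  /* Variable states etc. are now allocated from local_buffer first. */
  void *variable_states = allocator.allocate(256);
  (void)variable_states;
}
```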
The Separate Geometry and Delete Geometry nodes often invert the
selection so that deleting elements from a geometry can be implemented
as copying the opposite selection of elements. This should make the two
nodes faster in some cases, since the generic versions of the selection
creation functions (i.e. from d3a1e9cbb9) are used instead
of the single-threaded code that was used for these nodes.
The change also makes the deletion/separation code easier to
understand because it doesn't have to pass around the inversion.
Goals:
* Better high level control over where devirtualization occurs. There is always
a trade-off between performance and compile-time/binary-size.
* Simplify using array devirtualization.
* Better performance for cases where devirtualization wasn't used before.
Many geometry nodes accept fields as inputs. Internally, that means that the
execution functions have to accept so-called "virtual arrays" as inputs. Those
can be e.g. actual arrays, just single values, or lazily computed arrays.
Due to these different possible virtual array implementations, access to
individual elements is slower than it would be if everything was just a normal
array (access goes through a virtual function call). For more complex execution
functions, this overhead does not matter, but for small functions (like a simple
addition) it very much does. The virtual function call also prevents the compiler
from doing some optimizations (e.g. loop unrolling and inserting simd instructions).
The solution is to "devirtualize" the virtual arrays for small functions where the
overhead is measurable. Essentially, the function is generated many times with
different array types as input. Then there is a run-time dispatch that calls the
best implementation. We have been doing devirtualization in e.g. math nodes
for a long time already. This patch just generalizes the concept and makes it
easier to control. It also makes it easier to investigate the different trade-offs
when it comes to devirtualization.
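To illustrate the pattern with a hand-written sketch (the real code uses generic helpers to generate these branches, and the types here are made up, not the actual virtual-array API): the hot loop is instantiated per concrete input representation, and a run-time check dispatches to the best version.

```cpp
#include <cstdint>

/* Minimal virtual-array stand-in for illustration. */
struct FloatVirtualArraySketch {
  const float *span_data = nullptr; /* Non-null when the data is a contiguous span. */
  bool is_single_value = false;     /* True when every element has the same value. */
  float single_value = 0.0f;
  int64_t size = 0;

  float get(int64_t i) const
  {
    return span_data ? span_data[i] : single_value;
  }
};

/* The element-wise operation, templated over the input accessor so the
 * compiler can unroll/vectorize the loop for each concrete case. */
template<typename InputT>
static void add_one_loop(const InputT &input, const int64_t size, float *dst)
{
  for (int64_t i = 0; i < size; i++) {
    dst[i] = input[i] + 1.0f;
  }
}

static void add_one(const FloatVirtualArraySketch &input, float *dst)
{
  if (input.is_single_value) {
    /* Devirtualized: the "array" collapses to a constant. */
    struct SingleValueAccess {
      float value;
      float operator[](int64_t /*index*/) const { return value; }
    };
    add_one_loop(SingleValueAccess{input.single_value}, input.size, dst);
  }
  else if (input.span_data != nullptr) {
    /* Devirtualized: raw pointer access, no indirection inside the loop. */
    add_one_loop(input.span_data, input.size, dst);
  }
  else {
    /* Fallback: element access through the generic (slow) path. */
    for (int64_t i = 0; i < input.size; i++) {
      dst[i] = input.get(i) + 1.0f;
    }
  }
}
```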
Nodes that we had already optimized using devirtualization before did not get a speedup.
However, a couple of nodes that did not use devirtualization before are using it now.
Those got a 2-4x speedup in common cases.
* Map Range
* Random Value
* Switch
* Combine XYZ
Differential Revision: https://developer.blender.org/D14628
Since {rBeae36be372a6b16ee3e76eff0485a47da4f3c230} the distinction
between float and byte colors is more explicit in the UI. So far, geometry
nodes couldn't really deal with byte colors in general. This patch fixes that.
There is still only one color socket, which contains float colors. Conversion
to and from byte colors is done when reading from or writing to attributes.
* Support writing to byte color attributes in Store Named Attribute node.
* Support converting to/from byte color in attribute conversion operator.
* Support propagating byte color attributes.
* Add all the implicit conversions from byte colors to the other types.
* Display byte colors as integers in spreadsheet.
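For illustration, a simplified sketch of the float/byte quantization done at the attribute boundary (the color types here are made up, and the real conversion also accounts for the color-space encoding of byte colors, which is omitted):

```cpp
#include <algorithm>
#include <cstdint>

static uint8_t float_channel_to_byte(const float value)
{
  /* Clamp to [0, 1] and quantize to the 0..255 range with rounding. */
  const float clamped = std::clamp(value, 0.0f, 1.0f);
  return static_cast<uint8_t>(clamped * 255.0f + 0.5f);
}

static float byte_channel_to_float(const uint8_t value)
{
  return static_cast<float>(value) * (1.0f / 255.0f);
}

struct FloatColor { float r, g, b, a; };
struct ByteColor { uint8_t r, g, b, a; };

static ByteColor float_color_to_byte(const FloatColor &c)
{
  return {float_channel_to_byte(c.r),
          float_channel_to_byte(c.g),
          float_channel_to_byte(c.b),
          float_channel_to_byte(c.a)};
}
```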
Differential Revision: https://developer.blender.org/D14705
This improves performance e.g. when creating an integer attribute
based on an index field. For 4 million vertices, I measured a speedup
from 3.5 ms to 1.2 ms.