This refactors how devirtualization is done in general and how
multi-functions use it.
* The old `Devirtualizer` class has been removed in favor of a simpler
solution. The new approach is also more general in the sense that it is not
coupled to `IndexMask` and `VArray`. Instead, there is a function whose
inputs control how the different types are devirtualized (a rough sketch of
the idea follows below the list). The new implementation is currently less
general with regard to the number of parameters it supports. This can be
changed in the future, but does not seem necessary now and would make the
code less obvious.
* Devirtualizers for different types are now defined in their respective
headers.
* The multi-function builder now works with the `GVArray` stored in
`MFParams` directly, instead of first converting it to a `VArray<T>`. This
removes some constant overhead and makes the multi-function slightly faster,
although the difference is only noticeable when very few elements are
processed.
No functional changes or performance regressions are expected.
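To illustrate the general idea only: all names in the sketch below (such as
`VirtualArrayExample` and `devirtualize_example`) are made up for this
description and are not the actual API added by this patch. The caller passes
both the value to devirtualize and options that control which concrete forms
code is generated for:
```cpp
#include <cstdint>

/* Illustrative stand-in for a virtual array with optional fast paths;
 * Blender's real `VArray` is more involved. */
template<typename T> class VirtualArrayExample {
 public:
  virtual ~VirtualArrayExample() = default;
  virtual T get(int64_t index) const = 0;
  virtual const T *try_get_span() const { return nullptr; }
  virtual bool try_get_single(T & /*r_value*/) const { return false; }
};

/* The caller controls which concrete forms are attempted, i.e. how much
 * code is generated at this call site. */
struct DevirtualizeOptions {
  bool try_span = true;
  bool try_single = true;
};

/* Call `fn` with the cheapest element accessor that the virtual array
 * supports and the options allow; otherwise keep the per-element virtual
 * call. Every branch instantiates `fn` again, which is the
 * compile-time/binary-size cost of devirtualization. */
template<typename T, typename Fn>
void devirtualize_example(const VirtualArrayExample<T> &varray,
                          const DevirtualizeOptions &options,
                          const Fn &fn)
{
  if (options.try_span) {
    if (const T *span = varray.try_get_span()) {
      fn([span](const int64_t i) { return span[i]; });
      return;
    }
  }
  if (options.try_single) {
    T single{};
    if (varray.try_get_single(single)) {
      fn([single](const int64_t /*i*/) { return single; });
      return;
    }
  }
  fn([&varray](const int64_t i) { return varray.get(i); });
}
```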
Goals:
* Better high level control over where devirtualization occurs. There is always
a trade-off between performance and compile-time/binary-size.
* Simplify using array devirtualization.
* Better performance for cases where devirtualization wasn't used before.
Many geometry nodes accept fields as inputs. Internally, that means the
execution functions have to accept so-called "virtual arrays" as inputs.
Those can be actual arrays, single values, or lazily computed arrays.
Because there are multiple possible virtual array implementations, access to
individual elements is slower than it would be if everything were just a
normal array (every access goes through a virtual function call). For more
complex execution functions this overhead does not matter, but for small
functions (like a simple addition) it very much does. The virtual function
call also prevents the compiler from applying some optimizations (e.g. loop
unrolling and the insertion of SIMD instructions).
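As a rough illustration (the `VirtualFloatArray` class below is made up for
this description, not Blender's actual `VArray`), compare element access
through a virtual interface with access through a raw pointer:
```cpp
#include <cstdint>

/* Illustrative interface: every element access is a virtual call. */
class VirtualFloatArray {
 public:
  virtual ~VirtualFloatArray() = default;
  virtual float get(int64_t index) const = 0;
};

/* The compiler cannot look through `get`, so this loop is hard to unroll
 * or vectorize. */
void scale_virtual(const VirtualFloatArray &values, float *dst, const int64_t size)
{
  for (int64_t i = 0; i < size; i++) {
    dst[i] = values.get(i) * 2.0f;
  }
}

/* With a raw pointer the same loop is straightforward to unroll and
 * vectorize. */
void scale_span(const float *values, float *dst, const int64_t size)
{
  for (int64_t i = 0; i < size; i++) {
    dst[i] = values[i] * 2.0f;
  }
}
```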
The solution is to "devirtualize" the virtual arrays for small functions where the
overhead is measurable. Essentially, the function is generated many times with
different array types as input. Then there is a run-time dispatch that calls the
best implementation. We have been doing devirtualization in e.g. math nodes
for a long time already. This patch just generalizes the concept and makes it
easier to control. It also makes it easier to investigate the different trade-offs
when it comes to devirtualization.
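For example (again with illustrative names rather than the actual API in this
patch), a small kernel such as an addition can be written once generically and
then instantiated for the concrete cases by a run-time dispatch:
```cpp
#include <cstdint>

/* Illustrative virtual array with an optional span fast path. */
class VirtualFloatArray {
 public:
  virtual ~VirtualFloatArray() = default;
  virtual float get(int64_t index) const = 0;
  virtual const float *try_get_span() const { return nullptr; }
};

/* The kernel is written once, generic over how its inputs are accessed. */
template<typename GetA, typename GetB>
void add_impl(const GetA &get_a, const GetB &get_b, float *dst, const int64_t size)
{
  for (int64_t i = 0; i < size; i++) {
    dst[i] = get_a(i) + get_b(i);
  }
}

/* Run-time dispatch: the kernel is instantiated for the combinations that
 * are worth optimizing; everything else falls back to the generic path. */
void add_dispatch(const VirtualFloatArray &a,
                  const VirtualFloatArray &b,
                  float *dst,
                  const int64_t size)
{
  const float *a_span = a.try_get_span();
  const float *b_span = b.try_get_span();
  if (a_span && b_span) {
    /* Fully devirtualized: tight loop over raw pointers. */
    add_impl([a_span](const int64_t i) { return a_span[i]; },
             [b_span](const int64_t i) { return b_span[i]; },
             dst, size);
  }
  else {
    /* Fallback: one virtual call per element and input. */
    add_impl([&a](const int64_t i) { return a.get(i); },
             [&b](const int64_t i) { return b.get(i); },
             dst, size);
  }
}
```
Each additional fast path adds another instantiation of the kernel, which is
where the trade-off between performance and compile-time/binary-size mentioned
above comes from.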
Nodes that were already optimized with devirtualization before this patch do
not get a speedup. However, a couple of nodes that did not use
devirtualization before use it now, and those got a 2-4x speedup in common
cases:
* Map Range
* Random Value
* Switch
* Combine XYZ
Differential Revision: https://developer.blender.org/D14628