Texture nodes CPU evaluation design #98940

Open
opened 2022-06-16 17:10:20 +02:00 by Brecht Van Lommel · 16 comments

For #54656, we need CPU evaluation of a subset of shader nodes that form the texture nodes.

Prototype under development in D15160 (https://archive.blender.org/developer/D15160).

Overview

The plan is to make this a subset of geometry nodes that can be evaluated using the same mechanisms. Texture nodes can be thought of as a geometry nodes field, and when evaluated in the context of geometry nodes, that is exactly what they are.

However, they can also be evaluated outside of a geometry node tree. In particular, they can be evaluated on:

  • Geometry domains (vertex, edge, face, corner), for geometry nodes and baking of textures to attributes. Possibly also sculpting and vertex color painting.
  • Geometry surface, for image texture baking and painting.
  • 2D or 3D space not attached to any geometry, for brushes, 2D image painting and physics effectors.

To support the last two cases in the geometry nodes infrastructure (but not in the geometry node evaluation itself), a new field context named TextureFieldContext is added. This represents a continuous texture space that may or may not be mapped to a geometry surface. It has one domain, ATTR_DOMAIN_TEXTURE.
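
As a rough sketch of what such a context could hold (the class layout and member names below are illustrative assumptions, not the D15160 code or the actual field-context interface):

```cpp
/* Illustrative sketch only: these names are placeholders for this design
 * note, not the real field-context API. */

#include <array>
#include <cstdint>
#include <vector>

struct Mesh;

namespace texture_sketch {

using float3 = std::array<float, 3>;

/* A continuous texture space with a single domain (ATTR_DOMAIN_TEXTURE in
 * this proposal). It may or may not be attached to a geometry surface. */
class TextureFieldContext {
 public:
  /* Coordinates of the points evaluated in the current batch
   * (2D image positions, 3D brush samples, ...). */
  std::vector<float3> sample_positions;

  /* Set when the texture space is mapped onto a surface, e.g. for baking or
   * texture painting; null for free 2D/3D evaluation (brushes, effectors). */
  const Mesh *surface_mesh = nullptr;

  bool is_attached_to_geometry() const
  {
    return surface_mesh != nullptr;
  }

  int64_t points_num() const
  {
    return int64_t(sample_positions.size());
  }
};

}  // namespace texture_sketch
```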

Compilation and Execution

The texture nodes compilation and evaluation mechanism uses much of the same infrastructure as geometry nodes, but is separate so it can be used in different contexts. It's meant to be possible to execute texture nodes efficiently many times; for example, paint brushes are multi-threaded and each thread may execute the nodes multiple times. This is unlike geometry nodes, where multiple threads may cooperate to execute nodes once.

The current compilation mechanism is as follows:

  • Multi-function nodes are compiled into an MFProcedure, which bundles multiple multi-functions into one for execution. Geometry nodes do not currently use this mechanism, but it is designed for this type of evaluation. This is currently rather inefficient for individual points, but should get more efficient when evaluated in big batches.
  • Remaining nodes are input nodes, and execute geometry_node_execute as part of compilation, which then outputs fields and values. The execution context for these nodes is limited; unlike geometry nodes, there is no geometry, object or depsgraph.
  • For evaluation, fields are converted into GVArray using the provided geometry.

We could consider making geometry available to the compilation and caching the compilation per geometry, though it's not clear there will be nodes that need this. Alternatively, we may cache just the GVArray per geometry.
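
The intended usage pattern is compile once, execute many times. Below is a minimal sketch of that pattern with placeholder types standing in for the compiled MFProcedure; none of the names are the actual multi-function API.

```cpp
/* Sketch of the compile-once / execute-many pattern. CompiledTexture and the
 * function names below are placeholders, not the real multi-function API. */

#include <array>
#include <cstddef>
#include <functional>
#include <span>

namespace texture_sketch {

using float3 = std::array<float, 3>;
using float4 = std::array<float, 4>;

/* Result of compiling the texture node tree: one callable that evaluates a
 * whole batch of points (the role the bundled MFProcedure plays). */
struct CompiledTexture {
  std::function<void(std::span<const float3> positions, std::span<float4> r_colors)>
      evaluate_batch;
};

/* Compilation happens once per node tree (and possibly once per geometry, if
 * per-geometry caching turns out to be needed). */
CompiledTexture compile_texture_nodes(/* const bNodeTree &tree */)
{
  CompiledTexture compiled;
  /* Stand-in body: a real implementation would execute the bundled
   * multi-functions; here the position is just copied into the color. */
  compiled.evaluate_batch = [](std::span<const float3> positions, std::span<float4> r_colors) {
    for (size_t i = 0; i < positions.size(); i++) {
      r_colors[i] = {positions[i][0], positions[i][1], positions[i][2], 1.0f};
    }
  };
  return compiled;
}

/* Each paint-brush thread can then call the compiled texture many times on
 * its own batches, without touching the node tree again. */
void brush_thread_worker(const CompiledTexture &texture,
                         std::span<const float3> stroke_samples,
                         std::span<float4> r_colors)
{
  texture.evaluate_batch(stroke_samples, r_colors);
}

}  // namespace texture_sketch
```

The important property is that the per-thread inner loop only touches the compiled object; all node tree handling stays in the one-time compilation step.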

Fields

Input nodes return fields, which are then turned into GVArray for evaluation. For the texture domain, this is a virtual array that interpolates attributes. There may be some possibility here to share code with data transfer field nodes, or nodes that distribute points and curves on surfaces and inherit attributes.
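
To illustrate the idea, here is a small typed sketch of such an interpolating reader. The real GVArray is type-erased and more general, so the names and layout below are assumptions for illustration only.

```cpp
/* Typed sketch with hypothetical names; it only illustrates lazy
 * interpolation of a per-corner attribute at sampled surface points. */

#include <array>
#include <cstdint>
#include <span>

namespace texture_sketch {

using float3 = std::array<float, 3>;

/* One sampled point on the surface: a triangle's corners plus barycentric
 * weights. */
struct SurfaceSample {
  std::array<int, 3> corner_indices;
  float3 bary_weights;
};

class InterpolatedAttributeReader {
 public:
  std::span<const float3> corner_values;  /* The attribute, stored per corner. */
  std::span<const SurfaceSample> samples; /* One entry per evaluated point. */

  /* Values are computed on demand, so evaluating only a small subset of the
   * texture domain stays cheap. */
  float3 get(const int64_t index) const
  {
    const SurfaceSample &sample = samples[index];
    float3 result = {0.0f, 0.0f, 0.0f};
    for (int corner = 0; corner < 3; corner++) {
      const float3 &value = corner_values[sample.corner_indices[corner]];
      for (int axis = 0; axis < 3; axis++) {
        result[axis] += value[axis] * sample.bary_weights[corner];
      }
    }
    return result;
  }
};

}  // namespace texture_sketch
```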

Shader nodes need additional fields that are not currently in geometry nodes:

  • Generated / Rest Position / Orco
  • Normal, taking into account face smooth flag and custom normals
  • UV maps (planned to become available as float2 attributes)
  • Active render UV map and color layer
  • Tangent
  • Pointiness (Cavity)
  • Random Per Island

It's unclear to me what the best way to specify these as fields is: whether we should add additional builtin attribute names or use GeometryFieldInput. Some of these, like Pointiness, would also make sense to cache as attributes and make available to external renderers.

One constraint is that it must be efficient to evaluate only a small subset of the domain. For many field inputs and operations this works, but there are some that will compute a data array for the entire domain. This can be enforced by just not making such nodes available as texture nodes, or caching the data array on the geometry.

For sculpt and paint modes, there is a question of if and how we can make all these attributes available. The challenge is that these modes use different mesh data structures (BMesh and multires grids), which are not supported by geometry nodes. Additionally, recomputing for example tangents and pointiness as the geometry changes during a brush stroke would be inefficient, especially if it's done for the entire mesh and not just the edited subset.

Batched Processing

Fields and multi-functions are not optimized for evaluating one point at a time; there is too much overhead. That means we want to process texture evaluations in batches, and all (non-legacy) code that uses textures should be refactored.

For geometry nodes, modifiers and 2D brushes this is most likely straightforward. However, for 3D brushes, particles and effectors it will be more complicated.

There is currently a mechanism to make single evaluations possible while code is being converted. However, this is rather inefficient, with quite a bit of overhead in MFProcedure execution. It may be possible to do some optimizations there for legacy code that we may not be able to easily convert to batches.
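
For callers, the refactor mostly means splitting per-element loops so that texture evaluation happens once per batch. A sketch with placeholder functions (not actual Blender code) of the before and after:

```cpp
/* Sketch with placeholder functions; the point is the loop structure, not
 * the texture math. */

#include <array>
#include <cstddef>
#include <span>
#include <vector>

namespace texture_sketch {

using float3 = std::array<float, 3>;

/* Placeholder for the compiled texture's batch entry point. */
void evaluate_texture_batch(std::span<const float3> positions, std::span<float> r_values)
{
  for (size_t i = 0; i < positions.size(); i++) {
    r_values[i] = positions[i][0]; /* Stand-in for real texture evaluation. */
  }
}

/* Legacy pattern: one texture evaluation per element, inside the brush loop.
 * Each call is a batch of one, which is where the overhead comes from. */
void brush_apply_legacy(std::span<const float3> positions, std::span<float> r_strength)
{
  for (size_t i = 0; i < positions.size(); i++) {
    float value;
    evaluate_texture_batch(std::span<const float3>(&positions[i], 1), std::span<float>(&value, 1));
    r_strength[i] *= value;
  }
}

/* Batched pattern (loop fission): gather all positions, evaluate the texture
 * once for the whole batch, then apply the results. */
void brush_apply_batched(std::span<const float3> positions, std::span<float> r_strength)
{
  std::vector<float> values(positions.size());
  evaluate_texture_batch(positions, values);
  for (size_t i = 0; i < positions.size(); i++) {
    r_strength[i] *= values[i];
  }
}

}  // namespace texture_sketch
```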

Node Graph Transforms

To support sockets like Image datablocks, and to automatically insert Texture Coordinate or UV Map nodes for texture coordinate sockets, we need to transform the node graph. The texture node layering will also need to do significant node graph transformations. In order to share code with Cycles and other external engines, it may be best to do this as part of node tree localization.

Relation to Other Node Systems

There are many node evaluation systems in Blender, and the question is if we can somehow share this implementation with anything else. I don't think the CPU evaluation can be directly shared, but it will at least make the old texture nodes evaluation obsolete. The GPU implementation can probably be largely shared with shader nodes, and that implementation could be refactored to use DerivedNodeTree for consistency and to simplify that code.

Author
Owner

Changed status from 'Needs Triage' to: 'Confirmed'

Author
Owner

Added subscribers: @brecht, @HooglyBoogly, @JacquesLucke

Member

Overall this seems to be going in the right direction, but it feels like there is still a little bit of confusion regarding the link between geometry nodes, fields and multi-function evaluation. So I'll try to describe those below. Part of what makes it difficult to separate these concepts currently is that they are only used in one specific combination right now.

The first important thing to note is that these three concepts are not inherently linked. Geometry nodes evaluation can exist without fields or multi-functions. Fields can exist without geometry nodes or multi-functions. Multi-functions can exist without geometry nodes or fields. Let me try to explain each concept on its own:

  • Geometry Nodes Evaluation: This always happens on the CPU. Every socket type (bNodeSocket) is assigned a CPPType. The data flow defined by a bNodeTree is then evaluated. For this evaluator, there is no difference between sockets that can be fields and those that can't be. It's all just a fixed CPPType per socket type (e.g. GeometrySet for geometry sockets and ValueOrField<int> for integer sockets). In the implementation in master, the geometry nodes evaluator still knows about fields, but that changes with #98492.
  • Fields: A field represents a function that outputs a value based on an arbitrary number of inputs which are provided by some context. Note that this definition does not say that fields must be evaluated using multi-functions, nor does it say that the context must be anything related to geometry. The context could also be based on a brush stroke, an image canvas or a 3D grid as was done in 838c4a97f1. For evaluation, a field can be converted into an MFProcedure (as is done now), or it could be converted to a latency-optimized function, or it can be used to generate shader code that runs on the GPU. All of that also makes sense when just considering geometry nodes (we may want to evaluate some fields on the GPU as well in the future). See the sketch after this list.
  • Multi-Functions: The multi-function system is a specific way to evaluate functions on batches of elements on the CPU.
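
To make that separation concrete, here is a small sketch with hypothetical types (the real Field/FieldInput/FieldOperation classes look different): a field is only a composed description of outputs from inputs, and lowering it to multi-functions, GPU shader code or anything else is a separate backend step.

```cpp
/* Hypothetical types, only to show the separation of concerns. */

#include <memory>
#include <string>
#include <utility>
#include <vector>

namespace texture_sketch {

/* A field is just a composed description: an operation and its operands. */
struct FieldNode {
  std::string operation;                          /* e.g. "input:position", "noise", "multiply" */
  std::vector<std::shared_ptr<FieldNode>> inputs; /* Composed operands. */
};

using FieldSketch = std::shared_ptr<FieldNode>;

FieldSketch make_input(std::string name)
{
  return std::make_shared<FieldNode>(FieldNode{"input:" + std::move(name), {}});
}

FieldSketch make_operation(std::string operation, std::vector<FieldSketch> inputs)
{
  return std::make_shared<FieldNode>(FieldNode{std::move(operation), std::move(inputs)});
}

/* Separate, backend-specific steps consume the same description. Neither
 * backend is implied by the field itself (declarations only). */
void compile_to_multi_function_procedure(const FieldSketch &field);
void compile_to_gpu_shader_code(const FieldSketch &field);

}  // namespace texture_sketch
```

For example, make_operation("noise", {make_input("position")}) describes the same computation whether it later ends up as a multi-function procedure on the CPU or as generated shader code.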

Given those definitions, it seems like texture and material shader bNodeTree evaluation could be handled by the geometry-nodes evaluator (which shouldn't be called that anymore then). Note that this evaluation does not actually compute the texture values; it just builds e.g. a Field<Color> (a color depending on some inputs). This field can then be converted to a multi-function in geometry nodes (as is done now), or it can be converted to shader code that can be used by Eevee. The more tricky thing is to get this field into Cycles. This could be done by either providing some RNA API or by serializing it somehow (afaik, standalone Cycles can already read materials from XML?).

This approach has the advantage that there are fewer places in Blender that have to deal with evaluating a bNodeTree. That localizes the complexity that comes from dealing with node groups, implicit inputs, implicit type conversions, reroutes and muted nodes. On top of that, concepts that are currently only planned for geometry nodes, like loops, could also be used in material nodes (it's just a different way to construct a field; the number of loop iterations can't depend on the field context in this case). Furthermore, it also allows us to use the same visual language to differentiate between fields and single values used in geometry nodes, which can also help add support for input sockets for the image texture node (https://devtalk.blender.org/t/input-socket-for-image-texture-shader-node/19601).


Given the rough proposal outlined above, the following section comments on various parts of the original proposal.

> Texture nodes can be thought of as a geometry nodes field

Maybe better to say: Texture nodes are a way to construct a color field that's used by a texture.

> a new geometry component named TextureComponent is added

I don't see how that fits in. Do you suggest making textures a geometry component that can be stored in GeometrySet?

> For evaluation, fields are converted into GVArray using the provided geometry.

It would be more correct to say: For every field-input, a GVArray is created using the provided geometry.

> Shader nodes need additional fields that are not currently in geometry nodes

From what I can tell, all of these could potentially work in geometry nodes as well, if we want that.

> It's unclear to me what the best way to specify these as fields is: whether we should add additional builtin attribute names or use GeometryFieldInput.

Good question. I think for everything that is actually a named attribute (a data layer that can be edited by the user), just storing the attribute name in the FieldInput should be good enough. For anything that may need more preprocessing based on the context, a different subclass of FieldInput should be used. I don't think we should give attribute names to derived data (we can still use names if it helps, but those must not be confused with attribute names). Note that we may have to generalize FieldInput a bit here, it shouldn't have to know about the concept of GVArray.
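
A sketch of that distinction with hypothetical names (the real FieldInput is type-erased and GVArray-based, which is intentionally left out here; method bodies are omitted):

```cpp
/* Hypothetical field-input classes for the two cases described above. */

#include <string>
#include <vector>

struct Mesh;

namespace texture_sketch {

/* Stand-in for a generalized field-input base class. */
class InputSketch {
 public:
  virtual ~InputSketch() = default;
  virtual std::vector<float> evaluate(const Mesh &mesh,
                                      const std::vector<int> &element_indices) const = 0;
};

/* Case 1: a user-editable data layer. Storing the attribute name is enough;
 * evaluation just reads (and possibly interpolates) that layer. */
class NamedAttributeInput : public InputSketch {
 public:
  std::string attribute_name; /* e.g. a UV map, once UVs are float2 attributes. */

  std::vector<float> evaluate(const Mesh &mesh,
                              const std::vector<int> &element_indices) const override;
};

/* Case 2: derived data such as pointiness or tangents. Not a named attribute;
 * computed from the geometry on evaluation (and ideally cached there). */
class PointinessInput : public InputSketch {
 public:
  std::vector<float> evaluate(const Mesh &mesh,
                              const std::vector<int> &element_indices) const override;
};

}  // namespace texture_sketch
```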

> Some of these like Pointiness would also make sense to cache as attributes, and make available to external renderers.

I agree with the caching part, but not with treating these as "attributes" in the stricter sense. That's because this data cannot be changed by the user directly. Instead, it is derived data (which is also why we want to cache it).

> Fields and multi-functions are not optimized for evaluating one point at a time, there is too much overhead. That means we want to process texture evaluations in batches, and all (non-legacy) code that uses textures should be refactored.

I think the reasoning implied here is a bit backward. Processing elements in batches can be implemented much more efficiently then processing one element at a time. Therefore the multi-function system is optimized for that. And that is also the reason why we should refactor other parts of Blender to process data in batches. Note, while multi-functions are optimized for evaluating batches of data, fields are not. Fields don't care if you process one element or multiple elements at a time, because fields themselves are not an evaluation system, just a way to compose functions dynamically.

> However for 3D brushes, particles and effectors it will be more complicated.

Still feels doable, but I can see why this would be more difficult. One "just" has to do some loop fission (as opposed to fusion). It may feel like that would add a lot of overhead, but from what I've learned so far, this is generally worth it. When the textures become a bit more complex, it could be more than an order of magnitude faster to evaluate them in batches than one element at a time (there are also way more optimization opportunities).

> To support sockets like Image datablocks, and to automatically insert Texture Coordinate or UV Map nodes for texture coordinate sockets, we need to transform the node graph.

Personally, I wouldn't want to implement these kinds of transformations on the bNodeTree level. That limits what kind of transformations are possible quite a bit. It also leads to IMO weird changes like what was done in 80859a6cb2. It added unavailable Weight sockets to various nodes for the needs of one specific evaluation system.

> The texture node layering will also need to do significant node graph transformations.

Can you describe what kind of node graph transformations you have in mind?


I wonder what's the expected timeline for this project? I realize that some of the things I suggest require quite a few changes and that we probably want to break this project down into smaller steps. It's still good to have an idea of where we want to go longer term.

Author
Owner

> Given those definitions, it seems like texture and material shader bNodeTree evaluation could be handled by the geometry-nodes evaluator (which shouldn't be called that anymore then). Note that this evaluation does not actually compute the texture values, but it just builds e.g. a Field<Color> (a color depending on some inputs). This field can then be converted to a multi-function in geometry nodes (as is done now), or it can be converted to shader code that can be used by Eevee. The more tricky thing is to get this field into cycles. This could be done by either providing some RNA api or by serializing it somehow (afaik, standalone Cycles can already read materials from XML?).

For Cycles and external renderers in general, I don't think we want to use fields in the renderer. The longer term in Cycles likely involves more OSL, MaterialX and Hydra, and all those deal with node graphs. Having some alternative way of passing field-based shaders from Blender, and node-based shaders from other sources, is going to have too much complexity. And this would not just be for Cycles, but also external renderers, or file I/O with USD, glTF, MaterialX, etc.

Even disregarding that, I think there is an important conceptual difference in how shader and geometry nodes work. Shaders are meant to be decoupled from specific geometry: they reference some attribute names and image textures, and the renderer then matches those to what's on the geometry at render time. Shaders only have access to local information at a shading point, not to the geometry as a whole. Being able to couple it more tightly to geometry has some interesting possibilities, but decoupling is also powerful, and for better or worse that's the industry-standard way of handling materials.

For geometry nodes it's fine for fields to be a black box that can do whatever, but for shaders Field<Color> is not enough. I think it's possible to let the geometry nodes evaluator handle CPU texture evaluation and perhaps even Eevee shader compilation. But at least with the conceptual picture of shader and texture nodes I have now, it may be messy to have a single evaluator that handles both.

> This approach has the advantage that there are fewer places in Blender that have to deal with evaluating a bNodeTree. That localizes the complexity that comes from dealing with node groups, implicit inputs, implicit type conversions, reroutes and muted nodes. On top of that, concepts that are currently only planned for geometry nodes like loops could also be used in material nodes (it's just a different way to construct a field, the number of loop iterations can't depend on the field context in this case). Furthermore, it also allows us to use the same visual language to differentiate between fields and single values used in geometry nodes, which can also help add support for input sockets for the image texture node.

I think localizing the complexity of all those things is helpful, but as mentioned many of the systems we want to interoperate with don't work with fields as a concept. For them it would be better to have a node graph that has all those things applied.

> I don't see how that fits in. Do you suggest making textures a geometry component that can be stored in GeometrySet?

The intent would be to use this only in the context of texture nodes evaluation, so it would never get stored in a GeometrySet. I could imagine a future where an image texture becomes a type of geometry handled by geometry nodes, but not sure that makes a lot of sense.

> Still feels doable, but I can see why this would be more difficult. One "just" has to do some loop fission (as opposed to fusion). It may feel like that would be adding a lot of overhead, from what I've learned so far is that this is generally worth it. When the textures become a bit more complex it could be more than an order of magnitude faster to evaluate it in batches than one element at a time (also there are way more optimization opportunities).

For 3D brushes we will have to add batching. I'm hoping that for particles and effectors I don't have to refactor that legacy code, but it's certainly possible with effort.

> Personally, I wouldn't want to implement these kinds of transformations on the bNodeTree level. That limits what kind of transformations are possible quite a bit. It also leads to IMO weird changes like what was done in 80859a6cb2. It added unavailable Weight sockets to various nodes for the needs of one specific evaluation system.

It's a trade-off: the weight sockets are ugly, but a completely different node graph representation also adds complexity. It's not clear what the alternative is, assuming we want interop with external renderers and file formats.

> Can you describe what kind of node graph transformations you have in mind?

It would be for the layer socket type and layer stack node that bundle multiple texture channels, as well as baking, following the design here:
https://code.blender.org/2022/02/layered-textures-design/

The layer socket and stack node could be converted into a bunch of mix nodes for all the texture channels. For baked textures, the texture nodes would be replaced by either attribute or image texture nodes that read the baked results instead of doing procedural evaluation.
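
As an illustration of the kind of transform meant here, a sketch on a deliberately abstract graph representation (not the bNodeTree API; all names are hypothetical): a layer stack node carrying several channels is expanded into one mix node per channel.

```cpp
/* Abstract graph types with hypothetical names; a real transform would
 * operate on the (localized) node tree instead. Rewiring of the stack node's
 * users is omitted. */

#include <string>
#include <vector>

namespace texture_sketch {

struct NodeSketch {
  std::string type;             /* e.g. "layer_stack", "mix", "image_texture" */
  std::vector<int> input_nodes; /* Indices of upstream nodes, one per input. */
};

struct GraphSketch {
  std::vector<NodeSketch> nodes;
};

/* Expand a layer stack node into one mix node per texture channel
 * (base color, roughness, ...), each mixing the corresponding channel of the
 * stack's two input layers. */
void expand_layer_stack(GraphSketch &graph, const int stack_index, const int channels_num)
{
  /* Copy, since push_back below may reallocate the nodes vector. */
  const NodeSketch stack = graph.nodes[stack_index];
  for (int channel = 0; channel < channels_num; channel++) {
    NodeSketch mix;
    mix.type = "mix";
    mix.input_nodes = stack.input_nodes; /* Both layers feed the per-channel mix. */
    graph.nodes.push_back(mix);
  }
}

}  // namespace texture_sketch
```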

> I wonder what's the expected timeline for this project? I realize that some of the things I suggest require quite a few changes and that we probably want to break this project down into smaller steps. It's still good to have an idea of where we want to go longer term.

In principle the texture project is meant to be worked on this year. Some initial parts to replace the old texture nodes could be in 3.4, with a more complete implementation in 3.5 or 3.6, but it's hard to say.


Added subscriber: @GeorgiaPacific

Member

> Shaders are meant to be decoupled from specific geometry, they will reference some attribute names and image textures, and then the renderer will match those to what's on the geometry at render time

This is getting very theoretical at this point, but I will say, that sounds exactly like the concept of fields to me. They describe instructions, not the context, and can only be evaluated when some context provides the inputs.
Anyway, that's not so important; I think the idea of using fields for shaders was more of a theoretical point. I do think it's important to distinguish the "evaluator" that creates a field from a node tree from the evaluator that provides context and evaluates the field, though.
I don't have much to add about the interoperability or node graph transformation discussion, other than that doing it somewhere other than the bNodeTree is cleaner in my opinion. But that's a tricky issue.


About the texture "geometry component", I see that making more sense as a separate field context. So the "geometry field context" wouldn't be reused, but instead an "image field context" would give image's values for a certain position.
I'd think that interpolating attributes data from a geometry would be implemented as specific field operations (and then multi-functions or shaders).

Refactoring to use batched evaluation for sculpting does seem like quite an undertaking, but I do think it would be worthwhile for other reasons too.

Sorry if I'm repeating something we've already discussed, just coming back to this topic after a while.

Author
Owner

> In #98940#1386062, @HooglyBoogly wrote:
> This is getting very theoretical at this point, but I will say, that sounds exactly like the concept of fields to me. They describe instructions, not the context, and can only be evaluated when some context provides the inputs.

It is the same thing at the conceptual level, just the implementation requirements are different.

> I don't have much to add about the interoperability or node graph transformation discussion, other than that doing it somewhere other than the bNodeTree is cleaner in my opinion. But that's a tricky issue.

I'm not sure where elsewhere would be, introducing another node graph data structure that then also needs to be exposed in the Python API seems like too much complexity.

About the texture "geometry component", I see that making more sense as a separate field context. So the "geometry field context" wouldn't be reused, but instead an "image field context" would give image's values for a certain position.
I'd think that interpolating attributes data from a geometry would be implemented as specific field operations (and then multi-functions or shaders).

I've changed it to field context now.


Added subscriber: @zNight
Contributor

Added subscriber: @ecosky

Added subscriber: @mod_moder

Added subscriber: @silex

Added subscriber: @blendersamsonov

Added subscriber: @simen

Added subscriber: @Pancir

Added subscriber: @johannes.wilde
Member

Added subscriber: @DanielGrauer
Brecht Van Lommel added this to the Render & Cycles project 2023-02-07 19:08:35 +01:00
Philipp Oeser removed the Interest: Render & Cycles label 2023-02-09 14:03:48 +01:00