Geometry Nodes Design #74967

Closed
opened 2020-03-20 12:06:37 +01:00 by William Reynish · 196 comments

A big part of the Everything Nodes project is to convert the modifier stack into a much more powerful Geometry Nodes system. This document explains more precisely, from a user's point of view, how this will work.

Geometry nodes encompass what Modifiers used to be, but where modifiers only allowed you to modify geometry, Geometry nodes also allow you to create geometry.

Transition

Moving to a node-based system will take a bit of time, so we will probably need some way to smooth the transition and keep old files working for a while. One way to solve this is to add a Use Nodes toggle for modifiers, just like in other areas:
Screenshot 2020-03-20 at 11.42.22.png

If disabled, the modifier stack will continue to work, while developers keep improving and building out the geometry nodes system. This approach would allow us to merge the geometry nodes system sooner, without having to worry as much about backwards compatibility. However, note that it's possible no new modifiers will be added to the old stack.

How are nodes different?

In order to convert the modifier stack to a node-based system, we can't just do a straight 1:1 port of each modifier to a node. In many cases, nodes are different, because you have the concept of inputs and outputs. For this reason, modifiers as nodes would have to become more atomic, generally simpler, and allow users to plug in whatever they need to get the desired effect.

Every time a modifier has a property for limiting the result to a vertex group, or options to perform an operation along some direction or vector, those things should become generic inputs instead, so that you can plug in whatever you want, in order to take full advantage of the power of nodes.

Take this example for the Displace modifier:
Screenshot 2020-03-20 at 11.49.32.png

The Direction and Vertex Group properties should simply be inputs, and the Texture property becomes moot, because with nodes you can use textures to drive anything.

As a node, Displace is a lot simpler:
Screenshot 2020-03-20 at 11.51.19.png

An example where a UV Sphere is generated and then displaced along its normals by a noise texture could look like this:

Screenshot 2020-03-22 at 14.55.20.png
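
To make the data flow behind this mockup concrete, here is a minimal, hypothetical sketch in plain Python. None of these function names are real Blender API; they just model a generated sphere whose vertices are displaced along their own normals, with the strength driven per vertex by a noise value:

```python
import math
import random

def uv_sphere(segments=8, rings=8, radius=1.0):
    """Stand-in for a UV Sphere node: returns (vertices, normals)."""
    verts, normals = [], []
    for i in range(rings + 1):
        theta = math.pi * i / rings
        for j in range(segments):
            phi = 2.0 * math.pi * j / segments
            n = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            verts.append(tuple(radius * c for c in n))
            normals.append(n)
    return verts, normals

def noise_texture(co):
    """Stand-in for a noise texture node: one float per input coordinate."""
    random.seed(hash(co) & 0xFFFF)
    return random.random()

def displace(verts, normals, strength_per_vertex):
    """The proposed Displace node: offset each vertex along a direction input."""
    return [tuple(v[k] + s * n[k] for k in range(3))
            for v, n, s in zip(verts, normals, strength_per_vertex)]

verts, normals = uv_sphere()
strengths = [0.1 * noise_texture(v) for v in verts]   # texture drives Strength
displaced = displace(verts, normals, strengths)       # Direction = normals
```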

We can compare some more examples below:

For Boolean, currently you specify an object directly in the modifier:
Screenshot 2020-03-20 at 16.37.25.png

As a node, this is not needed - you would simply have two inputs. You can then plug in anything you wish here:
Screenshot 2020-03-20 at 16.34.09.png

Here is a little overview of a few example nodes:
Screenshot 2020-03-20 at 16.33.44.png

Selections / Clusters

When modelling destructively with the Edit Mode tools, you always operate on the selected items (vertices, edges or faces). In non-destructive modelling, we also need this concept. We could call these Clusters. A Cluster is simply a set of vertices, edges or faces. How do you 'select' items? Various nodes can get, generate or manipulate clusters. Here are some examples:

Screenshot 2020-03-22 at 14.46.13.png
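
One possible minimal representation of a Cluster, assuming it really is just a named set of element indices on a single domain (vertices, edges or faces). This is purely illustrative, not an actual data structure in Blender:

```python
from dataclasses import dataclass, field
from enum import Enum

class Domain(Enum):
    VERTEX = "vertex"
    EDGE = "edge"
    FACE = "face"

@dataclass
class Cluster:
    domain: Domain
    indices: frozenset = field(default_factory=frozenset)

    # "Get/generate/manipulate cluster" nodes could then be ordinary set operations:
    def union(self, other: "Cluster") -> "Cluster":
        assert self.domain == other.domain
        return Cluster(self.domain, self.indices | other.indices)

    def intersect(self, other: "Cluster") -> "Cluster":
        assert self.domain == other.domain
        return Cluster(self.domain, self.indices & other.indices)

# e.g. a node that selects all faces whose index passes some test:
def select_faces(face_count, predicate) -> Cluster:
    return Cluster(Domain.FACE, frozenset(i for i in range(face_count) if predicate(i)))
```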

The concept of Clusters also allows us to make certain improvements to the modifiers. Often, the current modifiers have specific settings that apply some fixed operation to the newly generated geometry. See, for example, these options in the Bevel modifier:

Screenshot 2020-03-20 at 11.56.54.png

These kinds of controls are quite inflexible and arbitrary, and different modifiers provide a different subset of fixed options. For nodes, it makes sense to generalize this, so that users can apply any operation to newly generated geometry. We can do this a number of ways, but one simple solution could be to automatically generate an attribute output for the newly generated geometry:

Screenshot 2020-03-20 at 11.59.25.png

You could then use this output to chain together nodes in more powerful ways. Here only the result of the extruded geometry is bevelled:

Screenshot 2020-03-20 at 17.06.51.png
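
A rough sketch of this chaining pattern, reusing the hypothetical Cluster/Domain classes from the earlier sketch and a stand-in dict for the mesh: a generating node returns both the new geometry and a Cluster describing what it just created, and the next node accepts that Cluster to limit its effect. All names here are made up for illustration:

```python
def extrude(mesh, faces: Cluster, offset=1.0):
    """Pretend Extrude node: copies the selected faces outward (stand-in logic)."""
    start = len(mesh["faces"])
    mesh["faces"].extend({"from": i, "offset": offset} for i in sorted(faces.indices))
    new_faces = Cluster(Domain.FACE, frozenset(range(start, len(mesh["faces"]))))
    return mesh, new_faces   # geometry output + auto-generated cluster output

def bevel(mesh, cluster: Cluster, width=0.1, segments=2):
    """Pretend Bevel node: only the elements in `cluster` are affected."""
    for i in cluster.indices:
        mesh["faces"][i]["bevel"] = {"width": width, "segments": segments}
    return mesh

# Chained as in the mockup: only the newly extruded faces get bevelled.
mesh = {"faces": [{} for _ in range(6)]}
mesh, new_faces = extrude(mesh, Cluster(Domain.FACE, frozenset({0, 1})), offset=0.5)
mesh = bevel(mesh, new_faces, width=0.05)
```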

Of course you can also use predefined user-created Clusters as inputs:

Screenshot 2020-03-22 at 11.28.33.png

Textures

In a nodes system, you can use textures to drive any input, rather than relying on the arbitrary Texture fields that only a certain subset of modifiers provide in the current stack. This makes using textures orders of magnitude more powerful. We can make textures work much like material textures. Here is a simple example of using a texture to drive the displace strength:

Screenshot 2020-03-20 at 12.01.56.png

Vertex Groups & Attributes

Since we can get rid of all the fixed vertex group fields for the modifiers, we can instead make this part of the node tree, just like how we currently handle vertex colors for materials. Below is an example of using a vertex group to modulate the strength of a texture, which in turn controls the displacement Strength property:

Screenshot 2020-03-23 at 09.20.26.png

Constants/Global values

Many values for these nodes will be per element (vertex, edge, face), but some values are fixed for that node operation, and cannot vary per element. One such example is the Bevel Segments value. You can set it to any value, but it will be the same for all the affected elements during that operation.

It would be nice to still be able to drive these kinds of properties, and also to pipe them into group nodes for higher level control.

We can show these kinds of values differently, and because this will always be a single value, we can even tell the user what the value is:

Screenshot 2020-03-20 at 16.28.55.png
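
A minimal sketch of the distinction being made here, using hypothetical types: a per-element input carries one value per vertex/edge/face, while a constant input is a single value for the whole node execution. A constant can still be driven (by a driver, a group input, a frame-dependent value); it just cannot vary across elements within one evaluation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PerElementInput:
    values: List[float]          # one entry per element, e.g. Displace strength

@dataclass
class ConstantInput:
    value: float                 # single value per execution, e.g. Bevel segments

def bevel_segments_for(frame: int) -> ConstantInput:
    # Still drivable over time, but uniform across the whole mesh at any one frame.
    return ConstantInput(value=float(2 + frame // 10))
```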

Outputs

The output is much like the Material Output for materials, but it outputs a mesh instead:
Screenshot 2020-03-20 at 12.04.01.png

We can provide some extra features here, for making an output only for the viewport or the render. This could replace these toggles from the stack system:
Screenshot 2020-03-20 at 12.05.34.png

We can also provide an output for visualizing certain textures or effects in the viewport.

It would also be useful to set any node to be the one that is previewed in the viewport:

Screenshot 2020-03-22 at 15.36.36.png

Users will constantly want to preview certain sections of the node tree, and having to manually connect those up to the output node would be a pain. An easier way to preview the output of any node would be especially useful for geometry nodes, but would also benefit shader and compositing nodes.

Caching & Applying

Some node trees can become heavy or very large, and it's not always useful to have to recalculate the entire node tree every time a property is edited downstream. For this reason, we could provide a Cache node that takes everything upstream and caches it, so it doesn't need to be recalculated.

Screenshot 2020-03-20 at 17.17.23.png

When cached, all nodes upstream will be greyed out:

Screenshot 2020-03-20 at 17.17.48.png
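
A hypothetical sketch of what a Cache node could do internally: key the upstream result on the upstream parameters and only recompute when those change, so edits to downstream properties don't re-evaluate the greyed-out part of the tree. The names here are invented for illustration:

```python
import hashlib
import pickle

_cache = {}

def cached(evaluate_upstream, upstream_params):
    """evaluate_upstream: a callable producing the upstream geometry (stand-in)."""
    # For a sketch, a hash of the serialized parameters is enough to detect changes.
    key = hashlib.sha1(pickle.dumps(upstream_params)).hexdigest()
    if key not in _cache:
        _cache[key] = evaluate_upstream(**upstream_params)
    return _cache[key]

# e.g. cached(build_sphere_with_displacement, {"segments": 32, "strength": 0.1})
```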

New nodes

When converting to nodes, we will most likely want to add many more nodes compared to the small list of modifiers today.

For example, we would want to add nodes to do all sorts of common modelling tasks. Here you see the Extrude and Bisect nodes, as an example:

Screenshot 2020-03-22 at 12.29.50.png

Other examples are nodes for creating new geometry:
Screenshot 2020-03-22 at 12.32.03.png

These kinds of things give a taste of how much more powerful and broader in scope the geometry nodes system will be.

Properties Editor

It's not always useful to have to edit node properties inside the node editor. We can re-use the same concept we already have for node groups, where we specify a series of node inputs which can be manipulated in the Properties Editor.

Here are a set of input nodes:

Screenshot 2020-03-23 at 14.38.59.png

And here you see the controls reflected in the Properties Editor:

Screenshot 2020-03-23 at 14.59.27.png

Adding new objects

An important example is one of the most basic tasks in Blender: adding new objects. Here you can see a newly added UV Sphere, which automatically gets a geometry node tree and a series of useful inputs:

Screenshot 2020-03-23 at 14.57.02.png

These can be controlled by the user at any time. An Apply button can be added to easily freeze the geometry node tree, so that users can enter edit mode and perform destructive edits.
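
A rough sketch, under assumed names, of what Apply could mean for a geometry node tree: evaluate the tree once, keep the resulting mesh as plain static data, and drop the tree so destructive edit-mode tools work on the result. The tree is modelled as a simple callable here, purely for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class ProceduralObject:
    base_mesh: Any
    node_tree: Optional[Callable[[Any], Any]] = None   # stand-in: tree as a callable

def apply_node_tree(obj: ProceduralObject) -> ProceduralObject:
    """Evaluate once, keep the result as static data, drop the tree."""
    if obj.node_tree is not None:
        obj.base_mesh = obj.node_tree(obj.base_mesh)
        obj.node_tree = None     # nothing procedural left; edits are destructive again
    return obj

# Usage example with a dummy "tree":
cube = ProceduralObject(base_mesh="cube data", node_tree=lambda m: f"subdivided({m})")
cube = apply_node_tree(cube)   # cube.base_mesh == "subdivided(cube data)", tree removed
```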

Open Questions

  • How to handle the physics modifiers? These are different from the regular modifiers, since they need to be simulated. We could move those to the Simulation Node system instead of the Geometry nodes
  • Clusters can be a set of vertices, edges or faces - how do you define what type the Cluster is?
  • How to handle things other than meshes? We could allow the system to also output and use curves or NURBS objects

Added subscriber: @WilliamReynish


Changed status from 'Needs Triage' to: 'Confirmed'

William Reynish changed title from Modifier/Geometry Nodes to Modifier/Geometry Nodes Design 2020-03-20 12:07:09 +01:00

Added subscriber: @Ko


About the example nodes in the image: how does the Shrinkwrap node know which mesh it must wrap its input mesh to? The Mirror node could go without a merge function if there were a separate Weld node. Boolean, I believe, uses the threshold only to prevent errors with overlapping geometry? Even if it stays useful after the "newbooleans" branch is merged, controlling this parameter with a socket seems like overkill.


@Ko Those were oversights - fixed in the images now.


Added subscriber: @lsscpp


In #74967#895217, @Ko wrote:
About example nodes on image: How shrinkwrap node know to which mesh it must wrap its input mesh?

Target mesh I guess


Added subscriber: @hadrien


Hot thoughts:

For grouping of newly created geometry, it's important to do two things:

  1. give the choice of component type, so that a following node, such as Bevel, can act on either, say, new faces or new edges
  2. split into several groups when relevant, e.g. extrusion tip and sides

Is the 'cluster' socket used for specifying a subset of geometry? Why the name?

I noticed parameters for the "screw" modifier are not exposed as sockets - I suppose because they're constants? However we still need to be able to connect those, most notably for bundling in an asset (node group). Maybe just have another socket type for constants.

The 'vector' socket used in some operations (displace, extrude) means there has to be a way to pipe in geometry normals from the previous node - the Cycles way of a global variable doesn't work anymore, since each step in the node tree potentially modifies normals. To generalize this, each geometry attribute should be an output socket on every (?) node.

Great to see these mockups! Can't wait!

Member

Added subscriber: @lichtwerk


Added subscriber: @DuarteRamos


@hadrien Yes, the issue of constants is something we actually are having some trouble with, since we indeed want users to be able to plug inputs into them, and add them to group node inputs. The main issue is one of communication, since these kinds of values will always be fixed, so you can't really modulate them with a texture.

I updated the doc with a section about this, with a possible solution - it could just be a few visual differences to make it clear to users what's what.

Member

Added subscriber: @HooglyBoogly

Member

I really like what you did with the constant inputs with the bar on the left side of the node. It's the perfect way of displaying that.

I'm wondering if we can merge your idea of a "Cluster" with the idea of vertex groups, and hopefully edge groups and face groups too. That way it would basically be passing around selection states inside the node graph, just like in edit mode. (Could that even allow generating a node graph automatically while editing in edit mode?!)


@HooglyBoogly Yes, the cluster concept here is just a type of attribute.

IMO we maybe should rename Vertex Groups -> Weight Groups, since they really store a weight float value per vertex. Clusters, instead, could be a way to store a list of vertices, edges or faces. I'm not 100% sure of all the implications of this change, and also not sure how/if you would need to specify if you want a list of verts, edges or faces, but something along those lines I think could make sense.


Really like the direction this is taking, close to how I'd envision things myself.

In #74967#895413, @hadrien wrote:
For grouping of newly created geometry, it's important to (...) 2. split into several groups when relevant, eg extrusion tip and sides

This is indeed important. Eventually, tools to manipulate these ideally generic "clusters" of information would be good node candidates too. Things like "boolean operations" on said groups (intersect, exclude, etc.), also acting on material slots.

One other thing that is probably out of scope here, but very desirable, would be to act on other types of geometry, most prominently bezier curves, and have said nodes act on the actual bezier data rather than its mesh output.
Rather than have Mesh sockets we would have a more generic "Geometry" input/output, capable of, say, mirroring an open bezier curve and turning it into a closed symmetric shape, or acting on its bevel and extrude settings.

Allowing material slot definition for edges and vertices would also be desirable for generative modifiers like Screw or curve bevels, so we can have "wire meshes" made only of edges generate a screw with different materials across.


@WilliamReynish I think the bracket-socket works very nicely for constants.

@HooglyBoogly that would work as long as we implicitly convert vertex groups to boolean (rounding?), but that seems kind of hidden? I think this is why other software separates the concepts of attributes and groups (selection states), but there may be a way to nicely unify them.

(Could that even allow generating a node graph automatically while editing in edit mode?!)

That would be fantastic - I pushed for this in the everything nodes thread but it seemed a little far-fetched to William, which I understand.

Further thoughts concerning attributes : probably all geometry attributes could be carried through the green geometry socket, which would alleviate the need for one output socket per node per attribute - doing this would make presenting normals for the extrude node to use just a matter of connecting the geometry output to the normal input, and picking the relevant data would be left to the extrude node.
That design has the downside of connecting differently-colored sockets together, which is not something we do currently. Another solution would be an intermediary 'pick/extract attribute' node with a geometry input, a dropdown with all existing attributes and a dynamic output.

Contributor

Added subscriber: @Rawalanche

Contributor

Generating node graph automatically while editing would only make sense if it was opt in and off by default, as otherwise unsuspecting users working destructively could end up generating very long, live, complex chain of procedural operators which would degrade the performance linearly with the amount of operations. I am very much looking forward to procedural modeling, but direct destructive modeling still shouldn't go away completely.


I believe the ability to automatically generate geometry node trees using Edit Mode operators and tools is outside the initial scope of what will be supported. Supporting that is tricky; there are a ton of potential issues and pitfalls. It would be great to be able to do that eventually perhaps, but initially I don't think it'll be something the core developers will pursue.

Member

It's still worthwhile to keep that in mind while designing, so that it fits better into the system later.

Some node trees can become heavy or very large, and sometimes it's not useful to have to recalculate the entire node tree every time a property is edited down stream. For this reason, we could provide a Cache node, that takes everything upstream and caches it so it doesn't need to be recalculated.

Couldn't caching be handled automatically? That seems to be a common thread among these node systems. Even then it might still be useful to have a bake node though.

The discussion of groups here seems useful: https://wiki.blender.org/wiki/Source/Nodes/MeshTypeRequirements

William Reynish changed title from Modifier/Geometry Nodes Design to Geometry Nodes Design 2020-03-20 21:20:41 +01:00

Added subscriber: @brecht


A few things to consider:

  • Are the vertex coordinates in modifier nodes in object space or in world space?
  • Is the input to a Mesh socket only a Mesh datablock, or does it also come with a transform matrix and possibly other attributes of the Object matrix?
  • Should modifier nodes be able to affect the Object datablock transform and other attributes, or do we maintain a strict separation between modifiers and constraints?
  • More generally, are modifier nodes mostly distinct from simulation nodes and potential future nodes that work on object transforms?

I expect modifier nodes would work in object space, on Mesh datablocks rather than objects. And they would bring geometry from other objects into the object space of the current object. They would be mostly distinct from other nodes except for nodes like math, textures, color ramps, etc.

This is all similar to how modifiers work now, and I believe it's the same for geometry nodes in Houdini. For numerical precision and instancing, it makes sense that the geometry is evaluated clearly contained within an object.


For transition, and maybe beyond that, I'd rather have a modifier containing a node graph than a "Use Nodes" button. If it's a binary switch for the entire stack, it seems like you would only be able to enable "Use Nodes" once all the modifiers you need are supported as nodes.


Added subscriber: @deadpin


A few additional thoughts:

  • Just to clarify the "constant" input item: that's really just saying that a single value will be used for a given execution of a node, correct? It would still be possible to, say, have a Distance-to-Camera node or a Current-Frame node drive the Bevel segment count or the SubD levels count? In that sense the number could vary frame to frame (not constant), but a single number is applied to the entire mesh at the point of execution.
  • To speed up the transition, giving python addons a better, more intuitive way to create node graphs, even linear ones like what exists with the stack today, is probably going to be required :-/
  • Can "Apply" be expanded on a bit? It's not clear how you would destructively apply a modifier (removing it from the graph; not just cache and gray out) exactly.

@brecht Yes I also thought of that. But I am afraid that if the only way to use the nodes is through the modifier stack, it will become harder to get rid of the old modifiers - also, if we add the geometry nodes as a stack modifier, it could have performance implications if you add it later in the stack? I mean, then it both has to evaluate the stack AND the node graph.

But yes, it has the advantage that you could mix & match the modifiers with geometry nodes.


In #74967#895723, @brecht wrote:

  • Should modifier nodes be able to affect the Object datablock transform and other attributes, or do we maintain a strict separation between modifiers and constraints?
  • More generally, are modifier nodes mostly distinct from simulation nodes and potential future nodes that work on object transforms?

Maybe the so-called Modifier Nodes are actually Geometry Nodes, a subset of broader Object Nodes, which would include instancing, constraints, simulations etc., much like Texture Nodes are a subset of Shader Nodes, which can both be used for modifiers, materials, lamps, world shaders, etc.


In #74967#895743, @DuarteRamos wrote:
Maybe the so called Modifier-Nodes are actually Geometry-Nodes, a subset of more broad Object Nodes, which would include instancing, constraints, simulations etc., much like Texture Nodes are a subset of Shader Nodes which can both be used for modifiers, materials, lamps world shaders, etc

That's the question my comment was about. And while it's tempting to do this, I can see why some other applications might have chosen not to.

If geometry nodes deal with object transforms, then either node graphs have to be more complicated or need smart logic to avoid that. But that smart logic may be unpredictable regarding instancing, performance and numerical precision. If all geometry operations happen contained within a single object and object space, that simplifies things.

Not saying it's impossible to make it work well, but how is not obvious to me.


Added subscriber: @Senna


In #74967#895587, @Rawalanche wrote:
I am very much looking forward to procedural modeling, but direct destructive modeling still shouldn't go away completely.

I mostly agree. I think destructive modeling should still have a place in the future. Destructive modeling is simpler and we don't always need the complexity of procedural modeling. I hope the design will feel more like an 'extension' to the current modeling workflow than a procedural replacement. Workflow complexity should grow relative to complexity demands.


In #74967#895732, @WilliamReynish wrote:
@brecht Yes I also thought of that. But I am afraid that if the only way to use the nodes is through the modifier stack, it will become harder to get rid of the old modifiers - also, if we add the geometry nodes as a stack modifier, it could have performance implications if you add it later in the stack? I mean, then it both has to evaluate the stack AND the node graph.

But yes, it has the advantage that you could mix & match the modifiers with geometry nodes.

I don't think performance would be any different. Mainly for me it's a UI/UX question: whether we want a conceptually simpler design where there are only nodes and the node editor, or a more murky design that is also more compact and convenient for simple setups.

By the way, it seems possible to make all modifiers available as nodes immediately. A modifier is effectively a node with only a Mesh input and Mesh output socket and a bunch of properties (ignoring some details). So we can imagine automatically turning every modifier into such a node, and then incrementally adding more sockets and functionality. I'm not sure if that's the best way of doing things, but it's an option.
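
To illustrate the suggestion above, here is a rough hypothetical sketch (made-up helper names, not actual bpy API) of wrapping an existing modifier as a node with one Mesh input, one Mesh output, and one socket per modifier property, which could then be refined incrementally:

```python
def wrap_modifier_as_node(modifier_type, property_specs):
    """property_specs: [(name, socket_type, default), ...] taken from the modifier."""
    node = {
        "label": modifier_type,
        "inputs": [("Mesh", "MESH", None)] + list(property_specs),
        "outputs": [("Mesh", "MESH", None)],
    }
    return node

# e.g. a first-pass Subdivision Surface node generated from the modifier's properties:
subsurf_node = wrap_modifier_as_node(
    "Subdivision Surface",
    [("Levels", "INT", 2), ("Render Levels", "INT", 3), ("Optimal Display", "BOOL", False)],
)
```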


Added subscriber: @ThatAsherGuy


I like the idea of having a stack of geometry node graphs. It'd put the viewport visibility toggles in a convenient place, and it'd fit the general idea of treating graphs as reusable assets. You could skirt a lot of the debate about convenience and nodal atomicity by recreating the old modifiers as 'graph templates' instead of nodes, too.


Added subscriber: @jeacom


How is this going to work with add-ons?
Many use simple modifiers as macros to achieve complex results; what's appealing about the current modifiers is that they are linear and easy to manage with scripts.

How is the API going to work?

Will it be simple to update, or will there have to be serious reengineering of many add-ons and unnecessary code complexity, since now it's a modifier tree and not a modifier list?

And I am really curious about what is going to happen with the Apply Modifier button. Applying individual modifiers regardless of their position is a core feature, and it would be awful to remove it.


Hehe, I asked those same questions slightly above :)


Added subscriber: @billrey


@brecht other programs separate object manipulation (working with transforms, matrices, object instancing...) from geometry manipulation, I'd say that's good UX. I expect within the latter everything would happen in object space, with utilities to convert to other spaces optionally (matrix nodes already present in Jacques' functions branch).

@billrey about the armature node in your mockup : I guess this asks the same question of whether to keep the constraint stack and have "nodetree constraints" fit in there, or turn the entire thing into a nodetree. I figured I'd mention this even though slightly offtopic because it may be wise to have these things consistent throughout.

Hmmm looks like by mentioning you I've added you as a subscriber. Sorry that was unintentional.


Constraint/object nodes is also something that is planned to add, after particle and geometry nodes. But, AFAIK these will stay separate and on a different level from geometry nodes, much like in other node-based apps.


@brecht

By the way, it seems possible to make all modifiers available as nodes immediately. A modifier is effectively a node with only a Mesh input and Mesh output socket and a bunch of properties (ignoring some details). So we can imagine automatically turning every modifier into such a node, and then incrementally adding more sockets and functionality. I'm not sure if that's the best way of doing things, but it's an option.

We could do that initially while developing, but IMO we should not ship that in any release version of Blender. We'd want to change this quickly anyway and would immediately break compatibility I expect. I'd rather incentivize us to go over each modifier and make the necessary changes, one by one, so that we have a more robust system, even if not all modifiers are initially supported as a node.

Some nodes also seem a bit unclear what to do with, such as all those in the Simulation category. They are really a different kind of thing and maybe should be separated out somehow?

@HooglyBoogly The concept of 'Groups' is analogous to the 'Clusters' concept mentioned here. It's a way to define or pass along a context/selection, which indeed is a very important core aspect.


Added subscriber: @Constantina32


Added subscriber: @JohnCox-3


One thing about the Output node:

I would suggest getting rid of it.

In other systems, there is an Output flag on every node, which means switching the visible geometry is trivial.

With an output node, changing what's visible in the viewport is awkward due to having to wire the tree differently. It is absolutely essential to debugging a geometry node tree to be able to easily see what is being output by each node.

There should be a Render flag on each node as well, to allow proxy geometry in the viewport and final geometry in the render. There would be one and only one node visible / renderable per node tree.


@JohnCox-3 In other node-based apps without output nodes, many users end up creating Null nodes that are effectively used as output nodes. So we might as well build it in.

The idea to preview only a subsection of the node tree with a toggle on each node is meant to replace the need for Viewer nodes, which you may be re-connecting constantly, rather than replacing output nodes altogether.


@WilliamReynish But then would the expectation be that the Output node takes precedence over the toggle?

The benefit of a toggle is you can enforce a single end point to the tree for the final evaluated geometry.

What happens in other apps within their simulation node trees is that there is an explicit Output node but also an output flag which, in my experience, is confusing for even intermediate users.

The other thing about Nulls is they are often used as bookmarks within the tree, not just at the output. It is an extremely common workflow to use a node as a named end point to link data from one node tree into another (not a goal at the moment but could be) when you need to access data other than the final evaluated geometry.

With a Null this is possible because there is a pass-through socket. The Output node here doesn't have that, which gives it a different use.

If the Output node is just there to override the visibility toggle then it seems redundant to me, but maybe I've misunderstood.


As long as we're talking about simulation with meshes, and not FLIP fluids or other kinds of data, I think it would make sense to present geometry nodes and higher-level simulation nodes in the same place: rigid bodies, soft bodies and cloth all manipulate geometry, the main difference being that the latter have to be aware of their previous state to compute the current state, hence requiring a simulation run from frame 1 - but the way I see it they're a different tool to manipulate the same data.
To add to that, I believe there are advantages to having a mesh freely go back and forth between generative modelling and simulation: adding procedural cracking or wrinkling onto simulated cloth or skin, or instancing static flowers on top of a plant asset rocking in the wind, for instance. The use cases seem pretty much infinite.

I agree visible/renderable flags make output nodes redundant. I also think they provide more flexibility, since there's no rewiring going on.

We may also want to have something happen when the user selects a node, such as overlaying a wireframe preview on top of the displayed object, or something. Just food for thought, although that's probably not vital.


The examples of nodes for creating new geometry have location sockets shaped as if they accept non-singular values. Is this another oversight? Or will there be a possibility to create several objects at different locations (which would require an array of vector values rather than a vector texture, like the vertices of some other mesh)?


Most of the Cluster Node examples you've included could be described as either filters or groups. 'Cluster' has spatial connotations, and I really don't see how it's supposed to fit here; why group two different node types under an obtuse label that doesn't work well for either of them?


@ThatAsherGuy It's not two node types - Clusters are analogous to selections. Some apps call them Groups, though it might be confusing since in Blender, the term 'group node' means something else and very different from selections. The apps that call it 'Groups' use a different term for group nodes like 'networks', 'subnetworks' or 'assets'.

Some apps call it Clusters, which doesn't bring with it the naming collision with the over-used term 'group' in Blender.

But, we could also call it simply 'Selections' - although I think users might expect that it would then relate to the current selection in the viewport, which could be misleading.

Or we could call it Groups, but then rename group nodes to 'Meta Nodes' perhaps, which would be consistent with Meta Strips. I don't mind too much what the name is, as long as it doesn't cause confusion by calling two very different things the same name, and as long as it is reasonably clear what it means.

Blender's Vertex Groups perhaps should be renamed to Weight Groups, since they don't really store selections, don't allow for edge or face data - instead they allow you to specify a weight value per vertex.


@WilliamReynish The ones I'm looking at as filters are the nodes that take a cluster or clusters as an input and output a sub-set or super-set based on things like boolean logic, face angles, or spatial proximity. They're nodes that could be abstracted to work with both clusters and meshes, filling a 'math nodes for geometry' role that's worth defining as a discrete node type.


> Blender's Vertex Groups perhaps should be renamed to Weight Groups, since they don't really store selections, don't allow for edge or face data - instead they allow you to specify a weight value per vertex.

Since they concern vertices, shouldn't they keep the name 'vertex' somewhere? We're probably going to end up with many attribute types per component: bool, color, vector2, vector3, float (current vertex groups), and maybe even more, although the usage of an enum or array per component is not obvious to me right now.
Thinking of this, why not call them by a generic name such as attribute maps, and then specify the component and type as supplemental detail? ("faceVectorMap.001", to give an example)

@ThatAsherGuy Instinctively I would leave those operations up to an explicit "join" or "merge geometry" node, but what you suggest is interesting!

@Ko I was wondering that as well about the two vector sockets. I'm not sure it is worth overloading geometry creation nodes with the kind of thing that can be done with an array/instance node just down the tree?

It was suggested above to have a flexible object type output, so as to be able to start working with a mesh object, convert it to something else (possibly several times) within the nodetree, and output it as a different object type, such as volume or curve. I think that should be an ideal to work towards, although I cannot imagine the implications it has for the rest of Blender.
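To make the "attribute maps" idea above a bit more concrete, here is a minimal Python sketch of what such a generic, named per-element attribute could look like. The field names, domain strings and type strings are purely illustrative assumptions, not an existing or proposed Blender API.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class AttributeMap:
    """A named per-element attribute, generalizing today's vertex groups."""
    name: str                 # e.g. "faceVectorMap.001"
    domain: str               # which element it lives on: 'VERTEX', 'EDGE' or 'FACE'
    data_type: str            # 'FLOAT', 'BOOL', 'VECTOR', 'COLOR', ...
    values: List[Any] = field(default_factory=list)

# A float-per-vertex map (what vertex groups store today) and a vector-per-face map.
weights = AttributeMap("weightMap.001", "VERTEX", "FLOAT", [0.0, 0.5, 1.0])
face_dirs = AttributeMap("faceVectorMap.001", "FACE", "VECTOR", [(0.0, 0.0, 1.0)])
```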


For the Array node it's better to add a "4x4 Matrix" socket type and nodes to operate on it. A matrix can hold location, rotation and scale all in one. If the user connects a Matrix output to a Vector input, then only the location component should be used for the vector. If a Vector output is connected to a Matrix input, then a matrix with this vector as the location and a scale of (1,1,1) should be created. This is how the Sverchok addon does it.
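For illustration, here is a small sketch using Blender's mathutils module of how a 4x4 matrix can carry location, rotation and scale in one value, and what the implicit Matrix-to-Vector and Vector-to-Matrix conversions described above could mean. The conversion helpers are hypothetical; only the mathutils calls are real.

```python
from mathutils import Euler, Matrix, Vector

# Compose a single 4x4 matrix from location, rotation and scale.
loc = Matrix.Translation(Vector((1.0, 2.0, 3.0)))
rot = Euler((0.0, 0.0, 1.5708), 'XYZ').to_matrix().to_4x4()
scale = Matrix.Diagonal(Vector((2.0, 2.0, 2.0, 1.0)))
transform = loc @ rot @ scale

def matrix_to_vector(m):
    # Hypothetical conversion: Matrix output into a Vector input keeps only location.
    return m.to_translation()

def vector_to_matrix(v):
    # Hypothetical conversion: Vector output into a Matrix input builds a matrix with
    # that vector as location, identity rotation and a scale of (1, 1, 1).
    return Matrix.Translation(v)

print(matrix_to_vector(transform))          # Vector((1.0, 2.0, 3.0))
print(vector_to_matrix(Vector((0, 0, 5))))  # pure translation matrix
```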


In #74967#896402, @WilliamReynish wrote:
> @ThatAsherGuy It's not two node types - Clusters are analogous to selections. Some apps call them Groups, though it might be confusing since in Blender, the term 'group node' means something else and very different from selections. The apps that call it 'Groups' use a different term for group nodes like 'networks', 'subnetworks' or 'assets'.
>
> Some apps call it Clusters, which doesn't bring with it the naming collision with the over-used term 'group' in Blender.
>
> But, we could also call it simply 'Selections' - although I think users might expect that it would then relate to the current selection in the viewport, which could be misleading.
>
> Or we could call it Groups, but then rename group nodes to 'Meta Nodes' perhaps, which would be consistent with Meta Strips. I don't mind too much what the name is, as long as it doesn't cause confusion by calling two very different things the same name, and as long as it is reasonably clear what it means.
>
> Blender's Vertex Groups perhaps should be renamed to Weight Groups, since they don't really store selections, don't allow for edge or face data - instead they allow you to specify a weight value per vertex.

As a rule of thumb, if someone asks you what X means and you don't use X as a word in the first sentence of explaining it, then X is the wrong name. So if someone asks you what Clusters are, and you don't use the word 'cluster' in the first sentence in a descriptive manner, then it should not be called a cluster. If you call clusters selections right away, then Selections is obviously the right name.

If only everyone adhered to this rule, we'd have a lot fewer newbies asking "what does X mean/do" kinds of questions....

It's all the more confusing given that the red output of each node is called "New Geometry" while the red inputs are called "Cluster". There should be something implying these two are similar/related.

Since "new geometry" already breaks the ice of having multiple words, I'd suggest utilizing a superpower of the English language called adjectives, which allows you to employ already "overused" terms in more specific ways, for example "face group" or "vertex selection", etc...


Added subscriber: @JacquesLucke


Added subscriber: @mushroomeo


Hi, I just wanted to beg you to keep it as SIMPLE AS POSSIBLE and not reinvent the wheel...

That means keeping all the names as they are right now in the modifiers, like the object to choose staying OBJECT, and vertex groups too - or do you want to rename the current modifier options as well?

Then why not keep the word OBJECT instead of MESH in the nodes? Why get rid of vertex groups in modifier nodes? It just makes things unnecessarily complex.
Why not keep the vertex groups inside the nodes? It would make things simpler. Sure, you can have separate nodes for vertex groups, but most people will still just want to connect multiple modifiers in the node window, simply to have a much clearer view, and at the same time it would erase the chaos of too many nodes and keep things fast....

It would keep Blender understandable and consistent: MESH is what you have in Edit Mode, with tools named for it like vertex, edge, face, etc...
For the vertex/edge/face selection mode node I suggest simple naming: MESH SELECTION MODES. That way everyone could understand it immediately and not get confused.

And the same for a dedicated vertex group node, and for collections. We already had the change of groups to collections :(

As the best example I suggest just making it like the Sorcar addon nodes... it's simple, fast, easy, and it could even be made with fewer nodes too...

https://duckduckgo.com/?q=albert+einstein+simplicity+quote&t=ffsb&atb=v1-1&iar=images&iax=images&ia=images XD


In #74967#896603, @mushroomeo wrote:
> Hi, I just wanted to beg you to keep it as SIMPLE AS POSSIBLE and not reinvent the wheel...
>
> That means keeping all the names as they are right now in the modifiers, like the object to choose staying OBJECT, and vertex groups too - or do you want to rename the current modifier options as well?
>
> Then why not keep the word OBJECT instead of MESH in the nodes? Why get rid of vertex groups in modifier nodes? It just makes things unnecessarily complex.
> Why not keep the vertex groups inside the nodes? It would make things simpler. Sure, you can have separate nodes for vertex groups, but most people will still just want to connect multiple modifiers in the node window, simply to have a much clearer view, and at the same time it would erase the chaos of too many nodes and keep things fast....
>
> It would keep Blender understandable and consistent: MESH is what you have in Edit Mode, with tools named for it like vertex, edge, face, etc...
> For the vertex/edge/face selection mode node I suggest simple naming: MESH SELECTION MODES. That way everyone could understand it immediately and not get confused.
>
> And the same for a dedicated vertex group node, and for collections. We already had the change of groups to collections :(
>
> As the best example I suggest just making it like the Sorcar addon nodes... it's simple, fast, easy, and it could even be made with fewer nodes too...
>
> https://duckduckgo.com/?q=albert+einstein+simplicity+quote&t=ffsb&atb=v1-1&iar=images&iax=images&ia=images XD

If nodes did the same thing as the modifier stack, just with a node-flow view, there would be no point in changing it. This is the start of the Everything Nodes project. Notice the word Everything. Objects in the Blender viewport can be of many more types than just meshes: they can be lights, empties, particle systems, cameras, etc... The point of a node-based workflow is flexibility, at the expense of some learning curve. I am not saying modifier nodes should be complicated just for the sake of it, but dumbing them down to the exact same thing as the modifier stack, except laid out horizontally, would defeat the point of going node-based.

In Blender, an Object is just a container - a container for the datablock present in the viewport:
image.png
You may think that when you add a cube and then add modifiers you are interacting with a cube object, but in reality you are interacting with a generic object container holding a mesh datablock, and the presence of the modifier stack is actually conditioned on the presence of the mesh datablock, not on the container itself. That's why you don't get a modifier stack on empties or lights, for example.
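As a minimal illustration of that container/datablock split, here is a small bpy sketch (assuming it runs inside Blender with the default scene; the object name "Cube" is just the default new-scene object).

```python
import bpy

# Every object is a generic container; obj.data is the datablock it wraps
# (a Mesh, Light, Camera, ... or None for an Empty).
for obj in bpy.data.objects:
    data_kind = type(obj.data).__name__ if obj.data else None
    print(obj.name, obj.type, data_kind)

cube = bpy.data.objects.get("Cube")
if cube is not None and cube.type == 'MESH':
    # The modifier list hangs off the object container, but it only makes sense
    # because that container holds geometry (a mesh datablock) to modify.
    cube.modifiers.new(name="Subdivision", type='SUBSURF')
```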


Added subscriber: @wevon-2


I would like to raise another point, which is how you can interact with the nodes from the 3D Viewport.
Although most nodes do not require 3D interaction, deformers, constraints, and mesh modifiers do.
I think 3D gizmos should appear when a node is selected and a node editing tool is active.
By using the gizmos, they could be modified, auto-filling the attributes of the node and auto-connecting other nodes.
Nodes2.png


In #74967#896402, @WilliamReynish wrote:
> @ThatAsherGuy It's not two node types - Clusters are analogous to selections. Some apps call them Groups, though it might be confusing since in Blender, the term 'group node' means something else and very different from selections. The apps that call it 'Groups' use a different term for group nodes like 'networks', 'subnetworks' or 'assets'.
>
> Some apps call it Clusters, which doesn't bring with it the naming collision with the over-used term 'group' in Blender.
>
> But, we could also call it simply 'Selections' - although I think users might expect that it would then relate to the current selection in the viewport, which could be misleading.
>
> Or we could call it Groups, but then rename group nodes to 'Meta Nodes' perhaps, which would be consistent with Meta Strips. I don't mind too much what the name is, as long as it doesn't cause confusion by calling two very different things the same name, and as long as it is reasonably clear what it means.
>
> Blender's Vertex Groups perhaps should be renamed to Weight Groups, since they don't really store selections, don't allow for edge or face data - instead they allow you to specify a weight value per vertex.

Maybe calling it "Subset" would convey the concept better than Cluster and still avoiding the misleading Group word?


> If nodes did the same thing as the modifier stack, just with a node-flow view, there would be no point in changing it. This is the start of the Everything Nodes project. Notice the word Everything.

Hi, I think you misunderstood what I meant. I am fully aware of the "everything" idea. I'm just suggesting keeping each modifier fully in one node, with the naming as it is right now in the modifier stack, plus having all the other nodes you mentioned.
I mean, why should a modifier node get rid of its options and be separated into one node per option? Why can't we have both?

One example of the idea: in the Sorcar addon you have a modifier node too, with all the modifier options inside that one node, but the desired end result can still be made with many nodes from Edit Mode.

Much like the idea of the Principled BSDF shader node, which was also there to make material setup easy...

This is just what I meant.

So why not?


In reference to what I previously commented on (the relationship between the 3D Viewport and the Node Editor), I see five possibilities when entering procedural mode:

1. Completely freeze manual editing.
2. Completely freeze manual editing, but allow selecting elements to create clusters.
3. Maintain editing of the base mesh, and apply the functions of the nodes afterwards, as currently happens with modifiers.
4. Freeze the initial mesh, connect nodes, and, by selecting the nodes, allow some editing in the 3D Viewport.
5. Switch to a history mode, where each action generates a node and it is connected automatically.

I would prefer the last one.


Added subscriber: @moisessalvador


Here's what I found on the wiki about the possible future of simulations.

https://wiki.blender.org/wiki/Source/Nodes/UnifiedSimulationSystemProposal
https://wiki.blender.org/wiki/Source/Nodes/BreakModifierStack
https://wiki.blender.org/wiki/Source/Nodes/SimulationArchitectureProposal

Simulation nodes could cache what's upstream when the simulation starts, and simply read the cache once it's made. It seems that simulations need their own solution and interface, where you could run and edit all the simulations in a scene without going to each object manually, and which the nodes would reference (it would be nice to have a timeline to move simulation cache strips around).


@mushroomeo The reason why nodes should be more atomic, is because that is the strength of a nodes system: that you pipe things together with inputs and outputs, and that you can control how things like vertex groups or textures affect the result. For example, there is no need for a specific button to invert vertex groups - you can just add an Invert node instead between the vertex group output and the modifier input.

If we don't take advantage of the flexibility of nodes in the geometry nodes system, the benefits in terms of simplicity, flexibility and power are greatly reduced.
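As a tiny illustration of that point, inverting a vertex-group-style weight map is just a per-vertex 1 - w, which is exactly what a generic Invert node placed between the group output and the modifier input would compute. The sketch below is illustrative only; the function name is not a real node or API.

```python
import numpy as np

def invert_weights(weights):
    """What an 'Invert' node would do to a per-vertex weight map in [0, 1]."""
    return 1.0 - np.asarray(weights, dtype=np.float32)

# Feed the inverted weights into whatever input previously had an
# 'invert vertex group' checkbox, e.g. a Displace node's influence input.
print(invert_weights([0.0, 0.25, 1.0]))  # [1.   0.75 0.  ]
```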


Added subscriber: @kosirm-2


Added subscriber: @nikitron


I just hope we can integrate Sverchok nodes as an extra addon for Everything Nodes, with a NumPy data structure. Because if you create math nodes working with your data, we need those nodes to work with our NumPy data as well.


Added subscriber: @randum


Actually, I don't see the reason to put the functionality of caching data into a Cache node. As I see it, this should just be a mode for any node.


In #74967#901080, @randum wrote:
> Actually, I don't see the reason to put the functionality of caching data into a Cache node. As I see it, this should just be a mode for any node.

Having Cache as a discrete node could actually be a very important feature, potentially opening up certain workflows.

- It could allow adding, after it, certain modifier types not originally supported by the base object type (like, say, Bezier curves), unless we lift those limitations from the beginning.
- It could also open up the possibility of manually editing the mesh in the viewport at a certain point, to introduce some form of destructive step in the middle or at the end of the tree.
- Maybe it could be used as an output to some form of external, filesystem-based cache, such as Alembic files or others.


Removed subscriber: @moisessalvador


If we don't plan on supporting live creation of the node tree from edit mesh operators, then a potential solution would be to create a "procedural object". This object would not have a predetermined data type (mesh, curve, volume...) but would be able to convert between types internally through conversion nodes and use any operators related to them, such as VDB boolean operations, curve lofting or regular mesh operations. It could also have its own edit mode where selection, instead of activating a given vertex or edge, would activate the relevant node and display its gizmo(s). Just adding grist to the mill.


In #74967#901102, @DuarteRamos wrote:
> Having Cache as a discrete node could actually be a very important feature, potentially opening up certain workflows.
>
> - It could allow adding, after it, certain modifier types not originally supported by the base object type (like, say, Bezier curves), unless we lift those limitations from the beginning.
> - It could also open up the possibility of manually editing the mesh in the viewport at a certain point, to introduce some form of destructive step in the middle or at the end of the tree.
> - Maybe it could be used as an output to some form of external, filesystem-based cache, such as Alembic files or others.

I still don't see why all this functionality can't be inside every node. If I want to cache data at some step, I would just like to switch that node into a cache mode. Adding new nodes is quite an expensive procedure (in terms of time spent placing nodes and adding links) and should be avoided if possible. If the data of every node can be cached, it looks like the data can be kept right inside the nodes instead of creating a separate node for this.
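For what it's worth, here is a toy Python sketch of the "cache as a mode on any node" idea: the node remembers its last output and skips re-evaluation while its inputs are unchanged. This is purely illustrative and not how Blender's evaluation actually works.

```python
class Node:
    """Toy node that can optionally cache its last evaluation."""

    def __init__(self, func):
        self.func = func
        self.cache_enabled = False      # the proposed per-node 'cache mode'
        self._cached_key = None
        self._cached_value = None

    def evaluate(self, *inputs):
        key = tuple(inputs)
        if self.cache_enabled and key == self._cached_key:
            return self._cached_value   # inputs unchanged: reuse cached result
        value = self.func(*inputs)
        self._cached_key, self._cached_value = key, value
        return value

subdivide = Node(lambda mesh, level: f"{mesh} subdivided x{level}")
subdivide.cache_enabled = True
subdivide.evaluate("cube", 2)  # computed once
subdivide.evaluate("cube", 2)  # served from the in-node cache
```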


@randum there's a difference between an in-memory cache and a disk cache which needs to be remembered. It makes much more sense for a disk cache to be a discrete node so you can have control over file paths and baking data -- imagine being able to set up a render pipeline which automatically bakes sims when needed. You don't need that on every node in the tree, just at important points in the sim, such as baking fluid particles and then baking the mesh as in Mantaflow.

Every node could cache its input in memory as an optimization, but that should be transparent to the user.


I think you're talking about different use cases. Caching a mesh to disk is one, locking/freezing the outputs of a node in the graph is another.

For the latter, it makes sense for it to be a feature of every node (as it is in e.g. Houdini), to freeze any type of output, not just meshes.


Added subscriber: @John-44


@JohnCox-3 In the description of this task, 'caching' is considered a protection against extra tree recalculation. So it is a tool by which users can declare that they are not going to change some part of the tree, so that Blender can update the tree more rapidly. It's not about saving any data to disk, as I understand it.


Added subscriber: @Zuorion-4


obraz.png
And what about "Display in Edit Mode" and "On Cage"?
In a linear stack it's easy to determine where to stop showing the cage, but how do you do that with nodes?
obraz.png

---

Also, having the viewport and render outputs as separate outputs will complicate a simple node tree like this:
obraz.png
Subdiv is disabled in the viewport (or has a different subdivision level), but the modifiers down the line have the same setup; having two outputs would require duplicating all the nodes below and complicate the tree.
Wouldn't it be better if:

  1. properties in the Properties Editor could have two values, one for render and one for the viewport
    obraz.png
    or, in my opinion better,
  2. there were a branch node that the user could set up with whatever separate behaviour they would like
    obraz.png

@WilliamReynish please I just want to know.

Will there be a way to individually apply a modifier just like the current stack system?

And I need to know how addon compatibility is going to work. Most add-ons rely on a linear stack to work.


In #74967#902314, @jeacom wrote:
> @WilliamReynish please I just want to know.
>
> Will there be a way to individually apply a modifier just like the current stack system?
>
> And I need to know how addon compatibility is going to work. Most add-ons rely on a linear stack to work.

Addon compatibility will most likely break, but going node-based will probably make the majority of modifier-related add-ons obsolete.


Added subscriber: @DirSurya


Added subscriber: @TakingFire


Added subscriber: @shanberg


Added subscriber: @mkingsnorth


Added subscriber: @rpopovici


Hello @brecht

What is wrong with this proposal? https://wiki.blender.org/wiki/Source/Nodes/UnifiedSimulationSystemProposal
Why not a state "pass-through" solution like the one presented in that paper? It would be closer to what Houdini does, and believe me, the node tree is a lot cleaner and easier to manage.
And why "cluster" sets? Selection sets or selection groups would be much more intuitive. The word "cluster" is usually reserved for other things, like a point cloud.

Besides this, what's the thing with three types of nodes and three types of links between nodes? That alone adds 9x complexity to your nodes project. With a state pass-through approach you don't have this problem. Grouping can be done from property panels or math expressions, just like Houdini does it. Otherwise, my feeling is that this project will end up as "spaghetti nodes" instead of Everything Nodes :) No offense intended.


@rpopovici, Houdini also has a distinction between geometry (SOP) and dynamics (DOP) nodes? The planned simulation / particle nodes in Blender are also designed to construct a state to be simulated, rather than each node modifying the geometry and passing that along as geometry nodes would.


@brecht
I am not sure if I explained well enough what I meant by state pass-through:
My idea was to limit the number of links between nodes by passing the entire scene (object state) down the line, with each node "selecting" what mesh data to transform based on some selection expressions (manually editable if necessary). In Houdini they do something similar, and the nodes are editable through the properties panel, which gives you more room for parametrization.
They also kind of have it both ways: if you want more coding you can choose VEX or Python, or if you like nodes you can use VOPs.
From what I am seeing, you are trying to achieve something similar to Houdini VOPs, which is great if you like pure nodes, but as I have said earlier: this will become node spaghetti quite fast.
My point here is that you should consider node parametrization/expressions as well if you want to keep the node tree clean and manageable.
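To make the "selection expression" idea a little more concrete, here is a toy Python sketch in which a node picks the elements it operates on by evaluating a small per-element expression against the incoming state, instead of receiving an explicit cluster link. This is only an illustration of the concept, not a proposed API.

```python
# Each point is a simple (x, y, z) tuple standing in for incoming geometry.
points = [(0.0, 0.0, -1.0), (1.0, 0.0, 0.5), (0.0, 2.0, 1.5)]

def select(points, expression):
    """Return indices of points for which the expression evaluates to True."""
    selected = []
    for i, (x, y, z) in enumerate(points):
        # The expression is user-authored, e.g. "z > 0" (trusted input only).
        if eval(expression, {"x": x, "y": y, "z": z}):
            selected.append(i)
    return selected

print(select(points, "z > 0"))  # -> [1, 2]
```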


@rpopovici I guess you are suggesting that rather than explicit linking e.g. clusters between nodes, you specify the name of an attribute to write to in one node and then in another node specify that same name to read from. And so you would get fewer links in the node graph.

There are pros and cons to that kind of design. At least in my experience with Houdini, I often have to go and read a lot of node parameters and then build up a mental model of hidden relations, to understand what a node graph is actually doing. And for editing, it's easy to accidentally break those relations because I forgot to edit parameters in all the right places.

So to me, the spaghetti is always there, the difference is just if it's visually represented or something you have to keep in your head.
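A toy sketch of the trade-off being described: with name-based attribute passing, the coupling between two nodes is a shared string rather than a visible link, so the relation exists only in the parameters. The dict-based "geometry" here is purely illustrative.

```python
def write_attribute(geometry, name, values):
    """Node A: store a named attribute on the geometry it passes along."""
    out = dict(geometry)
    out[name] = values
    return out

def read_attribute(geometry, name):
    """Node B: pick the attribute back up by name, elsewhere in the graph."""
    return geometry[name]

geo = write_attribute({}, "mask", [0.0, 1.0, 0.5])  # the only link is the string "mask"
mask = read_attribute(geo, "mask")
print(mask)
```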


@brecht Houdini lets you visualise all connections between nodes even if they are created by expressions; you can enable an overlay of all the relationship lines between nodes. More importantly, you can easily inspect all of a node's attributes in the node tree or geometry spreadsheet, or visualise them as overlays in the viewport, so it is never a question of having to keep everything in your head. There isn't an equivalent of visualisers or an Info window in this design that I can see. (The Outliner could be adapted to become a geometry spreadsheet, but that should be a different design task.)

Also, where there are multiple connections between nodes as part of the design (such as Vellum, for example), dragging and dropping a node on the noodle lines will wire all of them in automatically.

The point is, I think keeping the node tree as simple as possible and adding ways of inspecting the data is a better design than having all connections explicit and in the node graph where you have to manually edit them. In my experience of teaching Houdini, people soon learn how to find the attributes they need using the extra tools to view the data -- not everything has to happen in the node editor.


@brecht There is always a sweet spot between nodes and parametrization. Too many nodes are hard to manage manually and you become unproductive; too much programming/parametrization will make it hard for a non-programmer to use.
Some of the stuff John Cox was talking about in the previous post for anyone who wants to take a look:
Visualizing dependencies - https://www.sidefx.com/docs/houdini/network/dependencies.html
Geometry Spreadsheet and Attributes - https://www.youtube.com/watch?v=VwQEkqXutxo
Geometry Visualizer - https://vimeo.com/167151977

And let's not forget here about another two very powerful features in H:
Relative parameter referencing - https://www.sidefx.com/docs/houdini/network/copying.html
Collapse Nodes To Subnetwork(subnet extraction) - https://www.youtube.com/watch?v=o4hMrmg9ZkA


Added subscriber: @RC12


@JohnCox-3, I can see advantages of the Houdini design, and I'm not particularly trying to argue it should be one way or the other. I agree good visualization in the viewport and spreadsheets is important, which this design task does not address so far.

My earlier comments in this task were about the need for a distinction between geometry and simulation nodes. I assumed @rpopovici addressed me specifically for that reason, but it seems their point was about how to pass along geometry attributes / clusters / selection which I haven't given my opinion on. My point was that in Houdini you have this node graph that you see by default, and then another layer of connections. And that inevitably makes it harder to understand even if there are visualization tools that can help. However it may be a necessary evil.

I don't think that the specific selection / cluster nodes as proposed in the task description will actually work, since you can't do something like Grow/Shrink Cluster without the mesh topology. So this part of the design does need more work.
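To illustrate why something like Grow Cluster depends on topology: a cluster by itself is just a set of element indices, and growing it requires the edge adjacency of the mesh it belongs to. A minimal sketch in plain Python, using a hypothetical data layout:

```python
def grow_cluster(cluster, edges):
    """cluster: set of vertex indices; edges: iterable of (v0, v1) index pairs."""
    grown = set(cluster)
    for v0, v1 in edges:
        # A vertex joins the grown cluster if it shares an edge with a member,
        # which is information only the mesh topology can provide.
        if v0 in cluster:
            grown.add(v1)
        if v1 in cluster:
            grown.add(v0)
    return grown

edges = [(0, 1), (1, 2), (2, 3)]   # a simple chain of four vertices
print(grow_cluster({1}, edges))    # -> {0, 1, 2}
```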


@brecht By pass-through I was referring to the fact that you can merge multiple objects (mesh data, volumes, particles, etc.) into a single data stream. The current node will transform only the bits it's concerned with, and everything else will be passed through to the next node in the network.
In simulations (DOPs), the only difference is the fact that you have time-based constraints or physics constraints. You can think of SOPs as DOPs always at frame zero. In essence, the geometry nodes' underlying behavior is not much different from DOPs' behavior. You still have to pass a modified state through to the next node in the tree, similarly to what happens in a looping simulation. IMO, there shouldn't be a hard link between object data in the scene and the nodes operating on this data. The data represents the state of the system, and the nodes are just transformations applied to this data based on some constraints or selection patterns.


In #74967#922456, @rpopovici wrote:
In simulations(DOPS), the only difference is the fact that you have time based constraints or physics constraints. You can think of SOPs as DOPS always in frame zero. In essence, geometry nodes underlying behavior is not much different than DOPS's behavior.

From my understanding, the data being passed along in SOPs and DOPs is very different: one is geometry, the other is a description of how a solver modifies geometry over time. All these node graphs can be described as some kind of data flow in general terms, but that doesn't help to pin down the design.

Maybe I am in error here. I was under the impression that Houdini has a unified underlying architecture. Anyway, I would do the geometry nodes the same way Houdini does DOPs. A declarative approach is always superior to an imperative one: you have more room for underlying strategies, less manual input/output wiring, automatic data conversion (e.g. object -> mesh) when necessary for the input stream, and overall a nicer user experience.

Added subscriber: @AlbertoVelazquez

As I see it, aside from technical limitations, I don't understand why nodes are not separate objects rather than modifiers, especially since the node tree can generate objects as primitives and can have multiple outputs.

A small thought: it might be nice to optionally show the time each node takes to calculate underneath it.

Added subscriber: @BartekMoniewski

Added subscriber: @ckohl_art

Added subscriber: @AlexeyPerminov

Added subscriber: @Lowlet

Can you also synchronize all geometry operations in the viewport with this node system, so we can work in the 3D view as usual and it creates the node tree for us behind the scenes (like in Houdini)? It would be a great non-destructive workflow, and it wouldn't slow down the modelling process by forcing us to create nodes manually.

In #74967#950193, @Lowlet wrote:
Can you also synchronize all geometry operations in viewport with this node system so we can work in 3d view as usually and it will create node tree for us behind the scenes(like in houdini) It would be great non destructive workflow and it wouldn't slow down modelling process by forcing us manually create nodes.

This is the most important feature that the whole system should be built on, in my opinion. Working with just nodes is a step back, while working as usual and having a node tree automatically built under the hood (that you can open!) would be a game changer.

Added subscriber: @michaelknubben

It's comforting to see that nodes will continue to have a linear representation similar to the Shader Editor, but I wonder:
Has any thought been given to multiple node trees on one mesh? After all, we support this with materials too.
This way the user could have a complex node tree for multiple interdependent effects, but also a separate node tree below it for a simple subdivision, and easily control its visibility in the viewport or apply it to the mesh.

This way, you could also easily convert old files to the new system, with a node tree per 'modifier'. It would also open the door to a few built-in presets that present the user with a clean, custom-built layout (as in shaders: a group with custom named inputs etc.) that functions well, but leave the door open to a 'use nodes' button to dive deeper.

Added subscriber: @KingGoddardJr

Added subscriber: @astroblitz

Added subscriber: @RedMser

Added subscriber: @Mir-Mir-Roy

Added subscriber: @KenzieMac130

Added subscriber: @slowburn

Removed subscriber: @Senna

Added subscriber: @proulxpl

Added subscriber: @thecooper8

Added subscriber: @KidTempo

Added subscriber: @CobraA

I personally think Maya's approach is much more successful for artists than Houdini's: they can work without worrying about the complexity of nodes and have the option to turn off or delete the "construction history", but at the same time they have low-level control over every component, whether they're doing modeling, rigging, shading, etc., all in one cohesive place.
This is why it's the front runner, because of this balance. I am not sure if Blender can hit that, since it wasn't built from the get-go with the same paradigm, and from seeing this task, it seems there will only be creation nodes.

This is why it would be very important to have nodes generated also by the regular viewport workflow. You would end up with the best of both worlds: you can go the nodes way, as powerful as you want; or you can still go the 3D way, which would have a sort of realtime history in the form of a node graph. Together they'd make a hybrid approach where one can model and adjust the nodes back and forth.
Sorry for repeating myself.

Added subscriber: @Grady

I don't think there's any question that this system will be more powerful and allow for more complex and interesting creations.

Having said that, are simple and common modifier setups going to require more clicks to set up under this system? If so, I think that should be something addressed in the design.

For example, I model furniture pretty much constantly for my job. Most of the time when I'm using modifiers, it's just to quickly add a solidify / mirror / bevel / subsurf modifier, or sometimes several of those together, to create a panel with automatic thickness and bevels, or something mirrored across multiple axes. For those modelling tasks the node tree concept is of no benefit; the existing 'modifier stack' system is more than enough for my needs. Will a node tree require more clicks for me to set up?

Because ideally, I wouldn't want all this extra functionality and power to come at the cost of losing speed at the common simple tasks. Because those kinds of basic modifier setups are what I use modifiers for 99% of the time.

I am still pleased to see the new system, but unless or until it is just as fast as the current modifier stack for 'common and simple' tasks, I'm hoping the option to choose between nodes or modifier stacks will remain available, because for most things I would see myself probably just opting to use the stack and only use the node tree when my needs are more complex.

Please don't take this comment the wrong way, I'm not saying this design is no good. I'm just inquiring how this system will work for simple tasks that don't require complex node trees, and what effect it will have on workflow speed. For all I know there's a plan to ensure that simple tasks require no additional clicks compared to the current workflow; if so, that's great!

Modifier stacks should be emulated by a Blender add-on. If you look at some of the add-ons already out there for improving concept workflows, the modifier stack isn't actually better than nodes: many add-ons create complex, messy, hacky modifier stacks and clutter the scene outliner to do procedural non-destructive modeling. Moving to nodes would actually make things better for these add-ons, since it moves from a fixed layer stack to a contained procedural graph. There is really no reason to have a hard-coded modifier stack anymore, in my opinion. It should just end up being a quick UI layer on top of the graph backend.

In #74967#1018977, @astrand130 wrote:
Modifier stacks should be emulated by a blender addon. If you look at some of the addons already out there for improving concept workflows the modifier stack isn't actually better than nodes, many addons create complex, messy, hacky modifier stacks and clutters the scene outliner to do procedural nondestructive modeling. Moving to nodes would make this actually better for these addons as it has moved to a fixed layer stack to a contained procedural graph. There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

Currently, in the Python API, I don't see a function to draw individual modifiers in a UILayout.

For that, I think it would be nice to have a UILayout.template_single_modifier(mod) function that lets us draw a modifier's UI anywhere in the interface. Otherwise, we would have to code each modifier's UI from scratch in Python, and every time a modifier property changed or a new modifier was added, the add-on would break.

Or even better.

I don't believe there's a difference between a tree of modifiers with a single branch and a modifier stack, so we could have a GUI similar to the shader editor that just lists the nodes linearly, as deep as it can. We would also need an easy way to apply, remove, duplicate and reorder a single modifier from within this GUI, preferably as similar as possible to, if not better than, the current 2.90 modifier stack (I'm talking about the drag-and-drop feature).
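
To illustrate the idea, here is a minimal sketch of how such a template could be used from an add-on. Note that `UILayout.template_single_modifier` is the hypothetical function proposed above and does not exist in the current Python API; everything else is standard panel boilerplate.

```python
import bpy

class OBJECT_PT_single_modifier_demo(bpy.types.Panel):
    """Draws the UI of a single modifier outside the Properties editor."""
    bl_label = "Single Modifier"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Demo"

    def draw(self, context):
        obj = context.object
        if obj is not None and obj.modifiers:
            # Hypothetical function proposed above; it does not exist in the
            # current bpy API. This only shows how an add-on would call it.
            self.layout.template_single_modifier(obj.modifiers[0])

bpy.utils.register_class(OBJECT_PT_single_modifier_demo)
```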

Removed subscriber: @Mir-Mir-Roy

Added subscriber: @Mir-Mir-Roy

In #74967#1018993, @jeacom wrote:

In #74967#1018977, @astrand130 wrote:
Modifier stacks should be emulated by a blender addon. If you look at some of the addons already out there for improving concept workflows the modifier stack isn't actually better than nodes, many addons create complex, messy, hacky modifier stacks and clutters the scene outliner to do procedural nondestructive modeling. Moving to nodes would make this actually better for these addons as it has moved to a fixed layer stack to a contained procedural graph. There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

Currently, in the python API, I dont see a function to draw individual modifiers in an UILayout,

For that I think it would be nice to have a UILayout.template_single_modifier(mod) function to allow us to draw a modifier UI anywhere in the interface, otherwise, we would have to code each modifier UI from scratch in python and every time there was a change in the modifier properties or a new modifier was added, the add-on would break.

Or even better.

I dont believe that there's a difference between a tree of modifiers with a single branch and a modifier stack, so we could have a GUI similar to the shader editor that just list the nodes linearly as deep as it can, we would need also an easy way to apply, remove, duplicate and reorder a single modifier from within this GUI, preferentially as similar as possible if not better than the current 2.90 modifier stack (I'm talking about the drag and drop feature).

I think there are plans to use this to expose data from node groups better than how materials currently do it, so as not to clutter the panel. I think the goal is to allow users to create node groups and expose interfaces, but the ability to replicate the modifier UX via Python might be better discussed on the UX thread.

Added subscriber: @Imaginer

In #74967#1018977, @astrand130 wrote:
There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

I agree with this part: there should be one backend (node graph) that handles everything and two UI frontends, one for the node editor and one for the properties editor that replicates the modifier stack, similar to how materials are represented.
Because, when you think about it, a modifier stack can very easily be represented by a node graph. If each of the current modifiers gets converted into a node group, all you have to do to replicate the modifier stack is connect the "modifiers" in a straight line from the input to the output. This could also be represented in a way very similar to how the current modifier stack is shown in the properties editor.
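
A rough sketch of that straight-line chain through the Python node API (the group names are placeholders, each group is assumed to expose a geometry socket as its first input and output, and the exact socket/interface calls vary between Blender versions):

```python
import bpy

# Rough sketch: a "modifier stack" as geometry node groups wired in a straight
# line, each one feeding its result into the next.
tree = bpy.data.node_groups.new("Modifier Stack", 'GeometryNodeTree')
tree.inputs.new('NodeSocketGeometry', "Geometry")
tree.outputs.new('NodeSocketGeometry', "Geometry")

group_in = tree.nodes.new('NodeGroupInput')
group_out = tree.nodes.new('NodeGroupOutput')

previous = group_in.outputs["Geometry"]
for name in ("Mirror", "Bevel", "Subdivision"):  # placeholder "modifier" groups
    node = tree.nodes.new('GeometryNodeGroup')
    node.node_tree = bpy.data.node_groups[name]
    tree.links.new(previous, node.inputs[0])     # previous result feeds the next step
    previous = node.outputs[0]

tree.links.new(previous, group_out.inputs["Geometry"])
```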

In #74967#1018977, @astrand130 wrote:
Modifier stacks should be emulated by a blender addon.

I do not agree with this. It should not be required for an addon to replicate a core component such as a modifier stack. If at all possible, systems should be designed properly to begin with, and while it's great to have addons to extend the flexibility of systems, they shouldn't need to work around stuff that could be easily solved when the systems are first being designed.

In #74967#1019031, @astrand130 wrote:
I think there are plans to use this to expose data from node groups better than how materials currently do it as to not clutter the panel. I think the goal is to allow users to create node groups and expose interfaces but the ability to replicate the modifier UX via python might be better discussed on the UX thread.

Exposing data from node groups better than how materials do it sounds good. But whether this design is better discussed here or in #67088 is debatable. Since this task is about the design of Geometry Nodes specifically, and it will ultimately replace the modifier stack, I think discussing their interface (of which a stack-like UI should be a core component) is perfectly fine.

To give an example of what I mean, here's the current process for setting up a mirror modifier on an object for modelling purposes. As you can see, it's simply three clicks: Select, Add Modifier, Mirror. Done.

Peek 2020-09-21 01-19.mp4

Under this new geometry nodes system, how many clicks would be involved to set up a similar mirror modifier?

Or for that matter a bevel modifier, solidify, etc.

If such setups involve significantly more clicks, I do believe that in the long term there would be value in maintaining a modifier stack UI with an optional modifier node mode instead, even if under the hood the modifier stack is just a node setup, simply in the interest of maintaining a fast workflow. We all know Blender users love their speedy workflows.

Or... What if the Node option for modifiers was simply a type of Modifier? It could be a type of modifier called "Node Modifier" and part of the existing stack of modifiers.

The advantage I can think of there is that it would allow for creating complex node modifiers and storing them for re-use later, then stacking them together to chain complex node modifiers together.

It would have another benefit as well, as it could allow for dividing complex modifications up into separate steps, as separate node modifiers, in case they wish to later "Apply" the first step to do some destructive modelling.

Right now in the current 'modifier stack' UI, it is possible to for example, add a Mirror Modifier, Bevel, Subsurf, etc, then later on if you wish to commit the mirror modifier to make an asymmetrical design but keep the bevel and subsurf, you can currently just simply 'Apply' the mirror modifier.

But the UI mockup above shows only a single 'Apply' button for the entire node modifier system, and that does make sense, given that the tree-like structure wouldn't lend itself well to applying only a subset of the modifier nodes.

Something like this:

node mockup.png

Added subscriber: @Znio.G

It makes sense to have nodes for particles, rigging, grooming... and I understand that people want to do more procedural modeling, but Blender is known for its powerful modeling capabilities more than anything else, and that's thanks to the modifier stack and the other tools/add-ons.
Keeping the modifier stack is essential for many users, and it is also a much easier and more straightforward workflow; this is how the 3ds Max team has done it.

Based on recent discussions regarding particle and geometry nodes, we will most likely keep the modifier stack.

We want users to be able to turn node groups into custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Creating such modifiers would be done by adding a group input to the geometry nodes and marking the group as an asset. Then it can be available right from the Add Modifier menu like a native modifier.
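
As a rough illustration of what that could look like from the Python side (a sketch only: the 'NODES' modifier type and the group name "My Custom Modifier" are assumptions here, not confirmed API of this design):

```python
import bpy

obj = bpy.context.object

# Attach a node-based modifier and point it at an existing geometry node group;
# 'NODES' and the group name "My Custom Modifier" are placeholder assumptions.
mod = obj.modifiers.new(name="My Custom Modifier", type='NODES')
mod.node_group = bpy.data.node_groups["My Custom Modifier"]

# The inputs exposed on the group's Group Input node become the modifier's
# user-facing settings, so they can be tweaked without opening the node editor.
```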

That sounds very flexible, best of both worlds! I assume existing modifiers will be kept around for transition and compatibility?
Additionally, I suppose custom interfaces for "node modifiers" will require more thorough access to socket types, widget creation (checkboxes...), etc. than we currently have in Cycles node groups?

In #74967#922456, @rpopovici wrote:
@brecht By pass-through I was referring to the fact that you can merge multiple objects(mesh data, volumes, particles, etc) under a single data stream. The current node will transform only the bits it's concerned with and everything else will be passed through to the next node in the network.
In simulations(DOPS), the only difference is the fact that you have time based constraints or physics constraints. You can think of SOPs as DOPS always in frame zero. In essence, geometry nodes underlying behavior is not much different
than DOPS's behavior. You still have to do a pass-through modified state to the next node in the tree, similarly to what happens in a looping simulation. IMO, there shouldn't be a hard link between object data in the scene and the nodes operating on this data. The data represents the state of the system and the nodes are just transformations applied to this data based on some constraints or selection patterns.

I thought some more about how to reconcile the clarity of Houdini's networks with the "completeness", so to say, of this proposal, which functionally seems to be more or less based on Cycles or ICE.
So what if those pass-through nodes (as per your definition) were in fact group nodes inside which the data is "silently" split into several streams, operated on, then re-merged? The user wouldn't ever need more than one connection between any two given nodes. Optionally, we could have a "split data" node, similar to "separate XYZ", that would output as many sockets as there are data types flowing within that stream (in case the user wants to access something specific and making a group feels like overkill).

In #74967#1020169, @brecht wrote:
We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups in custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Creating such modifiers would be done by adding a group input to the geometry nodes, and marking it as an asset. And then it can be available right from the add modifier menu like a native modifier.

If I may be frank for a moment: That. Sounds. Brilliant.

My excitement for 'everything nodes' just immediately dialed up to 11.

So basically rather than replacing the modifier stack, you're creating a node editor interface for us to create our own custom modifiers.

Oh.... Now my brain is overflowing with the possibilities. I can just imagine all the kinds of user content that will get created and shared now.

This in combination with the asset management, and brush management improvements, I can imagine on Blender Market buying packs of brushes, modifiers, assets, etc, and adding them straight into my asset manager in Blender.

In #74967#1020169, @brecht wrote:
We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups in custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Thanks for the reassurance, that's really great news.

Added subscriber: @EvandroFerreiradaCosta

"Users will constantly want to preview certain sections of the node tree, and having to manually connect those things up to the output node will be a pain. An easier way to preview the output of any node would be extra useful for geometry nodes, but would also be a benefit for shader or compositing nodes."

How about integrating the Node Wrangler add-on's shortcuts and features into the default behavior? It's already one of the staples of Blender usage; almost anyone with a bit of knowledge in Blender already uses it and prefers it. For newbies it changes nothing in how to use nodes, the basic workflow is the same, and for that reason integrating it would still be backwards compatible both in terms of files and old tutorials (if the person didn't use the add-on, the basic shortcuts are still the same; if the person used it, they can do the same without having to activate any add-ons).

The Node Wrangler features, such as quickly previewing a node (Ctrl+Shift+Click), quickly mixing (Ctrl+Shift+Right Click), and quickly switching sockets or connections, among others, are all extremely useful for any node-based workflow... why try to reinvent the wheel when we already have a well-tested toolset used by thousands every day in both personal and commercial projects?

Also, let's not forget about UV operations being nodal as well: maybe unwrapping based on clusters, or based on cameras (to do unwraps from a "camera view" instead of from the current view), etc.

"Users will constantly want to preview certain sections of the node tree, and having to manually connect those things up to the output node will be a pain. An easier way to preview the output of any node would be extra useful for geometry nodes, but would also be a benefit for shader or compositing nodes." How about integrating the Node Wrangler addon shortcuts and features into the default behavior? It's already one of the staples of Blender usage, almost anyone with a bit of knowledge in Blender already uses it and prefers it. For newbies it changes nothing in how to use nodes, the basic workflow is the same and for that reason integrating it would still be backwards compatible both in terms of files, as well as old tutorials (If the person didn't use the addon, the basic shortcuts are still the same, If the person used it, you can do the same without having to activate any addons). The Node Wrangler features such as quickly previewing a node using Ctrl+Shift+Click, quickly mixing (ctrl+shift+Rclick), quickly switching sockets or connections, among others, are all extremely useful for any node based workflow... why try to reinvent the wheel when we already have a very proof-tested toolset used by thousands everyday in both personal and commercial projects? Also let's not forget about UV operations being nodal as well, maybe unwrapping based on clusters, based on cameras (to do unwraps from "camera view" instead of from view), etc.

Added subscriber: @Adam.S

I hope you guys can go as low-level as possible, similar to Maya, having generic nodes that can be inherited by different object types, like transform, shape and shading nodes. For example, when creating a cube you get a node group of those atomic nodes, but also specific ones for that object type. This way you can do all sorts of things in just one single node editor.

Added subscriber: @StephenSwaney

Do Not Post Screenshots of Copyrighted Software

An excerpt from an email from Ton Roosendaal:

In our design process we have to stay away from references to other (non free/open) applications. Not only do we have enough design powers to make own solutions, it's also a violation of copyrights from others.

In #74967#1024361, @StephenSwaney wrote:
Do Not Post Screenshots of Copyrighted Software

An excerpt from an email from Ton Roosendaal:

In our design process we have to stay away from references to other (non
free/open) applications. Not only do we have enough design powers to
make own solutions, it's also a violation of copyrights from others.

Sorry, I didn't know about that. I removed the image, but surely we can at least mention the labels of nodes, or is that also a violation?

Removed subscriber: @deadpin

Removed subscriber: @AlbertoVelazquez

I've often read that it will be possible to affect collections (or collection instances) with the modifier nodes (geometry nodes), but I can't find any "official" statements about that. Is that part of the geometry nodes plans?

Making modifiers work on collections would be a separate project, not part of an initial geometry nodes implementation.

Added subscriber: @AndreasBergmeier

We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups in custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

IIUC, modifiers would then only be a fancy interface on top of nodes. If that is indeed correct, would it maybe make sense to move the modifier code into an add-on and ship it by default?
Just to ensure that there is a clean API.

Added subscriber: @Eary

Added subscriber: @GeorgiaPacific

In #74967#1017581, @lsscpp wrote:
This is why it would be very important to have nodes generated also by the regular viewport workflow. You would end with the best of both worlds: you can go the nodes way, powerful as you want; or you can still go the 3d way, which would have a sort of realtime history in the form of a nodegraph. Together they'd make a hybrid approach where one can model and adjust the nodes back and forth.
Sorry for repeating myself.

I vote for this, this is very important.

In #74967#895591, @WilliamReynish wrote:
I believe the ability to automatically generate geometry node trees using Edit Mode operators and tools is outside the initial scope of what will be supported. Supporting that is tricky there are a ton of potential issues and pitfalls. It would be great to be able to do that eventually perhaps, but initially I don't think it'll be something the core developers will pursue.

That's very sad; the ability to have the nodes generated behind the scenes would make geometry nodes more user-friendly, since people could use regular modeling when they want to and switch to nodes at any point. Without it, you basically need to stick to one of them from start to end, which isn't really friendly I would say.

Added subscriber: @3di

Will the current modifier stack become the user interface for the top-level node of a node-based modifier? So if you create a node-based modifier, you can expose whatever enclosed parameters you want on the top-level node, and those parameters will then be displayed as a new modifier in the modifier stack (as well as being shown in the shader editor's N-panel)? And vice versa: if you add a modifier in the usual way, it will create a new modifier in the node editor, so that you can dive into it and change its functionality, for example. Or do we still need to differentiate between edit mode and object mode?

In #74967#1040671, @Eary wrote:

In #74967#1017581, @lsscpp wrote:
This is why it would be very important to have nodes generated also by the regular viewport workflow. You would end with the best of both worlds: you can go the nodes way, powerful as you want; or you can still go the 3d way, which would have a sort of realtime history in the form of a nodegraph. Together they'd make a hybrid approach where one can model and adjust the nodes back and forth.
Sorry for repeating myself.

I vote for this, this is very important.

In #74967#895591, @WilliamReynish wrote:
I believe the ability to automatically generate geometry node trees using Edit Mode operators and tools is outside the initial scope of what will be supported. Supporting that is tricky there are a ton of potential issues and pitfalls. It would be great to be able to do that eventually perhaps, but initially I don't think it'll be something the core developers will pursue.

That's a very sad, the ability to have the nodes be generated behind the scene will make geometry node more user friendly since people can use regular modeling when they want to and switch to nodes at any point. Without it, you basically need to stick to one of them from start to end, not really friendly I would say.

Once each edit mode operator has a node counterpart, it will be very easy to auto-generate the node tree, because it'll just be a case of populating the node with the same parameters as the redo menu and then wiring its input to the previous node's output. There will also need to be a transform node with a selection property in order to capture user transformations (move, rotate, scale).

Auto-generation of object mode nodes should be even easier, because you could have a one-node-fits-all approach that just records any UI interaction, such as changing a parameter, a movement, or adding a modifier, and stores the parameter along with the old/new values. Basically a node-based undo history where each step's values are editable after the fact; a sort of dynamically generated node that can store any type of user activity.
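
A purely hypothetical sketch of that "recorded history as nodes" idea, in plain Python rather than Blender's API (the operator names and parameters are just examples):

```python
from dataclasses import dataclass, field

@dataclass
class HistoryNode:
    operator: str            # e.g. "mesh.bevel"
    parameters: dict         # the values normally shown in the redo panel
    inputs: list = field(default_factory=list)  # upstream HistoryNode(s)

class NodeHistory:
    """Records each operator invocation as a node chained to the previous one."""
    def __init__(self):
        self.nodes = []

    def record(self, operator, **parameters):
        node = HistoryNode(operator, parameters,
                           inputs=[self.nodes[-1]] if self.nodes else [])
        self.nodes.append(node)
        return node

history = NodeHistory()
history.record("mesh.primitive_cube_add", size=2.0)
history.record("mesh.bevel", offset=0.1, segments=3)
# Re-evaluating the chain with edited parameters would rebuild the result
# non-destructively, which is the "editable undo history" idea above.
```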

It's easy to miss trade-offs that come with a modeling system that has to support both procedural and interactive approaches. A good interactive tool does not necessarily make for a good procedural node, and vice versa. And just because you could define nodes for every tool and generate a resulting node graph, does not mean that node graph will end up being useful, rather than an unorganized mess. Or you might make the UX of a tool worse so that it can be generated as a procedural node.

There are systems that are really good at interactive modeling, at procedural modeling, and at interactive procedural modeling. But all three types of systems come with trade-offs and workflow / UX choices. They are all useful in their own right for different use cases, but trying to build one that has the best of all worlds within a single workflow seems overambitious to me. Especially if the idea would be to just turn existing modeling tools into nodes, I don't expect that to result in a good workflow.

In #74967#1042990, @3di wrote:
Once each edit mode operator has a node counterpart, it will be very easy to auto-generate the node tree, because it is just a case of populating the node with the same parameters as the redo menu and then wiring its input to the previous node's output. There will also need to be a transform node with a selection property in order to capture user transformations (move, rotate, scale).

I also see this as rather straightforward, at least in theory, though some particulars are not obvious. What comes to mind is that nodes should be able to operate on a procedural selection of mesh elements rather than an explicit list of elements (as is done currently in edit mode). As noted above by several people, such means could include selecting with a volume, or through another rule involving point coordinates, normals, and so on.
I guess that means pretty much every operator would have to be reworked to reflect those additional parameters. Obviously these indirect selections wouldn't work that well through the viewport; they would have to live inside the node editor, unless we have some sort of "embedded node view" within the viewport, similar to how the last-operator panel pops up in the bottom left corner, but that's just a UI concern.

Then there's the notion of "tagging" generated geometry (i.e. including it in a group, or giving it an attribute) for the operations that support it, such as Bevel. This adds another round of parameters.

@brecht I could be mistaken, but I don't see users needing to generate a node tree in the background for the majority of modelling jobs. The way I see it, such a feature would only be relevant when creating assets that are meant to be procedural, to be packed into a node and have variations and parameters: furniture, buildings or houses, vegetation, and anything needed in great quantities. I guess we have to exclude characters, since we'd have to auto-generate a fitting rig as well and that seems out of scope (at least I never heard anyone mention it as a target).
What does this mean? That once the user decides to activate "history" (i.e. background node creation), the generated node tree must indeed be readable and tweakable. I think most operators we use in edit mode today, once converted to individual nodes, would tick those boxes. The issue I mentioned above (how to let the user make procedural selections easily) still stands, however, and requires going out of the viewport and into the node editor to write down a rule for that selection.
Most procedural assets will need rule-based selections, because once the vertex count changes, vertex indices change as a consequence and explicit selections can no longer be relied on. So inevitably the user will need to do some back-and-forth between viewport and node editor, unless one is cleverly integrated into the other. (A small example of such a rule is sketched after this comment.)

Just rambling and, hopefully, food for thought.
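
As a minimal sketch of the kind of rule-based, index-independent selection described above, here is what such a rule looks like against today's Python API (bpy) rather than as a node. The object name "Cube" and the 30-degree threshold are illustrative only.

```python
# Minimal sketch: an index-independent, rule-based "selection" of vertices,
# here "normal points up within 30 degrees", expressed with today's bpy API.
import math
import bpy
from mathutils import Vector

obj = bpy.data.objects["Cube"]          # assumed existing mesh object
mesh = obj.data
up = Vector((0.0, 0.0, 1.0))
cos_limit = math.cos(math.radians(30.0))

selected = {v.index for v in mesh.vertices if v.normal.dot(up) >= cos_limit}

# Because the rule is re-evaluated on whatever geometry arrives upstream,
# it keeps working even when vertex counts and indices change.
for v in mesh.vertices:
    v.select = v.index in selected
```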

In #74967#1043067, @brecht wrote:
It's easy to miss trade-offs that come with a modeling system that has to support both procedural and interactive approaches. A good interactive tool does not necessarily make for a good procedural node, and vice versa. And just because you could define nodes for every tool and generate a resulting node graph, does not mean that node graph will end up being useful, rather than an unorganized mess. Or you might make the UX of a tool worse so that it can be generated as a procedural node.

There are systems that are really good at interactive modeling, at procedural modeling, and at interactive procedural modeling. But all 3 types of systems come with trade-offs. They are all useful in their own right for different use cases, but trying to build one that has the best of all worlds within a single workflow seems overambitious to me.

I'd disagree; it's a necessity. Otherwise you can either work fast with no ability to parametrically change or animate what you've done, or you can work very slowly by creating geometry with nodes and then reap the extra functionality of being able to change/animate all parameters, or even throw in new nodes at any point within the tree.

I don't understand why the UX of an existing tool would need to be altered in order to be represented by a node. Could you give an example of an existing edit mode operator that would need additional parameters added to it in order for it to be replicated as a node? The only additional information a node would need over the redo panel is the selected verts/edges/faces, as Adrien mentioned, but this wouldn't need to be represented in the redo panel or the tool options; it would just be obtained from the selection and stored in the node. (There's no downside to having additional parameters in the node that aren't among the interactive tool's parameters. In fact it's useful to have access to the selection after the fact: it allows changing which verts/edges/faces are affected, either by manually changing the selection or by feeding in a selection node with options for automatic selection based on attributes, face angles, etc.)

One approach to consider would be the same as the other node-based software that mixes procedural with interactive: all non-operator activity in SOPs can be recorded into a single node, whilst the operations get their own nodes. So you end up with a manageable node tree that still has all of the important nodes, rather than millions of nodes from recording each user movement of verts, for example.

I strongly recommend someone from the dev team become more familiar with how the competition handles this automatic node tree creation process; otherwise we're starting out with a hobbled system from the get-go.

In fact this is what's so exciting about Blender implementing nodes: Blender's interactive modelling is way better than the competition's, so the ability to work lightning fast AND parametrically change the result will set Blender above the competition (for modelling at least).

In #74967#1043080, @3di wrote:
I don't understand why the UX of an existing tool would need to be altered in order to be represented by a node. Could you give an example of an existing edit mode operator that would need additional parameters added to it in order for it to be replicated as a node? The only additional information a node would need over the redo panel is the selected verts/edges/faces, as Adrien mentioned, but this wouldn't need to be represented in the redo panel or the tool options; it would just be obtained from the selection and stored in the node. (There's no downside to having additional parameters in the node that aren't among the interactive tool's parameters. In fact it's useful to have access to the selection after the fact: it allows changing which verts/edges/faces are affected, either by manually changing the selection or by feeding in a selection node with options for automatic selection based on attributes, face angles, etc.)

I thought of some other operators which wouldn't be immediately translatable into nodes "as is": any operator that relies on view orientation, such as Knife or Knife Project, would have to have that exposed first (the view vector, namely), and even then I'm not sure how well it would work: you can rotate all around an object while cutting it, so how could that ever be procedural?
Additionally, tools that rely on cursor position, such as vertex or edge slide, would need another way of determining the sliding direction (world space? tangent space?), so from a bird's-eye view a lot of operators would need a fair bit of refactoring.
Admittedly the knife tool/operator does not fit well in a procedural modelling workflow, so maybe this is a bad example... I guess such operators could be "left out in translation".

In any case, I agree with you that this should be considered from the start. Hopefully we're making some progress already in terms of determining what the workflow would look like; not sure how much of this is helpful from an architectural point of view...?

In #74967#1043080, @3di wrote:
One approach to be considered would be the same as the OTHER node based software which mixes procedural with interactive, all non operator activity in SOPS can be recorded into a single node, whilst the operations get their own nodes.

Could you please explain further how this would work, ideally without too direct a reference to the software in question? (And definitely no pictures.) I'm curious.

Yep, any activity such as manually sliding vertices should be consolidated into a single 'user edit' node, which would be un-editable after the fact. I think Knife Project should be fine as a node; the node would just need to store the viewport camera orientation, which would also be handy for manipulation after the fact. This would have no impact on the interactive use of the tool, since the viewport camera orientation is already defined by the user rather than by a parameter.

I worry a bit that the entire system is being devised without sufficient knowledge of established node-based workflows.

I'm quite familiar with other software.

While it is possible to animate or parametrize a model created with mesh edit mode type operations in some other software, from what I've seen it either doesn't work very well, or the modelling UX is more rigid than Blender's. It works well as a way to tweak parameters in an undo history, and that could be supported in Blender too. But that doesn't require turning tools into nodes.

And I agree that you could turn every Blender modeling tool into a node. My point is that the resulting node graph would not be that great.

Yes, the interactive viewport aspect is a bit clunky in the other software, but that's not because of its ability to generate counterpart nodes on the fly; it's just because it's not as well designed as Blender's interactive modelling workflow, which is why utilising Blender's speedy interactive mode to generate the nodes is so important in my opinion. I'm not sure who the main driving force was behind Blender's modelling workflow/interaction design, but it's pure genius. Incidentally, the problem you describe of a messy node graph when auto-generating nodes used to be a problem in the other software too, until recently, when it began consolidating all non-'operator' operations into a single 'user edit' node (or some name of that ilk), or into multiple user edit nodes separated by operator nodes if the user did some manual moving of vertices in between parameter-driven operations.

The main beauty of the combined workflow is the ability to remove or mute individual operations/nodes, the ability to add nodes/operations anywhere within the undo history/node tree, and, probably even more importantly, the ability to package and share a node tree with other users or colleagues in a way that only exposes selected parameters on a front-end node. People can then either use the tree as intended by just manipulating the front-end parameters, dive into the tree to create new functionality, or use it as a starting point for another node. Will the undo history approach you mentioned also allow for the insertion or removal of operators, and allow for collaboratively created community tools, similar to HDAs?

Another benefit is smaller file size, because only the steps to make the geometry need to be stored, rather than the potentially massive resulting geometry. The user should have the option to manually store the geometry at various points in the tree, either by freezing a node (auto cache to RAM), dropping down a file cache node (to avoid time-consuming recalculations on file open), or a RAM cache node (to avoid recalculation of upstream nodes while editing downstream nodes). A rough sketch of the file-cache idea follows below.

If it's not the intention to create node versions of edit mode operators, does this mean it's not planned to allow for manual creation of node trees that perform edit mode operations either?
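
Blender has no such file cache node for arbitrary node results today; the following is only a rough sketch of the idea under assumed names, keyed on a hash of the upstream parameters so the cache invalidates whenever anything upstream changes.

```python
# Rough sketch of a "file cache node": recompute only when upstream parameters change.
# Nothing here is an existing Blender API; it just illustrates the idea.
import hashlib
import json
import pickle
from pathlib import Path

CACHE_DIR = Path("/tmp/geo_cache")          # illustrative location


def cached_evaluate(upstream_params: dict, compute):
    """Return compute()'s result, reusing a disk cache keyed by the upstream parameters."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha1(json.dumps(upstream_params, sort_keys=True).encode()).hexdigest()
    path = CACHE_DIR / f"{key}.pkl"
    if path.exists():
        return pickle.loads(path.read_bytes())   # hit: skip the expensive recompute
    result = compute()                            # miss: compute and store
    path.write_bytes(pickle.dumps(result))
    return result


# Usage: an expensive upstream result (here just a stand-in list of points).
points = cached_evaluate({"count": 10000, "seed": 7},
                         lambda: [(i * 0.01, 0.0, 0.0) for i in range(10000)])
```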

I understand the idea, but I don't believe a reusable/tweakable asset or even a reduced file size is what you will actually get when you turn Blender edit mode operations into nodes. In some specific cases, if you're careful to use a small subset of tools in specific ways, then maybe. But in general there would be too many operations in the graph that break when tweaking earlier operations, or when reordering or deleting them. Collapsing operations is fine but doesn't solve that problem.

The current #geometry_nodes project is focused on use cases like scattering, VFX and the type of functionality provided by existing modifiers. Turning every edit mode operator into a node is not important for those use cases.

Generally the only upstream nodes that can break the tree are nodes that add or remove vertices, resulting in a change of vertex order, and even that is avoided if later nodes rely on selections/groups generated from attributes such as face angle. Thanks for the info 👍

Added subscriber: @himanshu662000

Added subscriber: @Hto-Ya

Added subscriber: @VladimirTurcan

Added subscriber: @Keavon

I just read through the proposal and love everything about it, except the part that talks about a Cache node. That should happen automatically (each node caches its value, and that value gets reused unless an upstream change occurs). It shouldn't require any user intervention, since caches always suck.

I think it’s great that we can expose any parameter to the modifier stack front end. The current implementation of physically wiring a node’s input socket back to the group socket is going to lead to unnecessarily messy node trees, though. A better solution would be to right-click a socket and choose “Expose” from a context menu. This would modify the look of the socket to indicate it’s exposed, rather than clutter the tree with wires that are surplus to requirements. Check out Houdini VOPs to see what I mean.

In #74967#1049718, @Keavon wrote:
I just read through the proposal and love everything about it, except the part that talks about a Cache node. That should happen automatically (each node caches its value, and that value gets reused unless an upstream change occurs). It shouldn't require any user intervention, since caches always suck.

I think this should be done implicitly as a local optimization while editing or working on recent files, if the user so chooses. However, explicit caches are very much necessary if this concept expands into the realm of user-created dependency graphs and multi-machine render/simulation farms for heavy production tasks.
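
For illustration, here is a toy sketch of the implicit variant discussed above: each node keeps its last result and a dirty flag, and editing a node marks everything downstream dirty. This is not how Blender's dependency graph is implemented; it only shows the principle.

```python
# Illustrative sketch of implicit per-node caching with downstream invalidation.


class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)
        self.outputs = []           # nodes that read from this one
        self._cache = None
        self._dirty = True
        for src in self.inputs:
            src.outputs.append(self)

    def mark_dirty(self):
        """Invalidate this node and everything downstream of it."""
        if not self._dirty:
            return
        self._dirty = True
        for dst in self.outputs:
            dst.mark_dirty()

    def evaluate(self):
        """Reuse the cached value unless an upstream change marked this node dirty."""
        if self._dirty:
            self._cache = self.func(*[src.evaluate() for src in self.inputs])
            self._dirty = False
        return self._cache


# Usage: tweaking the grid invalidates displace; untouched branches keep their caches.
grid = Node("grid", lambda: list(range(4)))
displace = Node("displace", lambda pts: [p + 1 for p in pts], inputs=[grid])
print(displace.evaluate())   # computes both nodes
grid._dirty = True           # simulate the user editing the grid node
grid.mark_dirty = grid.mark_dirty  # (no-op; shown only to keep the example short)
print(displace.evaluate())   # grid is dirty, so displace is recomputed on demand
```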

Added subscriber: @ImmanuelCalvinHerchenbach

So far I really like the higher-level design aspects; however, being a rather technically minded generative artist, I'm missing some lower-level things:

  • Creating geometry from scratch by defining vertices, edges, faces and their attributes (a minimal scripting sketch of this follows after this comment)
  • 4D vector and matrix attributes/noodle types

The latter could of course be faked with multiple attributes/noodles and custom node groups, but having dedicated matrix and 4D vector types would make the nodes much more powerful and multiple transformations much less of a noodle mess.

Regarding the Cache node, I think the concept could be extended to a Solver/Feedback node that feeds the result of the last (simulation) frame back into the node network as an input.
Alternatively this could be realized with FileIn/FileOut nodes that read and write disk caches respectively, where the FileIn node has a frame offset.

As geometry node setups can become rather complex, a debugging view that can show geometry information and/or attribute tables for a specified node would be very handy.
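
For the "geometry from scratch" point above: the existing Python API can already do this, which gives a feel for what a low-level "mesh from components" node would need as inputs. The object and mesh names here are arbitrary.

```python
# Building a mesh from explicit vertices and faces with the existing bpy API.
# A hypothetical "Mesh from Components" node would take the same three lists as inputs.
import bpy

verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = []                      # the face below implies its edges
faces = [(0, 1, 2, 3)]          # one quad

mesh = bpy.data.meshes.new("ScratchQuad")
mesh.from_pydata(verts, edges, faces)
mesh.update()

obj = bpy.data.objects.new("ScratchQuad", mesh)
bpy.context.collection.objects.link(obj)
```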

Added subscriber: @Branskugel

If there are no short-term plans for edit mode operator synchronization, could a generic Edit node at least be added which records all changes from edit mode manipulations? This would simulate the Edit Poly modifier from 3ds Max and would greatly improve non-destructive modelling in Blender.
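
No such node exists today; as a toy sketch only, such an Edit record could store per-vertex position offsets keyed by index, which also shows why it breaks once the vertex count changes upstream (the limitation discussed earlier in this thread). The helper names are hypothetical.

```python
# Toy sketch of a generic "Edit" record: per-vertex position offsets keyed by index.
# Illustrative only; hypothetical helpers, not a Blender API.
from mathutils import Vector


def record_edit(before, after):
    """Store only the vertices the user actually moved, as index -> offset."""
    return {i: after[i] - before[i] for i in range(len(before)) if after[i] != before[i]}


def apply_edit(positions, edit):
    """Re-apply the stored offsets; refuse if indices no longer fit the incoming mesh."""
    if edit and max(edit) >= len(positions):
        raise ValueError("vertex count changed upstream; recorded edit no longer applies")
    return [p + edit.get(i, Vector((0, 0, 0))) for i, p in enumerate(positions)]


before = [Vector((0, 0, 0)), Vector((1, 0, 0))]
after = [Vector((0, 0, 0)), Vector((1, 0, 0.5))]   # the user dragged vertex 1 upward
edit = record_edit(before, after)
print(apply_edit(before, edit))
```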

Added subscriber: @the_avg_guy

Added subscriber: @Sparazza

Added subscriber: @lictex_1

Added subscriber: @bao007fei

Added subscriber: @gianni

Hello, about the geometry nodes: is there a way to add objects at given coordinates, for example by reading them from a text file? This is very useful for positioning objects such as trees correctly; being able to apply those transformations makes the scene more realistic. Thanks.
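
There is no built-in "read text file" node; one workaround today is to build a point mesh from the file with a small script and then instance trees onto it with an instancing node setup. The file path and the "x y z per line" format below are assumptions.

```python
# Reading positions from a plain text file (assumed format: one "x y z" per line)
# and turning them into a point mesh that a node tree can instance onto.
import bpy

with open("/path/to/tree_positions.txt") as f:            # hypothetical file
    coords = [tuple(map(float, line.split())) for line in f if line.strip()]

mesh = bpy.data.meshes.new("TreePoints")
mesh.from_pydata(coords, [], [])                            # vertices only, no edges/faces
mesh.update()

points_obj = bpy.data.objects.new("TreePoints", mesh)
bpy.context.collection.objects.link(points_obj)
# points_obj can now serve as the points input of an instancing node setup.
```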

Added subscriber: @satishgoda1

For user feedback on current geometry nodes, please use this topic:
https://devtalk.blender.org/t/geometry-nodes/16108

Changed status from 'Confirmed' to: 'Resolved'

Hans Goudey self-assigned this 2021-03-24 22:43:29 +01:00

I'm going to take the liberty of closing this task, since I think it's basically all covered by newer design and implementation tasks:

  • #86839 (Converting Modifiers to Nodes) and subtasks
  • #83239 (Find a solution to preview of a part of the nodetree)
  • #85652 (Implement socket inspection and links values)
  • etc.

Added subscriber: @JacobMerrill-1

The new direction geometry nodes have taken, away from attributes, has been amazing.

I am worried, though, that some basic details are being overlooked:

  1. A Matrix input on the Object Info node (bpy.data.objects['Cube'].matrix_world)

  2. A Matrix Transform node (like bpy.data.objects['Cube'].matrix_world @ mathutils.Vector), with an inversion switch to bring a vector into local space

  3. A Create Matrix node [rotate, scale, transform, and maybe shear later?]

  • If anyone has a good reason why these nodes would not be added, I would definitely like to know. (A quick mathutils sketch of these operations follows below.)
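
The behaviour these proposed nodes describe is already available to scripts via mathutils, which is a reasonable reference for what the node versions would need to compute. The object name is illustrative.

```python
# What the proposed matrix nodes would compute, expressed with mathutils.
import bpy
from math import radians
from mathutils import Matrix, Vector

obj = bpy.data.objects["Cube"]                    # illustrative object
world = obj.matrix_world                          # 1. matrix output of an Object Info node

local_pt = Vector((0.5, 0.0, 0.0))
world_pt = world @ local_pt                       # 2. "Matrix Transform": local -> world
back_to_local = world.inverted() @ world_pt       #    inversion switch: world -> local

# 3. "Create Matrix": compose translate / rotate / scale into one transform.
compose = (Matrix.Translation((1.0, 0.0, 0.0))
           @ Matrix.Rotation(radians(45.0), 4, 'Z')
           @ Matrix.Diagonal((2.0, 2.0, 2.0, 1.0)))
print(compose @ Vector((1.0, 0.0, 0.0)))
```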

Added subscriber: @dodo-2

This comment was removed by @dodo-2

This comment was removed by @dodo-2

Added subscriber: @Cigitia

Removed subscriber: @Cigitia

Added subscriber: @yeshenghuohuo
