Everything Nodes UX #67088

Closed
opened 2019-07-16 23:26:53 +02:00 by William Reynish · 120 comments

The forthcoming ‘Everything Nodes’ project is bound to have large implications for how we use Blender for many common tasks. The purpose of this document is to add more specificity to this area and to serve as a starting point for further design work.

We already have a series of technical design docs here: https://wiki.blender.org/wiki/Source/Nodes by @JacquesLucke

This design document is meant as a counterpart, focusing instead on the end-user experience and how the various parts fit together.


Goals

The overall aim of the Everything Nodes system is twofold: to add tremendous amounts of new low-level power and flexibility, and to add high-level simplicity and ease of use.

Nodes will allow Blender to become fully procedural, meaning that artists will be able to build scenes that are orders of magnitude more complex and advanced in far less time, with full non-linear control. The object-oriented nature of node systems means that users can use, re-use and share node systems and node groups without having to start from scratch.

We should aim to make this system a foundational, integrated one, core to many Blender features and workflows, rather than something tacked on at the side with limited scope and integration.

We should also aim to fully replace the previous systems, such as the old modifiers and particles systems. Otherwise we end up with multiple competing systems which will be hard to maintain for developers and confusing for users.

Additionally, nodes, assets and properties should work together as one integrated, holistic system.


Node Systems

Currently, we use nodes for materials, compositing and textures (deprecated).

In addition to these, the Everything Nodes concept would add nodes for many more areas:

  • Modifiers & procedural modeling
  • Particles & hair
  • Physics & Simulations
  • Constraints & kinematics

Open Questions:

  • ? How do we define the borders between these systems?
  • ? Modifiers, particles, hair and materials can all be attached to objects, but by definition this is not the case for constraints; these need to operate on a higher level. How and where does this work exactly?
  • ? How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for example.
  • ? Particles currently integrate, to a degree, with the modifier stack. If these things live in separate node trees, how does this work exactly? Are particles a separate object type that then references emitters?

Modifiers

Node-based modifiers may seem like a curiosity until you realize how powerful they can be. The old modifier stack works well enough for simple cases where you string a few modifiers together, but as soon as you want to do more complex generative procedural modeling, the limitations of the stack become apparent.

Node-based modifiers will allow:

  • Much more powerful use of textures to drive any parameter
  • More flexible trees, rather than just a stack (this is needed for generative modeling)
  • Much more powerful procedural animation (see the Animation Nodes add-on, for example)
  • etc.

Open Questions:

  • ? Currently, the modifier stack lets users toggle each modifier separately for the viewport and the render result. Nodes don't have this feature; it could be added, but how? Via separate node tree outputs, or bypass toggles on each node? (A sketch of the current flags follows below.)
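
For reference, here is a minimal bpy sketch of the current per-modifier flags that any node-based replacement would need to mirror; it assumes an active object:

```python
import bpy

# Today's per-modifier visibility flags. A node-based system would need an
# equivalent, e.g. per-node bypass toggles or separate tree outputs.
ob = bpy.context.object
mod = ob.modifiers.new(name="Subsurf", type='SUBSURF')
mod.show_viewport = True   # evaluate in the 3D Viewport
mod.show_render = False    # skip at render time
```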

Parametric Modeling

Currently in Blender, as soon as you create a primitive, its settings are immediately baked in and cannot be changed later. Node-based modifiers have the potential to finally address this issue. Here’s how:

  • When the user adds any primitive (e.g. UV Sphere, Cone, Cylinder), they see the usual operator controls for adjusting the settings and values
  • However, rather than baking those settings into a static mesh, the controls simply modify the settings inside the modifier node tree
  • These settings can be changed at any time, and you can build further modifiers on top of this node; for example, you can use such primitives as inputs for boolean operations and still change the number of segments at any point
  • If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by entering Edit Mode, which will automatically prompt the user to freeze the mesh (a hypothetical scripting sketch of this setup follows below)

Screenshot 2019-07-17 at 10.08.36.png

Screenshot 2019-07-16 at 23.19.18.png
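
A purely hypothetical scripting sketch of such a setup; none of the node or tree type names below exist in bpy today, they only illustrate the proposed behavior:

```python
import bpy

# Hypothetical: a 'ModifierNodeTree' holding a live UV Sphere primitive node.
tree = bpy.data.node_groups.new("ParametricSphere", 'ModifierNodeTree')

sphere = tree.nodes.new('ModifierNodeUVSphere')    # hypothetical node type
sphere.inputs["Segments"].default_value = 32       # stays editable forever

boolean = tree.nodes.new('ModifierNodeBoolean')    # hypothetical node type
tree.links.new(sphere.outputs["Mesh"], boolean.inputs["Mesh A"])

# Later: bump the segment count; the boolean result simply re-evaluates.
sphere.inputs["Segments"].default_value = 64
```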

Properties Editor & high level control

Nodes allow for far more complex control and power. But how can we package this power in a way that stays simple and easy to use?

For materials, we already mirror the node tree inside the Properties editor, but in my estimation this works poorly in anything other than the very simplest of cases. We can do better.

As it turns out, we have actually already solved this: with Group nodes we already include a beautifully simple and powerful method of packaging low-level complexity behind a higher-level interface. This system allows users to hide away lots of complexity and expose only a few useful parameters. This general concept can be expanded by making it so that entire node trees, not just Group nodes, can expose a small subset of parameters in the Properties Editor.

The nice thing about this solution is that casual users don’t need to open the node editors at all; they can just tweak the exposed high-level inputs.

Screenshot 2019-07-16 at 22.20.50.png
The node tree has defined a series of high level inputs

These will be exposed in the Properties Editor, like so:

Screenshot 2019-07-16 at 23.07.10.png

Material nodes, with exposed parameters in the Properties Editor:
Screenshot 2019-07-16 at 23.20.31.png

Modifier nodes, with exposed parameters in the Properties Editor:
Screenshot 2019-07-16 at 23.21.07.png

Particle nodes, with exposed parameters in the Properties Editor:
Screenshot 2019-07-16 at 23.23.31.png

This approach makes even more sense if we provide a series of starting points. This is where assets come in:

Open question: ? Would we then remove the old nodes-in-properties system from the Material Properties?
I think yes, as the system above is cleaner and scales much better, although in theory we could keep both views inside the Material Properties.
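
As a footnote to this section: the group-input mechanism this proposal builds on is already scriptable today. A minimal sketch (the group, material and socket names are made up):

```python
import bpy

# Build a node group that exposes one high-level input, then use it in a
# material. This is the existing mechanism the proposal would generalize
# to entire node trees.
group = bpy.data.node_groups.new("HighLevelControls", 'ShaderNodeTree')
rust = group.inputs.new('NodeSocketFloat', "Rustiness")  # exposed parameter
rust.min_value, rust.max_value = 0.0, 1.0
group.nodes.new('NodeGroupInput')
group.nodes.new('NodeGroupOutput')

mat = bpy.data.materials.new("RustyMetal")
mat.use_nodes = True
node = mat.node_tree.nodes.new('ShaderNodeGroup')
node.node_tree = group
node.inputs["Rustiness"].default_value = 0.8  # what a casual user would tweak
```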


Assets

Assets can play an important role in the Everything Nodes system. With node systems exposing a smaller subset of parameters in the Properties, it makes a lot more sense to supply users with many more starting points. The idea is that casual users won’t have to dive into the nodes: they can just add materials, particles, modifiers etc. and adjust the exposed parameters. Users only have to delve into the node systems if they wish to deviate from what the available assets allow.

For more on assets, see #54642 (Asset Project: User Interface)

Workflow examples:

  • The user opens the Asset Browser and navigates to the screw asset. The user drags this asset into the scene. The screw itself is generated by a node system, and has a small set of high-level user-facing controls exposed in the Properties (head type, length, etc)
  • The user may want a bent screw, so they open up the nodes and add a Bend deformer node at the end of the node tree
  • The user browses the Asset Browser and locates the Rusty Metal material. The user drags this over the screw to apply it. This material has a few parameters exposed (rustiness, cavity dirt, metal type, etc)

Particle assets:
Screenshot 2019-07-16 at 23.19.56.png

Mesh/modifier assets:
Screenshot 2019-07-16 at 23.20.17.png


Gizmos

While nodes allow for a fully procedural workflow, they are also more technical and disconnected from directly manipulating items in the 3D View. We can address this by letting nodes spawn interactive gizmos in the viewport.

Screenshot 2019-07-17 at 10.21.50.png

We can implement this by adding a series of special built-in 'gizmo nodes' that can be used as inputs for the node tree; examples include the Location Gizmo and the Direction Gizmo. A toggle on these nodes can show or hide the gizmo in the viewport.
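
Blender's existing Gizmo API could plausibly back this. A rough sketch, assuming a hypothetical 'Modifiers' node tree containing a 'Location Gizmo' node with a 'Location' vector input (those names are invented):

```python
import bpy
from bpy.types import GizmoGroup

class NODE_GGT_location_gizmo(GizmoGroup):
    """Sketch: expose a node's vector input as a viewport move gizmo."""
    bl_idname = "NODE_GGT_location_gizmo"
    bl_label = "Node Location Gizmo"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'PERSISTENT'}

    @classmethod
    def poll(cls, context):
        return context.object is not None

    def setup(self, context):
        gz = self.gizmos.new("GIZMO_GT_move_3d")
        # Hypothetical target: the 'Location' socket of a 'Location Gizmo'
        # node in a (future) modifier node tree.
        node = bpy.data.node_groups["Modifiers"].nodes["Location Gizmo"]
        gz.target_set_prop("offset", node.inputs["Location"], "default_value")
        gz.color = 0.2, 0.6, 1.0

bpy.utils.register_class(NODE_GGT_location_gizmo)
```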


Node Editor

With a higher reliance on nodes, it makes sense to make a few key improvements to the node editors themselves, such as the following:

Compact mode
Node trees can easily become quite messy, so a toggle for a clean, compact node layout can keep things tidy:
Screenshot 2019-07-17 at 09.19.23.png
In this mode, we can also make re-ordering nodes much easier by simply allowing nodes to be dragged, which will automatically re-order and re-connect them, like so:
Screenshot 2019-07-17 at 09.22.47.png

Connect pop-up
Currently it can be quite a maze to figure out which nodes fit together, and you have to dive into long menus to find the relevant ones. We can make this much simpler by making it so that dragging from an input spawns a searchable pop-up containing only the relevant node types. This way you don't have to guess which node types fit the current input socket: they will be right there:

Screenshot 2019-07-17 at 09.25.58.png
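
A sketch of how such a pop-up could work with the existing operator API; the socket-to-node mapping is invented for illustration, and linking the new node to the dragged socket is left out:

```python
import bpy

# Invented mapping from a socket type to node types that can feed it.
RELEVANT_NODES = {
    'VALUE': ["ShaderNodeValue", "ShaderNodeMath", "ShaderNodeTexNoise"],
    'RGBA': ["ShaderNodeRGB", "ShaderNodeMixRGB", "ShaderNodeTexImage"],
}

class NODE_OT_connect_popup(bpy.types.Operator):
    """Searchable pop-up listing only node types that fit the dragged socket."""
    bl_idname = "node.connect_popup"
    bl_label = "Connect Node"
    bl_property = "node_type"  # the enum shown by the search pop-up

    socket_type: bpy.props.StringProperty(default='VALUE')
    node_type: bpy.props.EnumProperty(items=lambda self, context: [
        (idname, idname, "") for idname in RELEVANT_NODES.get(self.socket_type, [])])

    def invoke(self, context, event):
        context.window_manager.invoke_search_popup(self)
        return {'RUNNING_MODAL'}

    def execute(self, context):
        context.space_data.edit_tree.nodes.new(self.node_type)
        return {'FINISHED'}

bpy.utils.register_class(NODE_OT_connect_popup)
```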


Recap

The node system can make Blender both vastly more powerful and flexible, and much easier to use, if we combine nodes with the asset system, high-level controls in the Properties editor and viewport gizmos, and introduce a few key improvements to the node editors themselves.

  • We can add high level controls inside the Properties Editor, using a system similar to Group Nodes
  • This works best if we have a built-in assets system so users don't have to build all this from scratch
  • This in turn means that in many simple cases users don't even need to touch the nodes, even though they are using them indirectly by adjusting the exposed values inside the Properties editor
  • Gizmos can add visual interactive controls in the viewport, to more directly control nodes
  • A few key improvements to the node editors can go a long way to make using nodes easier

Great proposal, this will be a huge game changer for Blender.

From what I read in the linked wiki, I think one important feature to highlight is what is described as "Mesh Groups", and hopefully also "Curve Groups" for Bezier curve objects.

They would essentially be an upgrade to the current Vertex Groups feature, and would be a valuable tool for procedural modelling, especially by providing some form of "selection filtering" if nodes/modifiers are capable of taking them as input.

They could also provide a great opportunity for cleanup, essentially replacing all the currently excessive "per edge data layers" like Edge Creases, Bevel Weight, UV Seams, Sharp Edge, among others with a generic layer system supporting arbitrary data. The ability to manipulate all these with nodes is also important, not only in terms of inclusion/exclusion operations, but also assigning arbitrary values, say for example to the resulting vertex from an extrude operation.
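
For comparison, a minimal sketch of today's Vertex Groups, which "Mesh Groups" would generalize to arbitrary per-element data; it assumes an active mesh object:

```python
import bpy

# Today's vertex groups store per-vertex weights only. "Mesh Groups" would
# extend the idea to edges, faces, creases, bevel weights, and so on.
ob = bpy.context.object
vg = ob.vertex_groups.new(name="SelectionFilter")
vg.add(list(range(10)), 1.0, 'REPLACE')  # weight 1.0 on the first 10 verts
```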

> If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by entering Edit Mode, which will automatically prompt the user to freeze the mesh.

That doesn't have to be the case, as any operator called from within Edit Mode could add a corresponding node to the tree instead, keeping the process non-destructive. It's also important to be able to access edit mode to just select components (points, faces) to work on, or to simply transform a selection. Such an operation could be stored in a 'transform' node.

> We can implement this by adding a series of special built-in 'gizmo nodes' that can be used as inputs for the node tree; examples include the Location Gizmo and the Direction Gizmo. A toggle on these nodes can show or hide the gizmo in the viewport.

I love the idea of having gizmos attached to some nodes - after all, you'd expect to see the extrude gizmo when selecting the extrude operator node within the node editor - however I'm not sure just keeping those 'gizmo toggles' wherever there are relevant node sockets isn't simpler - say a spin operator node has an origin vector input - the toggle for the corresponding gizmo could just be placed to the side of the value field / socket. These would probably be mutually exclusive too, to not end up with a cloud of gizmos floating around in the viewport ? Or maybe gizmos could be spawned simply by selecting the node ?

However you do not touch on the subject of whether or not a node tree is attached to an object, and if so, is the object type (hence data structure) fixed? Or could the object type be dynamic, dependent on the output node type? (mesh output, voxel output, curve output...) Asking this because going from bmesh to volume, to particles, etc. could be very powerful.

Great points all over, and I dig what you suggest UI-wise to enhance the nodes themselves. I'd add to that list the ability to have a real interface inside of nodes, such as panels, dividers, etc. of course that would be a nice touch, but nothing strictly necessary.

> In #67088#725723, @hadrien wrote:
> That doesn't have to be the case, as any operator called from within Edit Mode could add a corresponding node to the tree instead, keeping the process non-destructive. It's also important to be able to access edit mode to just select components (points, faces) to work on, or to simply transform a selection. Such an operation could be stored in a 'transform' node.

Yes, ideally Edit Mode could still be used for non-destructive modeling. I expect that would not be easy to do in practice.

And yes, you should indeed be able to enter Edit Mode for objects with modifier nodes, although I don’t expect it’s easy to make it so you can actually select and interact with generated mesh data this way. Probably this aspect would work somewhat like the current modifiers, although a more advanced approach would be to integrate the destructive and non-destructive workflows more.

In theory this could be done, so that any mesh editing operation was automatically mapped to a node operation.

How does this tie with the future of rigging? Currently it's very hard to have a base set of deforming bones and swap the rig around it, similar to what Source Filmmaker does. You can't swap Object data because the dropdown doesn't allow for that, nor can you do it swapping Armature data, because constraints and other rigging values are stored on an object/bone level, not on the armature. You have to duplicate the Armature object, along with all its children objects if you don't want to rebind each object to it. Could rigs be their own datablock or something like that? I heard stuff about rig compilation, would the new nodes even allow for that?

Constraint nodes will most likely have to live on a higher level than the bone/object level. With Blender’s architecture it’s not 100% clear how to do this. One solution is to store them at the armature level, although then they cannot be supported by plain objects, which doesn’t seem acceptable. The other obvious solution is to store them somehow in Collections, which can contain objects and are portable. But what then happens to objects that live in multiple collections is not obvious.

It would also be great to have Python-defined nodes, ones that run like operators.

Indeed, though there may be issues with the single-threaded nature of Python. Node trees should be as fast as possible, using multiple cores and even things like JIT compilation and GPU execution. Python nodes may conflict with this.

@JacquesLucke perhaps you could expand on this?

> In #67088#726249, @WilliamReynish wrote:
> Indeed, though there may be issues with the single-threaded nature of Python. Node trees should be as fast as possible, using multiple cores and even things like JIT compilation and GPU execution. Python nodes may conflict with this.
>
> @JacquesLucke perhaps you could expand on this?

You said the main thing already, CPython is inherently single threaded due to the GIL (global interpreter lock). Nevertheless, there are many cases in which it can be very useful to use Python in nodes. I think it is likely that we will be able to use Python in certain kinds of nodes at some point, but it is not something we focus on right now.
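
For context, Python-defined node types are already possible in custom node tree types (this is how add-ons like Animation Nodes work); the GIL concern is about evaluating such nodes inside a fast native system. A minimal sketch of a registrable Python node:

```python
import bpy

class SimpleCustomNode(bpy.types.Node):
    """A Python-defined node. Today it can only live in custom node tree
    types, and its evaluation is whatever (GIL-bound) Python the add-on runs."""
    bl_idname = "CustomNodeSimple"
    bl_label = "Simple Custom Node"

    def init(self, context):
        self.inputs.new('NodeSocketFloat', "Value")
        self.outputs.new('NodeSocketFloat', "Result")

bpy.utils.register_class(SimpleCustomNode)
```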

I'm hoping there will be two types of bone nodes.

One node graph on the armature level, for constructing the armature (i.e. where you can do things that you would do in Edit Mode), and one on the pose bone level: a node graph for each pose bone to do things that you would currently do with constraints. (I think the node incarnation of bone constraints should be called something else, like Solver nodes, since they wouldn't really be constraining anything; they would just output some information that you can choose to use however you want.)

Sorry if this is not on topic though.

This is exciting stuff! And I love the idea of the relevant nodes spawning as gizmos. Really looking forward to seeing what the idea looks like practically. I'm for any and all ability to control the nodes as visually as possible.

I recently did a project using Animation Nodes, and while it's powerful, it was taxing to experiment with what I thought would be simple changes.
image.png
E.g. I wanted to offset the animation for each individual word, but figuring out how to do that and setting it up was a huge hassle. However, this would be quite easy and straightforward to do with a Dope Sheet.
Of course, I do admit it could be because I don't quite know how to use all the nodes (but there is a lot!), but I really hope the new node system for Blender could be far more intuitive and easy to make simple animations and adjustments.

Node systems are powerful, but artists like me often find themselves lost in the logic of their own node setups as soon as those begin to become a little complicated. That's why, with AN, I appreciated things like the "Interpolation Viewer".

image.png
But it would be great to take even things like that further: make it not merely show the result of the node setup, but also allow the artist to manipulate the graph directly to affect the values.

Excuse my basic knowledge of the subject, but when you say "everything nodes", does that also extend to the data-block level, where an object can have different atomic nodes such as an attributes node, shape node, transform node, etc. (aka DG & DAG nodes, like in Maya or Softimage)? That gives more granular control over the scene data and how information flows for hierarchies and relations. Or is it something more high-level?

Maybe a little bit off-topic but this could improve the readability of node-systems in general.
Especially with animation nodes and the new Bparticles, where a lot of nodes are used, it's sometimes difficult to see where nodes are actually connected. The node-noodle gradient of selected nodes goes into the same grey as every other noodle.

An option to change the noodle-gradient from selected nodes could improve this.
At the moment only the color can be changed (wire color), not the falloff.

Noodles from selected nodes could also be shown on top of all other noodles.

Example in attached GIF.

Unbenannt.gif

Not sure if this has already been considered, but one thing that can really enhance collaborative workflow, and re-use of assets across multiple shots or sequences, is having group nodes that read their contents (and what is exposed, control-wise, at the top level of the group) from shared files on disk, particularly if the reading of those files can be managed as assets.

Basically, group nodes that reference a file on disk, where the file is dynamically used to define the contents of the group when the scene is read in, or when the user asks for the group node's contents to be reloaded on demand. You should also be able to publish to the file on disk from the node, to share its contents with other scenes or users.

Potentially allows all sorts of stuff, from shared procedural assets through to things like shared sequence lighting setups.
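
Blender's library linking already covers part of this. A sketch of linking a node group from a shared file (the path and group name are made up):

```python
import bpy

# Link (not append) a node group from a shared .blend file; the group keeps
# referencing the file on disk, so republishing the file updates every scene
# that links it (after the library is reloaded).
path = "//shared/assets/rusty_metal.blend"  # hypothetical shared file
with bpy.data.libraries.load(path, link=True) as (data_from, data_to):
    data_to.node_groups = [name for name in data_from.node_groups
                           if name == "RustyMetal"]
```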

Will it be possible to insert new nodes with addons, so that a user's own code can be included in a more modular fashion? It would be a lot easier than digging around in Blender's source code. This would be a lower barrier to entry for scripters and riggers who need more power but don't want to be actual software devs.

So happy that the Blender community is considering procedural based systems, big kudos from Houdini camp.

"How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for eg."

I want to send a big nod towards the @attribute system for internal communication between parametric nodes across all contexts, maybe Blender deserves a variation of this of its own.

In general, I would suggest looking into Hou mechanics and how it deals with different contexts (rendering, compositing, particles, modeling, etc.) and their communication. The Geo Spreadsheet is where the party's at. Also, Grouping :) There's simply a lot about Houdini that I'd wish to see in Blender which I think would make it more powerful and more accessible to nerdy people.

> In #67088#821829, @Rasterman wrote:
> In general, I would suggest looking into Hou mechanics and how it deals with different contexts (rendering, compositing, particles, modeling, etc.) and their communication. The Geo Spreadsheet is where the party's at.
>
> Also, Grouping :) There's simply a lot about Houdini that I'd wish to see in Blender which I think would make it more powerful and more accessible to nerdy people.

Grouping I imagine would become a necessity as soon as procedural modeling is in play - I don't see, off the top of my head, any other obvious way of specifying elements (= components) for a given operator/node, other than "group by volume" and that sort of thing.
I think @JulianEisel had started work on a spreadsheet type editor a while ago - did you have this kind of usage in mind ? (geometry attributes)

> I think @JulianEisel had started work on a spreadsheet type editor a while ago - did you have this kind of usage in mind? (geometry attributes)

Yes: point, primitive, volume and detail attributes that are interchangeable via attribute transfer nodes. This is the powerhouse of Houdini; it allows a lot of low-level modifications to be made, and much greater procedural control over various transformations, copy stamping, and (my favorite) volume operations, which allow forces to be added to geometry. Imagine each point (in Houdini a point is different from a vertex) having, aside from position, normals, etc., also orientation, velocity, density and hundreds of other potential, context-sensitive attributes.

> Grouping I imagine would become a necessity as soon as procedural modeling is in play - I don't see, off the top of my head, any other obvious way of specifying elements (= components) for a given operator/node, other than "group by volume" and that sort of thing.

Point grouping is even more powerful. There are also groups by expression, by attribute, and a lot of different grouping nodes altogether.

One more powerful procedural backbone is the parameter-reference system, which allows users to directly reference a parameter of a geometry node (the Y size of a sphere, for example) in another node. There is also the possibility of adding expressions (a sine function, for example) to a certain parameter, so you get animation just by pressing play, without the need for keyframing.

I could go on forever... It's a powerful platform, but I would love to see Blender become a competition for Houdini, and have them more interchangeable (Houdini is really weak for basic poly modeling, which I think Blender took better care of).
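
Blender's closest existing analogue to that parameter-referencing system is drivers. A minimal sketch, assuming objects named "Cube" and "Empty" exist:

```python
import bpy

ob = bpy.data.objects["Cube"]      # assumed object names
target = bpy.data.objects["Empty"]

# Drive Cube's X scale from Empty's Z location, plus a sine expression that
# animates on playback without any keyframes.
fcu = ob.driver_add("scale", 0)    # index 0 = the X component
var = fcu.driver.variables.new()
var.name = "zloc"
var.targets[0].id = target
var.targets[0].data_path = "location.z"
fcu.driver.expression = "zloc + 0.2 * sin(frame / 10)"
```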

I would like to be able to turn sets of nodes on/off easily. Either groups of nodes could turn up in Collections, to be hidden there, or a Group Node in a Node Editor would have a switch.

@kalmdown you can already mute nodes. And there will of course be more than one node tree - in one scene you can have many particle systems, for example, and toggle each one in the Outliner if you only wish to see a particular one at any one time.
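
For reference, muting is already exposed per node in the Python API. A minimal sketch, assuming a material named "Material" with a group node named "Group":

```python
import bpy

tree = bpy.data.materials["Material"].node_tree  # assumed material name
tree.nodes["Group"].mute = True   # mute a whole group node at once
for node in tree.nodes:
    if node.select:               # or mute the current selection
        node.mute = True
```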

I would like more control to mute groups or node trees, independent of particle systems. Assuming AN or an AN-like system is added, there is no way to stop specific node groups from running. Since I group them by function, being able to click a run/don't-run toggle per group would be logical.

I hope that mesh modeling itself will not change much in the regular editor and will remain as it is now, but will simply be recorded into the node system in a separate editor?
Houdini, in my opinion, has one drawback: it is not convenient to work with UVs there. If the UV workflow does not change radically, it will be very convenient when everything runs on nodes.

There is also this old "cycles_camera_nodes" branch: https://developer.blender.org/rB0e1f34a0c8b7f2a6ecddbca65f44bbd444c1386a

@WilliamReynish

Just an addendum to the "Stack Based" node design:

Nodes could be better visualized in a custom UI "stack" by using a UI approach similar to the one Houdini takes with its "Digital Assets" (HDAs).
This approach generates a "higher-level" UI, which is useful for artists when importing their procedural ("digital") assets into game engines to design levels. Artists generally only need a few values to tweak at a time, and giving them control over which values are displayed by default would be a better approach (imo) than "stacks" as we currently see them.

Check it out:

https://www.youtube.com/watch?v=SiT4r22BWY8&t=10m35s

The user interface can be composed of any UI element the programmer/artist wants (that Blender supports), leading to better usability, greater reusability of the node programming, and an easier-to-understand UI for the node-based "tool"/"modifier" overall. We only need to think of the modifier as a reusable "tool" instead of a "pre-programmed 'time-saver' feature that still fits the old paradigm", as we currently look at it.

This approach can also fit in well with the Gizmos / Exposed Parameter setup we've already got in the design, by simply offering those "Gizmos" or "Parameters" as options to include, along with whichever UI element (i.e. button/slider/label/gizmo) would best suit them in the UI the artist desires.

To summarize:

This approach would keep Blender simple and straightforward to use (for new users) while also creating a better approach to UI even for existing modifiers. Many controls on some modifiers are simply not relevant to some artists/projects, so providing the option to hide them, potentially by default (thereby "simplifying" the whole interface), would be a step up for users who prefer the "old" modifier stack to the "new" node-based approach. They would essentially have the equivalent of a "modifier preset" with a system like this.

Thoughts?

@WilliamReynish Just an addendum to the "Stack Based" node design: Nodes could be better-visualized in a custom UI "stack" by using a similar UI approach as Houdini does in its "Digital Assets" (HDAs) approach to UI. This approach generates a "higher-level" UI, which is useful for artists when importing their procedural ("digital") assets into game engines to design levels. Artists generally only need a few values to tweak at a time, and giving them control over what values are displayed by default would be a better approach (imo) than "stacks" as we are currently seeing them. **Check it out**: https://www.youtube.com/watch?v=SiT4r22BWY8&t=10m35s The user-interface can be comprised of any UI element the programmer/artist wants (that Blender supports), leading to better usability, greater REusability of the node programming, and easier-to-understand UI for the node-based "tool" / "modifier" overall. We only need to think of the modifier in terms of being a reusable "tool" instead of a "pre-programmed 'time-saver' feature that still fits in with the old paradigm", as we are currently looking at it now. This approach can also fit in well with the Gizmos / Exposed Parameter setup we've already got in the design by simply offering those "Gizmos" or "Parameters" as options to include as well as what UI element (i.e. button/sider/label/gizmo) would best suit them in the UI the artist desires. **To summarize:** This approach would keep Blender simple/straightforward to use (for new users) while also creating a better approach to UI even for existing modifiers, as many controls on some modifiers are simply not relevant to some artists/projects, so providing the option to hide them, potentially by default (and thereby "simplify" the whole interface), would be a step up to users who prefer the "old" modifier stack to the "new" node-based approach to modifiers. They can essentially have the equivalent to a "modifier preset" using a system like this. Thoughts?

Added subscriber: @StuartStelzer

Not sure if this is within scope or currently being considered, but one thing in Blender that isn't currently modifier-based, yet would very much benefit from a modifier-based workflow, is Bézier curve geometry generation.

Those settings currently belong to object-data-level datablocks, but having them as separate, non-destructive operations on top of the object data would be preferable.
Materializing the beveling, extruding, tapering and offsetting of Bézier curves as nodes/modifiers would definitely be powerful, if not a must.

This curve/line-based modifier approach to procedural modeling is the thing people use in Houdini the most.

I would agree that it is a MUST in the Blender Node-based modeling workflow.

Added subscriber: @DirSurya

Added subscriber: @TakingFire

Added subscriber: @Koltan

Regarding performance for Python nodes, could [numba](http://numba.pydata.org/) help with that? It allows JIT-compiling Python code and provides a great speed boost similar to Cython, while being able to run on multiple cores. It can also compile ahead of time, which could be useful for nodes that have static inputs/outputs. It also supports GPU computation with CUDA, but [not OpenCL](https://github.com/numba/numba/pull/582). [Some Python developers](https://github.com/pydata/sparse/issues/126) discussed the pros/cons of using Cython, Numba or C++ for speeding up low-level code, and it seems that Numba made a good case for its adoptability. I personally used it on small projects and the speed-up was great for little effort. However, this would introduce new dependencies (it requires numpy, and relies on llvmlite for JIT compilation; the total size is around 30 MB). I am quite new here and have no idea if there are strict guidelines/requirements in that regard. Anyway, this could be interesting to look into if we want to support Python nodes without a significant loss of performance.
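
Purely as illustration, here is a minimal sketch of the kind of kernel Numba can JIT-compile; the `displace` function and its array layout are hypothetical stand-ins for whatever a Python node would compute, not an existing Blender API:

```python
import numpy as np
from numba import njit, prange

# Hypothetical per-point node kernel: displace points along their normals
# by a per-point weight. @njit compiles it to machine code on first call;
# parallel=True lets prange spread the outer loop across CPU cores.
@njit(parallel=True, cache=True)
def displace(points, normals, weights, strength):
    out = np.empty_like(points)
    for i in prange(points.shape[0]):
        for j in range(3):
            out[i, j] = points[i, j] + normals[i, j] * weights[i] * strength
    return out

points = np.random.rand(1_000_000, 3)
normals = np.random.rand(1_000_000, 3)
weights = np.random.rand(1_000_000)
result = displace(points, normals, weights, 0.5)  # first call compiles, later calls are fast
```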

Added subscriber: @ClinToch

Added subscriber: @Mohammad-Sanioura

Added subscriber: @AlexeyPerminov

Added subscriber: @EvandroFerreiradaCosta

I think some people are flocking here to give feedback regarding features instead of UX, because the main task #68980 **Everything Nodes** was closed as a duplicate of #73324 **Particles Nodes**, even though particles are only a fraction of the Everything Nodes project (?).

One thing I saw in the [documentation](https://wiki.blender.org/wiki/Source/Nodes) is that the functions are being abstracted as much as possible, which is great for increased control over all aspects of a mesh/particle/simulation/material/etc. But I can see this easily becoming too confusing and overly complex, just as Animation Nodes currently is, for a non-programmer or someone not acquainted with some of the specific terms created for Everything Nodes (such as primitives/functions/attributes).
My suggestion is to keep, just as we currently have inside Blender, a set of standard modifiers, which would in fact be "node groups" of the functions required to do the desired operation, that the user can select (for example a Move or Extrude modifier), instead of having to manually insert all of the required nodes for the operation (input/convert/group/operate/output). (Of course the user could still access the modifier group to create more inputs/outputs, or "ungroup" it to expose all nodes for further rearranging/usage.) A minimal sketch of this idea is below.

Also, I think it is imperative for every operation in Blender to have a modifier, a.k.a. "group node", equivalent, including regular Move/Rotate/Scale operations as well as edit-mode ones such as Unwrap, Join, Split, Extrude, Mark/Clear Seam/Sharp (maybe by angle or feature), Bevel, Shear, Slide, Push/Pull, Connect, Create/Delete Vertex Group (as well as Assign/Remove from Vertex Group), Create/Delete Vertex Color, Create/Delete UV Maps, etc.
I can also see the usefulness of having the "By Angle/Weight/Vertex Group" controls be a separate node instead of living inside each operation node (e.g. Bevel), so that you could do any of the above operations on only a set of "primitives" (vertices, edges, tris or quads) depending on the angle/weight/vertex group, etc. For example: only UV-unwrap a set of vertex groups that were defined (using an Assign to Vertex Group node) by the angles to adjacent primitives (using a "By Feature" node).

In the case of an Unwrap modifier (a group containing a Create/Assign UV Map node and an Unwrap node), knowing that the operation can be taxing on very complex objects if done after every change to the node tree, maybe the Unwrap node could have a special option to be updated manually, or only when the directly preceding nodes in the chain have changed, etc.
Also, the options **Project from View** and **Project from View (Bounds)** could instead be replaced by **Project from Camera** and **Project from Camera (Bounds)**, with an option to select which camera. This way you could not only create and modify an object parametrically, you could also auto-seam the object (maybe based on features like angle), UV-unwrap it and assign procedural/image materials to it, all inside the node editor.
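
For what it's worth, a rough sketch of the "standard modifier as node group" idea, written against the Blender 3.x Python API (socket creation moved to `tree.interface` in 4.0); the group name and the choice of an Extrude Mesh node are illustrative assumptions, not an agreed design:

```python
import bpy

# Build a reusable "Extrude Modifier" node group: group input/output
# sockets plus one Extrude Mesh node wired between them.
tree = bpy.data.node_groups.new("Extrude Modifier", "GeometryNodeTree")
tree.inputs.new("NodeSocketGeometry", "Geometry")   # Blender 3.x API
tree.outputs.new("NodeSocketGeometry", "Geometry")

group_in = tree.nodes.new("NodeGroupInput")
group_out = tree.nodes.new("NodeGroupOutput")
extrude = tree.nodes.new("GeometryNodeExtrudeMesh")

tree.links.new(group_in.outputs["Geometry"], extrude.inputs["Mesh"])
tree.links.new(extrude.outputs["Mesh"], group_out.inputs["Geometry"])

# Attach the group to the active object as an ordinary modifier entry.
obj = bpy.context.object
mod = obj.modifiers.new("Extrude", "NODES")
mod.node_group = tree
```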

Added subscriber: @aisatan.ne

I saw a plugin called "Sorcar"; maybe check it out too: https://blenderartists.org/t/sorcar-procedural-modeling-in-blender-using-node-editor/1156769

Added subscriber: @BeckersC

Added subscriber: @ompadu

Added subscriber: @Kdaf

Added subscriber: @roothdw

Added subscriber: @Hto-Ya

Added subscriber: @HoloTheDrunk

Added subscriber: @Muritaka

Added subscriber: @hyyou

Added subscriber: @proulxpl

Added subscriber: @mkingsnorth

Added subscriber: @myselfhimself-3

Hi,
For now, the Python API for "building" compositing nodes (or node groups, more precisely) is missing a Python-scriptable node, a bit like the Blender Game Engine had: one node, one linked Python file (in the Text editor, for example). This Python node should be able to take at least an image input socket and output an image socket.
For me, this is somewhat blocking an easy integration of the [G'MIC](https://gmic.eu/) Python/C++ binding, which provides more than 500 2D filters.
The stalled [Blender Custom Nodes](https://github.com/bitsawer/blender-custom-nodes) GitHub project by @bitsawer had introduced, within a probably easily splittable C/C++ patch:

- a Python-scriptable compositing node (with image input and output)
- a G'MIC compositing node
- a GLSL compositing node

If the Python-scriptable compositing node could be extracted from that patch, adapted and implemented, I could then more easily implement a pure-Python G'MIC compositing node with the [G'MIC Python binding](https://github.com/myselfhimself/gmic-py), on which I have been working full time for 8 months. I am sure many people would also be interested in writing numpy-based compositing node scripts; a sketch of what such a script could look like is below. This was thought over in [this GitHub issue on the now sleeping blender-custom-nodes project](https://github.com/bitsawer/blender-custom-nodes/issues/5). :-)
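
To make that concrete, here is a hypothetical sketch of such a numpy-based node script, assuming an image-in/image-out contract with float RGBA arrays; the `node_exec` entry-point name is invented for illustration, since no such Blender API exists yet:

```python
import numpy as np

def node_exec(image):
    """Hypothetical entry point a Python compositing node could call.

    `image` is assumed to be a float32 RGBA array of shape
    (height, width, 4). This example applies a simple 3x3 box blur
    to the color channels and leaves alpha untouched.
    """
    rgb = image[:, :, :3]
    out = np.copy(image)
    acc = np.zeros_like(rgb)
    # Sum the 3x3 neighborhood by shifting the image with np.roll.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(rgb, dy, axis=0), dx, axis=1)
    out[:, :, :3] = acc / 9.0
    return out
```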

Removed subscriber: @SecuoyaEx

Added subscriber: @hadesx01

Added subscriber: @Russ1642

Added subscriber: @Dragon.Studio

Added subscriber: @KidTempo

Added subscriber: @Hyudan

Added subscriber: @Lukas-132

Added subscriber: @gemelo

Added subscriber: @TroyAC

I have a suggestion, and I am not sure it should go here.

My suggestion is to have portal nodes that carry links ("strings") from one node tree to another. You would start with an output portal node, which you would name like a data-block. You could then plug as many links into the left side of this node as you want, and name each one. Then you would go to another node tree, add an input portal, and select from a dropdown which portal data-block to use. The portal-ed links would then appear on the right of this input portal.

This would be useful because it would allow more custom node tree sizes and subdivisions, as well as more ways to reuse nodes than just node groups.

Note: There would also need to be an error message if a tree becomes a cycle due to the portals and cannot be executed; precisely, if a node's output cannot be determined because that output cycles back around to its own input. Preferably the node(s) with this error would be bordered with a certain color. A sketch of that check is below.

If I did not post this in the right spot, feel free to add/change this suggestion to the right spot.
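
Purely to illustrate the note above, a minimal sketch of such a cycle check, treating the combined graph (ordinary links plus portal links) as an adjacency list; all names are hypothetical:

```python
def find_cycle_nodes(edges):
    """Return the set of nodes involved in a dependency cycle, if any.

    `edges` maps each node to the nodes it feeds into; portal
    output -> input pairs are added as ordinary edges before the check.
    Iterative depth-first search, coloring nodes white/gray/black.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    in_cycle = set()

    for start in edges:
        if color.get(start, WHITE) != WHITE:
            continue
        stack = [(start, iter(edges.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, children = stack[-1]
            advanced = False
            for child in children:
                if color.get(child, WHITE) == WHITE:
                    color[child] = GRAY
                    stack.append((child, iter(edges.get(child, ()))))
                    advanced = True
                    break
                if color.get(child) == GRAY:
                    # Back edge: the cycle runs from `child` up to `node`,
                    # so flag that slice of the current DFS path.
                    idx = next(i for i, (n, _) in enumerate(stack) if n == child)
                    in_cycle.update(n for n, _ in stack[idx:])
            if not advanced:
                color[node] = BLACK
                stack.pop()
    return in_cycle

# Example: tree A feeds tree B through a portal, and B loops back to A.
print(find_cycle_nodes({"A": ["B"], "B": ["A"], "C": ["A"]}))  # {'A', 'B'}
```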

Added subscriber: @mr.marcodivita

Added subscriber: @DerTee

Added subscriber: @lictex_1

Added subscriber: @piledog

Added subscriber: @danterazor

Added subscriber: @HooglyBoogly

Changed status from 'Confirmed' to: 'Resolved'

Hans Goudey self-assigned this 2021-12-13 14:55:58 +01:00

I'll close this task now, since much of this has already been implemented, designed, or planned. One thing we don't have a task for yet is parametric primitives that add a geometry nodes modifier and prompt for applying when entering edit mode. I think that can be considered separately.

Added subscriber: @Michael-Drake

Removed subscriber: @Michael-Drake

Added subscriber: @Michael-Drake