Remesh Node #89052

Open
opened 2021-06-11 10:35:40 +02:00 by Fabian Schempp · 42 comments
Member

Remesh Modifier Modes will be split into two separate nodes.

  1. Voxel Remesh (Voxel Remesh algorithm)
  2. Block Remesh (the original Remesh algorithm, mainly useful for a blocky effect)

Properties can be used as they are in the modifier.

The algorithms are already separate from the modifier code, so it should be easy to reuse them for the nodes.

![voxel_remesh.png](https://archive.blender.org/developer/F10166701/voxel_remesh.png)

![block_remesh.png](https://archive.blender.org/developer/F10166700/block_remesh.png)

Fabian Schempp self-assigned this 2021-06-11 10:35:40 +02:00
Author
Member

Added subscriber: @FabianSchempp

Added subscriber: @rboxman

Will there be any provisions for guarding against very low voxel sizes or high octree depths? Since these are node inputs that may flow in from other parts of the graph, I can imagine it being very easy for "bad" values to end up as input, either accidentally or otherwise. Connecting any input to that socket is pretty dangerous unless you're absolutely sure of the value that's going to be used.

Added subscriber: @Grady

In #89052#1175075, @rboxman wrote:
Will there be any provisions for guarding against very low voxel sizes or high octree depths? Since these are node inputs that may flow in from other parts of the graph, I can imagine it being very easy for "bad" values to end up as input, either accidentally or otherwise. Connecting any input to that socket is pretty dangerous unless you're absolutely sure of the value that's going to be used.

That's a good point.
Technically there's nothing stopping the user from implementing their own 'limits' mechanism by passing inputs through some kind of node logic that clips the values to a limited range.
But it could be worth considering adding that to some nodes as a built-in feature.
Some kind of 'min' and 'max' setting to prevent extreme values, like a voxel size of 0.00001, which would almost certainly result in a crash and be hard for users to debug, or possibly even to fix after it's set.
Perhaps similar to how the RGB Curves node's clipping UI works.
Example below.

![example.png](https://archive.blender.org/developer/F10167012/example.png)
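For what it's worth, here's a minimal bpy sketch of the DIY clamping workaround described above. The node tree name, the chosen limits, and the re-routing targets are assumptions for illustration; `ShaderNodeClamp` is the Clamp node as used in geometry node trees.

```python
import bpy

# A rough sketch of the DIY 'limits' workaround: route a value through a
# Clamp node before it reaches the remesh input. The tree name and the
# chosen limits are assumptions for this example.
tree = bpy.data.node_groups["Geometry Nodes"]  # hypothetical existing GN tree
clamp = tree.nodes.new("ShaderNodeClamp")      # the Clamp node used in GN trees
clamp.inputs["Min"].default_value = 0.01       # assumed safe voxel-size floor
clamp.inputs["Max"].default_value = 10.0       # assumed safe ceiling

# Re-route an existing connection through the clamp (sockets are hypothetical):
# tree.links.new(value_socket, clamp.inputs["Value"])
# tree.links.new(clamp.outputs["Result"], remesh_node.inputs["Voxel Size"])
```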

Added subscriber: @GeorgiaPacific

Author
Member

In #89052#1175075, @rboxman wrote:
Will there be any provisions for guarding against very low voxel sizes or high octree depths? Since these are node inputs that may flow in from other parts of the graph, I can imagine it being very easy for "bad" values to end up as input, either accidentally or otherwise. Connecting any input to that socket is pretty dangerous unless you're absolutely sure of the value that's going to be used.

Interesting point. I will consider this!

Member

Changed status from 'Needs Triage' to: 'Confirmed'

Contributor

Added subscriber: @Rawalanche

Contributor

In #89052#1175075, @rboxman wrote:
Will there be any provisions for guarding against very low voxel sizes or high octree depths? Since these are node inputs that may flow in from other parts of the graph, I can imagine it being very easy for "bad" values to end up as input, either accidentally or otherwise. Connecting any input to that socket is pretty dangerous unless you're absolutely sure of the value that's going to be used.

Regular modifiers suffer from this as well. It's super easy to freeze or crash Blender by mistyping a voxel size value in the Remesh modifier or accidentally dragging on the spinner. So rather than having some solution hardcoded in the new geometry nodes, I think this problem should be tackled more globally, ideally by Blender having some sort of system to allow users to escape running operations.

And once that's in place, GN nodes, non-GN modifiers, and pretty much any operator in Blender (Smart UV Project, for example) with a risk of excessive processing time could implement this global safeguard.

It'd be really sad if only a few specific nodes in only one specific editor had this sort of functionality. But then again, that's a very Blender way of doing it, so that's probably how it will end up :)
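This is not how Blender is implemented today, but as a generic sketch of the escape-key-plus-memory-limit idea (psutil and the callback names are assumptions):

```python
import psutil  # third-party library, used here only to illustrate a memory guard

MEM_LIMIT_PCT = 80.0  # the ~80%-of-RAM threshold suggested above

def run_cancellable(chunks, cancel_requested, process_chunk):
    """Run expensive work in small pieces so it can bail out early.

    `cancel_requested` is a callable polled between chunks, standing in
    for an Escape-key check; `process_chunk` does the actual work. Both
    are hypothetical stand-ins, not Blender API.
    """
    for chunk in chunks:
        if cancel_requested():
            return None  # user escaped the running operation
        if psutil.virtual_memory().percent > MEM_LIMIT_PCT:
            return None  # automatic safeguard: abort before exhausting RAM
        process_chunk(chunk)
    return True
```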

Member

Added subscriber: @PratikPB2123

Added subscriber: @1D_Inc

In #89052#1175075, @rboxman wrote:
Connecting any input to that socket is pretty dangerous unless you're absolutely sure of the value that's going to be used.

How about proper default values? Something like 0.5
Five times safer.

In #89052#1177169, @1D_Inc wrote:
How about proper default values? Something like 0.5

Defaults won't help. This is about connecting the output of some existing node network to the input socket on the node here. That existing network already has values flowing through it and, unless you are absolutely sure of those values, plugging it into this node is a recipe for disaster. Even allowing the user to "break" out of the operation is not good or desirable; often it's still too late no matter how quickly you press ESC. It's also dangerous after you connect it: tweaking the network will feed new values into the node, and those can be problematic as well.

If there's no good solution to this, the design might have to remove the ability to connect sockets for that input at least.

In #89052#1177301, @rboxman wrote:

If there's no good solution to this, the design might have to remove the ability to connect sockets for that input at least.

Something like a "preventer", a safety node, or a node part that breaks the chain in case of possible overload?

Contributor

In #89052#1177345, @1D_Inc wrote:

In #89052#1177301, @rboxman wrote:

If there's no good solution to this, the design might have to remove the ability to connect sockets for that input at least.

Something like a "preventer", a safety node, or a node part that breaks the chain in case of possible overload?

I find it impressive how you consistently manage to find the worst solution to most problems in Blender. The whole point of overload errors is that they are hard to predict and happen by accident, not on purpose. The whole point of accidents is that you don't anticipate them, so you don't put the "preventer" node in there. You only realize that you need that node when it's already too late and Blender is frozen.

There's an industry standard for handling these situations: the ability to escape running processes, usually based on the user pressing a key (typically the Escape key), occasionally combined with another safeguard in the form of a memory limit, which automatically aborts the running operator if memory consumption exceeds a threshold (say, 80% of available RAM).

![mockup.jpg](https://archive.blender.org/developer/F10174683/mockup.jpg)

I just did a quick mockup in Photoshop, so excuse the quality. What about something like this? It's a design pattern that could be used for other nodes and modifiers with 'dangerous' inputs as well.

In this case, the limit percentage "0.1%" refers to 0.1% of the object's longest dimension; that way it works no matter how big or small the object is.

A simple 'limit' setting on nodes would give them safeguards against dangerous inputs, and with those limits in place by default, new users won't accidentally feed a 0.0000001 value into a remesh node and wonder why Blender is crashing.
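In code terms, the proposed default limit would amount to something like this (the 0.1% factor comes from the mockup; the helper itself is hypothetical):

```python
import bpy

def safe_voxel_size(obj, requested, limit_factor=0.001):
    """Floor a requested voxel size at 0.1% of the object's longest
    dimension, as in the mockup above. `obj.dimensions` is real bpy
    API; this helper is hypothetical."""
    floor = limit_factor * max(obj.dimensions)
    return max(requested, floor)

# A "dangerous" input is clamped to something the hardware can handle:
# safe_voxel_size(bpy.context.object, 0.0000001)
```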

In #89052#1177532, @Rawalanche wrote:

I find it impressive how you consistently manage to find the worst solution to most problems in Blender.

Generally unprovable.

The whole point of accidents is that you don't anticipate them

For sure. So this is not about remesh node development anyway.

There's an industry standard for handling these situations: the ability to escape

3dsmax can still save files larger than it can later open on the same computer. There are no real "industry standards" here; there are individual software solutions for handling hardware limitations, and sometimes probable solutions to the [halting problem](https://youtu.be/t37GQgUPa6k) as well. I tried to deal with this problem when I was writing scripts for AutoCAD that process large amounts of data, and I can imagine how difficult it is to solve cross-platform.
Safety magic is always hard to achieve.

In #89052#1177550, @Grady wrote:
In this case, the limit percentage "0.1%" refers to 0.1% of the object's longest dimension; that way it works no matter how big or small the object is.

This can already be solved with a Math node. It's hard to say whether that is a real solution, though, given that the values you can actually process depend on your hardware.

Added subscriber: @Erindale

Constraints to protect unaware users can often become obstacles to users needing specific outcomes.
Perhaps soft limits would suffice, like on the Subdivide modifier, where you can manually type in larger numbers if needed.

I would be wary of setting a precedent for mollycoddling functionality out of Blender.
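As a sketch of how soft limits already look in Blender's Python API (the property group and the ranges here are illustrative, not an actual patch):

```python
import bpy

class RemeshSettingsExample(bpy.types.PropertyGroup):
    # Illustrative only: dragging the slider stops at soft_min/soft_max,
    # but a typed value can still go all the way down to the hard `min`.
    voxel_size: bpy.props.FloatProperty(
        name="Voxel Size",
        default=0.1,
        min=0.0001,     # hard limit: cannot be bypassed even by typing
        soft_min=0.01,  # soft limit: slider stops here
        soft_max=2.0,
    )
```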

Soft limits are a UI usability solution; they are not related to hardware capacity or its ability to run a node layout.

Contributor

In #89052#1177649, @Erindale wrote:
Constraints to protect unaware users can often become obstacles to users needing specific outcomes.
Perhaps soft limits would suffice, like on the Subdivide modifier, where you can manually type in larger numbers if needed.

I would be wary of setting a precedent for mollycoddling functionality out of Blender.

A simple example: you create a cliff generator GN node network which turns any primitive shape into a cliff mesh, and it utilizes a remesh node. You test it on a roughly 10M^3 mesh and it works well. Another user you supply the GN network to imports a mesh from another DCC and doesn't check the scale of the object, so he ends up with a 500M^3 mesh. As soon as he adds a GN modifier and selects your network, he freezes his Blender and loses any unsaved work. How do you prevent that?

Is your solution really to rely on users carefully exploring every GN node network that's not theirs, scouring it for any scale-dependent nodes and fixing them themselves?

Added subscriber: @MadMinstrel

I would suggest that a limit is indeed present, but there must be a way to disable it.

Just because an object is large doesn't automatically mean it's going to take a lot of memory. For example, a procedurally generated, sparse asteroid field is not going to have much of an impact on account of mostly being empty space. While previously you could argue that this is an implausible scenario, and each asteroid ought to be an individual object, now with geometry nodes, it's quite feasible, and a limit of 0.1% will prove nothing but a headache. Just taking the typical case into account is never a good idea.

Added subscriber: @Jarrett-Johnson

Added subscriber: @Kavukamari

I think Erindale's solution, where the slider has a soft cap at something like 0.01 but the user can type in any value they want, is a good one. But it doesn't solve the problem of users plugging random nodes in and accidentally crashing the program. Maybe a checkbox for "bypass voxel size limit" that defaults to off? Something feels off about it, but I think it might work.

In #89052#1250735, @Kavukamari wrote:
I think Erindale's solution, where the slider has a soft cap at something like 0.01 but the user can type in any value they want, is a good one. But it doesn't solve the problem of users plugging random nodes in and accidentally crashing the program. Maybe a checkbox for "bypass voxel size limit" that defaults to off? Something feels off about it, but I think it might work.

Perhaps just a simple warning dialog.

"Warning, the voxel size of XX.X could take a while to compute, continue?"
[ Yes ] [ No ]

Or simply, show a progress bar while geometry nodes are processing (if they take longer than, say, 1 second to finish) with an option to cancel.

Either way, I think the issue, as it stands, is a broader one that affects many inputs across Blender, including many geometry node inputs, and it probably should have a broader solution.

Just my 2 cents.
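The warning-dialog variant could reuse Blender's standard confirmation pattern; here's a minimal operator sketch (the idname, label, and the idea of gating the remesh behind it are made up for illustration):

```python
import bpy

class OBJECT_OT_confirm_remesh(bpy.types.Operator):
    """Hypothetical operator showing the confirmation-dialog pattern."""
    bl_idname = "object.confirm_remesh_example"  # made-up idname
    bl_label = "Voxel size is very small and could take a while to compute. Continue?"

    def invoke(self, context, event):
        # invoke_confirm() shows the standard confirmation popup before execute()
        return context.window_manager.invoke_confirm(self, event)

    def execute(self, context):
        # ...run the expensive remesh here...
        return {'FINISHED'}
```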

Added subscriber: @VoxelRemeshNodePliz

How long does it usually take to get a new geo node reviewed? I was really hoping to see this node in the 3.0 release...

Added subscriber: @luischerub

Added subscriber: @sozap

Added subscriber: @YanivGershoni

Added subscriber: @Dragosh

Added subscriber: @jak99jak

Added subscriber: @tathaniel26

Added subscriber: @Dspazio

Added subscriber: @set9

Added subscriber: @pollutedmind

Added subscriber: @Rowan-Ibbeken

Has there been any progress on this task? Is there a chance it will make it into 3.4?

Added subscriber: @emilis

Philipp Oeser removed the Interest: Nodes & Physics label 2023-02-10 08:44:48 +01:00