NPR Design #120403

Open
opened 2024-04-08 15:48:56 +02:00 by Miguel Pozo · 41 comments
Member

Design Goals

  • Provide first-class support for stylized shading and rendering, unconstrained by PBR rules.
  • Be expressive enough to allow arbitrary art styles, instead of just providing built-in support for a fixed set of common effects.
  • Extend (instead of replace) the already existing physically-based material/render workflows and implementations.
  • PBR and Stylized materials should interact and be composed together automatically in a visually consistent and coherent way.

NPR Challenges

  • NPR requires way more customization support than PBR, and requires exposing parts of the rendering process that are typically kept behind the render engine "black box". At the same time, we need to keep the process intuitive, artist-friendly, and fail-safe.
  • Some common techniques require operating on converged, noiseless data (eg. applying thresholds or ramps to AO and soft shadows), but renderers typically rely on stochastic techniques that converge over multiple samples.
  • Filter-based effects that require sampling contiguous surfaces are extremely common. However, filter-based effects are prone to temporal instability (perspective aliasing and disocclusion artifacts), and not all render techniques can support them (blended/hashed transparency, raytracing).

For these reasons and because of the usual lack of built-in support, NPR is often done as a post-render compositing process, but this has its own issues:

  • Material/object-specific stylization requires a lot of extra setup and render targets.
  • It only really works for primary rays on opaque surfaces. Applying stylization to transparent, reflected, or refracted surfaces is practically impossible.
  • The image is already converged, so operating on it without introducing aliasing issues can be challenging.
  • A lot of useful data is already lost (eg. lights and shadows are already applied/merged).

However, these are mostly workflow and integration issues rather than fundamental issues at the core rendering level.

Proposal

This proposal focuses on solving the core technical and integration aspects, taking EEVEE integration as the initial target.
The details about how to expose and implement specific shading features are orthogonal to this proposal.
For an overview of the features intended to be supported, see this board by Dillon Goo Studios: https://miro.com/app/board/uXjVNRq8YR4=/
The main goal here is to provide a technically sound space inside Blender where these features can be developed.

Big Picture

The NPR engine would be a new type of engine halfway between a regular render engine and a compositing engine.
Instead of operating on final images once the main renderer has finished its job, it works as the final steps of each render sample, acting as a second deferred shading step, before assembling the final image sample and handling sample accumulation.

This leaves the space required to solve the issues mentioned above without having to build a whole new "full" render engine, while at the same time imposing as few hard requirements as possible on the "core" engine.

On the user side, this introduces a new NPR node system, separate from the regular Material nodes, where custom shading and filtering can be applied.
Each NPR node tree generates a Closure-like node that can be used from Material nodes.

| Material Nodes | NPR Nodes |
| --- | --- |
| ![unnamed](/attachments/368cedf7-06a8-45e0-b397-5032a51ffdc2) | ![unnamed](/attachments/207e776d-d651-4a97-8875-36486742ad9c) |

The images are just for illustrative purposes; don't take the specifics of the nodes themselves as part of the proposal.

This approach provides several advantages:

  • Material nodes stay compatible between engines.
  • There's a clearly communicated distinction between always-available features and NPR-only features.
  • Imposes a clear Material > NPR evaluation order, avoiding cyclic dependency issues where shading nodes could act as inputs of surface properties.
  • Allows supporting filter nodes, texture sockets, and any other required NPR-specific feature without having to fight against the material nodes design.

Implementation

Each NPR material generates a list of its required "G-Buffer" input data.

The "core" engine is responsible for generating this G-Buffer and passing it to the NPR engine at the end of each sample. This works similarly to the already existing compositing passes.

In addition, the NPR engine is also capable of running its own scene synchronization to generate its own engine-specific data when needed.

Then the NPR engine runs separate screen-space passes for each NPR material type (only on the pixels covered by that material) and assembles the final image.
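
To make the per-sample hand-off more concrete, here is a rough sketch in C++. Every type and function name below is hypothetical and only illustrates the intended data flow; nothing here is an existing Blender or EEVEE API.

```cpp
#include <vector>

/* Hypothetical per-sample G-Buffer: the render targets requested by the
 * NPR materials (normals, depth, shadow masks, material IDs, ...). */
struct GBufferSample {
  std::vector<float> packed_data;
};

/* Hypothetical NPR engine: runs one screen-space pass per NPR material
 * type, only on the pixels covered by that material, and owns sample
 * accumulation and the final resolve. */
struct NPREngine {
  void shade_and_accumulate(const GBufferSample & /*gbuffer*/) {}
  void resolve_final_image() {}
};

/* Hypothetical "core" engine (EEVEE): renders the geometry and fills the
 * G-Buffer, similar to how compositing passes are produced today. */
struct CoreEngine {
  GBufferSample render_sample(int /*sample_index*/) { return {}; }
};

void render_frame(CoreEngine &core, NPREngine &npr, int sample_count)
{
  for (int i = 0; i < sample_count; i++) {
    /* Core engine: geometry + PBR shading for this sample. */
    GBufferSample gbuffer = core.render_sample(i);
    /* NPR engine: second deferred shading step + sample accumulation. */
    npr.shade_and_accumulate(gbuffer);
  }
  npr.resolve_final_image();
}
```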

G-Buffer packing

Something to keep in mind is that since we are working with per-sample data, render targets can be much more compressed than in regular compositing.

For example, storing one shadow mask for each overlapping light might seem too expensive, but we only need ceil(log2(shadow_rays+1)) bits for each shadow mask. In the case of EEVEE that's 1 bit by default, and 3 bits at worst.
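
As a minimal sketch of that packing math (the helper below is hypothetical; the actual G-Buffer layout would be an implementation detail): with N shadow rays per light there are N+1 possible hit counts, so each mask needs ceil(log2(N+1)) bits and several lights fit into a single integer render target.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

/* Bits needed for one light's shadow mask: with N shadow rays there are
 * N+1 possible values (0..N rays occluded). */
int shadow_mask_bits(int shadow_rays)
{
  return int(std::ceil(std::log2(shadow_rays + 1)));
}

/* Pack per-light shadow ray hit counts into one 32-bit render target.
 * Purely illustrative; not an actual EEVEE/NPR data layout. */
uint32_t pack_shadow_masks(const int *hits, int light_count, int shadow_rays)
{
  const int bits = shadow_mask_bits(shadow_rays);
  assert(bits * light_count <= 32);
  uint32_t packed = 0;
  for (int i = 0; i < light_count; i++) {
    packed |= uint32_t(hits[i]) << (i * bits);
  }
  return packed;
}

/* 1 shadow ray  -> 1 bit per light  -> 32 overlapping lights per uint32.
 * 4 shadow rays -> 3 bits per light -> 10 overlapping lights per uint32. */
```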

Tier System

Any stochastically rendered surface (jittered transparency, rough reflections/refractions, ray-traced DoF and motion-blur) prevents the NPR system from applying filters to it.
For this reason, and as a fallback when a true BSDF is required, we need a tier system.

There would be 3 tiers:

  • Full: All NPR features supported.
    For opaque and layered transparency surfaces, and for 0 roughness reflections and refractions.
  • Per-Pixel: Custom shading and compositing of shading features are supported, but filters and sample accumulation nodes are not, and are executed as a simple no-op pass-through.
    For stochastically rendered surfaces.
  • Fallback: The regular output from the Material nodes.
    For features where a true energy-conserving BSDF is needed (eg. diffuse GI, rough reflections/refractions).

This is handled automatically, so users only need to be aware that in some contexts filters will be skipped, and that the base material output can still be used.
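
A hedged sketch of how that automatic selection could be expressed (the surface flags below are assumptions for illustration, not actual engine state):

```cpp
enum class NPRTier { Full, PerPixel, Fallback };

/* Hypothetical per-surface information used to pick the tier. */
struct SurfaceInfo {
  bool stochastic;      /* jittered transparency, ray-traced DoF/motion blur, ... */
  bool needs_true_bsdf; /* diffuse GI, rough reflections/refractions, ... */
};

/* Tier selection is automatic; users never set this directly. */
NPRTier pick_tier(const SurfaceInfo &surface)
{
  if (surface.needs_true_bsdf) {
    return NPRTier::Fallback; /* Regular output from the Material nodes. */
  }
  if (surface.stochastic) {
    return NPRTier::PerPixel; /* Filters become no-op pass-throughs. */
  }
  return NPRTier::Full; /* All NPR features available. */
}
```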

EEVEE-side features

Other than outputting the required render targets, there are some features that, even if they're only used on the NPR side, would have to be implemented, or would be more practical to implement, on the core engine side:

  • Depth offset. Requires modifying the fragment depth in the material surface shaders.
  • Self-Shadows. Requires rendering an object ID texture alongside the shadow map.
  • Shading position offset. Needs to be taken into account for shadow tagging.

There are also some required features that could be implemented on the NPR engine side, but are already planned for EEVEE anyway (light linking and light nodes).

Layered Transparency

Supporting the "Full" level features on transparent surfaces would require a layered OIT solution at the core engine level, or implementing it at the NPR engine level and passing the depth buffer of each layer to the core engine.

Reflections & Refractions

To support correct interaction between PBR and NPR materials, NPR shading should also be applied to NPR surfaces reflected and refracted onto other materials (including PBR ones). This implies that the NPR engine should handle the final composition of PBR materials as well.

A "generic" solution could be requesting render passes for secondary rays too (so each ray depth level has its own render target) and applying NPR shading in reverse order.

| PBR Render | AOV (Depth 2) | AOV (Depth 1) | AOV (Depth 0) |
| --- | --- | --- | --- |
| ![imagen](/attachments/48d36d1b-b8ac-4cb7-bca3-ca5931e61d91) | ![imagen](/attachments/6e3ec3e1-e8c3-4a83-84a2-125a96a9f530) | ![imagen](/attachments/b5c288a6-3645-4830-a2a7-55beec51c438) | ![imagen](/attachments/f9e4c767-5af4-46a0-9d15-bf29c998674c) |
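
As a minimal sketch of this "generic" approach (all types and functions below are hypothetical placeholders, not an actual EEVEE/NPR API): NPR shading runs starting from the deepest ray level, so each shallower level composites an already-stylized result.

```cpp
#include <vector>

/* Hypothetical per-ray-depth AOV layer (color, masks, reflection/refraction UVs, ...). */
struct AOVLayer {};
/* Hypothetical stylized result for one ray depth level. */
struct Image {};

/* Stand-in for running the NPR screen-space passes on one layer,
 * compositing the already-stylized deeper level where it is visible. */
static Image apply_npr_shading(const AOVLayer & /*layer*/, const Image * /*deeper*/)
{
  return Image{};
}

/* Apply NPR shading in reverse ray-depth order (deepest bounce first),
 * so reflected/refracted NPR surfaces are already stylized by the time
 * they show up in shallower levels, including inside PBR materials. */
Image resolve_secondary_rays(const std::vector<AOVLayer> &layers_by_depth)
{
  Image result{};
  const Image *deeper = nullptr;
  for (int depth = int(layers_by_depth.size()) - 1; depth >= 0; depth--) {
    result = apply_npr_shading(layers_by_depth[depth], deeper);
    deeper = &result;
  }
  return result;
}
```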

However, since EEVEE only supports screen-space tracing for now (and probes should already have the NPR shading applied at bake time), a reflection and refraction UV may be enough. And since horizon scanning is meant to be used for diffuse and rough reflections, it should be fine to use them as-is without any NPR post-processing.

Custom sample accumulation and resolve

| Threshold before sample accumulation (Incorrect) | Threshold after sample accumulation (Correct but aliased) |
| --- | --- |
| ![imagen](/attachments/9b29778b-3f58-4ee8-aa99-4d262dc9ebe6) | ![imagen](/attachments/29f00315-81cb-4670-9e8c-ec79a4bc198c) |

A possible solution to solve the "operating on converged, noiseless data" issue is accumulating samples in a "geometrically-aliased" way, but storing per-sample weights (MSAA style) so anti-aliasing can still be applied as a last step.
This allows the user to operate on converged shading data without having to worry about aliasing issues.

| Single sample | Accumulation | Threshold | Resolve (WIP) |
| --- | --- | --- | --- |
| ![imagen](/attachments/43e8eae0-068d-46c6-aa57-aa7fbf040aa9) | ![imagen](/attachments/b153d844-d2d9-44ee-9621-6be8e73498c8) | ![imagen](/attachments/01487c52-cb0f-4429-8bfe-1a6794446072) | ![imagen](/attachments/1ec5a298-0954-4503-84be-3c9fea2029e8) |
| ![imagen](/attachments/90b1c6a0-f41b-43aa-baef-fce9957be1f8) | ![imagen](/attachments/690af08d-8b1a-4d00-9656-cc69986331c4) | ![imagen](/attachments/31d34e81-6897-4914-bab1-609e80a04362) | ![imagen](/attachments/c8a098b0-be5c-42ac-886b-fa6fd993c474) |
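
A minimal per-pixel sketch of the idea, assuming a hypothetical PixelAccum structure (none of these names exist in Blender); the point is only that converged-but-aliased shading and MSAA-style coverage weights are kept separate:

```cpp
#include <algorithm>
#include <cstdint>

/* Hypothetical per-pixel accumulation state. */
struct PixelAccum {
  float shading_sum = 0.0f;  /* shading accumulated only for the surface that "owns" the pixel */
  float shading_weight = 0.0f;
  float coverage_sum = 0.0f; /* MSAA-style coverage of that surface */
  int sample_count = 0;
  uint32_t owner_id = 0;     /* surface/object ID of the first sample */
};

/* Samples from other surfaces don't get blended in, so the shading value
 * converges without smearing across geometric edges ("geometrically aliased"). */
void accumulate(PixelAccum &px, uint32_t surface_id, float shading)
{
  if (px.sample_count == 0) {
    px.owner_id = surface_id;
  }
  if (surface_id == px.owner_id) {
    px.shading_sum += shading;
    px.shading_weight += 1.0f;
    px.coverage_sum += 1.0f;
  }
  px.sample_count++;
}

/* User-side operations (eg. a threshold) run on converged, noise-free
 * but geometrically aliased data... */
float thresholded(const PixelAccum &px, float threshold)
{
  float converged = px.shading_sum / std::max(px.shading_weight, 1.0f);
  return converged > threshold ? 1.0f : 0.0f;
}

/* ...and the final resolve re-applies the coverage weight so geometric
 * edges get anti-aliased again. */
float resolve(const PixelAccum &px, float value, float background)
{
  float coverage = px.coverage_sum / float(std::max(px.sample_count, 1));
  return value * coverage + background * (1.0f - coverage);
}
```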

However, applying this method to every NPR material input may be too costly, and there are legitimate reasons for wanting to operate directly on per-sample data anyway.
So a possible solution could be to expose this functionality as a node, and use different socket types for per-sample vs converged data (for example, using the same socket color but a different shape).

Open questions

  • Name for the project?

  • Do we implement the NPR engine as a separate draw engine?
    As a library that EEVEE can use?
    By inheriting EEVEE classes with virtual functions where needed?

  • Is a future Cycles integration a possibility we should consider at all?

  • Can/Should we re-use parts of the material and compositing nodes implementation?

  • How does Grease Pencil fit here?
    Is the planned EEVEE-GP integration enough, so the NPR engine can just treat GP as any other surface?

Roadmap

The initial version of NPR nodes would work as an improvement over the ShaderToRGB workflow, with access to separate shading features and the ability to perform custom shading, but without filter nodes.
We can implement "filters" at the shader level at this stage though, as long as they operate on "global" inputs (like depth or normals) and the whole node tree can be compiled to a single shader pass.

From there we can iterate on adding all the required shading features, and finally implement filtering nodes and the custom sample accumulation/resolve solution.

  1. Design.
  2. Base framework and EEVEE integration (First preview release).
    1. Node System (Per-Pixel tier support).
    2. Custom Nodes.
    3. Built-in Nodes.
  3. EEVEE-side features
    1. Light linking.
    2. Self-Shadows.
    3. Shading position offset.
    4. Depth offset.
    5. Light nodes.
  4. Filter Nodes (Single Pass)
  5. Filter Nodes (Full-Tier support) (Multiple Passes)
    1. Custom sample accumulation/resolve solution.
    2. Custom filter nodes.
    3. Built-in filter nodes.
  6. Layered OIT.
Miguel Pozo added this to the Viewport & EEVEE project 2024-04-08 15:48:59 +02:00
Member

@pragma37 Hey :) I really hope that Grease Pencil can become a part of this proposal!

How does Grease Pencil fit here?

Many of the current NPR issues apply to Grease Pencil as well. On top of this, shading is very limited (Grease Pencil has its own materials, not node-based, few options).
Since GPv3 will turn Grease Pencil essentially into CurvesGeometry, to me, the question becomes "how do we render curves in NPR?". So the same thing could be asked about hair for example.

Ideally, we don't enforce a meshing step and have a native way of rendering curves somehow.

Author
Member

@filedescriptor The idea is that the NPR renderer doesn't render the meshes itself, it just receives a G-Buffer with the geometry already rendered.
So the NPR renderer doesn't care about meshes vs sculpt meshes vs curves, they're just pre-rendered surfaces.

So if the EEVEE-Grease Pencil integration means GP objects can have a regular EEVEE Material and act like any other surface, then GP doesn't need to be a special case on the NPR side.
But I'm not sure what exactly the plan is with EEVEE-GP.

Member

The NPR engine would be a new type of engine halfway between a regular render engine and a compositing engine. Instead of operating on final images once the main renderer has finished its job, it works as the final steps of each render sample, acting as a second deferred shading step, before assembling the final image sample and handling sample accumulation.

I really love this part. This is one step closer to ideal flexibility in which artists could decide what exactly something needs to look like!

use different socket types for per-sample vs converged data.

Sampling aliasing is indeed something we have to worry about. Also, not all properties need to be sampled to give a smooth result. A hybrid approach might work best, and yes, giving choices is good.

Overall I'm just not very sure how this in-between stage could be implemented in Blender. Like, you could have a direct G-buffer pass-through, or "do some filters over the G-buffer and pass it along", and you might want the filter result of some passes to affect other passes (like blurring normals but masking on top with the object ID). This might take some time I think, but I'm really looking forward to it!

Since GPv3 will turn Grease Pencil essentially into CurvesGeometry, to me, the question becomes "how do we render curves in NPR?". So the same thing could be asked about hair for example.

I think curves/lines are a very different topic. The renderer here mostly talks about how we turn individual pixels into what we want (although through some filters we can get differentials, which are lines), but lines IMO take a different approach, and sampling might not conceptually align with how a lot of types of lines are perceived naturally (a lot of the time it's about visually smooth edges, where it's very hard to threshold through a pixel-based method to pinpoint where the line should be). But I guess we can stay in the loop and see if we can come up with some ideas on how to do things like this.

Author
Member

@ChengduLittleA I'm not sure I follow your reasoning.

The engine always receives a per-render-sample G-Buffer, not a converged one.
It's actually the task of the NPR engine to generate the converged image.
It works the same as any deferred renderer (including EEVEE-Next), except the deferred pass shader is generated from nodes.

So, if the current order is (simplified):
EEVEE GBuffer -> EEVEE Shading -> EEVEE Sample accumulation -> Compositor
The order with NPR would be:
EEVEE GBuffer -> EEVEE Shading -> NPR Shading -> NPR Sample accumulation -> Compositor

Aliasing would only be an issue in the sense that you can't generate a correct ramp/threshold from a single sample due to shading aliasing (like in this image: https://projects.blender.org/blender/blender/attachments/9b29778b-3f58-4ee8-aa99-4d262dc9ebe6); that's why the custom sample accumulation and resolve is proposed.

Contributor

LOVE this initiative! LOVE IT.

As an NPR user myself - I vote to build this directly into EEVEE for a smoother UX and adoption - since many NPR artists already use EEVEE, so extending it and making it more flexible (optionally as a user opt-in if necessary) would be great. Similar nodes, similar workflows, extra opt-in flexibility and creativity.

This allows this UX:

  1. Easy migration of existing NPR projects ("Let's use these new nodes to do X now, no need to migrate/change settings on everything!")
  2. Less render engine bloat and decision making ("Which render engine should I use for X project?")
  3. Less overhead ("As a dev, let's maintain one engine with more features than build a new one from the ground up")
  4. More user customizability without multi-engine use cases ("I want it to be somewhat realistic, like EEVEE Next, but have this X feature for NPR - and do less compositing")
Member

@pragma37 Thanks, I see.

Author
Member

@filedescriptor I've just checked with Clément about the EEVEE-GP plans.
There's stuff to figure out, but in any case, I think the right way to go is to solve it at the EEVEE level first.

If we try to do it at the NPR level we're going to have trouble integrating GP and EEVEE objects together, which is the same reason why I wanted to avoid having separate NPR and EEVEE objects.
So I would solve the EEVEE side first, then we can see if there are GP-specific features that should be exposed to the NPR side.

Still, I think it would make sense to include the EEVEE-GP integration as part of the NPR project planning.


@AndresStephens This is meant to work alongside EEVEE (and maybe other engines in the future).
And NPR nodes are meant to work in tandem with Material Nodes.
One of the reasons to implement NPR nodes as a separate node system is precisely to avoid having Material nodes working only under certain circumstances.
Also, we have not discussed this yet, but I would expect the NPR nodes (especially before implementing filters) to be pretty much the current Material nodes with some nodes removed and others added.

Contributor

One of the reasons to implement NPR nodes as a separate node system is precisely to avoid having Material nodes working only under certain circumstances.

This is quite weird to be honest. I get the reason it's desirable to have nodes work between Cycles and EEVEE, but what are the practical examples of materials used in NPR needing to work in other conditions? Like who would decide in mid-production that "eh, I don't want NPR anymore, let's go photoreal", AND expect the material to work without any adjustments?

A new node system adds so much more complexity at the user level; you have to learn, think about, and work with an entirely new concept of an "in-between" editor. It's the worst design from a user POV imho, and the cases it solves are pretty much zero or very few. Having the nodes in the EEVEE material editor would make this much, much easier, and the only situations where it would break are when the user is doing ridiculous tasks. The majority of the user base shouldn't have to endure complexity for those single-digit cases.


This is quite weird to be honest. I get the reason its desirable to have nodes work between Cycles and EEVEE, but what is the practical examples of materials used in NPRs needing to work in other conditions? Like who would decide in mid production that "eh, I don't want NPR anymore, let's go photoreal", AND expect the material to work without any adjustments?

I don't imagine this is something that has to be black and white. Some parts of an image can be photorealistic/physically based, whereas other parts of the image can be explicitly NPR. Consider elements like magical effects and superpowers. Even in a largely photoreal setting, you have a lot of leeway here stylistically; what a "portal" looks like is almost certainly going to be determined on an artistic basis rather than a physical one.

It's probably important to consider that this is a task for furthering NPR design in Blender, but that doesn't mean that Miguel's suggestions are exclusively useful for NPR. There could be applications of this technology to improve physically based looks as well, assuming it wasn't made a unique engine. I think changing the name "NPR Nodes" to "Render Nodes" in the future would be a good idea so that it's clearer to people that it isn't exclusive. Or another, better-fitting name instead; I'm just using the terminology that was in Malt.

New node system is adding so much more complexity on user level, you have to learn and think about and work on entire new concept of "inbetween" editor. It's worst design from user pov imho, and cases it solves is pretty much zero or very low. While having nodes in eevee material editor would make this much much easier, and only situations it would break is if user is doing ridiculous tasks. Majority of user-base shouldn't have to endure complexity for those single-digit cases

This will almost certainly end up very similar to the case of Geometry Nodes. Has Geometry Nodes made Blender more complex? Sure. However, individual users do not have to learn every in and out of Geometry Nodes to reap the benefits of the system, and with their modifier-based design being expanded to include the concept of tools, they have become more and more accessible to those without the technical know-how. This design methodology would slot right in, I think.

To be completely honest, I think that this will make NPR dramatically less complex to perform in Blender. There is so much finagling you have to do to get around existing design limitations at the moment, and quite a bit of prerequisite knowledge is necessary to get the look you want at times.

Member

@pragma37 Thanks for the explanation :D

The order with NPR would be:
EEVEE GBuffer -> EEVEE Shading -> NPR Shading -> NPR Sample accumulation -> Compositor

What I was thinking is: could we send the results from NPR Sample accumulation back to e.g. the material to affect shading that way? (Well, probably not, cause I guess the structure would be way more complex, but that's what I intended to describe 😅 )

Author
Member

@nickberckley

I get the reason its desirable to have nodes work between Cycles and EEVEE

Nope, the reasons are way more involved than that. This is from a previous design mailing thread.
The context was discussing how to support the features from the Miro board:


(...)

Textures/Filters (Screen-Space normal effects, Custom Refraction, Screen-Space shadow filters)
Do we add texture sockets and filters to material nodes? Do we only support hard-coded built-in effects?
Material nodes are not well prepared to build custom filters (and probably shouldn’t? surface fragment shaders are a bad place to do so from a performance standpoint).

Pre-pass dependency (Cavity, Curvature, Rim, Screen-Space normal effects)
These effects assume there’s a pre-pass, but that’s not always the case.
For example:

  • Transparent materials (would require layer based OIT).
  • Ray-tracing (alternative implementation possible, but it would be either very noisy or very expensive)
  • Shadow maps (see dependency issues)

Dependency issues (almost all features)
Right now material nodes follow (for the most part) a surface properties -> shading -> output order.
However, a material nodes approach where shading features are available as color inputs would mean that hierarchy is completely lost.
For example:

  • You can use shadows to drive transparency or displacement, which should affect shadows, and you get a dependency loop.
  • You can use GI to drive shading which would again cause a dependency loop.
    This means that things that can be easily expressed with nodes will result in “undefined behavior” that doesn’t translate to the actual output, and could easily change based on implementation details.

Energy conservation (GI)
NPR shading is usually non-energy-conserving, which, at best, will convert some objects into lamps and, at worst, would cause the whole lighting to blow out.

For these reasons, I’m leaning towards a solution with an explicit distinction between material nodes and NPR nodes. A separate node system (halfway between materials and compositing nodes) with their inputs exposed to material nodes.

(...)

These would provide several advantages:

  • Material nodes are still (almost) fully compatible between engines.
  • No dependency issues. Shading nodes are inputs that only affect the NPR output, and this is exposed in an obvious/intuitive way to the user.
  • Transparency and displacement are configured in the regular material nodes.
  • Features that require a true BSDF (GI) can just use the regular material.
  • Can implement proper support for filter nodes and texture sockets, or any other NPR specific feature we need without having to fight against the material nodes design.
  • Easier to support custom (glsl) nodes, since we are operating in a simpler context with just plain values and texture data.
  • Future proof, would work with ray-tracing or any other method that can write several layers of AOVs.
  • Doesn’t put too many constraints on the underlying engine, so could share most of its implementation with EEVEE.
  • Leaves the door open for eventually working with Cycles and render engine addons.

Cons:

  • It’s an extra step that requires writing extra texture data and memory to store it.
  • Can’t use UVs or Vertex Colors without writing them to extra render targets or rendering the mesh again.

This thread is for discussing the technical design, I deliberately left as many UX decisions as possible out of this proposal because figuring out the technical integration/implementation and its implications is already complex enough.
We will ask for user feedback once we get into the UX side, so please, let's keep the bikeshedding aside from now.

Author
Member

@ChengduLittleA

What I was thinking is that could we send the results from NPR Sample accumulation back to e.g. material to affect shading that way? (Well probably not cause I guess the structure would be way more complex, but that's what I was intended to describe 😅 )

It's not just a complexity issue. If you do something like that the image won't ever converge.
That's why the split between aliased/converged sockets would be needed; you can't just mix aliased and converged data.


How much of the shader pipeline would be considered to be implemented?


If the current goal is toon shading in some recent games (Genshin, etc.), using the shading features in PBR renderers such as EEVEE is enough. This should be the main goal since I don't see many other practically used styles of NPR on the market.

One pass for each material instance is against the idea of deferred shading. However, tagging pixels with material/object ID can improve performance.

A proper NPR render engine will eventually handle more than GBuffers, such as rigging, extracting features such as curves from the mesh, and user interactions like sketching.
Otherwise, I don't think such an engine has any advantage over Unity, which is very convenient for iterating on different shading styles, and already has many existing solutions & projects.

Author
Member

@WangZiWei-Jiang
The goal is to support arbitrary stylized shading in a way that works consistently with all render features, not just for primary rays.

Rigging and tool-based features are not the job of render engines in Blender, such features should follow the already established Blender workflows.

Geometry-based features would still be possible, but they should work consistently with the rest of the rendering features.

I don't agree with your Unity statement, but that's completely off-topic.

Author
Member

@HannahFantasia Sorry, I don't understand the question.


@pragma37 Would it be possible with this approach to do something like a Gouraud shader?

Author
Member

@HannahFantasia Ah, now I see what you meant.
No, the geometry is not supposed to be rendered by the NPR engine, so it can't support custom vertex shaders.


@pragma37 Could it be possible to have something similar to the simulation zone, as a sort of NPR zone?

Author
Member

@Traslev Maybe? We could allow reading the render targets and AOVs from the previous frame.
I can see a few use cases and it shouldn't be that hard to support.


NPR artists will probably want the most granular possible control over lighting and shading, which would arguably require Light and Shadow options being exposed at the BSDF level rather than at the object level, as it is now in Cycles (at least from what I've seen while trying it).

Since creating multiple Light Groups and Light Linking/Shadow Linking collections would clutter up the linking boxes quite quickly I think it would be better to allow defining lights at a Shader Node level (as seen in these mockups), then using the object-level linking when the inputs are left empty.

Using a cryptomatte-like node where we can pick lights to our choosing (and selecting if those lights will either be the only ones to affect the BSDF or the only ones not to affect it) I think we can get a massive amount of flexibility for NPR purposes.

We can plug in these selected lights into a hypothetical BSDF node's Shading panel light inputs.

| Shading panel in a BSDF | Cryptomatte-like light selection node |
| --- | --- |
| ![Page 1.png](/attachments/b482cc36-d113-499a-9a91-2a4e00e67a5c) | ![Page 2.png](/attachments/7989d96f-ac62-4a25-9052-d633f05b4cd5) |

If it's not viable to implement this at Material time then the NPR system should at least be able to store which shadows are cast by which lights and objects so that we can, at NPR time, only keep the shadows and lights that we want.


Since creating multiple Light Groups and Light Linking/Shadow Linking collections would clutter up the linking boxes quite quickly I think it would be better to allow defining lights at a Shader Node level

This is quite a bit like how GooEngine does light groups. Lights can be assigned either at the object level, or with more granular split options (diffuse, shadow) within the shader itself.

Author
Member

@MVPuccino I don't think we will be able to have that granularity with the already existing BSDF nodes.
But I agree that for NPR nodes having per-node control of light groups is a must-have.
Per-node toggle of self-shadows can be desirable, but it may be hard to implement in a performant way.
Per-node toggle of "cast-shadows" (assuming that means whether the surface casts shadows or not) doesn't seem feasible, though.
That should be handled through regular material nodes.

Ambient Occlusion and World (I assume that's the World shader indirect lighting) should be available as separate plain color inputs, so you should be able to compose them however you like.


I don't think we will be able to have that granularity with the already existing BSDF nodes.

I was doubtful too, but I think it might still be possible to expose these shading options in the NPR process.

You said in the original post:

For example, storing one shadow mask for each overlapping light might seem too expensive, but we only need ceil(log2(shadow_rays+1)) bits for each shadow mask. In the case of EEVEE that's 1 bit by default, and 3 bits at worst.
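As a quick sanity check of that bit count, here is a minimal Python sketch (the `shadow_rays` values below are chosen purely for illustration, not as claims about EEVEE's actual settings):

```python
import math

def shadow_mask_bits(shadow_rays: int) -> int:
    # Bits needed for a per-light mask that can count 0..shadow_rays occluded rays,
    # per the quoted formula.
    return math.ceil(math.log2(shadow_rays + 1))

print(shadow_mask_bits(1))  # 1 bit
print(shadow_mask_bits(4))  # 3 bits
```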

Which makes me think that it might be possible to efficiently separate the shadows generated by different lights into masks at NPR time (unless I'm missing some limitation; perhaps ID-ing different kinds of shadows per light is too expensive, or making them separable unreasonably increases the size of the GBuffer?), perhaps with a built-in shadow separation node like this:

Page 3.png

  • Self-Shading is for shading that happens from Normal calculations, basically what one gets from setting Shadow Mode to None in EEVEE Legacy
  • Cast Self-Shadows should be self-explanatory
  • Cast Shadows are for shadows that are cast upon the mesh by other objects occluding a light.
    These are output as black-and-white masks, hence the Float output rather than an Image output.

If this is the case then could this also open up the possibility of separating other parts by lights? For example, specular highlights. (And also potentially having a different generalized separation node for other parts of the GBuffer as either color info or as a mask)

Page 4.png

Author
Member

I don't think we will be able to have that granularity with the already existing BSDF nodes.

I was doubtful too, but I think it might still be possible to expose these shading options in the NPR process.

That's exactly what I meant with the sentence that followed:

@MVPuccino I don't think we will be able to have that granularity with the already existing BSDF nodes.
But I agree that for NPR nodes having per-node control of light groups is a must-have.

Self-Shading is for shading that happens from Normal calculations, basically what one gets from setting Shadow Mode to None in EEVEE Legacy
If this is the case then could this also open up the possibility of separating other parts by lights? For example, specular highlights.

This should be totally doable, not just "getting" them, but also computing them in custom ways.

I assumed that by self-shadows you meant cast self-shadows.

Cast Self-Shadows should be self-explanatory
Cast Shadows are for shadows that are cast upon the mesh by other objects occluding a light.

Getting a mask for cast self-shadows only (without shadows cast by other objects) would be too costly performance- and memory-wise, since it would require a separate shadow map for each object that uses this option.
You could achieve a similar result by using light groups, though.

I certainly want to support, at least, a per-object toggle for cast self-shadows, but I'm not sure about the implementation details yet.

(And also potentially having a different generalized separation node for other parts of the GBuffer as either color info or as a mask)

The idea is to be able to read (and write) AOVs from NPR nodes.


I assumed that by self-shadows you meant cast self-shadows.

I have to apologize for the confusion.
The previously proposed "In BSDF shading controls" were prototyped rather quickly.
There, the logic was:

  • Self-Shadows: Cast Self-Shadows and Normal Shadows.
  • Received Shadows: Shadows cast from other objects upon the mesh.
  • Cast Shadows: Shadows the mesh casts.
    I won't bother with AO and World lighting.

I later realized this scheme was quite limiting and, frankly, dumb (especially the mesh controlling the shadows it casts, which would just add a layer of complexity on top of the Received Shadows pass, as well as not being able to separate Normal Self-Shading from Cast Self-Shadows).

So, in the "Separate Shadows" node, I decided to limit it as previously described.

Getting a mask for cast self-shadows only (without shadows cast by other objects) would be too costly performance- and memory-wise, since it would require a separate shadow map for each object that uses this option.
You could achieve a similar result by using light groups, though.

From the way you wrote it, it sounds like extracting just the Object Cast Shadows on their own without the Cast Self-Shadows is viable, just not the other way around.

Perhaps the Separate Shadows node can be changed to have this sequence of outputs:

  • Self-Shading (Normal calculated)
  • Object Cast Shadows
  • Combined Cast Shadows (Object Cast Shadows + Cast Self-Shadows)

It's fine to omit Cast Self-Shadows. Even then, being able to extract just the Normal-based Self-Shading and the Object Cast Shadows would be plenty for a good number of styles. Additionally, I don't see why we couldn't make up for the missing Cast Self-Shadows by faking them (and other shadows) with Mesh Space Shadows and Custom Depth Offsets.

The idea is to be able to read (and write) AOVs from NPR nodes.

Sounds great; being able to use AOVs in the Viewport and not just in the Compositor is definitely a massive plus.


Hi @pragma37, I'm very excited to read your proposal. I have a few non-technical thoughts that you might've already come across, but I'm posting them here just in case they're useful.

One thing that immediately came to mind was the ability to discretely select specific meshes and project shadows depending on camera/viewer position; a lot of stuff like hair can be handled that way without depending on ambient occlusion.

Another important detail is being able to select not only which objects get ambient occlusion and contact shadows, but also which objects are excluded from them. This way I imagine the hair always following the shadow coming from the camera projection, and then being able to combine it with the shadows from objects in front of or close to the face. I have a few workarounds for this, and all of them are independent of the lighting conditions in the scene, like most 2D animation.

Having this ability combined with custom light groups and object exclusion would solve a lot of these issues that have to be fixed by hand at this moment.

Another thing I always wanted was the option to generate shadows before certain deformations (lattices, for example) and then project them onto the deformed meshes. If a user were to flatten out certain meshes to avoid extreme changes of perspective caused by the camera lens, there has to be a way for the shadow data to come from what the user would define as a neutral mesh/shape. I imagine it would be something like blend shapes but for shadows specifically, though I don't know how viable something like that would be.

Another thing I was thinking of is the ability to hand-draw and customize normal maps and normal bumps in the viewport, like you would with a regular 2D drawing. There are some solutions for this kind of customization using geometry nodes, along with smoothing to avoid the issues of data transfer and the like, but it's not a very artistic process.

Sometimes while working on a character you get an amazing silhouette from sculpting, but then you spend a lot of time fighting the remeshing process to get closer to the conventions and methods used to basically "bake" the shadows into the geometry by default. Whenever you want to make any changes you ruin hours of work, and if you want cleaner shadows you sometimes end up doing a data transfer from a higher-poly mesh or using these nodes; in a lot of cases it's just faster to hand-draw the shadows frame by frame than to deal with all of this. The nature of 2D animation is such that none of these solutions are flexible enough for the range of expression needed, and no matter what kind of pipeline or method you take, the compromise is huge and a lot of what makes 2D animation beautiful gets lost for the functionality of 3D.

If we were able to guide shadows by hand-drawing on top of the mesh, we could effectively and creatively separate the mesh from the shading for characters, the same way we can separate and optimize physics and simulations by using proxies. Custom-shaped highlights are a staple of 2D animation, and we are stuck with slow-to-implement UV parallaxes and nodes that react differently and unpredictably on different meshes. Shaping shadow/highlight contours needs artistic control and user input. I understand this could be very difficult, but to me this is the key feature for NPR; a lot of the other things can be solved one way or another.

As for the rest, I agree completely. Having the ability to use PBR is a necessity for backgrounds and VFX in a lot of productions. We need as many options as possible to fit as many workflows and non-typical pipelines as possible. For example, having a character with NPR shading in front of a mirror showing the PBR background, while controlling the amount of reflection on the floor and which objects or colors appear in those reflections.

And before I forget, another thing I always handle in video compositing is adding color ramps between contact surfaces, objects, and characters, and all those artificial gradients needed to improve the composition. A lot of that can be fixed in post, but sometimes I add planes with color ramps in the scene, which is very bothersome, needs manual exclusion from line detection, and has a lot of other issues. Managing that in the compositor is clunky. If there were a way to have a "layer" hierarchy, a lot of that would go away: positioning things like flares, glows, color correction, gradients, and countless other things would solve most of it. However, I know this has a lot more to do with other things than the renderer, so I would be happy just being able to pick objects or collections and control gradients with little to no work. There are a lot of things that are more convenient and faster to handle in video, but I think that at least an easy way to have gradient control on collections is part of the artistic work that should be available in the viewport to a certain degree.

EDITED TO AVOID BREAKING COPYRIGHT RULES

Ah, last thing: curvature. Parameters also need to be exposed in a way that allows mapping the curvature thickness/alpha.

In any case, if this were to move forward and we at least got crisp per-object shadows, that would already be an improvement over what we have.

@blenderbroli See: https://devtalk.blender.org/t/copyright-guidelines-for-devtalk/17331

Sorry, I'm new around here, @mod_moder. I just edited my post to avoid any issues.


EDIT 2:

https://devtalk.blender.org/t/projects-weekly-updates/35232/1

NPR project with @pragma37: have a one-week workshop soon, in July or August?

To discuss and organize

Amazing news!


Getting a mask for cast self-shadows only (without shadows cast by other objects) would be too costly performance- and memory-wise, since it would require a separate shadow map for each object that uses this option.
You could achieve a similar result by using light groups, though.

If you don't need to receive other shadows "through" self-cast shadows (i.e. detecting the self -> other -> self case), you can actually do it much cheaper than this. In fact, I would argue that the above would be an anti-feature in many cases. For example, you don't want your characters' eyeballs casting shadows on the back of their head.

Instead of rendering a separate shadow map for each object that uses this option, you instead add an ID channel to the shadow map. This channel basically stores a random number per ID (whether this is per Material, per Object, or configurable is an open design question).
If the occluding surface has the same ID as the receiving surface, the shadow goes into the Self Cast Shadows mask.
If it doesn't, then it goes into the Cast Shadows mask.
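A minimal sketch of that lookup, assuming a hypothetical shadow-map texel that stores depth plus the extra ID channel (none of these names correspond to an existing Blender/EEVEE API):

```python
from dataclasses import dataclass

@dataclass
class ShadowTexel:
    depth: float      # depth of the nearest occluder, as seen from the light
    occluder_id: int  # extra ID channel (per material, per object, or configurable)

def classify_shadow(texel: ShadowTexel, receiver_depth: float,
                    receiver_id: int, bias: float = 1e-3):
    """Return (self_cast, cast) flags for one shading point and one light."""
    occluded = texel.depth < receiver_depth - bias  # regular shadow-map depth test
    if not occluded:
        return False, False
    if texel.occluder_id == receiver_id:
        return True, False   # goes into the Self Cast Shadows mask
    return False, True       # goes into the Cast Shadows mask
```

The cost is a single extra channel in the shadow map, independent of how many objects opt in, which is why this is much cheaper than per-object shadow maps.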


Can the Cast Shadows mask be modified, for example by removing some shadows that don't look good and adding some shadows for emphasis?


From the linked board, I would especially love these to become a reality.
Light nodes, or at least just textures, are to me one of the biggest missing EEVEE features.
Having decals in EEVEE and Cycles would be a dream come true; I'm thinking of game-engine-style decals, which could support full material nodes so you can give them individual roughness, metallic, etc. maps, as well as crazy parallax effects.
image

Author
Member

Instead of rendering a separate shadow map for each object that uses this option, you instead add an ID channel to the shadow map.
If the occluding surface has the same ID as the receiving surface, the shadow goes into the Self Cast Shadows mask.
If it doesn't, then it goes into the Cast Shadows mask.

That's the method I use in Malt: https://github.com/bnpr/Malt/blob/f46cc9484f073cbeeefa86a1c72b811a1dd15ef1/Malt/Pipelines/NPR_Pipeline/Shaders/NPR_Pipeline/NPR_Lighting.glsl#L126

However, this only tells if the surface casting the shadow comes from a different object or from itself.
It doesn't allow getting all the shadows that an object would cast onto itself, since other objects can occlude that object's surface in the shadow map.
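To illustrate with made-up numbers (a continuation of the hypothetical sketch from a few comments above, not real Blender code): the shadow map keeps only the nearest occluder per texel, so a foreign occluder in front overwrites the self-occluder's depth and ID.

```python
# Receiver: object 42, at depth 5.0 from the light.
receiver_id, receiver_depth = 42, 5.0

# During shadow-map rendering, only the nearest occluder survives the depth test:
texel_depth, texel_occluder_id = 2.0, 7  # another object, closer to the light
# A self-occluder (id 42, depth 3.0) also covered this texel, but was discarded.

occluded = texel_depth < receiver_depth                    # True
self_cast = occluded and texel_occluder_id == receiver_id  # False
cast = occluded and texel_occluder_id != receiver_id       # True
# The point is reported as shadowed by another object only, even though it is
# also in its own shadow; that information was never stored.
```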


The NPR Nodes example seems to be a setup that you could reproduce in the Compositor, and @Thalia-Solari mentioned that it could have broader usage, offering the name Render Nodes, which sounds a lot like Compositor nodes to me (though maybe I can't see an important distinction here).

Is it possible to implement the NPR Nodes via the Compositor, or to expose the necessary passes so that users can access that data in the standard Compositor?

One of the nice things I find about using Goo Engine is that it's just Blender, but I can turn off shadows per object, and use the Curvature node, it's not an entirely new workflow that I need to learn.

If it's possible to split the NPR Nodes features between Shader and Compositor nodes I think that would streamline the workflow, rather than adding a third node graph to adjust the look, though I appreciate that I may not know a technical reason why this is not possible.

Thanks for this proposal and discussion!

Author
Member

@SpectralVectors You're not the first to express the concern about adding a new node system, but NPR nodes are going to be mostly the same as the current ones. It's not going to be a full new node system (that screenshot is from Malt, that's why the nodes look different from what you would expect).
I don't have any intention of adding a new node system just for the sake of it.
In fact, the plan is to share the implementation with the rest of the shader nodes, so certain features like Texture sockets may be available from Object shader nodes too.

I think the easiest way to explain the workflow is to imagine that every node added after ShaderToRGB must be inside its own Node Group. Inside that node group, there are going to be some new nodes available and others removed, but the differences are on a similar level to Object shader nodes vs World/Light shader nodes.
For filter nodes, I would go for a 1:1 match to the already existing compositor nodes, even if it may not be possible to share the internal implementation.

The Render Nodes that @Thalia-Solari mentioned are a concept from Malt, they're not similar to anything that exists in Blender already.
They would be the equivalent of splitting the EEVEE internal implementation into multiple nodes, so users can build their own "render engine" from pre-existing parts, and replace/add extra parts with plugins.
It's a very powerful system, but it also has performance implications and requires some understanding of how a render engine works internally to use it.
This proposal takes a very different approach, much closer to the regular Blender workflow.


I don't see any NPR features that the compositing system could not do, if it had a "curvature" input as a channel we could turn on/off.

https://youtu.be/6UqPdQbbbkM?feature=shared

This uses the multipass branch to sample depth and AOVs.

Author
Member

@JacobMerrill-1

For these reasons and because of the usual lack of built-in support, NPR is often done as a post-render compositing process, but this has its own issues:

  • Material/object-specific stylization requires a lot of extra setup and render targets.
  • It only really works for primary rays on opaque surfaces. Applying stylization to transparent, reflected, or refracted surfaces is practically impossible.
  • The image is already converged, so operating on it without introducing aliasing issues can be challenging.
  • Many useful data is already lost (eg. lights and shadows are already applied/merged).

However, these are mostly workflow and integration issues rather than fundamental issues at the core rendering level.


There's a massive difference between being able to shade / light NPR in the scene, compared to post / compositing.


@pragma37 Thanks for the explanation, I figured there was something that I didn't get.

I'm still a little lost on this detail:

every node added after ShaderToRGB must be inside its own Node Group.

Does that mean that they won't function outside of that group as regular Shader Nodes?
Or am I misunderstanding again?
If they only work inside of their own node group and they only output to a special NPR material output, then isn't it essentially introducing a new node system without introducing a new node editor?

so certain features like Texture sockets may be available from Object shader nodes too.

OK, I'm a little lost again: we already have most (if not all) of the features of Texture Nodes in the Shader Editor. What features would this enable?

@thorn-neverwake

There's a massive difference between being able to shade / light NPR in the scene, compared to post / compositing.

Absolutely true, but isn't this why the Realtime Compositor was introduced? And isn't this where EEVEE's Bloom setting has been moved?

I'm thinking of software like Pencil+: a line drawing compositor effect that allows viewport preview.

From Daisuke Onitsuka (CG Director of Evangelion remakes):

Since it became possible to check the line drawing in the Viewport... we were able to greatly reduce the amount of line settings mistakes. Being able to check line thickness before rendering is extremely useful.

I don't mean to be obtuse, I'm just having difficulty picturing what the actual workflow would be like.

Given that the basis of the proposal is Goo Engine's wishlist, have you been in touch with their developers?
Or a studio like Lightning Boy that also put a lot of time and effort into a stylized rendering system?
Or Cogumelo Softworks, developer of Toonkit for Cycles?
(Apologies if these people are already in this thread and I just don't know their names!)

I'm also wondering where Clement and Omar are on this proposal: it would be really nice to know what features have been accommodated for already in the EEVEE Next redesign (if any), and what features could be added to the realtime compositor.
And, with Bloom moving exclusively to the Compositor for 4.2 and up, it seems like realtime compositor use will be a necessity to preview accurate scene lighting anyway.

Thanks again for the discussion and responses, and apologies for my misunderstandings!

Author
Member

@thorn-neverwake @SpectralVectors I've had to remove your last 2 messages. 🙁
See https://devtalk.blender.org/t/copyright-guidelines-for-devtalk/17331


@thorn-neverwake @SpectralVectors I've had to remove your last 2 messages. 🙁
See https://devtalk.blender.org/t/copyright-guidelines-for-devtalk/17331

How did my screenshots of Blender add-ons (which by Blender's own licensing, must be GPL) break the guidelines for posting?
