NPR Design #120403
Design Goals
NPR Challenges
For these reasons and because of the usual lack of built-in support, NPR is often done as a post-render compositing process, but this has its own issues:
However, these are mostly workflow and integration issues rather than fundamental issues at the core rendering level.
Proposal
This proposal focuses on solving the core technical and integration aspects, taking EEVEE integration as the initial target.
The details about how to expose and implement specific shading features are orthogonal to this proposal.
For an overview of the features intended to be supported see this board by Dillon Goo Studios.
The main goal here is to provide a technically sound space inside Blender where these features can be developed.
Big Picture
The NPR engine would be a new type of engine halfway between a regular render engine and a compositing engine.
Instead of operating on final images once the main renderer has finished its job, it runs as the final stage of each render sample, acting as a second deferred shading step before assembling the final image sample and handling sample accumulation.
This leaves the space required to solve the issues mentioned above without having to build a whole new "full" render engine, while imposing as few hard requirements as possible on the "core" engine.
On the user side, this introduces a new NPR node system, separate from the regular Material nodes, where custom shading and filtering can be applied.
Each NPR node tree generates a Closure-like node that can be used from Material nodes.
This approach provides several advantages:
Implementation
Each NPR material generates a list of its required "G-Buffer" input data.
The "core" engine is responsible for generating this G-Buffer and passing it to the NPR engine at the end of each sample. This works similarly to the already existing compositing passes.
In addition, the NPR engine is also capable of running its own scene synchronization to generate its own engine-specific data when needed.
Then the NPR engine runs separate screen-space passes for each NPR material type (only on the pixels covered by that material) and assembles the final image.
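To make that data flow concrete, here is a minimal per-sample sketch. This is Python-style pseudocode under stated assumptions, not actual Blender/EEVEE API: core_engine, npr_engine, required_gbuffer_inputs and the other names are hypothetical.

```python
# Illustrative per-sample sketch of the proposed data flow.
# core_engine, npr_engine and all method names are hypothetical.

def render_npr_sample(core_engine, npr_engine, npr_materials, sample_index):
    # Each NPR material declares which "G-Buffer" inputs it needs,
    # similar to requesting compositing passes.
    requested = set()
    for material in npr_materials:
        requested |= set(material.required_gbuffer_inputs())

    # The core engine renders this sample: its own shading result
    # plus the requested per-sample G-Buffer data.
    gbuffer, shaded = core_engine.render_sample(sample_index, requested)

    # The NPR engine may run its own scene sync for engine-specific data.
    npr_engine.sync_scene_data()

    # One screen-space pass per NPR material type, restricted to the
    # pixels covered by that material (e.g. via a material-ID mask).
    for material in npr_materials:
        mask = gbuffer.material_mask(material)
        shaded = npr_engine.screen_space_pass(material, gbuffer, mask, shaded)

    # The NPR engine assembles the final image sample and accumulates it.
    npr_engine.accumulate(shaded, sample_index)
```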
G-Buffer packing
Something to keep in mind is that since we are working with per-sample data, render targets can be much more compressed than in regular compositing.
For example, storing one shadow mask for each overlapping light might seem too expensive, but we only need ceil(log2(shadow_rays + 1)) bits for each shadow mask. In the case of EEVEE that's 1 bit by default, and 3 bits at worst.
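As a quick check of that formula (plain Python; shadow_mask_bits is just an illustrative helper, not Blender code):

```python
import math

def shadow_mask_bits(shadow_rays: int) -> int:
    # Bits needed to store one per-sample shadow mask for a light,
    # given the number of shadow rays per sample: ceil(log2(rays + 1)).
    return math.ceil(math.log2(shadow_rays + 1))

print(shadow_mask_bits(1))  # 1 bit  (EEVEE's default of 1 shadow ray)
print(shadow_mask_bits(7))  # 3 bits (the quoted worst case corresponds to up to 7 rays)
```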
Tier System
Stochastically rendered surfaces (jittered transparency, rough reflections/refractions, ray-traced DoF and motion blur) prevent the NPR system from applying filters to them.
For this reason and as a fallback when a true BSDF is required, we need a tier system.
There would be 3 tiers:
For opaque and layered transparency surfaces, and for 0 roughness reflections and refractions.
For stochastically rendered surfaces.
For features where a true energy-conserving BSDF is needed (eg. diffuse GI, rough reflection/refractions).
This is handled automatically, so users only need to be aware that in some contexts filters will be skipped, and that the base material output can still be used.
EEVEE-side features
Other than outputting the required render targets, there are some features that, even if they're only used on the NPR side, would have to be implemented or would be more practical to implement at the core engine side:
There are also some required features that could be implemented on the NPR engine side, but are already planned for EEVEE anyway (light linking and light nodes).
Layered Transparency
Supporting the "Full" level features on transparent surfaces would require a layered OIT solution at the core engine level, or implementing it at the NPR engine level and passing the depth buffer of each layer to the core engine.
Reflections & Refractions
To support correct interaction between PBR and NPR materials, NPR shading should also be applied to NPR surfaces reflected and refracted onto other materials (including PBR ones). This implies that the NPR engine should handle the final composition of PBR materials as well.
A "generic" solution could be requesting render passes for secondary rays too (so each ray depth level has its own render target) and applying NPR shading in reverse order.
However, since EEVEE only supports screen-space tracing for now (and probes should already have the NPR shading applied at bake time), a reflection and refraction UV may be enough. And since horizon scanning is meant to be used for diffuse and rough reflections, it should be fine to use them as-is without any NPR post-processing.
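To illustrate the "generic" option mentioned above, here is a rough sketch under assumed names (per_depth_gbuffers and apply_npr_shading are hypothetical): with one render target per ray-depth level, NPR shading is applied from the deepest bounce back to the primary hit, so each surface reflects or refracts the already-stylized result of the surfaces behind it.

```python
def shade_with_secondary_rays(per_depth_gbuffers, apply_npr_shading):
    """per_depth_gbuffers[0] is the primary hit; higher indices are deeper
    reflection/refraction bounces. Shading deepest-first means reflected
    NPR surfaces already carry their stylized shading."""
    shaded = None
    for gbuffer in reversed(per_depth_gbuffers):
        # 'shaded' holds the stylized result of the next-deeper level,
        # used here as the incoming reflected/refracted radiance.
        shaded = apply_npr_shading(gbuffer, reflected=shaded)
    return shaded
```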
Custom sample accumulation and resolve
A possible way to solve the "operating on converged, noiseless data" issue is accumulating samples in a "geometrically-aliased" way, but storing per-sample weights (MSAA style) so anti-aliasing can still be applied as a last step.
This allows the user to operate on converged shading data without having to worry about aliasing issues.
However, applying this method to every NPR material input may be too costly, and there are legitimate reasons for wanting to operate directly on per-sample data anyway.
So a possible solution could be to expose this functionality as a node, and use different socket types for per-sample vs converged data (for example using the same socket color but different shape).
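A rough sketch of the MSAA-style idea (hypothetical names and shapes, not EEVEE code): shading is accumulated per geometric sub-sample so the converged data stays geometrically aliased, and the stored coverage weights are only applied at resolve time for anti-aliasing.

```python
import numpy as np

class AliasedAccumulator:
    """Accumulate shading per geometric sub-sample (keeping edges aliased),
    store coverage weights, and anti-alias only at resolve time.
    Purely illustrative."""

    def __init__(self, width, height, subsamples):
        self.color = np.zeros((subsamples, height, width, 3))
        self.weight = np.zeros((subsamples, height, width, 1))
        self.count = 0

    def add_sample(self, subsample_colors, subsample_coverage):
        # subsample_colors: shading for this render sample, per sub-sample.
        # subsample_coverage: how much of the pixel each sub-sample covers.
        self.color += subsample_colors
        self.weight += subsample_coverage
        self.count += 1

    def converged_subsamples(self):
        # What NPR filters (e.g. ramps/thresholds) would operate on:
        # converged shading that is still geometrically aliased.
        return self.color / max(self.count, 1)

    def resolve(self):
        # Final anti-aliased image: coverage-weighted average over sub-samples.
        w = self.weight / max(self.count, 1)
        return (self.converged_subsamples() * w).sum(axis=0) / np.clip(w.sum(axis=0), 1e-8, None)
```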
Open questions
Name for the project?
Do we implement the NPR engine as a separate draw engine?
As a library that EEVEE can use?
By inheriting EEVEE classes with virtual functions where needed?
Is a future Cycles integration a possibility we should consider at all?
Can/Should we re-use parts of the material and compositing nodes implementation?
How does Grease Pencil fit here?
Is the planned EEVEE-GP integration enough, so the NPR engine can just treat GP as any other surface?
Roadmap
The initial version of NPR nodes would work as an improvement of the ShaderToRGB workflow, with access to separate shading features and the ability to perform custom shading, but without filter nodes.
We can implement "filters" at the shader level at this stage though, as long as they operate on "global" inputs (like depth, or normal) and the whole node tree can be compiled to a single shader pass.
From there we can iterate on adding all the required shading features, and finally implement filtering nodes and the custom sample accumulation/resolve solution.
@pragma37 Hey :) I really hope that Grease Pencil can become a part of this proposal!
Many of the current NPR issues can be said about Grease Pencil as well. On top of this, shading is very limited (Grease Pencil has its own materials, not node based, few options).
Since GPv3 will turn Grease Pencil essentially into CurvesGeometry, to me the question becomes "how do we render curves in NPR?". The same thing could be asked about hair, for example. Ideally, we don't enforce a meshing step and have a native way of rendering curves somehow.
@filedescriptor The idea is that the NPR renderer doesn't render the meshes itself, it just receives a G-Buffer with the geometry already rendered.
So the NPR renderer doesn't care about meshes vs sculpt meshes vs curves, they're just pre-rendered surfaces.
So if the EEVEE-Grease Pencil integration means GP objects can have a regular EEVEE Material and act like any other surface, then GP doesn't need to be a special case on the NPR side.
But I'm not sure what's exactly the plan with EEVEE-GP.
I really love this part. This is one step closer to ideal flexibility in which artists could decide what exactly something needs to look like!
Sampling aliasing is indeed something we have to worry about. Also, not all properties need to be sampled to give a smooth result. A hybrid approach might work best, and yes, giving choices is good.
Overall I'm just not very sure how this in-between stage could be implemented in Blender. You could have a direct G-Buffer pass-through, or "do some filters over the G-Buffer and pass it along", and you might want the filter result of some passes to affect other passes (like blurring the normal but masking it with the object ID). This might take some time I think, but I'm really looking forward to it!
I think curves/lines are a very different topic. The renderer here mostly deals with how we turn individual pixels into what we want (although through some filters we can get differentials, which are lines), but lines IMO take a different approach, and sampling might not conceptually align with how many types of lines are perceived naturally (a lot of the time it's about visually smooth edges, which are very hard to threshold through pixel-based methods to pinpoint where the line should be). But I guess we can stay in the loop and see if we can come up with some ideas on how to do things like this.
@ChengduLittleA I'm not sure I follow your reasoning.
The engine always receives a per-render-sample G-Buffer, not a converged one.
It's actually the task of the NPR engine to generate the converged image.
It works the same as any deferred renderer (including EEVEE-Next), except the deferred pass shader is generated from nodes.
So, if the current order is (simplified):
EEVEE GBuffer -> EEVEE Shading -> EEVEE Sample accumulation -> Compositor
The order with NPR would be:
EEVEE GBuffer -> EEVEE Shading -> NPR Shading -> NPR Sample accumulation -> Compositor
Aliasing would only be an issue in the sense that you can't generate a correct ramp/threshold from a single sample due to shading aliasing (like in this image); that's why the custom sample accumulation and resolve is proposed.
LOVE this initiative! LOVE IT.
As an NPR user myself, I vote to build this directly into EEVEE for smoother UX and adoption. Many NPR artists already use EEVEE, so extending it and making it more flexible (optionally as a user opt-in if necessary) would be great. Similar nodes, similar workflows, extra opt-in flexibility and creativity.
This allows this UX:
@pragma37 Thanks, I see.
@filedescriptor I've just checked with Clément about the EEVEE-GP plans.
There's stuff to figure out, but in any case, I think the right way to go is to solve it at the EEVEE level first.
If we try to do it at the NPR level we're going to have trouble integrating GP and EEVEE objects together, which is the same reason why I wanted to avoid having separate NPR and EEVEE objects.
So I would solve the EEVEE side first, then we can see if there are GP-specific features that should be exposed to the NPR-side.
Still, I think it would make sense to include the EEVEE-GP integration as part of the NPR project planning.
@AndresStephens This is meant to work alongside EEVEE (and maybe other engines in the future).
And NPR nodes are meant to work in tandem with Material Nodes.
One of the reasons to implement NPR nodes as a separate node system is precisely to avoid having Material nodes working only under certain circumstances.
Also, we have not discussed this yet, but I would expect the NPR nodes (especially before implementing filters) to be pretty much the current Material nodes with some nodes removed and others added.
This is quite weird, to be honest. I get why it's desirable to have nodes work between Cycles and EEVEE, but what are the practical examples of NPR materials needing to work in other conditions? Who would decide mid-production that "eh, I don't want NPR anymore, let's go photoreal", AND expect the material to work without any adjustments?
A new node system adds so much more complexity at the user level; you have to learn, think about, and work in an entirely new concept of an "in-between" editor. It's the worst design from a user POV, IMHO, and the number of cases it solves is pretty much zero or very low. Having the nodes in the EEVEE material editor would make this much, much easier, and the only situations where it would break are when the user is doing ridiculous tasks. The majority of the user base shouldn't have to endure complexity for those single-digit cases.
I don't imagine this is something that has to be black and white. Some parts of an image can be photorealistic/physically based, whereas other parts of the image can be explicitly NPR. Consider elements like magical effects and superpowers. Even in a largely photoreal setting, you have a lot of leeway here stylistically, what a "portal" looks like is almost certainly going to be determined on an artistic basis rather than a physical one.
It's probably important to consider that this is a task for furthering NPR design in Blender, but that doesn't mean that Miguel's suggestions are exclusively useful for NPR. There could be applications of this technology to improve physically based looks as well, assuming it isn't made a unique engine. I think changing the name "NPR Nodes" to "Render Nodes" in the future would be a good idea so that it's clearer to people that it isn't exclusive. Or another, better-fitting name instead; I'm just using the terminology that was in Malt.
This will almost certainly end up very similar to the case of Geometry Nodes. Has Geometry Nodes made Blender more complex? Sure. However, individual users do not have to learn every in and out of Geometry Nodes to reap the benefits of the system, and with their modifier-based design being expanded to include the concept of tools, they have become more and more accessible to those without the technical know-how. This design methodology would slot right in, I think.
To be completely honest, I think this will make NPR dramatically less complex to do in Blender; there is so much finagling you have to do to get around existing design limitations at the moment, and quite a bit of prerequisite knowledge is necessary to get the look you want at times.
@pragma37 Thanks for the explanation :D
What I was thinking is: could we send the results from NPR Sample accumulation back to e.g. the material, to affect shading that way? (Well, probably not, because I guess the structure would be way more complex, but that's what I intended to describe 😅)
@nickberckley
Nope, the reasons are way more involved than that. This is from a previous design mailing thread.
The context was discussing how to support the features from the Miro board:
(...)
Textures/Filters (Screen-Space normal effects, Custom Refraction, Screen-Space shadow filters)
Do we add texture sockets and filters to material nodes? Do we only support hard-coded built-in effects?
Material nodes are not well prepared to build custom filters (and probably shouldn’t? surface fragment shaders are a bad place to do so from a performance standpoint).
Pre-pass dependency (Cavity, Curvature, Rim, Screen-Space normal effects)
These effects assume there’s a pre-pass, but that’s not always the case.
For example:
Dependency issues (almost all features)
Right now material nodes follow (for the most part) a surface properties -> shading -> output order.
However, a material nodes approach where shading features are available as color inputs would mean that hierarchy is completely lost.
For example:
This means that things that can be easily expressed with nodes will result in “undefined behavior” that doesn’t translate to the actual output, and could easily change based on implementation details.
Energy conservation (GI)
NPR shading is usually non-energy-conserving, which at best will convert some objects into lamps, and at worst will cause the whole lighting to blow out.
For these reasons, I’m leaning towards a solution with an explicit distinction between material nodes and NPR nodes. A separate node system (halfway between materials and compositing nodes) with their inputs exposed to material nodes.
(...)
These would provide several advantages:
Cons:
This thread is for discussing the technical design, I deliberately left as many UX decisions as possible out of this proposal because figuring out the technical integration/implementation and its implications is already complex enough.
We will ask for user feedback once we get into the UX side, so please, let's keep the bikeshedding aside for now.
@ChengduLittleA
It's not just a complexity issue. If you do something like that the image won't ever converge.
That's why the split between aliased/converged sockets would be needed; you can't just mix aliased and converged data.
How much of the shader pipeline would be considered to be implemented?
If the current goal is toon shading as seen in some recent games (Genshin, etc.), using the shading features in PBR renderers such as EEVEE is enough. This should be the main goal, since I don't see many other practically used styles of NPR on the market.
One pass for each material instance is against the idea of deferred shading. However, tagging pixels with material/object ID can improve performance.
A proper NPR render engine will eventually handle more than GBuffers, such as rigging, extracting features such as curves from the mesh, and user interactions like sketching.
Otherwise, I don't think such an engine has any advantage over Unity, which is very convenient for iterating on different shading styles and already has many existing solutions & projects.
@WangZiWei-Jiang
The goal is to support arbitrary stylized shading in a way that works consistently with all render features, not just for primary rays.
Rigging and tool-based features are not the job of render engines in Blender, such features should follow the already established Blender workflows.
Geometry-based features would still be possible, but they should work consistently with the rest of the rendering features.
I don't agree with your Unity statement, but that's completely off-topic.
@HannahFantasia Sorry, I don't understand the question.
@pragma37 Would it be possible with this approach to do something like a Gouraud shader?
@HannahFantasia Ah, now I see what you meant.
No, the geometry is not supposed to be rendered by the NPR engine, so it can't support custom vertex shaders.
@pragma37 Could it be possible to have something similar to the simulation zone, like an NPR zone?
@Traslev Maybe? We could allow reading the render targets and AOVs from the previous frame.
I can see a few use cases and it shouldn't be that hard to support.
NPR artists will probably want the most granular possible control over lighting and shading, which would require Light and Shadow options arguably being exposed at a BSDF level rather than at the object level as it is now in Cycles (at least from what I've seen while trying it).
Since creating multiple Light Groups and Light Linking/Shadow Linking collections would clutter up the linking boxes quite quickly I think it would be better to allow defining lights at a Shader Node level (as seen in these mockups), then using the object-level linking when the inputs are left empty.
Using a cryptomatte-like node where we can pick lights to our choosing (and selecting if those lights will either be the only ones to affect the BSDF or the only ones not to affect it) I think we can get a massive amount of flexibility for NPR purposes.
We can plug in these selected lights into a hypothetical BSDF node's Shading panel light inputs.
If it's not viable to implement this at Material time then the NPR system should at least be able to store which shadows are cast by which lights and objects so that we can, at NPR time, only keep the shadows and lights that we want.
This is quite a bit like how GooEngine does light groups. Lights can be assigned either at the object level, or offer more split options (diffuse, shadow) within the shader itself.
@MVPuccino I don't think we will be able to have that granularity with the already existing BSDF nodes.
But I agree that for NPR nodes having per-node control of light groups is a must-have.
Per-node toggle of self-shadows can be desirable, but it may be hard to implement in a performant way.
Per-node toggle of "cast-shadows" (assuming that means whether the surface casts shadows or not) doesn't seem feasible, though.
That should be handled through regular material nodes.
Ambient Occlusion and World (I assume that's the World shader indirect lighting) should be available as separate plain color inputs, so you should be able to compose them however you like.
I was doubtful too, but I think it might still be possible to expose these shading options in the NPR process.
You said in the original post:
Which makes me think that it might be possible to efficiently separate shadows generated by different lights into masks at NPR time (unless I'm not noticing some limitation; perhaps ID-ing different kinds of shadows per light is too expensive, or having them separable unreasonably increases the size of the G-Buffer?), perhaps with a built-in shadow separation node like this:
These are outputted as black and white masks, hence the Float output rather than an Image output.
If this is the case then could this also open up the possibility of separating other parts by lights? For example, specular highlights. (And also potentially having a different generalized separation node for other parts of the GBuffer as either color info or as a mask)
That's exactly what I meant with the next phrase:
This should be totally doable, not just "getting" them, but also computing them in custom ways.
I assumed that by self-shadows you meant cast self-shadows.
Getting a mask for cast self-shadows only (without shadows cast by other objects) would be too costly performance- and memory-wise, since it would require a separate shadow map for each object that uses this option.
You could achieve a similar result by using light groups, though.
I certainly want to support, at least, a per-object toggle for cast self-shadows, but I'm not sure about the implementation details yet.
The idea is to be able to read (and write) AOVs from NPR nodes.
I have to apologize for the confusion.
The previously proposed "In BSDF shading controls" were prototyped rather quickly.
In there the logic was:
I won't bother with AO and World lighting.
I later realized this sequence was quite limiting and, frankly, dumb. (Especially the mesh controlling the shadow it casts, which would just add a layer of complexity over the Received Shadows pass. As well as not being able to separate Normal Self-Shading from Cast Self-Shadows)
So in the "Separate Shadows" node I decided to limit it as previously described
From the way you wrote it, it sounds like extracting just the Object Cast Shadows on their own without the Cast Self-Shadows is viable, just not the other way around.
Perhaps the Separate Shadows node can be changed to have this sequence of outputs:
It's fine to omit Cast Self-Shadows. Even then, being able to extract just the Normal-based Self-Shading and the Object Cast Shadows would be plenty for a good number of styles. Additionally, I don't see how we couldn't make up for the missing Cast Self-Shadows by faking them (and other shadows) using Mesh Space Shadows and Custom Depth Offsets.
Sounds great, being able to use AOVs in the Viewport and not just in the Compositor is definitely a massive plus.
Hi @pragma37, I'm very excited to read your proposal. I have a few non-technical thoughts that you might've already come across, but I'm posting them here just in case they can be useful.
One thing that immediately came to mind was to have the discrete ability to select specific meshes and project shadows depending on camera/viewer position; a lot of stuff like hair can be handled that way without depending on ambient occlusion.
Another important detail is being able to select which objects not only get ambient occlusion and contact shadows, but also to exclude certain objects from them. This way I imagine having the hair always following this shadow coming from a camera projection, and then being able to combine it with the shadows from objects in front of or close to the face. I have a few workarounds to do this stuff, and all of them are independent of the lighting conditions in the scene, like most 2D animation.
Having this ability combined with custom light groups and object exclusion would solve a lot of these issues that have to be fixed by hand at this moment.
Another thing I always wanted was to have the option to generate shadows before certain deformations like lattices for example, and then project them into the deformed meshes. If a user were to flatten out certain meshes to avoid extreme changes of perspective because of the camera lenses, there has to be a way for the shadow data to come from what the user would define as a neutral mesh/shape. I imagine it would be something like blend shapes but for shadows specifically, however I don't know how viable something like that would be.
Another thing I was thinking of is having the capacity to hand-draw and customize normal maps and normal bumps in the viewport like you would with a regular 2D drawing. There are some solutions to do this kind of customizing with geometry nodes along with smoothing to avoid the issues from using data transfer and things like that, but it's not a very artistic process.
Sometimes while working on a character you have an amazing silhouette that came from sculpting, but then you spend a lot of time fighting the remeshing process to get closer to the conventions and methods used to basically "bake" the shadows into the geometry by default. But whenever you want to make any changes you ruin hours of work, and if you want cleaner shadows you sometimes end up just doing a data transfer from a higher-poly mesh or using these nodes, and in a lot of cases it's just faster to hand-draw the shadows frame by frame than to deal with all of this. The nature of 2D animation is such that none of these solutions are flexible enough for the range of expression needed, and it doesn't matter what kind of pipeline or method you take, the compromise is huge and a lot of what makes 2D animation beautiful gets lost for the functionality of 3D.
If we were able to guide shadows by hand-drawing on top of the mesh, we could effectively and creatively separate the mesh from shading for characters, in the same way we can separate and optimize physics and simulations by using proxies. Custom-shaped highlights are a staple of 2D animation, and we are stuck using slow-to-implement UV parallaxes and nodes that react differently and unpredictably on different meshes. Shaping shadow/highlight contours needs artistic control and user input. I understand this could be very difficult, but this is the first tell for me for NPR; a lot of the other things can be solved one way or another.
As for the rest, I agree completely. Having the ability to use PBR is a necessity for backgrounds and VFX in a lot of productions. We need as many options as possible to fit as many workflows and non-typical pipelines as possible. For example, having a character with NPR shading in front of a mirror showing the PBR background, while controlling the amount of reflection on the floor and which objects or colors appear in those reflections.
And before I forget, another thing I always handle in video compositing is adding a color ramp between contact surfaces, objects and characters, and all those artificial gradients needed to improve the composition. A lot of that can be fixed in post, but sometimes I add planes with color ramps in the scene, which is very bothersome, needs manual exclusion from line detection, and has a lot of other issues. And managing that in the compositor is clunky. If there were just a way to have a "layer" hierarchy, a lot of that would go away. Positioning things like flares, glows, color correction, gradients, and countless other elements would solve most of it. However, I know this has a lot more to do with other things than the renderer, so I would just be happy with being able to pick objects or collections and control gradients with little to no work. There are a lot of things that are more convenient and faster to handle in video, but I think that at least an easy way to have gradient control on collections is part of the artistic work that should be in the viewport to a certain degree.
EDITED TO AVOID BREAKING COPYRIGHT RULES
Ah, last thing: curvature. Parameters also need to be exposed in a way that allows mapping the curvature thickness/alpha.
In any case, if this were to move forward and we at least got crisp per-object shadows, that would already be an improvement over what we have.
@blenderbroli See: https://devtalk.blender.org/t/copyright-guidelines-for-devtalk/17331
Sorry, I'm new around here, @mod_moder. Just edited my post to avoid any issues.
EDIT 2:
https://devtalk.blender.org/t/projects-weekly-updates/35232/1
Amazing news!
If you don't need to receive other shadows "through" self-cast shadows (i.e. detecting the self -> other -> self case), you can actually do it much cheaper than this. In fact, I would argue that the above would be an anti-feature in many cases. For example, you don't want your characters' eyeballs casting shadows on the back of their head.
Instead of rendering a separate shadow map for each object that uses this option, you instead add an ID channel to the shadow map. This channel basically stores a random number per ID (whether this is per Material, per Object, or configurable is an open design question).
If the occluding surface has the same ID as the receiving surface, the shadow goes into the Self Cast Shadows mask.
If it doesn't, then it goes into the Cast Shadows mask.
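A minimal sketch of that ID-channel idea, written in Python for clarity (in practice this would live in the shadow sampling shader; the parameter names and the bias value are assumptions):

```python
def classify_shadow(shadow_map_depth, shadow_map_id,
                    receiver_depth, receiver_id, bias=1e-4):
    """Split a single shadow-map test into 'self cast' vs 'other cast'
    using the per-ID value stored alongside the shadow depth."""
    # Standard shadow test: something closer to the light occludes this pixel.
    occluded = shadow_map_depth + bias < receiver_depth
    # Same stored ID as the receiver -> Self Cast Shadows mask,
    # different ID -> Cast Shadows mask.
    self_cast = occluded and shadow_map_id == receiver_id
    other_cast = occluded and shadow_map_id != receiver_id
    return self_cast, other_cast
```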
Can the Cast Shadows mask be modified, for example by removing some shadows that don't look good and adding some shadows for emphasis?
From the linked board, I would especially love these to become a reality.
Light nodes, or at least just textures, are to me one of the biggest missing EEVEE features.
Having decals in EEVEE and Cycles would be a dream come true. I'm thinking of game-engine-style decals, which could support full material nodes so you can give them individual roughness, metallic, etc. maps, as well as crazy parallax effects.
That's the method I use in Malt:
f46cc9484f/Malt/Pipelines/NPR_Pipeline/Shaders/NPR_Pipeline/NPR_Lighting.glsl (L126)
However, this only tells if the surface casting the shadow comes from a different object or from itself.
It doesn't allow getting all the shadows that an object would cast onto itself, since other objects can occlude the object's surface in the shadow map.
The NPR Nodes example seems to be a setup that you could reproduce in the Compositor, and @Thalia-Solari mentioned that it could have broader usage, offering the name Render Nodes, which sounds a lot like Compositor nodes to me (though maybe I can't see an important distinction here).
Is it possible to implement the NPR Nodes via the Compositor, or to expose the necessary passes so that users can access that data in the standard Compositor?
One of the nice things I find about using Goo Engine is that it's just Blender, but I can turn off shadows per object, and use the Curvature node, it's not an entirely new workflow that I need to learn.
If it's possible to split the NPR Nodes features between Shader and Compositor nodes I think that would streamline the workflow, rather than adding a third node graph to adjust the look, though I appreciate that I may not know a technical reason why this is not possible.
Thanks for this proposal and discussion!
@SpectralVectors You're not the first to express the concern about adding a new node system, but NPR nodes are going to be mostly the same as the current ones. It's not going to be a full new node system (that screenshot is from Malt, that's why the nodes look different from what you would expect).
I don't have any intention of adding a new node system just for the sake of it.
In fact, the plan is to share the implementation with the rest of the shader nodes, so certain features like Texture sockets may be available from Object shader nodes too.
I think the easiest way to explain the workflow is to imagine that every node added after ShaderToRGB must be inside its own Node Group. Inside that node group, there are going to be some new nodes available and others removed, but the differences are on a similar level to Object shader nodes vs World/Light shader nodes.
For filter nodes, I would go for a 1:1 match to the already existing compositor nodes, even if it may not be possible to share the internal implementation.
The Render Nodes that @Thalia-Solari mentioned are a concept from Malt, they're not similar to anything that exists in Blender already.
They would be the equivalent of splitting the EEVEE internal implementation into multiple nodes, so users can build their own "render engine" from pre-existing parts, and replace/add extra parts with plugins.
It's a very powerful system, but it also has performance implications and requires some understanding of how a render engine works internally to use it.
This proposal takes a very different approach, much closer to the regular Blender workflow.
I don't see any NPR features that the compositing system could not do, if it had a "curvature" input as a channel we can turn on/off.
https://youtu.be/6UqPdQbbbkM?feature=shared
This uses the multipass branch to sample depth and AOVs.
@JacobMerrill-1
There's a massive difference between being able to shade/light NPR in the scene and doing it in post/compositing.
@pragma37 Thanks for the explanation, I figured there was something that I didn't get.
I'm still a little lost on this detail:
Does that mean that they won't function outside of that group as regular Shader Nodes?
Or am I misunderstanding again?
If they only work inside of their own node group and they only output to a special NPR material output, then isn't it essentially introducing a new node system without introducing a new node editor?
OK, I'm a little lost again - we already have most (if not all) of the features of Texture Nodes in the Shader Editor. What features would this enable?
@thorn-neverwake
Absolutely true, but isn't this why the Realtime Compositor was introduced? And isn't this where EEVEE's Bloom setting has been moved?
I'm thinking of software like Pencil+: a line drawing compositor effect that allows viewport preview.
From Daisuke Onitsuka (CG Director of Evangelion remakes):
I don't mean to be obtuse, I'm just having difficulty picturing what the actual workflow would be like.
Given that the basis of the proposal is Goo Engine's wishlist, have you been in touch with their developers?
Or a studio like Lightning Boy that also put a lot of time and effort into a stylized rendering system?
Or Cogumelo Softworks, developer of Toonkit for Cycles?
(Apologies if these people are already in this thread and I just don't know their names!)
I'm also wondering where Clement and Omar are on this proposal: it would be really nice to know what features have been accommodated for already in the EEVEE Next redesign (if any), and what features could be added to the realtime compositor.
And, with Bloom moving exclusively to the Compositor for 4.2 and up, it seems like realtime compositor use will be a necessity to preview accurate scene lighting anyway.
Thanks again for the discussion and responses, and apologies for my misunderstandings!
@thorn-neverwake @SpectralVectors I've had to remove your last 2 messages. 🙁
See https://devtalk.blender.org/t/copyright-guidelines-for-devtalk/17331
How did my screenshots of Blender add-ons (which, by Blender's own licensing, must be GPL) break the guidelines for posting?