Cycles: Tweak Principled BSDF Subsurface parameters #110989

Merged
Lukas Stockner merged 9 commits from LukasStockner/blender:subsurface-principled into main 2023-09-13 02:45:40 +02:00
Member

Depends on #110864, only the last commit is specific to this PR.

Previously, the Principled BSDF used the Subsurface input to scale the radius. When it was zero, it used a diffuse closure, otherwise a subsurface closure.

This sort of scaling input makes sense, but it should be specified in distance units, rather than a 0..1 factor, so this commit changes the unit.

Additionally, it adds support for mixing diffuse and subsurface components. This is part of e.g. the OpenPBR spec, and the logic behind it is to support modeling e.g. dirt or paint on top of skin.

For typical materials, this will be either zero or one (like metallic or transmission), but supporting fractional inputs makes sense for e.g. smooth transitions at boundaries.
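As a rough sketch of the parametrization described above (all identifiers here are hypothetical, not the actual Cycles code): the Subsurface input becomes a linear mix weight between the diffuse and subsurface closures, and the new scale input is a length in meters that multiplies the per-channel radius.

```cpp
#include <cassert>

// Illustrative sketch only - these are NOT the real Cycles identifiers.
struct Radius {
  float r, g, b; // per-channel scattering radius, in meters
};

struct SubsurfaceMix {
  float diffuse_weight;    // weight of the plain diffuse closure
  float subsurface_weight; // weight of the subsurface closure
  Radius radius;           // effective radius after scaling
};

// "subsurface" is the 0..1 mix factor (typically exactly 0 or 1, but
// fractional values are allowed for smooth transitions); "scale_m" is the
// Subsurface Scale input, a length in meters instead of a unitless factor.
SubsurfaceMix principled_subsurface(float subsurface, float scale_m,
                                    const Radius &radius) {
  SubsurfaceMix out;
  out.subsurface_weight = subsurface;
  out.diffuse_weight = 1.0f - subsurface;
  out.radius = {scale_m * radius.r, scale_m * radius.g, scale_m * radius.b};
  return out;
}
```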

Also, the Subsurface IOR input is removed - conceptually, the subsurface component is below the interface which generates the dielectric specular reflection, so it should share the same IOR.

There's still one open issue: When opening existing files, the subtype of the Subsurface Scale socket isn't applied for some reason. When opening a new file, it shows up as meters as expected.

Lukas Stockner requested review from Brecht Van Lommel 2023-08-10 08:25:16 +02:00
Lukas Stockner added the
Interest
Render & Cycles
label 2023-08-10 08:25:27 +02:00
Lukas Stockner added this to the Render & Cycles project 2023-08-10 08:25:34 +02:00

I'm not convinced yet that we need a dedicated Subsurface Color, and it's not clear it will be in the final version of OpenPBR. Maybe we could leave it out, and then if we are sure we need it add it afterwards? That would avoid breaking compatibility. What do you think?

Everything else sounds good.


@Christophe-Hery, you may have feedback on this.

Author
Member

> @Christophe-Hery, you may have feedback on this.

We've briefly talked at SIGGRAPH. This PR is only about parametrization, I've got a second branch (no PR yet) for updating how the entry bounce is handled.

His current approach is to randomly (50/50) pick between diffuse lambertian transmission and GGX refraction, using a fixed roughness of 1, since this produces the best match for skin.
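That entry-bounce sampling can be sketched roughly as follows, in a local shading frame with the normal along +z. This is illustrative code, not the actual branch: the lobe pick is a plain 50/50 coin flip, the diffuse lobe is cosine-weighted transmission, and the GGX lobe samples a microfacet normal with alpha = 1 via the standard inversion (the subsequent refraction through that normal is omitted for brevity).

```cpp
#include <cassert>
#include <cmath>

static const float kPi = 3.14159265358979323846f;

struct Vec3 { float x, y, z; };

static float length(const Vec3 &v) {
  return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Cosine-weighted direction in the lower hemisphere (transmission into the
// surface), local frame with the shading normal along +z.
Vec3 sample_cosine_transmission(float u1, float u2) {
  const float r = std::sqrt(u1);
  const float phi = 2.0f * kPi * u2;
  const float z = std::sqrt(std::fmax(0.0f, 1.0f - u1));
  return {r * std::cos(phi), r * std::sin(phi), -z};
}

// GGX microfacet normal for alpha = 1: tan^2(theta) = u1 / (1 - u1).
Vec3 sample_ggx_normal_alpha1(float u1, float u2) {
  const float tan2 = u1 / std::fmax(1.0f - u1, 1e-8f);
  const float cos_t = 1.0f / std::sqrt(1.0f + tan2);
  const float sin_t = std::sqrt(std::fmax(0.0f, 1.0f - cos_t * cos_t));
  const float phi = 2.0f * kPi * u2;
  return {sin_t * std::cos(phi), sin_t * std::sin(phi), cos_t};
}

enum EntryLobe { ENTRY_DIFFUSE, ENTRY_GGX };

// The fixed 50/50 lobe selection described above.
EntryLobe pick_entry_lobe(float u_lobe) {
  return u_lobe < 0.5f ? ENTRY_DIFFUSE : ENTRY_GGX;
}
```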

I've got an implementation working, but I'd like to experiment some more with it. From a theoretical perspective, I'm not super happy with this mix and the hardcoded roughness, and would like to see if we can find a way to just use refraction using the main surface roughness while keeping similar appearance (maybe just use high "main" roughness for skin and use Coat for specular highlights). Then again, while theory is nice, what matters in the end is the look, so if hardcoding it looks right we can just do that.

I also opened a discussion on the OpenPBR repo for this a few days ago, maybe they have some input as well.

First-time contributor

> From a theoretical perspective, I'm not super happy with this mix and the hardcoded roughness, and would like to see if we can find a way to just use refraction using the main surface roughness while keeping similar appearance (maybe just use high "main" roughness for skin and use Coat for specular highlights). Then again, while theory is nice, what matters in the end is the look, so if hardcoding it looks right we can just do that.

As I told you at SIGGRAPH, in our internal version, we do not hard-code the mix, though in practice, except for some rare debugging sessions, we leave it at the default 0.5 on all our human assets. So I just wanted to spare you from having to support another parameter in the interface.

Now for the roughness. Remember, this boundary is not directly related to the specular component. We want a compound effect through multiple layers of dead cells (the stratum corneum) into the epidermis medium. Sampling a specular lobe with roughness 1 is a good approximation, and it will be more directional than a diffuse/cosine distribution. (In the end, we still, as per the above, employ some diffuse, because even at roughness 1 the walk would tend to penetrate too deep into the skin, producing behavior that does not match what is observed and captured on skin - it is not too scientific, I recognize, but our "compromise" solution ends up working quite well.)

First-time contributor

> I'm not convinced yet that we need a dedicated Subsurface Color, and it's not clear it will be in the final version of OpenPBR. Maybe we could leave it out, and then if we are sure we need it add it afterwards? That would avoid breaking compatibility. What do you think?

I would agree. The base color should be enough.
And if you think about it, lambertian diffuse is just an expression/limit of the BSSRDF, so the same input should apply to both.
(Btw, for numerical precision issues, we used to automatically shift to diffuse if the ray footprint is small, i.e. the character/object is far away.)
Also, in our internal version, the implicit diffuse lobe is forced to pure lambertian by putting `bssrdf->roughness = FLT_MAX`.

First-time contributor

> Also, the Subsurface IOR input is removed - conceptually, the subsurface component is below the interface which generates the dielectric specular reflection, so it should share the same IOR.

I recommend against that. This is the same issue as the hard-coded roughness I propose for the refraction entry. One may need explicit control over the specular behavior, and a separate one for subsurface. Again, remember that you are dealing, for human skin, with a complex multi-layered medium.

Author
Member

> Now for the roughness. Remember, this boundary is not directly in relationship with the specular component. We want a compound effect through multiple layers of dead cells (stratum corneum), into the epidermis medium. Sampling a specular lobe with roughness 1 is a good approximation, and it will be more directional than a diffuse/cosine distribution. (In the end, we still, as per above, employ some diffuse, for, even at roughness 1, the walk would tend to penetrate too deep into the skin, producing non matching observed and captured behavior on skin - it is not too scientific, I recognize, but our "compromise" solution ends up working quite well).

> I recommend against that. This is the same issue as the hard-coded roughness I propose for the refraction entry. One may need explicit control over the specular behavior, and a separate one for subsurface. Again, remember that you are dealing, for human skin, with a complex multi-layered medium.

Hmm... Yeah, I understand the logic, for a complex material like skin it makes sense to use special interface handling. But subsurface might also be used for other, simpler types of materials (e.g. marble).

One idea: We already have two different subsurface modes for the albedo parametrization. The adaptive-radius mode works well for fairly dense materials like skin, but can cause issues for thinner materials (see below), for which the fixed-radius mode can work better. Meanwhile, the "50% lambertian, 50% GGX with roughness 1" entry bounce also works well for e.g. skin, but causes white edges for thinner materials.
So, we could just make the bounce behavior dependent on the selected parametrization.
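As a sketch of that proposal (hypothetical names, not actual Cycles types), the entry-bounce strategy would simply be derived from the selected radius parametrization:

```cpp
#include <cassert>

// Hypothetical enums for the sketch - not the real Cycles identifiers.
enum SubsurfaceMode {
  SSS_ADAPTIVE_RADIUS, // albedo inversion; works well for dense media like skin
  SSS_FIXED_RADIUS,    // fixed radius; works better for thin materials
};

enum EntryBounce {
  ENTRY_5050_LAMBERT_GGX1,     // 50% lambertian / 50% GGX with alpha = 1
  ENTRY_GGX_SURFACE_ROUGHNESS, // 100% GGX using the surface roughness
};

// Tie the entry-bounce behavior to the parametrization, so the 50/50 entry
// (good for skin, but white edges on thin materials) is only used together
// with the adaptive-radius mode.
EntryBounce entry_bounce_for(SubsurfaceMode mode) {
  return mode == SSS_ADAPTIVE_RADIUS ? ENTRY_5050_LAMBERT_GGX1
                                     : ENTRY_GGX_SURFACE_ROUGHNESS;
}
```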

In any case, to illustrate the differences, here's [a BlendSwap model](https://blendswap.com/blend/17861) with a smooth and a rough surface. Not sure which one I prefer to be honest.

| Approach | Smooth | Rough | Extra |
| - | - | - | - |
| Blender 3.6 | ![blender3.6_smooth.png](/attachments/31456013-acbf-4b29-aa72-48c85bd1dd4b) Roughness 0.1 | ![blender3.6_rough.png](/attachments/4ef3d99c-3537-41c7-bcbe-8a7419c63655) Roughness 0.7 | |
| 50/50, fixed roughness | ![mix_5050_smooth.png](/attachments/db0d5c51-f87b-44d0-9223-8f11055c77b6) Roughness 0.1 | ![mix_5050_rough.png](/attachments/895e82ad-5d66-4685-9588-686947cbe831) Roughness 0.7 | ![mix_5050_adaptiveradius.png](/attachments/f6c9552d-d7f2-4466-90bc-0c3923697cf0) Using non-fixed radius,<br>note the light bleed color |
| Surface roughness | ![direct_smooth.png](/attachments/89477dd7-ece9-418e-a701-c6b815eaad8b) Roughness 0.1 | ![direct_rough.png](/attachments/c392da5a-4959-4e0d-9fdc-26c310f8f3bf) Roughness 0.7 | ![direct_veryrough_coated.png](/attachments/8f3597fa-cb85-4281-bb6b-a183bf228b5a) Using Roughness 1.0<br>plus smooth Coat |

In the end, I don't have a particularly strong opinion here. The 50/50 mix feels weird, but I can't argue with results 😄

>> I'm not convinced yet that we need a dedicated Subsurface Color, and it's not clear it will be in the final version of OpenPBR. Maybe we could leave it out, and then if we are sure we need it add it afterwards? That would avoid breaking compatibility. What do you think?

> I would agree. The base color should be enough.
> And if you think about it, lambertian diffuse is just an expression/limit of the bssrdf, so the same input should apply to both.

Makes sense, I'll remove it.

> (Btw, for numerical precision issues, we used to automatically shift to diffuse if the ray footprint is small, ie the character/object in distance).

Yeah, good point. I think we talked about this - we already have the differential info anyway, so that's easy enough to implement.
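What that fallback could look like, as an illustrative sketch only (the names are assumptions, not the PR's actual code; the 0.25-pixel threshold is the value mentioned later in this thread): compare the largest scattering radius against a pixel-footprint estimate from the ray differentials, and switch to the plain diffuse closure when the walk would be sub-pixel anyway.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch - identifiers are hypothetical, not Cycles code.
// "footprint_radius" would be estimated from the ray differentials at the
// hit point, expressed in the same world-space units as the radii.
bool use_diffuse_fallback(float radius_r, float radius_g, float radius_b,
                          float footprint_radius) {
  const float max_radius = std::max(radius_r, std::max(radius_g, radius_b));
  // If even the largest scattering radius is well below the pixel footprint,
  // a random walk cannot produce visible blur, so diffuse suffices (and
  // avoids numerical precision issues with tiny steps).
  const float threshold = 0.25f; // in units of the footprint radius
  return max_radius < threshold * footprint_radius;
}
```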

> Also, in our internal version, the implicit diffuse lobe is forced pure lambertian, by putting `bssrdf->roughness = FLT_MAX`.

The Principled Diffuse closure is gone now that #110864 is merged, so this is no longer needed - the exit bounce is now always lambertian.

> I recommend against that. This is the same issue as the hard-coded roughness I propose for the refraction entry. One may need explicit control over the specular behavior, and a separate one for subsurface. Again, remember that you are dealing, for human skin, with a complex multi-layered medium.

Yeah, this comes down to the same point as before I guess - if we want to model a more complex boundary layer, as in skin, considering this separately makes sense. Did you find that artists are actually tweaking this in practice, or is it a case like the 50/50 mix parameter where a fixed value is good enough?


> One idea: We already have two different subsurface modes for the albedo parametrization. The adaptive-radius mode works well for fairly dense materials like skin, but can cause issues for thinner materials (see below), for which the fixed-radius mode can work better. Meanwhile, the "50% lambertian, 50% GGX with roughness 1" entry bounce also works well for e.g. skin, but causes white edges for thinner materials.
> So, we could just make the bounce behavior dependent on the selected parametrization.

I like this idea: have a dedicated skin subsurface scattering mode, and another that can be OpenPBR compatible. The separate Subsurface IOR can also only be used for the skin mode. That gives more freedom to add additional parameters, and perhaps even name them in a way that makes their purpose more clear.


>> I'm not convinced yet that we need a dedicated Subsurface Color, and it's not clear it will be in the final version of OpenPBR. Maybe we could leave it out, and then if we are sure we need it add it afterwards? That would avoid breaking compatibility. What do you think?

> I would agree. The base color should be enough.
> And if you think about it, lambertian diffuse is just an expression/limit of the bssrdf, so the same input should apply to both.

A related problem is that we only use the radius at the light exit point. If we were doing light tracing, then if we hit some area with radius 0 (say paint or a band-aid), light would not enter the skin at all. But since we are tracing from the other direction, colors from those areas will bleed in. Not sure if this is a serious problem in practice, or how to efficiently solve it.

Separate diffuse and subsurface color texture maps could help compensate for this, but I haven't heard of anyone doing this in practice.

First-time contributor

>> One idea: We already have two different subsurface modes for the albedo parametrization. The adaptive-radius mode works well for fairly dense materials like skin, but can cause issues for thinner materials (see below), for which the fixed-radius mode can work better. Meanwhile, the "50% lambertian, 50% GGX with roughness 1" entry bounce also works well for e.g. skin, but causes white edges for thinner materials.
>> So, we could just make the bounce behavior dependent on the selected parametrization.

> I like this idea, to have a dedicated skin subsurface scattering mode, and another that can be OpenPBR compatible. The separate Subsurface IOR can also only be used for the skin mode. That gives more freedom to add additional parameters, and perhaps even name then in a way that makes their purpose more clear.

Agreed. This is the way our internal version works.
(And to answer Lukas, for this skin mode, the artists never change the 50% blend - nor the roughness, which is hard-coded anyway).

First-time contributor

> In the end, I don't have a particularly strong opinion here. The 50/50 mix feels weird, but I can't argue with results 😄

Great tests. But for the case of skin, I would strongly suggest you use an albedo map (and, as I told you, make the scattering radius not too different in hue from the average of the albedo). You will probably then be able to understand and judge the 50/50 mix better. Also the separate Subsurface IOR.
(I am sorry I just cannot show our images here, apart from what is published already, for instance at [EGSR](https://diglib.eg.org/handle/10.1111/cgf14887))

>> Also, in our internal version, the implicit diffuse lobe is forced pure lambertian, by putting `bssrdf->roughness = FLT_MAX`.

> The Principled Diffuse closure is gone now that #110864 is merged, so this is no longer needed - the exit bounce is now always lambertian.

Cool. I had not seen that this was truly the case. Will remove this on our end. Thanks.
(Can you say why you kept `bssrdf->roughness` in the struct then? Also, I see it being set in `osl_closure_bssrdf_setup` and in `closure.h`.)

Lukas Stockner force-pushed subsurface-principled from 4c4e17c89b to 419c648634 2023-08-15 02:04:15 +02:00
Author
Member

Okay, updated version here:

- Fixed sampling bug in GGX entry bounce
- Removed shadowing-masking term from GGX entry to avoid darkening (we only care about the direction distribution here, so avoid impacting the albedo matching)
- Adjusted fallback-to-diffuse logic to also trigger for radii under 0.25 pixels
- Brought back the Subsurface IOR parameter for the non-fixed-radius mapping, removed the Subsurface Color parameter

Here's a new comparison table, with the above fixes and with an adjusted radius to avoid the complementary color (thanks for the hint @Christophe-Hery):

| Approach | Smooth | Rough |
| - | - | - |
| Blender 3.6 | ![blender3.6_smooth_v2.png](/attachments/ea144b37-4e4e-42f4-abbe-271f524baaf9) Roughness 0.1 | ![blender3.6_rough_v2.png](/attachments/16e0fc86-8889-44f6-b79a-57571a5d227f) Roughness 0.7 |
| Non-fixed radius (using 50% lambertian/50% GGX with alpha=1 entry) | ![mix_5050_smooth_v2.png](/attachments/365c8071-d8fa-4080-a8f8-86a4a46100a6) Roughness 0.1 | ![mix_5050_rough_v2.png](/attachments/df80598b-9c9f-4f29-8d84-7484660ecf60) Roughness 0.7 |
| Fixed radius (using 100% GGX entry) | ![direct_smooth_v2.png](/attachments/f1b1a62f-d3af-4249-b1ee-f3b0480eb6d3) Roughness 0.1 | ![direct_rough_v2.png](/attachments/f8d7a11d-b406-4ee5-9288-300ba0aaecb6) Roughness 0.7 |

> A related problem is that we only use the radius at the light exit point. If we were doing light tracing, then if we hit some area with the radius 0 (say paint or a band-aid), light would not enter the skin at all. But since we are tracing from the other direction, colors from those areas will bleed in. Not sure if this is a serious problem in practice, or how to efficiently solve it.

This applies more generally - we also only use the albedo, roughness, IOR etc. from the light exit point. Realistically, though, you could argue that at least for scanned/painted albedo maps, this is already accounted for. Actually doing this properly would require spatially varying parameters throughout the medium, which is not really feasible to do based on texture maps on surfaces.

> (And to answer Lukas, for this skin mode, the artists never change the 50% blend - nor the roughness, which is hard-coded anyway).

Sorry, I was a bit unclear there - I was wondering whether artists are adjusting the Subsurface IOR parameter.

> Great tests. But for the case of skin, I would strongly suggest you use an albedo map (and, as I told you, make the scattering radius not too different in hue from the average of the albedo). You will probably then be able to understand and judge the 50/50 mix better. Also the separate subsurfaceIOR.

Yeah, fair enough - the test above was specifically for the non-skin case. Here's a test on [an old hair demo model](https://www.youtube.com/watch?v=PzgQiiQCTMo) (I don't think the file is public, sorry):

| Baseline (diffuse) | Blender 3.6 (non-fixed radius, lambertian entry) | New, non-fixed radius (50/50 entry) | New, fixed radius (GGX entry) |
| - | - | - | - |
| ![skin_diffuse.png](/attachments/5102939c-ef83-4f45-bec7-7c5802e3fb29) | ![skin_blender3.6.png](/attachments/83737722-c8d1-4c99-907a-4ed81ad1ccda) | ![skin_nonfixed.png](/attachments/6698069b-caf8-4762-a4d6-32af157c8342) | ![skin_fixed.png](/attachments/4b53890b-1e98-4309-963d-8df5a26c1da2) |

Here, it's quite clear how the non-fixed radius mapping preserves the overall look of the diffuse baseline much better.

> (Can you say why you kept `bssrdf->roughness` in the struct then? Also, I see it being set in `osl_closure_bssrdf_setup` and in `closure.h`.)

Because the 100% GGX entry bounce that the code currently uses with the fixed-radius mapping takes the surface roughness for the entry bounce. The non-fixed-radius mapping uses your approach with hardcoded alpha=1 for the entry bounce, so the roughness is unused there.

Okay, updated version here: - Fixed sampling bug in GGX entry bounce - Removed shadowing-masking term from GGX entry to avoid darkening (we only care about the direction distribution here, so avoid impacting the albedo matching) - Adjust fallback to diffuse logic to also trigger for radius under 0.25 pixels - Bring back Subsurface IOR parameter for non-fixed-radius mapping, remove Subsurface Color parameter Here's a new comparison table, with the above fixes and with an adjusted radius to avoid the complementary color (thanks for the hint @Christophe-Hery): | Approach | Smooth | Rough | | - | - | - | | Blender 3.6 | ![blender3.6_smooth_v2.png](/attachments/ea144b37-4e4e-42f4-abbe-271f524baaf9) Roughness 0.1 | ![blender3.6_rough_v2.png](/attachments/16e0fc86-8889-44f6-b79a-57571a5d227f) Roughness 0.7 | | Non-Fixed radius (using 50% lambertian/50% GGX with alpha=1 entry) | ![mix_5050_smooth_v2.png](/attachments/365c8071-d8fa-4080-a8f8-86a4a46100a6) Roughness 0.1 | ![mix_5050_rough_v2.png](/attachments/df80598b-9c9f-4f29-8d84-7484660ecf60) Roughness 0.7 | | Fixed radius (using 100% GGX entry) | ![direct_smooth_v2.png](/attachments/f1b1a62f-d3af-4249-b1ee-f3b0480eb6d3) Roughness 0.1 | ![direct_rough_v2.png](/attachments/f8d7a11d-b406-4ee5-9288-300ba0aaecb6) Roughness 0.7 | > A related problem is that we only use the radius at the light exit point. If we were doing light tracing, then if we hit some area with the radius 0 (say paint or a band-aid), light would not enter the skin at all. But since we are tracing from the other direction, colors from those areas will bleed in. Not sure if this is a serious problem in practice, or how to efficiently solve it. This applies more generally - we also only use the albedo, roughness, IOR etc. from the light exit point. Realistically, though, you could argue that at least for scanned/painted albeod maps, this is already accounted for. 
Actually doing this properly would require spatially varying parameters throughout the medium, which is not really feasible to do based on texture maps on surfaces. > (And to answer Lukas, for this skin mode, the artists never change the 50% blend - nor the roughness, which is hard-coded anyway). Sorry, I was a bit unclear there - I was wondering whether artists are adjusting the Subsurface IOR parameter. > Great tests. But for the case of skin, I would strongly suggest you use an albedo map (and, as I told you, make the scattering radius not too different in hue from the average of the albedo). You will probably then be able to understand and judge the 50/50 mix better. Also the separate subsurfaceIOR. Yeah, fair enough - the test above was specifically for the non-skin case. Here's a test on [an old hair demo model](https://www.youtube.com/watch?v=PzgQiiQCTMo) (I don't think the file is public, sorry): | Baseline (diffuse) | Blender 3.6 (non-fixed radius, lambertian entry) | New, non-fixed radius (50/50 entry) | New, fixed radius (GGX entry) | | - | - | - | - | | ![skin_diffuse.png](/attachments/5102939c-ef83-4f45-bec7-7c5802e3fb29) | ![skin_blender3.6.png](/attachments/83737722-c8d1-4c99-907a-4ed81ad1ccda) | ![skin_nonfixed.png](/attachments/6698069b-caf8-4762-a4d6-32af157c8342) | ![skin_fixed.png](/attachments/4b53890b-1e98-4309-963d-8df5a26c1da2) | Here, it's quite clear how the non-fixed radius mapping preserves the overall look of the diffuse baseline much better. > (Can you say why you kept bssrdf->roughness in the struct then? Also, I see it being set in osl_closure_bssrdf_setup and in closure.h) Because the 100% GGX entry bounce that the code currently uses with the fixed-radius mapping uses the surface roughness for the entry bounce. The Non-fixed radius mapping uses your code with hardcoded alpha=1 for the entry bounce, so it's unused there.
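The 50% lambertian / 50% GGX(alpha=1) entry mixture discussed above can be sketched as follows. This is an illustrative sketch in local shading space, not the Cycles kernel code; the function names are hypothetical, and the GGX part only samples the microfacet normal distribution (the actual entry direction would then be refracted through that normal):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of a 50/50 lambertian/GGX(alpha=1) entry-bounce
// mixture. Directions are in local shading space with the normal at +Z.
// NOT the Cycles kernel implementation; all names here are hypothetical.

struct Vec3 { float x, y, z; };

static const float PI = 3.14159265358979f;

// Cosine-weighted hemisphere sample (lambertian entry, pdf = cos(theta)/pi).
Vec3 sample_cosine_hemisphere(float u1, float u2) {
  const float r = std::sqrt(u1);
  const float phi = 2.0f * PI * u2;
  return {r * std::cos(phi), r * std::sin(phi),
          std::sqrt(std::fmax(0.0f, 1.0f - u1))};
}

// GGX microfacet-normal sample with hardcoded alpha = 1, using the
// standard NDF inverse CDF: cos(theta) = sqrt((1-u) / (1 + (a^2-1) u)).
// Only the direction distribution matters for the entry bounce (no
// shadowing-masking term), matching the darkening fix mentioned above.
Vec3 sample_ggx_normal_alpha1(float u1, float u2) {
  const float alpha2 = 1.0f;  // alpha = 1, so alpha^2 = 1
  const float cos_theta =
      std::sqrt((1.0f - u1) / (1.0f + (alpha2 - 1.0f) * u1));
  const float sin_theta =
      std::sqrt(std::fmax(0.0f, 1.0f - cos_theta * cos_theta));
  const float phi = 2.0f * PI * u2;
  return {sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta};
}

// Stochastic 50/50 selection between the two entry strategies.
Vec3 sample_entry_direction(float u_select, float u1, float u2) {
  return (u_select < 0.5f) ? sample_cosine_hemisphere(u1, u2)
                           : sample_ggx_normal_alpha1(u1, u2);
}
```

Both branches return unit vectors in the upper hemisphere, so the mixture stays a valid entry-direction distribution regardless of which strategy is picked.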
Lukas Stockner changed title from WIP: Cycles: Tweak Principled BSDF Subsurface parameters to Cycles: Tweak Principled BSDF Subsurface parameters 2023-08-15 03:41:55 +02:00
Brecht Van Lommel requested changes 2023-08-15 14:31:33 +02:00
Brecht Van Lommel left a comment
Owner

Seems to work well in my tests.

Can we then also rename subsurface methods to something like this?

  • Christensen-Burley
  • Random Walk
  • Skin (Random Walk)
@ -26,0 +52,4 @@
make_orthonormals(Z, &X, &Y);
const float alpha = bssrdf->alpha;
const float alpha2 = sqr(alpha);

Unused variable.

LukasStockner marked this conversation as resolved
@ -736,2 +710,2 @@
const float subsurface_anisotropy = stack_load_float(stack, data_node.w);
bssrdf->anisotropy = clamp(subsurface_anisotropy, 0.0f, 0.9f);
bssrdf->ior = param2;
/* TODO */

Still to be resolved?

Author
Member

For now, I guess it's fine to keep this fixed as 1. We could expose it as a socket later on.

Author
Member

For naming, how about "Random Walk (non-organic)" and "Random Walk (organic)"? The same arguments about complex interface layers also apply to e.g. plants, but not to e.g. marble, so that might be a clearer distinction.


I would go with "Random Walk" and "Random Walk (Organic)" then. I would not want to give the impression that the former can't be used for organic materials, but rather that the latter has some functionality that may help with organic materials.

First-time contributor

Here, it's quite clear how the non-fixed radius mapping preserves the overall look of the diffuse baseline much better.

Thanks for doing this test. It shows the advantage of the "non-fixed radius" method: it preserves the texturing detail, only "blurring" the illumination.

(Can you say why you kept bssrdf->roughness in the struct then? Also, I see it being set in osl_closure_bssrdf_setup and in closure.h)

Because the 100% GGX entry bounce that the code currently uses with the fixed-radius mapping uses the surface roughness for the entry bounce. The Non-fixed radius mapping uses your code with hardcoded alpha=1 for the entry bounce, so it's unused there.

Do you need to support the refraction entry on the fixed-radius side?

(Also, yes, artists on our end will adjust the subsurfaceIOR - in the end it likely relates to epidermis thickness and skin types)

First-time contributor

I am perhaps discovering a bug in the proposed re-implementation.
It is not obvious on first read, but there are actually and unfortunately two variables both called pdf in the current code (in subsurface_random_walk.h):

1. At line 183, a float pdf, which after the cosine entry gets used at line 442 in guiding_record_bssrdf_bounce;
2. a local (shadowing) Spectrum pdf at line 399, which ends up as part of the throughput at line 424.

The currently proposed code (from Lukas) forgets to compute the pdf (from scenario 1 above) in the case of refraction entry. Thus guiding_record_bssrdf_bounce may get something undefined. Or, as written now, a pdf forced to 1. This was not the case in my version. Maybe providing the pdf to this path guiding is not necessary?

@brecht @LukasStockner Can you please double check?

I am perhaps discovering a bug in the proposed re-implementation. It is not obvious on first read, but there are actually and unfortunately two variables exactly called pdf in the current code (in subsurface_randow_walk.h): 1/ at line 183, float pdf, which after cosine entry, gets used at line 442 in guiding_record_bssrdf_bounce; 2/ a local (shadowing) Spectrum pdf at line 399, which ends up part of the throughput at line 424. The current proposed code (from Lukas) forgets to compute the pdf (from scenario 1/ above) in case of refraction entry. Thus guiding_record_bssrdf_bounce may get something undefined. Or, as written now, a pdf forced at 1. This was not the case in my version. Maybe providing the pdf to this path guiding is not necessary? @brecht @LukasStockner Can you please double check?
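The shadowing pitfall described in this comment can be reduced to a minimal example. This is only an illustration of the language-level issue, not the actual kernel code; the function and struct names are hypothetical:

```cpp
#include <cassert>

// Minimal illustration (not the actual Cycles code) of the variable
// shadowing pitfall described above: an inner declaration also named
// `pdf` hides the outer one, so the outer value is never updated.

struct Spectrum { float r, g, b; };

float entry_pdf_shadowed(bool refraction_entry) {
  float pdf = 1.0f;  // outer pdf, later handed to path guiding
  if (refraction_entry) {
    // Inner `pdf` shadows the outer float: nothing assigned here ever
    // reaches the caller, which still sees the initial 1.0f.
    Spectrum pdf = {0.25f, 0.5f, 0.75f};
    (void)pdf;
  }
  return pdf;  // stays 1.0f on the refraction-entry path
}

float entry_pdf_fixed(bool refraction_entry) {
  float pdf = 1.0f;
  if (refraction_entry) {
    // Renaming the inner variable removes the shadowing, so the outer
    // pdf can actually be updated for the refraction-entry case.
    Spectrum channel_pdf = {0.25f, 0.5f, 0.75f};
    pdf = (channel_pdf.r + channel_pdf.g + channel_pdf.b) / 3.0f;
  }
  return pdf;
}
```

Compiling with `-Wshadow` would flag the first variant, which is one way to catch this class of bug mechanically.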
Lukas Stockner force-pushed subsurface-principled from 419c648634 to 4c7c4a4281 2023-09-11 04:26:55 +02:00
Author
Member

> I am perhaps discovering a bug in the proposed re-implementation.
> It is not obvious on first read, but there are actually and unfortunately two variables both called pdf in the current code (in subsurface_random_walk.h):
>
> 1. At line 183, a float pdf, which after the cosine entry gets used at line 442 in guiding_record_bssrdf_bounce;
> 2. a local (shadowing) Spectrum pdf at line 399, which ends up as part of the throughput at line 424.
>
> The currently proposed code (from Lukas) forgets to compute the pdf (from scenario 1 above) in the case of refraction entry. Thus guiding_record_bssrdf_bounce may get something undefined. Or, as written now, a pdf forced to 1. This was not the case in my version. Maybe providing the pdf to this path guiding is not necessary?
>
> @brecht @LukasStockner Can you please double check?

Hm, yeah, that's not ideal. I don't think providing the exact PDF is really important here, but I can double-check.

Brecht Van Lommel approved these changes 2023-09-11 18:39:06 +02:00
Lukas Stockner added 2 commits 2023-09-13 02:36:06 +02:00
Author
Member

I've added a TODO comment for the PDF (and an entry in the tracking issue) - it doesn't appear to cause problems, so I guess it's not a blocker for now.

Lukas Stockner merged commit d7aee5a580 into main 2023-09-13 02:45:40 +02:00
Lukas Stockner deleted branch subsurface-principled 2023-09-13 02:45:42 +02:00
First-time contributor

Got a few questions and thoughts about this :)
Having both a Subsurface and a Subsurface Scale slider feels like they are doing the same thing, except they are not. From my small testing, the Subsurface slider sort of "fades" between whatever subsurface you have and a plain diffuse shader, while the Subsurface Scale slider actually scales it in a more physically correct way. Is my assumption correct? Because if that's the case, I think it should be made clearer which slider to use if you want a more physically correct material.
It kind of feels like the Bump node, where you really shouldn't use the Strength slider since it just "fades" between full bump and no bump, which is not the right way to do it (Distance is the way to go). Unfortunately, almost everyone still uses the slider because it's labelled in such a way that people think that's the one to use (and also since you generally have to set the Distance to very low values to have any real effect, since this value is in meters, which is not really clear either).

And about that radius - this might be nitpicking, but shouldn't the input be an RGB one, since that's what you are essentially multiplying in there? If you change individual channels, you clearly see that it is R, G and B that you are changing, not some X, Y and Z coordinate.

I also have a more general question about the three different SSS implementations.
Often when I use SSS, I have objects that consist of different shells but should still act as one, so the only option for me there is Christensen-Burley. I'm not sure what the different options do under the hood, but would it be possible to have all three of them able to "melt" different shells together like this, maybe with an option to turn this feature on or off?

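The distinction the comment above asks about matches how the PR description frames the two inputs, and it can be sketched roughly like this. This is an illustrative sketch of the parameter semantics as described in this thread, not the actual Principled BSDF shader code; the names are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch (not the actual Principled BSDF code) of the two
// subsurface controls discussed above: the Subsurface input blends a
// diffuse and a subsurface closure, while Subsurface Scale multiplies
// the per-channel scattering radius (a distance in meters) - which is
// also why the radius input is effectively an RGB quantity.

struct RGB { float r, g, b; };

// Subsurface Scale: a distance multiplier applied per color channel.
RGB effective_radius(RGB base_radius, float subsurface_scale) {
  return {base_radius.r * subsurface_scale,
          base_radius.g * subsurface_scale,
          base_radius.b * subsurface_scale};
}

// Subsurface weight: 0 -> pure diffuse closure, 1 -> pure subsurface
// closure; fractional values linearly mix the two (the "fading" the
// comment describes, e.g. for dirt or paint on top of skin).
float mix_closures(float diffuse_eval, float sss_eval, float subsurface) {
  return (1.0f - subsurface) * diffuse_eval + subsurface * sss_eval;
}
```

So scaling the radius changes how far light diffuses through the medium, while the weight only cross-fades between two separate closures.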
For user feedback please use this topic: https://devtalk.blender.org/t/principled-v2-feedback-discussion-thread/24997
Reference: blender/blender#110989