This patch adds viewport support for the "Image or Movie" and "Environment Map" world texture types.
It supports:
- The "View", "AngMap" and "Equirectangular" mapping types (see the GLSL sketch below).
- Different types of texture blending (matching the BI world render).
- The same color blending as before textures were supported (but rendered via GLSL).
{F207734}
{F207735}
Example: {F275180}
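For reference, a minimal GLSL sketch of an equirectangular (lat-long) lookup that maps a normalized world-space direction to UVs; the exact axis conventions are an assumption and may differ from Blender's:

```
vec2 equirectangular_uv(vec3 dir)  /* "dir" is assumed to be normalized */
{
	float u = -atan(dir.y, dir.x) / (2.0 * 3.14159265) + 0.5;
	float v =  atan(dir.z, length(dir.xy)) / 3.14159265 + 0.5;
	return vec2(u, v);
}
```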
Original author: @valentin_b4w
Regards,
Alexander (Blend4Web Team).
Reviewers: sergey, valentin_b4w, brecht, merwin
Reviewed By: merwin
Subscribers: campbellbarton, merwin, blueprintrandom, youle, a.romanov, yurikovelenov, AlexKowel, Evgeny_Rodygin
Projects: #rendering, #opengl_gfx, #bf_blender:_next
Differential Revision: https://developer.blender.org/D1414
The Vector Transform node is a useful node that is available in the Cycles renderer.
{F144283}
This patch implements the Vector Transform node for GLSL mode and the internal renderer.
Example: {F273060}
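As an illustration, here is a minimal GLSL sketch of the core transform for the Point and Vector types; the matrix and function names are assumptions, not the actual patch code:

```
/* "mat" is assumed to be the matrix between the chosen convert-from and
 * convert-to spaces (e.g. the view matrix for World -> Camera). */
vec3 transform_point(mat4 mat, vec3 p)
{
	return (mat * vec4(p, 1.0)).xyz;  /* w = 1.0: translation applies */
}

vec3 transform_direction(mat4 mat, vec3 v)
{
	return (mat * vec4(v, 0.0)).xyz;  /* w = 0.0: rotation/scale only */
}
/* The Normal type would additionally need the inverse-transpose matrix
 * and a normalize() on the result. */
```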
Alexander (Blend4Web Team)
Reviewers: brecht, campbellbarton, sergey
Reviewed By: campbellbarton, sergey
Subscribers: psy-fi, duarteframos, RobM, lightbwk, sergey, AlexKowel, valentin_b4w, Evgeny_Rodygin, yurikovelenov
Projects: #bf_blender:_next
Differential Revision: https://developer.blender.org/D909
This commit changes how we pass bounce information to the Light
Path node. Instead of manually copying the bounces into ShaderData, we now
directly pass PathState. This reduces the number of arguments that we need to pass
around and also makes it easier to extend the feature.
This commit also exposes the Transmission Bounce Depth to the Light Path
node. It works similarly to the Transparent Depth output: replace a
transmission light path after X bounces with another shader, e.g. a diffuse
one. This can be used to avoid black surfaces caused by a low maximum
bounce count.
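As a rough illustration of the node setup this enables (Transmission Depth driving a mix towards a fallback shader), here is a hedged GLSL-style sketch of the blend logic; the function and its arguments are hypothetical and this is not the actual kernel code:

```
vec4 transmission_fallback(vec4 transmissive_col, vec4 fallback_col,
                           float transmission_depth, float max_depth)
{
	float fac = step(max_depth, transmission_depth);  /* 1.0 once the depth reaches max_depth */
	return mix(transmissive_col, fallback_col, fac);
}
```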
Reviewed by Sergey and Brecht, thanks for some help with this.
I tested compilation and usage on CPU (SVM and OSL), CUDA, OpenCL Split
and Mega kernel. Hopefully this covers all devices. :)
This commit fixes shader tree compilation, but the shading result will not
do actual refraction, because that is a rather involved change and is not
really considered a bug for now. There are more closures which currently
fall back to the diffuse BSDF.
This commit contains all the remaining parts needed for the initial integration of
OpenSubdiv into Blender's subdivision surface code. It includes both GPU and CPU
backends, which work in the following way:
- When the SubSurf modifier is the last one in the modifier stack, the GPU pipeline
of OpenSubdiv is used, making viewport performance as fast as possible.
This also requires a graphics card with GLSL 1.5 support. If this requirement is
not met, no GPU pipeline is used at all.
- If SubSurf is not the last modifier, or if the DerivedMesh is being evaluated for
rendering, the CPU limit evaluation API from OpenSubdiv is used. This only
replaces the legacy evaluation code from CCGSubSurf_legacy, but keeps the CCG
structures exactly the same as they have been for ages.
This integration is fully covered with an ifdef and not enabled by default
because there are several TODOs to be solved first:
- Face varying data interpolation is not really cleanly implemented for the GPU
in OpenSubdiv 3.0. It is also not implemented for the limit evaluation API.
This basically means we'll have a really hard time supporting UVs.
- Limit evaluation only works with adaptively subdivided meshes so far, which
basically means all the points of the CCG are pushed to the limit. This gives
a different result from the old code.
- There are some serious optimizations possible on the topology refiner
creation, which would speed up initial OpenSubdiv mesh creation.
- There are some hardcoded assumptions in the GPU and DerivedMesh areas which
could be generalized.
That's something where Antony and Campbell can help, making it so the code
is structured in a way which is reusable by all planned viewport projects.
- There are also some workarounds in the dependency graph to make sure OpenGL
buffers are only freed from the main thread.
Those who want to experiment with this code should grab the dev
branch (NOT master) from
https://github.com/Nazg-Gul/OpenSubdiv/tree/dev
There are some patches applied in there which we're working on getting
into upstream.
With this patch "Particle Info" node from Cycles works in GLSL and BI
Alexander (Blend4Web Team)
Reviewers: psy-fi
Note: moved the particle info to the object render instance instead of
ShadeInput during review - Antony.
Differential Revision: https://developer.blender.org/D1313
This patch fixes the world GLSL (mist, background, ambient) update for the BGE.
Reviewers: moguri, brecht
Reviewed By: moguri, brecht
Subscribers: panzergame
Differential Revision: https://developer.blender.org/D151
There were differences between how Cycles and BI treat the Normal shader:
- Different normal direction assumption
- Different policy about vector normalization
The previous idea of trying to use a single function and flipping the output if
needed became rather tricky, so I've just added a new GLSL function which
corresponds to how Cycles deals with the Normal shader.
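A minimal sketch, assuming Cycles' conventions as described above (no flipping of the direction, and both vectors normalized before the dot product); the function name and signature are hypothetical, not the actual code added here:

```
void normal_cycles_style(vec3 nor, vec3 dir, out vec3 outnor, out float outdot)
{
	outnor = normalize(dir);               /* the node's Normal output */
	outdot = dot(normalize(nor), outnor);  /* the node's Dot output */
}
```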
This is added in the spirit of the general Cycles GLSL system,
which is pretty much WIP still.
This will only work in Cycles at the moment, but generating for Blender
Internal is of course possible too; that will be done in a separate
commit.
This hasn't been tested with each and every node in Cycles, but
environment and regular textures with texture coordinates work.
There is some difference between the way Cycles treats some coordinates
(in world space) and the way GLSL treats them (in view space).
We might want to explore and improve this further in the future.
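As an illustration, a hedged GLSL sketch of bridging that difference by bringing a view-space coordinate into world space with the inverse view matrix; the argument name "viewinvmat" is an assumption:

```
vec3 view_to_world(vec3 co_view, mat4 viewinvmat)
{
	/* Positions use w = 1.0 so the camera translation is applied. */
	return (viewinvmat * vec4(co_view, 1.0)).xyz;
}
```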
...also </drumroll>
Even though GLSL allows polymorphic (overloaded) functions, our codegen
is not aware of this at all.
Let's rename the functions for now; in the future it would be handy
to make the codegen aware of polymorphic functions.
Quite a straightforward implementation, with the only weird thing being that for some reason
my video driver wasn't happy with calling the function "clamp", giving some strange
shader compilation error messages.
Called the GPU function clamp_val, which can handle float and vec3.
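A minimal sketch of such a renamed wrapper, with one overload per type (illustrative only; the actual body in the patch may differ):

```
float clamp_val(float value, float minv, float maxv)
{
	return clamp(value, minv, maxv);
}

vec3 clamp_val(vec3 value, vec3 minv, vec3 maxv)
{
	return clamp(value, minv, maxv);
}
```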
The solution is to do the multiplication by the energy in the shader
after texture application.
We might be able to avoid setting dyncol completely, but this needs
better investigation. Some shader paths also look a bit redundant.
Also, texture mapping is not supported very well for lamps; this might
also need investigation.
A few things:
- reflect() takes its arguments in this order: N, I; it was swapped
in the previous code for some reason.
- The normal and view vectors are to be normalized. For the view
vector we now use shade_view() in order to deal with the
ortho camera. However, Cycles does not support the ortho camera
for reflection, but this is easy to do in a separate commit.
- The reflection vector is to be in world space. Kudos to
Antony Riakiotakis for figuring this out!
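A hedged GLSL sketch of the corrected computation, with the reflection written out explicitly instead of calling reflect(); it assumes legacy-GLSL built-ins and an inverse view matrix argument named "viewinvmat", both of which are assumptions here:

```
vec3 reflection_coord(vec3 co, vec3 nor, mat4 viewinvmat)
{
	/* View vector: ortho cameras use a constant direction. */
	vec3 view;
	if (gl_ProjectionMatrix[3][3] == 0.0)
		view = normalize(co);         /* perspective camera */
	else
		view = vec3(0.0, 0.0, -1.0);  /* orthographic camera */

	vec3 n = normalize(nor);
	vec3 r = view - 2.0 * dot(n, view) * n;  /* reflection in view space */
	return (viewinvmat * vec4(r, 0.0)).xyz;  /* rotate into world space */
}
```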
The formula was not consistent across Blender and behaved strangely; now it is
a simple linear blend between color1 and min(color1, color2).
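A minimal GLSL sketch of that formula (the function name is an assumption, and alpha handling is left out):

```
vec4 mix_darken(float fac, vec4 col1, vec4 col2)
{
	return mix(col1, min(col1, col2), fac);
}
```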
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D489
This commit does various changes for matcaps:
One is taking advantage of drawing with pbvh (which would only happen
with dyntopo previously) and drawing with partial redraw during
sculpting.
The second one is support for masks. To make this work in the special
case of multires, which uses flat shading, I use the only available
flat-shaded built-ins in OpenGL 2.0, which are color and secondary color.
Abusing colors in that way is also essential for flat shading to work if
we are to use pbvh draw in multires, since it is the color that is being
interpolated flatly, not the normal (which can only be interpolated
smoothly). The pbvh drawing code for multires used the last triangle
element's normal to compute the shading, which would only produce smooth
results. This could change if we did the shading in the vertex shader
for flat-shaded primitives, but this is more complex and makes it harder
to have one shader to rule them all.
Also increased the brightness of the default diffuse color for
sculpting. This should be useful since artists like to tweak the
lighting settings and it will give them the full dynamic range of the
lights; it also helps with the correct brightness of sculpted matcaps.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D435
For now this provides the following outputs:
- Color
- Light Vector
- Distance
- Shadow
- Visibility Factor
Note: the Color output is multiplied by the lamp energy. The product
color * max(dot(light_vector, normal_vector), 0) * shadow * visibility_factor
produces the exact same result as the Lambert shader.
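A minimal GLSL sketch of recombining those outputs as described above (purely illustrative, not the node's actual implementation):

```
vec3 lambert_from_lamp_data(vec3 color, vec3 light_vector, vec3 normal_vector,
                            float shadow, float visibility)
{
	/* "color" here is already multiplied by the lamp energy. */
	return color * max(dot(light_vector, normal_vector), 0.0) * shadow * visibility;
}
```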
Many thanks to Brecht for code review and discussion!
Revert 0c7d2de382. The "Camera Data" node actually gives the location
of the point in the camera coordinate system. To obtain actual camera data,
we can use the "Geometry" node instead.
Also modify the "Geometry" node to produce a correct view vector output
in the orthographic GLSL preview.