Cycles expects to find all node sockets with their correct names, but
this can be changed via the API (see bug report discussion).
The solution for now is to let Cycles accept this case gracefully instead
of crashing. The shader will simply use the internal default values for
inputs, and any connections will be ignored.
It would be nice to report the error somewhere, but Cycles doesn't have a
proper logging system for this purpose yet.
All textures are currently sampled bilinearly, with the exception of OSL, where texture sampling is fixed and set to smart bicubic.
This patch adds user control over this setting.
Added:
- bits to DNA / RNA in the form of an enum for supporting multiple interpolation types
- changes to the image texture node drawing code (add enum)
- to ImageManager (this needs to know to allocate a second texture when the interpolation type differs)
- to node compiler (pass on interpolation type)
- to device tex_alloc (this also needs the concept of multiple interpolation types)
- implementation for doing non-interpolated lookup for CUDA and CPU
- implementation where we pass this along to OSL (this makes OSL also do linear until I add smart cubic to the interface / DNA / RNA)
Reviewers: brecht, dingto
Reviewed By: brecht
CC: dingto, venomgfx
Differential Revision: https://developer.blender.org/D317
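As a usage sketch (not part of this patch's text), the new setting would be
driven from Python roughly like this; the material and node names are
hypothetical, and the available enum items depend on the Blender version:

    import bpy

    # Hypothetical material/node names; 'interpolation' is the enum this
    # patch exposes on the Image Texture node.
    node = bpy.data.materials["Material"].node_tree.nodes["Image Texture"]
    node.interpolation = 'Closest'  # non-interpolated lookup; 'Linear' is the default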
Volumes can now have textured colors and density. There is a Volume Sampling
panel in the Render properties with these settings:
* Step size: distance between volume shader samples when rendering the volume.
Lower values give more accurate and detailed results but also increased render
time.
* Max steps: maximum number of steps through the volume before giving up, to
protect from extremely long render times with big objects or small step sizes.
Rendering textured (heterogeneous) volumes is much more compute intensive than
homogeneous volumes, so when you are not using a texture you should enable the
Homogeneous Volume option in the material or world for faster rendering.
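For scripted setups, the two settings would look roughly like this from
Python; the property names below are assumed and may differ between Blender
versions:

    import bpy

    # Assumed property names on the Cycles render settings.
    cycles = bpy.context.scene.cycles
    cycles.volume_step_size = 0.1    # smaller steps: more detail, longer renders
    cycles.volume_max_steps = 1024   # upper bound on steps before giving up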
One important missing feature is that Generated texture coordinates are not yet
working in volumes, and they are the default coordinates for nearly all texture
nodes. So until that works you need to plug in object texture coordinates or a
world space position.
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
This is the simplest possible volume rendering case, constant density inside
the volume and no scattering or emission. My plan is to tweak, verify and commit
more volume rendering effects one by one; doing it all at once makes it
difficult to verify correctness and track down bugs.
Documentation is here:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Materials/Volume
Currently this hooks into path tracing in 3 ways, which should get us pretty
far until we add more advanced light sampling. These 3 hooks are repeated in
the path tracing, branched path tracing and transparent shadow code:
* Determine active volume shader at start of the path
* Change active volume shader on transmission through a surface
* Light attenuation over line segments between camera, surfaces and background
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
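For the homogeneous case, the attenuation over a line segment is plain
Beer-Lambert; a minimal illustrative sketch (not the actual kernel code):

    import math

    def transmittance(sigma_t, segment_length):
        # Beer-Lambert attenuation for a homogeneous medium with
        # extinction coefficient sigma_t over a straight segment.
        return math.exp(-sigma_t * segment_length)

    # e.g. light passing through 2 units of a volume with sigma_t = 0.5
    print(transmittance(0.5, 2.0))  # ~0.368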
* Henyey-Greenstein scattering closure implementation.
* Rename transparent to absorption node and isotropic to scatter node.
* Volume density is folded into the closure weights.
* OSL support for volume closures and nodes.
* This commit has no user visible changes, there is no volume render code yet.
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
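For reference, the Henyey-Greenstein phase function mentioned above has a
simple closed form; a small illustrative sketch (not the actual closure code):

    import math

    def henyey_greenstein(cos_theta, g):
        # Phase function value for scattering angle theta and anisotropy g
        # (g = 0 is isotropic, g > 0 forward, g < 0 backward scattering).
        denom = 1.0 + g * g - 2.0 * g * cos_theta
        return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

    print(henyey_greenstein(1.0, 0.0))  # isotropic case: 1 / (4*pi)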
to standard nodes, where the Blender socket names can differ from the associated Cycles names and may require additional indices to make them unique. Script node sockets are already unique and exact,
since they are generated from the script function parameters.
* Remove the compatible falloff SSS implementation. We shouldn't support two implementations in the long term, and 2.7x is a good release number to break some compatibility as well.
* Version patch added, so files with the Compatible falloff will automatically use Cubic now.
It was already mentioned in the manual that Compatible is deprecated.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
* Keep the Mapping node default type as Point for now, instead of Texture. The
latter is a better default, but this is breaking API compatibility and it's
too close to release to expect addons to be fixed in time.
* Vector Transform and Mapping nodes had properties named "type" to set the
type of vector, but this conflicts with the node type property, so they are
renamed to vector_type now.
When both scale and rotation were used in the mapping node, there would be shearing,
and the only way to avoid that was to add two mapping nodes. This is because to
transform the texture, the inverse transform needs to be done on the texture coordinate.
Now the mapping node has Texture/Point/Vector/Normal types to transform the
vector for a particular purpose. Point is the existing behavior, Texture is
the new default that behaves more like you might expect.
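To illustrate the difference, a toy sketch with hypothetical helpers (not the
node code): the Point type applies the mapping to the vector itself, while the
Texture type applies the inverse mapping to the texture coordinate so the
texture appears transformed as expected.

    def forward(co, scale, offset):
        # Point mode: transform the vector directly
        return tuple(c * s + o for c, s, o in zip(co, scale, offset))

    def inverse(co, scale, offset):
        # Texture mode: inverse transform on the coordinate
        return tuple((c - o) / s for c, s, o in zip(co, scale, offset))

    uv = (0.3, 0.7, 0.0)
    scale, offset = (2.0, 2.0, 1.0), (0.1, 0.0, 0.0)
    point_result = forward(uv, scale, offset)
    texture_result = inverse(uv, scale, offset)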
A new hair bsdf node, with two closure options, is added. These closures allow the generation of the reflective and transmission components of hair. The node allows control of the highlight colour, roughness and angular shift.
Limitations include:
-No glint or Fresnel adjustments.
-The 'offset' is unused when triangle primitives are used.
* Added a new sky model by Hosek and Wilkie: "An Analytic Model for Full Spectral Sky-Dome Radiance" http://cgg.mff.cuni.cz/projects/SkylightModelling/
Example render:
http://archive.dingto.org/2013/blender/code/new_sky_model.png
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Sky_Texture
Details:
* User can choose between the older Preetham and the new Hosek / Wilkie model via a dropdown. For older files, backwards compatibility is preserved. When we add a new Sky texture, it defaults to the new model though.
* For the new model, you can specify the ground albedo (see documentation for details).
* Turbidity now has a UI soft range between 1 and 10, higher values (up to 30) are still possible, but can result in weird colors or black.
* Removed the limitation of 1 sky texture per SVM stack. (Patch by Lukas Tönne, thanks!)
Thanks to Brecht for code review and some help!
This is part of my GSoC 2013 project, SVN merge of r59214, r59220, r59251 and r59601.
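A rough Python usage sketch of the new dropdown and albedo setting (assumes a
node-based world; property names are taken from the UI and may differ slightly
between versions):

    import bpy

    world = bpy.context.scene.world
    world.use_nodes = True
    sky = world.node_tree.nodes.new('ShaderNodeTexSky')
    sky.sky_type = 'HOSEK_WILKIE'   # or 'PREETHAM' for the older model
    sky.ground_albedo = 0.3
    sky.turbidity = 3.0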
New features:
* Bump mapping now works with SSS
* Texture Blur factor for SSS, see the documentation for details:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#Subsurface_Scattering
Work in progress for feedback:
Initial implementation of the "BSSRDF Importance Sampling" paper, which uses
a different importance sampling method. It gives better quality results in
many ways, with the availability of both Cubic and Gaussian falloff functions,
but also tends to be more noisy when using the progressive integrator and does
not give great results with some geometry. It works quite well for the
non-progressive integrator and is often less noisy there.
This code may still change a lot, so unless you're testing it may be best to
stick to the Compatible falloff function.
Skin test render and file that takes advantage of the gaussian falloff:
http://www.pasteall.org/pic/show.php?id=57661
http://www.pasteall.org/pic/show.php?id=57662
http://www.pasteall.org/blend/23501
* Replaced the Preetham model with the newer Hosek / Wilkie model:
"An Analytic Model for Full Spectral Sky-Dome Radiance" http://cgg.mff.cuni.cz/projects/SkylightModelling/
* We use the sample code and data that come with the paper, but removed some unnecessary parts; we only need the xyz version.
* New "Albedo" UI parameter, to control the ground albedo (between 0 and 1).
* Works with SVM only atm (CPU and CUDA).
Example render:
http://www.pasteall.org/pic/show.php?id=57635
ToDo / Open Questions:
* OSL still uses the old model, will be done later. In the meantime it's useful to compare the two models this way.
* The new model needs a much weaker Strength value (0.01), otherwise it's white. Can this be fixed?
* Code cleanup.
* Added a node to convert a temperature in Kelvin to an RGB color. This can be used e.g. for lights, to easily find the right color temperature.
= Some common temperatures =
Candle light: 1500 Kelvin
Sunset/Sunrise: 1850 Kelvin
Studio lamps: 3200 Kelvin
Horizon daylight: 5000 Kelvin
Documentation: http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More#Blackbody
Thanks to Philipp Oeser (lichtwerk), who essentially contributed to this with a patch! :)
This is part of my GSoC 2013 project. SVN merge of r57424, r57487, r57507, r57525, r58253 and r58774
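A minimal usage sketch, e.g. driving an emission shader with a candle-light
temperature (node and socket names assumed to match the UI):

    import bpy

    mat = bpy.data.materials.new("CandleLight")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    blackbody = nodes.new('ShaderNodeBlackbody')
    blackbody.inputs['Temperature'].default_value = 1500.0  # candle light

    emission = nodes.new('ShaderNodeEmission')
    links.new(blackbody.outputs['Color'], emission.inputs['Color'])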
* Code cleanup to avoid duplicated enum code.
* Added a third type for conversion next to Point and Vector: Normal. This is basically the same result as with the Vector type, but normalizes the vector at the end.
Thanks to Brecht for code review!
* Added a node to convert wavelength (in nanometers, from 380nm to 780nm) to RGB values. This can be useful to match real world colors more easily.
* Code cleanup:
** Moved color functions (xyz and hsv) into dedicated utility files.
** Remove svm_lerp(), use interp() instead.
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More#Wavelength
Example render:
http://www.pasteall.org/pic/show.php?id=53202
This is part of my GSoC 2013. (revisions 57322, 57326, 57335 and 57367 from soc-2013-dingto).
Hardcoded node systems don't have any way to deal with scripted node types yet, which could in principle be added with pynodes. The NodeCustomGroup type adds a way of scripting nodes by automating node groups which the
hardcoded system can then interpret like regular groups.
The new NodeCustomGroup type has the basic node_tree pointer property like the regular group node types and also uses the same socket interface system as regular groups. This means that input/output
sockets can be mapped to internal nodes in the same way as regular node groups in renderers and the compositor. On top of that, however, the NodeCustomGroup type can be subclassed in python scripts to flesh out
scripted node types with their own draw functions, properties, updates and so on.
NB: Only cycles currently supports this node type and its derivatives, other systems may follow later.
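A minimal subclassing sketch (illustrative names; assumes a node group called
"MyGroupTree" already exists):

    import bpy

    class MyScriptedNode(bpy.types.NodeCustomGroup):
        bl_idname = "MyScriptedNode"
        bl_label = "My Scripted Node"

        def init(self, context):
            # Wrap an existing node group; its interface sockets become
            # this node's inputs/outputs automatically.
            self.node_tree = bpy.data.node_groups.get("MyGroupTree")

        def draw_buttons(self, context, layout):
            layout.label(text="scripted group node")

    bpy.utils.register_class(MyScriptedNode)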
* Added a node to convert wavelength (in nanometers, from 380nm to 780nm) to RGB values. This can be useful to match real world colors more easily.
Example render:
http://www.pasteall.org/pic/show.php?id=53202
ToDo:
* Move some functions into an util file, maybe a common util_color.h or so.
* Test GPU, unfortunately sm_21 doesn't work for me yet.
It is not yet working as well as I would like, but it works; just add a
subsurface scattering node and you can use it like any other BSDF.
It is using fully raytraced sampling compatible with progressive rendering
and other more advanced rendering algorithms we might use in the future, and
it uses no extra memory so it's suitable for complex scenes.
Disadvantage is that it can be quite noisy and slow. Two limitations that will
be solved are that it does not work with bump mapping yet, and that the falloff
function used is a simple cubic function; it's not using the real BSSRDF
falloff function yet.
The node has a color input, a scattering radius for each RGB color channel,
and an overall scale factor for the radii.
There is also no GPU support yet, will test if I can get that working later.
Node Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
Implementation notes:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Subsurface_Scattering
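Usage from Python looks roughly like this (socket names assumed to match the UI):

    import bpy

    mat = bpy.data.materials.new("Skin")
    mat.use_nodes = True
    sss = mat.node_tree.nodes.new('ShaderNodeSubsurfaceScattering')
    sss.inputs['Color'].default_value = (0.8, 0.6, 0.5, 1.0)
    sss.inputs['Scale'].default_value = 0.1               # overall scale for the radii
    sss.inputs['Radius'].default_value = (1.0, 0.2, 0.1)  # per-RGB scattering radii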
The issue here was that the proxy nodes created for connecting external group node sockets to the internal nodes were generated by the input/output nodes themselves.
0 input/output nodes: there would be no proxy that external group node sockets can map to.
2+ input/output nodes: additional nodes would overwrite entries from previous nodes, so that only one of the input/output nodes would be used.
The solution is to always generate exactly one proxy node for every group socket in advance, regardless of whether it is used internally. Internal node sockets can then all map to this proxy node.
In the case of output nodes there should only ever be one active node, otherwise the connection to the proxy would be ambiguous. For this purpose the NODE_DO_OUTPUT flag has been exposed to RNA, so that Cycles can check it and only use the active output.
PyNodes opens up the node system in Blender to scripters and adds a number of UI-level improvements.
=== Dynamic node type registration ===
Node types can now be added at runtime, using the RNA registration mechanism from python. This enables addons such as render engines to create a complete user interface with nodes.
Examples of how such nodes can be defined can be found in my personal wiki docs atm [1] and as a script template in release/scripts/templates_py/custom_nodes.py [2].
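As a bare-bones illustration of the registration mechanism (see the template in
[2] for the real thing; all names below are placeholders):

    import bpy

    class MyCustomTree(bpy.types.NodeTree):
        bl_idname = "MyCustomTreeType"
        bl_label = "My Custom Tree"
        bl_icon = 'NODETREE'

    class MyCustomNode(bpy.types.Node):
        bl_idname = "MyCustomNodeType"
        bl_label = "My Custom Node"

        @classmethod
        def poll(cls, ntree):
            # only allow this node in our own tree type
            return ntree.bl_idname == "MyCustomTreeType"

        def init(self, context):
            self.inputs.new('NodeSocketFloat', "Value")
            self.outputs.new('NodeSocketColor', "Color")

    bpy.utils.register_class(MyCustomTree)
    bpy.utils.register_class(MyCustomNode)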
=== Node group improvements ===
Each node editor now has a tree history of edited node groups, which allows opening and editing nested node groups. The node editor also supports pinning now, so that different spaces can be used to edit different node groups simultaneously. For more ramblings and rationale see (really old) blog post on code.blender.org [3].
The interface of node groups has been overhauled. Sockets of a node group are no longer displayed in columns on either side, but instead special input/output nodes are used to mirror group sockets inside a node tree. This solves the problem of long node lines in groups and allows more adaptable node layout. Internal sockets can be exposed from a group by either connecting to the extension sockets in input/output nodes (shown as empty circle) or by adding sockets from the node property bar in the "Interface" panel. Further details such as the socket name can also be changed there.
[1] http://wiki.blender.org/index.php/User:Phonybone/Python_Nodes
[2] http://projects.blender.org/scm/viewvc.php/trunk/blender/release/scripts/templates_py/custom_nodes.py?view=markup&root=bf-blender
[3] http://code.blender.org/index.php/2012/01/improving-node-group-interface-editing/
The issue was caused by Cycles trying to find builtin images in the main
database, but in the case of preview render images are not in the database,
they're just referenced by the shader node tree.
Now builtin images in Cycles have a void* pointer to store the data
needed to load them.
In the case of a Blender session, this pointer stores the pointer from the
PointerRNA of the image datablock and is used later to construct the Image
class based on this pointer.
This also saves a database lookup for final render, which is nice :)
Reviewed by Brecht.
This adds support for movie textures for Cycles rendering.
It uses the same builtin image routines as packed/generated images,
but with some extra non-RNA hookups from the blender_session side.
Basically, it's not so clear how to give access to video frames
via C++ RNA -- it would require exposing ImBuf to the API, doing some
threading locks and so on. Ended up adding two more functions which
are actually bad level calls, but I don't consider it that bad
-- we have a few bad calls already, which are actually related.
Changed a bit how builtin image names are passed to the image
manager. Now it's not just an ID datablock name, but also a frame
number concatenated via the '@' character, which makes it possible to
easily know the frame number to use for movie images, without
adding extra descriptors to the image manager.
Decoding of the builtin name is a bit slower now, but it should
still be nothing in comparison with rendering complexity.
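In Python terms the naming convention is roughly this (an illustration of the
scheme only; the actual code lives in the C++ session layer):

    def encode_builtin_name(name, frame):
        # ID datablock name plus frame number, joined by '@'
        return "%s@%d" % (name, frame)

    def decode_builtin_name(builtin_name):
        name, _, frame = builtin_name.rpartition('@')
        return name, int(frame)

    # round trip: ("my_movie.mov", 42)
    print(decode_builtin_name(encode_builtin_name("my_movie.mov", 42)))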
Also exposed the image user's frame_current to the Python API, which
is needed to get the absolute frame number of the movie from the node's
image user.
P.S. Generated/packed images are also using the bad level call, but
only to keep things clear here: either all images use C++ RNA here
or none do. That's the clearest for now.
This commit adds support for packed and generated images
in Cycles when using the SVM backend. Movies are still not
supported. This change also doesn't touch OSL, which is
much less trivial to adapt for images that are not
saved to disk.
Implementation details:
- When adding images to the Image Manager it is now possible
  to mark an image as builtin. Builtin images bypass the
  OIIO loader and use special loading callbacks.
- Callbacks are set by the Blender Session and use the
  C++ RNA interface to obtain the needed data (pixels,
  dimensions, is_float flag).
- The Image Manager assumes a file path is used as the reference
  to a builtin image, but in fact the image datablock
  name is currently used as the reference. This makes it
  easy to find an image in the BlendData database.
- Added some extra properties to Image RNA (see the example
  after this list):
  * channels, which denotes the actual number of channels
    in the ImBuf. This is needed to treat the image's pixels
    correctly (before it wasn't possible because the API
    used the internal number of channels for pixels, which
    in fact doesn't correlate with the image depth).
  * is_float, which is true if the image is stored in the
    float buffer of the ImBuf.
- Implemented string lookup for C++ RNA collections
  for cases where there's no manual lookup function.
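From the Python side the new properties read roughly like this (the image name
is hypothetical):

    import bpy

    img = bpy.data.images["Untitled"]
    print(img.channels)        # actual number of channels in the ImBuf
    print(img.is_float)        # True if pixels are stored in ImBuf's float buffer
    pixels = list(img.pixels)  # flat float array of the image pixels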
OSL is not supported because it uses its own image loading
and filtering routines and there seems to be no API
to feed pre-loaded pixels directly to the library.
I think we'll either need to add some API to support
this kind of feeding, or accept that OSL does not
support packed images at all.
Movies are not supported at this moment because of the lack
of an RNA API to load a specified frame. It's not difficult
to solve, just need to consider what's best here:
* Either write some general python interface for ImBuf
and use it via C++ API, or
* Write a PY API function which will return pixels for
given frame, or
* Use bad-level BKE_* call
Anyway, small steps, further improvements later.
Reviewed by Brecht, thanks!
Patch [#33445] - Experimental Cycles Hair Rendering (CPU only)
This patch allows hair data to be exported to Cycles and introduces a new line segment primitive to render with.
The UI appears under the particle tab and there is a new hair info node available.
It is only available under the Experimental feature set and for CPU rendering.