Supports both smoke/fire and point density textures now.
This reduces the number of textures available for sm_20 and sm_21, but you have
to compromise somewhere on such limited hardware.
Currently limited to linear interpolation only, and decoupled ray
marching is not supported yet. Those can be considered further
improvements.
A quick example:
https://developer.blender.org/F282934
The code is minimal and can essentially be considered a fix for the missing
support of 3D textures with CUDA.
Reviewers: lukasstockner97, brecht, juicyfruit, dingto
Reviewed By: brecht, juicyfruit, dingto
Subscribers: mib2berlin
Differential Revision: https://developer.blender.org/D1806
This commit changes the way we pass bounce information to the Light
Path node. Instead of manually copying the bounces into ShaderData, we now
directly pass PathState. This reduces the number of arguments we need to pass
around and also makes it easier to extend the feature.
This commit also exposes the Transmission Bounce Depth to the Light Path
node. It works similarly to the Transparent Depth output: replace a
transmission light path after X bounces with another shader, e.g. a diffuse
one. This can be used to avoid black surfaces caused by a low maximum
bounce count.
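To illustrate the idea (the struct fields and output mapping below are assumptions made for this sketch, not the actual kernel code): once the node sees PathState directly, each depth output is just a read of the corresponding counter.

    // Sketch: a Light Path style output reading bounce depths directly from a
    // path state struct instead of copies in ShaderData. Field names are
    // illustrative only, not the actual kernel layout.
    struct PathState {
      int bounce;               // total bounces so far
      int transparent_bounce;   // transparent bounces
      int transmission_bounce;  // transmission bounces (newly exposed)
    };

    enum LightPathOutput { OUT_RAY_DEPTH, OUT_TRANSPARENT_DEPTH, OUT_TRANSMISSION_DEPTH };

    // A shader can e.g. switch to a cheaper diffuse closure once the
    // transmission depth exceeds some threshold.
    float light_path_depth(const PathState &state, LightPathOutput out)
    {
      switch (out) {
        case OUT_RAY_DEPTH: return (float)state.bounce;
        case OUT_TRANSPARENT_DEPTH: return (float)state.transparent_bounce;
        case OUT_TRANSMISSION_DEPTH: return (float)state.transmission_bounce;
      }
      return 0.0f;
    }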
Reviewed by Sergey and Brecht, thanks for some help with this.
I tested compilation and usage on CPU (SVM and OSL), CUDA, OpenCL Split
and Mega kernel. Hopefully this covers all devices. :)
The idea of this node is to sample 3D voxels at a given coordinate,
supporting different mapping strategies (world space mapping, object
local space, etc.).
Currently not in use, it's a preparation step for supporting point density
textures.
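Roughly, the lookup boils down to choosing the space in which to interpret the coordinate and then fetching from the grid. A minimal sketch of that idea follows (the grid layout, transform type and enum names are placeholders, not the actual Cycles structures):

    // Sketch: sample a dense 3D float grid at a point, with either world-space
    // or object-local-space mapping (interpolation omitted for brevity).
    #include <algorithm>
    #include <vector>

    struct float3 { float x, y, z; };

    // Toy 3x4 affine transform standing in for an object's inverse world matrix.
    struct Transform { float m[3][4]; };

    static float3 transform_point(const Transform &t, const float3 &p)
    {
      return {t.m[0][0] * p.x + t.m[0][1] * p.y + t.m[0][2] * p.z + t.m[0][3],
              t.m[1][0] * p.x + t.m[1][1] * p.y + t.m[1][2] * p.z + t.m[1][3],
              t.m[2][0] * p.x + t.m[2][1] * p.y + t.m[2][2] * p.z + t.m[2][3]};
    }

    enum VoxelMapping { MAPPING_WORLD, MAPPING_OBJECT };

    struct VoxelGrid {
      int nx, ny, nz;
      std::vector<float> data;  // densities, indexed as x + y*nx + z*nx*ny
    };

    // Map the shading point into the chosen space and fetch the nearest voxel,
    // assuming the volume occupies the unit cube in that space.
    float voxel_sample(const VoxelGrid &grid, float3 P, VoxelMapping mapping,
                       const Transform &world_to_object)
    {
      if (mapping == MAPPING_OBJECT)
        P = transform_point(world_to_object, P);  // object-local coordinates

      int x = std::clamp(int(P.x * grid.nx), 0, grid.nx - 1);
      int y = std::clamp(int(P.y * grid.ny), 0, grid.ny - 1);
      int z = std::clamp(int(P.z * grid.nz), 0, grid.nz - 1);
      return grid.data[x + y * grid.nx + z * grid.nx * grid.ny];
    }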
This will help catch cases where a node was not properly handled by the SVM,
by aborting execution on the CPU, where all nodes are expected to be supported.
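The mechanism boils down to something like the sketch below (the node names and the CPU guard macro are placeholders; Cycles has its own node type enum and kernel defines):

    // Sketch: the SVM dispatch loop aborts on the CPU when it hits a node type
    // it does not handle, instead of silently skipping it.
    #include <cstdio>
    #include <cstdlib>

    enum NodeType { NODE_END, NODE_CLOSURE_BSDF, NODE_TEX_IMAGE /* , ... */ };

    void svm_eval_node(NodeType type)
    {
      switch (type) {
        case NODE_END:
        case NODE_CLOSURE_BSDF:
        case NODE_TEX_IMAGE:
          /* ... actual node evaluation ... */
          break;
        default:
    #ifdef KERNEL_CPU  /* placeholder for a CPU-only build flag */
          fprintf(stderr, "Unhandled SVM node type %d\n", (int)type);
          abort();  /* on CPU every node is expected to be supported */
    #else
          break;    /* on GPU, unsupported nodes are silently ignored */
    #endif
      }
    }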
This commit finishes the initial selective compilation of nodes into the kernel,
which helps a lot performance-wise for AMD OpenCL kernels.
The split into node groups is based on statistics from simple scenes like the BMW
benchmark and more complex scenes like the Mango and Gooseberry production files.
Further tweaks are always possible, but it should be a good starting point.
TODO: Still need to ignore unused nodes when calculating requested shader
features.
This commit contains all the work related to the AMD megakernel split,
which was mainly done by Varun Sundar, George Kyriazis and Lenny Wang, plus
some help from Sergey Sharybin, Martijn Berger, Thomas Dinges and likely
someone else whom we're forgetting to mention.
Currently only AMD cards are enabled for the new split kernel, but it is
possible to force the split OpenCL kernel to be used by setting the following
environment variable: CYCLES_OPENCL_SPLIT_KERNEL_TEST=1.
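For reference, the override check can be as simple as the sketch below (the surrounding device-selection logic is an assumption; only the environment variable name comes from this commit):

    // Sketch: decide whether to use the split OpenCL kernel, honoring the
    // CYCLES_OPENCL_SPLIT_KERNEL_TEST environment variable as a forced override.
    #include <cstdlib>
    #include <cstring>

    static bool device_is_amd() { return false; }  // stub for the real device check

    bool use_split_kernel()
    {
      const char *force = std::getenv("CYCLES_OPENCL_SPLIT_KERNEL_TEST");
      if (force && std::strcmp(force, "1") == 0)
        return true;           // forced on for testing on any OpenCL device
      return device_is_amd();  // otherwise only AMD cards get the split kernel
    }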
Not all features are supported yet: there is no motion blur, camera blur,
SSS or volumetrics for now. Also, transparent shadows are disabled on AMD
devices because of a compiler bug.
This kernel also only implements regular path tracing; supporting the
branched one will take a while. Branched path tracing is still exposed in
the interface, which is a bit misleading and will be hidden soon.
More features will be enabled once they're ported to the split kernel and
tested.
Neither the regular CPU nor the CUDA kernel is affected; they generate
exactly the same code, which means no regressions/improvements there.
Based on the research paper:
https://research.nvidia.com/sites/default/files/publications/laine2013hpg_paper.pdf
Here's the documentation:
https://docs.google.com/document/d/1LuXW-CV-sVJkQaEGZlMJ86jZ8FmoPfecaMdR-oiWbUY/edit
Design discussion of the patch:
https://developer.blender.org/T44197
Differential Revision: https://developer.blender.org/D1200
The goal is to be able to compile the kernel with only the nodes which are
actually needed to render the current scene, hence improving kernel performance.
The idea is:
- Have a few node groups, starting with a group which contains nodes that are used
really often, and then a couple of groups which extend this one.
- Have feature-based node disabling, so it's possible to disable nodes related
to features which are not used with the currently used node group.
This commit only lays down the routines needed for this approach; the actual split
will happen later, after gathering statistics from a bunch of production scenes.
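A minimal sketch of the approach, assuming a node-group level plus a bitmask of requested features gathered while syncing the scene (the group and feature names below are illustrative, not the actual kernel defines):

    // Sketch: selective node compilation via node groups and feature flags.
    // Scene sync computes the highest node group and the feature mask actually
    // in use; the kernel is then compiled with only those enabled.
    #include <cstdint>

    // Node groups: each level is a superset of the previous one.
    enum NodeGroup {
      NODE_GROUP_LEVEL_0 = 0,  // most commonly used nodes
      NODE_GROUP_LEVEL_1 = 1,  // less common nodes
      NODE_GROUP_LEVEL_2 = 2,  // rarely used nodes
    };

    // Feature flags for optional functionality.
    enum NodeFeature : uint32_t {
      NODE_FEATURE_VOLUME = 1u << 0,
      NODE_FEATURE_HAIR   = 1u << 1,
      NODE_FEATURE_BUMP   = 1u << 2,
    };

    struct KernelRequirements {
      int max_group = NODE_GROUP_LEVEL_0;
      uint32_t features = 0;

      // Called once per node while syncing shaders.
      void add_node(NodeGroup group, uint32_t node_features)
      {
        if (group > max_group)
          max_group = group;
        features |= node_features;
      }
    };

    // The kernel build then only includes code for nodes whose group is
    // <= max_group and whose feature bits are present in `features`, e.g. by
    // passing them as compile-time defines to the OpenCL compiler.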
Now we calculate the color in the range 800..12000 using the approximation a/T + b*T + c for R and G, and ((a*T + b)*T + c)*T + d for B.
The max absolute RGB error of the non-LUT function is less than 0.0001, which is enough to get the same 8 bit/channel color as OSL, with a noticeable performance difference.
However, there is a slight visible difference from the previous non-OSL implementation, because of lookup table interpolation and an off-by-one mistake there.
The previous implementation gave a black color outside of the soft range (T > 12000); now it gives the same color as for 12000.
Also, a Blackbody node without an input connected is now converted to a constant value at shader compile time.
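In code the approximation has roughly the shape sketched below; the coefficients are named placeholders only, standing in for the actual fitted constants in the kernel source, and the behaviour below 800 is an assumption of the sketch.

    // Sketch of the blackbody approximation's shape: a rational fit for R and G
    // and a cubic for B, evaluated on the clamped temperature.
    #include <algorithm>

    struct float3 { float x, y, z; };

    float3 blackbody_rgb(float t)
    {
      // Above 12000 the color is held at the 12000 value; clamping the low end
      // to 800 is an assumption here.
      t = std::min(std::max(t, 800.0f), 12000.0f);

      // Placeholder coefficients (NOT the real fitted constants).
      const float r_a = 1.0f, r_b = 0.0f, r_c = 0.0f;
      const float g_a = 1.0f, g_b = 0.0f, g_c = 0.0f;
      const float b_a = 0.0f, b_b = 0.0f, b_c = 0.0f, b_d = 0.0f;

      float r = r_a / t + r_b * t + r_c;
      float g = g_a / t + g_b * t + g_c;
      float b = ((b_a * t + b_b) * t + b_c) * t + b_d;

      return {r, g, b};
    }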
Reviewers: dingto, sergey
Reviewed By: dingto
Subscribers: nutel, brecht, juicyfruit
Differential Revision: https://developer.blender.org/D1280
This attribute was missing derivative calculation.
Not totally sure what the proper approach for an algebraic derivative
calculation is, so they are calculated by definition. This isn't the fastest
way to do it in this case and could be replaced with some smarter magic
in the wireframe calculation loop.
At least the currently implemented approach is better than nothing.
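"By definition" here just means finite differences: re-evaluate the attribute at the shifted shading positions and take the difference, roughly as in this generic sketch (not the actual wireframe code):

    // Sketch: derivatives "by definition" via forward differences.
    // f is the attribute (e.g. the wireframe value) evaluated at a point;
    // dPdx and dPdy are the pixel-footprint offsets of the shading point.
    #include <functional>

    struct float3 { float x, y, z; };

    static float3 add(const float3 &a, const float3 &b)
    {
      return {a.x + b.x, a.y + b.y, a.z + b.z};
    }

    struct Derivatives { float dfdx, dfdy; };

    Derivatives attribute_derivatives(const std::function<float(const float3 &)> &f,
                                      const float3 &P, const float3 &dPdx,
                                      const float3 &dPdy)
    {
      const float f0 = f(P);
      // Slower than an analytic derivative, but always works.
      return {f(add(P, dPdx)) - f0, f(add(P, dPdy)) - f0};
    }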
This is the same as Blender Internal's texture mapping from another object, so
it is possible to control the texture space of one object with another.
It is quite a straightforward change apart from the workaround for the limitations
of the dependency graph: the shader now has a flag telling that it depends on an
object transform. This is the simplest way to know which shaders need to be tagged
for update when an object changes. This might give some false-positive tags, but
reducing them should not be a priority for Cycles; rather, the priority should be
bringing in the new dependency graph.
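The tagging itself is as coarse as the sketch below suggests (the struct and function names are made up for illustration):

    // Sketch: tagging shaders for update when an object moves. A shader that
    // uses another object for texture mapping gets a flag; when any object
    // transform changes, all flagged shaders are tagged for re-sync.
    #include <vector>

    struct Shader {
      bool depends_on_object_transform = false;
      bool need_update = false;
    };

    void tag_shaders_on_object_transform_change(std::vector<Shader> &shaders)
    {
      for (Shader &shader : shaders)
        if (shader.depends_on_object_transform)
          shader.need_update = true;  // may over-tag, but never misses an update
    }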
Also, the GLSL preview does not support using another object for mapping.
This is actually the case for BI shading as well, and is to be addressed as
part of general GLSL viewport improvements, since it's not really clear
how to support this in GLSL.
Reviewers: brecht, juicyfruit
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1021
That was only needed in the beginning, when we did not have support for tangents. It's time to clean some of the defines up; it's getting to be a bit too much.
This was the original code to get things working on old GPUs, but now it is no
longer in use, and various features in fact depend on this to work correctly, to
the point that enabling this code is too buggy to be useful.
Optimization to skip evaluating a shader when the mix weight
for one of the input shaders is zero.
This gives about a 5% speedup for koro_final.blend. In general this is important
so you can design shaders that run faster for shadows, diffuse bounces, etc., for
example by skipping procedural textures or even using a single fixed color.
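The optimization itself is tiny, along the lines of this sketch (hypothetical names; the real code works on SVM/OSL closure evaluation):

    // Sketch: skip evaluating a mix shader input whose weight is zero.
    struct Color { float r, g, b; };

    // Stand-ins for evaluating the two input shaders (normally the expensive part).
    static Color eval_shader_a() { return {1.0f, 0.0f, 0.0f}; }
    static Color eval_shader_b() { return {0.0f, 0.0f, 1.0f}; }

    Color eval_mix_shader(float fac)
    {
      if (fac <= 0.0f)
        return eval_shader_a();  // B is never evaluated
      if (fac >= 1.0f)
        return eval_shader_b();  // A is never evaluated

      Color a = eval_shader_a();
      Color b = eval_shader_b();
      return {a.r * (1.0f - fac) + b.r * fac,
              a.g * (1.0f - fac) + b.g * fac,
              a.b * (1.0f - fac) + b.b * fac};
    }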
This works pretty much as you would expect: overlapping volume objects give
a denser volume. What did change is that world volume shaders are now
active everywhere; they are no longer excluded inside objects.
This may not be desirable, and we need to think of better control over this.
In some cases you clearly want it to happen, for example if you are rendering
a fire in a foggy environment. In other cases, like the inside of a house, you
may not want any fog, but it doesn't seem possible in general for the renderer
to automatically determine what is inside or outside of the house.
This is implemented using a simple fixed-size array of shader/object ID pairs,
limited to a maximum of 15 overlapping objects. The closures from all shaders are
put into a single closure array, exactly the same as if an Add Shader was used to
combine them.
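A simplified sketch of such a fixed-size stack (the field names and terminator convention are assumptions, not the actual kernel layout):

    // Sketch: fixed-size stack of (object, shader) pairs for overlapping volumes,
    // updated as the ray enters and leaves volume objects.
    #define VOLUME_STACK_SIZE 16  /* 15 entries + terminator */

    struct VolumeStackEntry {
      int object;
      int shader;
    };

    // Entries up to the first object == -1 are the volumes the ray is currently in.
    void volume_stack_enter(VolumeStackEntry stack[VOLUME_STACK_SIZE], int object, int shader)
    {
      int i = 0;
      while (i < VOLUME_STACK_SIZE - 1 && stack[i].object != -1) {
        if (stack[i].object == object)
          return;  /* already inside this object */
        i++;
      }
      if (i < VOLUME_STACK_SIZE - 1) {
        stack[i] = {object, shader};
        stack[i + 1].object = -1;  /* keep terminator */
      }
    }

    void volume_stack_exit(VolumeStackEntry stack[VOLUME_STACK_SIZE], int object)
    {
      for (int i = 0; i < VOLUME_STACK_SIZE - 1 && stack[i].object != -1; i++) {
        if (stack[i].object == object) {
          /* shift the remaining entries down, keeping the terminator */
          do {
            stack[i] = stack[i + 1];
            i++;
          } while (i < VOLUME_STACK_SIZE - 1 && stack[i].object != -1);
          return;
        }
      }
    }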
This avoids build conflicts with libc++ on FreeBSD; these __ prefixed names
are reserved for compilers. I apologize to anyone who has patches or branches
and has to go through the pain of merging this change; it may be easiest to do
these same replacements in your code and then apply/merge the patch.
Ref T37477.
* Added a new sky model by Hosek and Wilkie: "An Analytic Model for Full Spectral Sky-Dome Radiance" http://cgg.mff.cuni.cz/projects/SkylightModelling/
Example render:
http://archive.dingto.org/2013/blender/code/new_sky_model.png
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Sky_Texture
Details:
* User can choose between the older Preetham and the new Hosek / Wilkie model via a dropdown. For older files, backwards compatibility is preserved. When we add a new Sky texture, it defaults to the new model though.
* For the new model, you can specify the ground albedo (see documentation for details).
* Turbidity now has a UI soft range between 1 and 10, higher values (up to 30) are still possible, but can result in weird colors or black.
* Removed the limitation of 1 sky texture per SVM stack. (Patch by Lukas Tönne, thanks!)
Thanks to Brecht for code review and some help!
This is part of my GSoC 2013 project, SVN merge of r59214, r59220, r59251 and r59601.
* First (brute force) implementation for SVM. This works and delivers the same result as OSL, but it's slow.
* Code inside svm_blackbody.h inspired by a patch by Philipp Oeser (#35698), thanks.
Ideas:
* Use a lookup table to perform the calculations at render level.
* Implement it as an RNA property only, and do the calculation like the Sun/Sky precompute.
* Added a node to convert wavelength (in nanometers, from 380nm to 780nm) to RGB values. This can be useful to match real-world colors more easily.
Example render:
http://www.pasteall.org/pic/show.php?id=53202
ToDo:
* Move some functions into a util file, maybe a common util_color.h or so.
* Test on GPU; unfortunately sm_21 doesn't work for me yet.
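For intuition, a rough piecewise-linear wavelength-to-RGB approximation (in the style of Dan Bruton's well-known fit) looks like the sketch below; this only illustrates the idea and is not the conversion the node actually uses.

    // Sketch: crude piecewise-linear mapping from wavelength (nm) to RGB.
    // Outside roughly 380-780 nm the result stays black.
    struct Color { float r, g, b; };

    Color wavelength_to_rgb(float w)
    {
      float r = 0.0f, g = 0.0f, b = 0.0f;

      if (w < 380.0f || w > 780.0f) { /* outside visible range: stays black */ }
      else if (w < 440.0f) { r = (440.0f - w) / 60.0f; b = 1.0f; }
      else if (w < 490.0f) { g = (w - 440.0f) / 50.0f; b = 1.0f; }
      else if (w < 510.0f) { g = 1.0f; b = (510.0f - w) / 20.0f; }
      else if (w < 580.0f) { r = (w - 510.0f) / 70.0f; g = 1.0f; }
      else if (w < 645.0f) { r = 1.0f; g = (645.0f - w) / 65.0f; }
      else                 { r = 1.0f; }

      // Fade intensity toward the ends of the visible range.
      float intensity = 1.0f;
      if (w >= 380.0f && w < 420.0f)
        intensity = 0.3f + 0.7f * (w - 380.0f) / 40.0f;
      else if (w > 700.0f && w <= 780.0f)
        intensity = 0.3f + 0.7f * (780.0f - w) / 80.0f;

      return {r * intensity, g * intensity, b * intensity};
    }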
Patch [#33445] - Experimental Cycles Hair Rendering (CPU only)
This patch allows hair data to be exported to Cycles and introduces a new line segment primitive to render with.
The UI appears under the particle tab and there is a new Hair Info node available.
It is only available under the experimental feature set and for CPU rendering.
Each BSDF node now has a Normal input, which can be used to set a custom normal
for the BSDF, for example if you want to have only bump on one of the layers in
a multilayer material.
The Bump node can be used to generate a normal from a scalar value, the same as
what happens when you connect a scalar value to the displacement output.
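In essence, generating a normal from a scalar value means turning the height's derivatives into a perturbation of the surface normal; a simplified tangent-space version is sketched below (not the exact kernel math):

    // Sketch: build a perturbed normal from a height value's derivatives.
    // dhdx/dhdy are the height differences along the surface tangent directions;
    // the result is a tangent-space normal (z points along the original normal).
    #include <cmath>

    struct float3 { float x, y, z; };

    static float3 normalize(const float3 &v)
    {
      float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return {v.x / len, v.y / len, v.z / len};
    }

    float3 bump_normal(float dhdx, float dhdy, float strength)
    {
      // A steeper height gradient tilts the normal further away from +Z.
      return normalize({-dhdx * strength, -dhdy * strength, 1.0f});
    }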
Documentation has been updated with the latest changes:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes
Patch by Agustin Benavidez, some implementation tweaks by me.
It's using the Ward BSDF currently, which has some energy loss, so it might be a bit
dark. More/better BSDF options can be implemented later.
Patch by Mike Farnsworth, some modifications by me. Currently it's not yet possible
to set a custom tangent; that will follow as part of the per-BSDF normals patch.
Regular rendering now works tiled, and supports save buffers to save memory
during render and cache render results.
Brick texture node by Thomas.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Brick_Texture
Image texture Blended Box Mapping.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Image_Texture
http://mango.blender.org/production/blended_box/
Various bug fixes by Sergey and Campbell.
* Fix for reading freed memory in some node setups.
* Fix incorrect memory read when synchronizing mesh motion.
* Fix crash appearing when direct light usage is different on different layers.
* Fix for vector pass giving wrong results in some circumstances.
* Fix for wrong resolution used for rendering Render Layer node.
* Option to cancel rendering when doing initial synchronization.
* No more texture limit when using CPU render.
* Many fixes for new tiled rendering.
The particle data is stored in a separate texture if any of the dupli objects uses Particle Info nodes in its shaders. To map dupli objects onto particles, they store an additional particle_index value, which is different from the simple dupli object index (it counts only visible particles and also works for the particle dupli group mode).
Some simple use cases on the code.blender.org blog:
http://code.blender.org/index.php/2012/05/particle-info-node/
This gives access to the object location, object index, pass index, and a random number unique to the instance of the object.
This can be useful to give some variation to a single material assigned to
multiple instances, either manually controlled through the object index, based
on the object location, or randomized for each instance.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/More#Object_Info
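The "random number unique to the instance" part can be as simple as hashing the instance index into the [0, 1] range, roughly as sketched below (the hash here is a generic integer hash used for illustration, not necessarily the one Cycles uses):

    // Sketch: derive a stable per-instance random value by hashing the
    // instance/object index.
    #include <cstdint>

    static uint32_t hash_uint(uint32_t k)
    {
      // Generic integer mixing (Wang-style hash).
      k = (k ^ 61u) ^ (k >> 16);
      k *= 9u;
      k = k ^ (k >> 4);
      k *= 0x27d4eb2du;
      k = k ^ (k >> 15);
      return k;
    }

    float object_random(uint32_t instance_index)
    {
      return hash_uint(instance_index) * (1.0f / 4294967296.0f);  // map to [0, 1]
    }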