Expand Cycles to use the new baking API in Blender.
It works on the selected object, and the panel can be accessed in the Render panel (similar to where it is for the Blender Internal).
It bakes into the active texture of each material of the object. The active texture is currently defined as the active Image Texture node present in the material node tree. If you don't want the baking to override an existing material, make sure the active Image Texture node is not connected to the node tree. The active texture is also the texture shown in the viewport in rendered mode.
Remember to save your images after the baking is complete.
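For scripted workflows the bake can also be run from Python. A minimal sketch, assuming Cycles is the active render engine and the active object has a UV map and a node-based material (image size and names are placeholders):

  import bpy

  obj = bpy.context.active_object
  mat = obj.active_material

  # Create an image and an Image Texture node to receive the bake result;
  # making the node active makes it the bake target.
  img = bpy.data.images.new("bake_result", width=1024, height=1024)
  node = mat.node_tree.nodes.new("ShaderNodeTexImage")
  node.image = img
  mat.node_tree.nodes.active = node

  bpy.ops.object.bake(type='COMBINED')  # bake the Combined pass

  # Remember to save: the baked pixels only live in the image datablock.
  img.filepath_raw = "//bake_result.png"
  img.file_format = 'PNG'
  img.save()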
Note: Bake currently only works on the CPU.
Note: This is not supported by Cycles standalone, because a lot of the work is done in Blender as part of the operator, not in the engine (Cycles).
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Bake
Supported Passes:
-----------------
Data Passes
* Normal
* UV
* Diffuse/Glossy/Transmission/Subsurface/Emit Color
Light Passes
* AO
* Combined
* Shadow
* Diffuse/Glossy/Transmission/Subsurface/Emit Direct/Indirect
* Environment
Review: D421
Reviewed by: Campbell Barton, Brecht van Lommel, Sergey Sharybin, Thomas Dinges
Original design by Brecht van Lommel.
The entire commit history can be found on the branch: bake-cycles
Notes:
* The motion steps only affect deformation motion blur.
* The actual number of steps is 2^(steps - 1). This avoids having to sample at
many different times for objects with more/fewer steps; now the times overlap.
* Deformation motion blur is enabled by default in existing files that have
motion blur enabled. If the object is not deforming, this will be detected at
export time, so raytracing performance will not be affected.
Part of the code is from the Summer of Code project by Gavin Howard.
This adds a new option "Sample All Lights" to the Sampling panel in Cycles (Branched Path). When enabled, Cycles will sample all the lights in the scene for the indirect samples, instead of randomly picking one. This already happens for direct samples; now you can optionally enable it for indirect samples as well.
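From Python the option maps to a property on the scene's Cycles settings. A minimal sketch (property names as exposed by the Cycles add-on):

  import bpy

  scene = bpy.context.scene
  scene.cycles.progressive = 'BRANCHED_PATH'      # Branched Path integrator
  scene.cycles.sample_all_lights_indirect = True  # sample every light for indirect bounces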
Example file and renders:
Blend file: http://www.pasteall.org/blend/27411
Random: http://www.pasteall.org/pic/show.php?id=68033
All: http://www.pasteall.org/pic/show.php?id=68034
Sampling all lights is a bit slower, but there is less variance, so it should help in situations with many lights.
Patch by myself with some tweaks by Brecht.
Differential Revision: https://developer.blender.org/D391
This adds an option in the Volume Sampling panel, which helps render lamps
inside or near volumes with less noise. It can also increase noise, though,
and needs improvements to support MIS and heterogeneous volumes, but since
it's already useful in some cases (especially world volumes), it's there now.
Based on the code in the old branch by Stuart, with modifications by Thomas
and Brecht.
Differential Revision: https://developer.blender.org/D291
Indirect and Direct samples can now be clamped individually. This way we can clamp the indirect samples (fireflies), while keeping the direct highlights.
Example render: http://www.pasteall.org/pic/show.php?id=66586
WARNING: This breaks backwards compatibility. If you had Clamping enabled in an old file, you must re-enable Direct clamping, Indirect clamping, or both.
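The two clamps are exposed as separate properties, so old files can also be updated with a script. A minimal sketch (property names as exposed by the Cycles add-on; the clamp values are placeholders):

  import bpy

  cycles = bpy.context.scene.cycles
  cycles.sample_clamp_direct = 0.0     # 0.0 disables clamping, keeping direct highlights
  cycles.sample_clamp_indirect = 10.0  # clamp indirect samples to tame fireflies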
Reviewed by: brecht
Differential Revision: https://developer.blender.org/D303
This is done by adding a Volume Scatter node. In many cases you will want to
add together a Volume Absorption and Volume Scatter node with the same color
and density to get the expected results.
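A script that builds such a setup could look like the following sketch (node and socket identifiers as found in Blender's shader node API; the color and density values are placeholders):

  import bpy

  mat = bpy.data.materials.new("Volume")
  mat.use_nodes = True
  nodes, links = mat.node_tree.nodes, mat.node_tree.links

  scatter = nodes.new("ShaderNodeVolumeScatter")
  absorb = nodes.new("ShaderNodeVolumeAbsorption")
  add = nodes.new("ShaderNodeAddShader")
  out = nodes["Material Output"]

  # Same color and density on both nodes, added together.
  for node in (scatter, absorb):
      node.inputs["Color"].default_value = (0.8, 0.8, 0.8, 1.0)
      node.inputs["Density"].default_value = 1.0

  links.new(scatter.outputs[0], add.inputs[0])
  links.new(absorb.outputs[0], add.inputs[1])
  links.new(add.outputs[0], out.inputs["Volume"])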
This should work with branched path tracing, mixing closures, overlapping
volumes, etc. However, there are still various optimizations needed for sampling.
The main missing thing from the volume branch is the equiangular sampling for
homogeneous volumes.
The heterogeneous scattering code was arranged such that we can use a single
stratified random number for distance sampling, which gives less noise than
pseudo random numbers for each step. For volumes where the color is textured
there still seems to be something off; this needs to be investigated.
Volumes can now have textured colors and density. There is a Volume Sampling
panel in the Render properties with these settings:
* Step size: distance between volume shader samples when rendering the volume.
Lower values give more accurate and detailed results but also increased render
time.
* Max steps: maximum number of steps through the volume before giving up, to
protect from extremely long render times with big objects or small step sizes.
Textured volumes are much more compute intensive than homogeneous volumes, so
when you are not using a texture you should enable the Homogeneous Volume
option in the material or world for faster rendering.
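These settings are also available from Python. A minimal sketch, assuming the 2.7x-era property names (the material name and values are placeholders):

  import bpy

  scene = bpy.context.scene
  scene.cycles.volume_step_size = 0.1   # smaller = more accurate, slower render
  scene.cycles.volume_max_steps = 1024  # cap on steps before giving up

  # For untextured volumes, flag the material as homogeneous for speed.
  mat = bpy.data.materials["Volume"]
  mat.cycles.homogeneous_volume = True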
One important missing feature is that Generated texture coordinates are not yet
working in volumes, and they are the default coordinates for nearly all texture
nodes. So until that works you need to plug in object texture coordinates or a
world space position.
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
This is the simplest possible volume rendering case, constant density inside
the volume and no scattering or emission. My plan is to tweak, verify and commit
more volume rendering effects one by one, doing it all at once makes it
difficult to verify correctness and track down bugs.
Documentation is here:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Materials/Volume
Currently this hooks into path tracing in 3 ways, which should get us pretty
far until we add more advanced light sampling. These 3 hooks are repeated in
the path tracing, branched path tracing and transparent shadow code:
* Determine active volume shader at start of the path
* Change active volume shader on transmission through a surface
* Light attenuation over line segments between camera, surfaces and background
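The light attenuation in the last hook follows the standard Beer-Lambert falloff for a constant-density absorbing volume; a small illustration of the math, not the actual kernel code:

  import math

  def transmittance(sigma_a, distance):
      # Beer-Lambert: light crossing 'distance' units of a homogeneous volume
      # with absorption coefficient sigma_a is attenuated exponentially.
      return math.exp(-sigma_a * distance)

  # e.g. density 0.5 over a segment of length 2.0 lets ~37% of the light through
  print(transmittance(0.5, 2.0))  # 0.3678...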
This is work by "storm", Stuart Broadfoot, Thomas Dinges and myself.
This actually works somewhat now, although viewport rendering is broken and any
kind of network error or connection failure will kill Blender.
* Experimental WITH_CYCLES_NETWORK cmake option
* Networked Device is shown as an option next to CPU and GPU Compute
* Various updates to work with the latest Cycles code
* Locks and thread safety for RPC calls and tiles
* Refactored pointer mapping code
* Fix error in CPU brand string retrieval code
This includes work by Doug Gale, Martijn Berger and Brecht van Lommel.
Reviewers: brecht
Differential Revision: http://developer.blender.org/D36
* Change the default Light Path settings.
* Diffuse/Glossy bounces are now set to 4, to give a bit faster renders in default scenes. More bounces are often not needed (especially in animation).
* Transmission bounces have been increased to 12, to not run into problems with dark glass too quickly.
* Max/Min bounces are now 12/3.
* Rename the "Progressive" and "Non-Progressive" integrators to "Path Tracing"
and "Branched Path Tracing", to try to make it more clear that this is not
related to progressive refinement; non-progressive was always a bad name anyway.
* Add a "Total Samples" info at the bottom of the panel.
This makes understanding the Non-Progressive integrator easier, as it displays how many samples are used for the different ray types.
* Rename "Squared Samples" to "Square Samples", to indicate that the action has not already been done. The new Total Samples info should make this easier to understand now as well. Also added back for the Progressive integrator, for consistency.
Screenshot:
http://www.pasteall.org/pic/show.php?id=57980
These are not animatable! Note this is the case for most (all?) render settings; maybe we should go over both the Cycles and internal ones, as there are still quite a bunch of them marked as animatable... :/
- Removed the Cycles subdivision and interpolation of hair keys.
- Removed the parent settings.
- Removed all of the advanced settings and presets.
- This simplifies the UI to a few settings for the primitive type and a shape mode.
* Add a "Squared Samples" option to the UI, to use squared values for ease of use. This can make it easier to tweak settings from an artist's point of view.
With this enabled, all Sample values will be squared. So 10 Samples become 100 Samples.
For the Non-Progressive integrator: 4 AA Samples * 5 Diffuse Samples would become 16 AA Samples * 25 Diffuse = 400 in total.
Patch by Matt Heimlich, with some minor edits by myself. Thanks!
Add an option to use a correlated multi-jitter sampling pattern instead of
Sobol. So far neither seems to be consistently better or worse than the other
for the same number of samples, but more testing is needed.
The random number generator itself is slower than Sobol for most sample
counts, except 16, 64, 256, ..., because those can be computed faster. This
can probably be optimized, but we can do that when/if this actually turns out
to be useful.
Paper this implementation is based on:
http://graphics.pixar.com/library/MultiJitteredSampling/
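For reference, the canonical (unshuffled) multi-jitter arrangement from the paper can be sketched as below; the correlated variant then applies hash-based permutations to the sub-stratum indices to break up the diagonal correlation:

  import random

  def multi_jitter(m, n, seed=0):
      # m*n samples stratified on the fine m*n grid and on both the
      # coarse m-column and n-row projections (canonical arrangement).
      rng = random.Random(seed)
      samples = []
      for j in range(n):
          for i in range(m):
              x = (i + (j + rng.random()) / n) / m
              y = (j + (i + rng.random()) / m) / n
              samples.append((x, y))
      return samples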
Also includes some refactoring of RNG code, fixing a Sobol correlation issue with
the first BSDF and < 16 samples, skipping some unneeded RNG calls and using a
simpler unit square to unit disk function.
Render layers have their own Samples setting, and the Sampling panel now has an option to specify how to use them. There are three options:
* Use: render layer samples override scene samples
* Bounded: bound render layer samples by scene samples
* Ignore: ignore render layer sample settings
Code is added to restrict the pixel size of strands in Cycles. It works best with ribbon primitives, and a preset for these is included. It uses distance-dependent expansion of the strands and then stochastic strand removal to give a fade-out. To prevent a slowdown for triangle mesh objects in the BVH, an extra visibility flag has been added. It is also only applied for camera rays.
The strand width settings are also changed, so that the particle size is not included in the width calculation. Instead there is a separate particle system parameter for width scaling.
This adds subsurface scattering support. It's not working as well as I would
like, but it works; just add a Subsurface Scattering node and you can use it
like any other BSDF.
It is using fully raytraced sampling compatible with progressive rendering
and other more advanced rendering algorithms we might use in the future, and
it uses no extra memory, so it's suitable for complex scenes.
The disadvantage is that it can be quite noisy and slow. Two limitations that
will be solved are that it does not work with bump mapping yet, and that the
falloff function used is a simple cubic function; it's not using the real
BSSRDF falloff function yet.
The node has a color input, a scattering radius for each RGB color channel,
and an overall scale factor for the radii.
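Hooking the node up from Python could look like this sketch (socket names as exposed by the shader node API; the material name, radii and scale values are placeholders):

  import bpy

  mat = bpy.data.materials.new("Skin")
  mat.use_nodes = True
  nodes, links = mat.node_tree.nodes, mat.node_tree.links

  sss = nodes.new("ShaderNodeSubsurfaceScattering")
  sss.inputs["Color"].default_value = (0.8, 0.5, 0.4, 1.0)
  sss.inputs["Radius"].default_value = (1.0, 0.4, 0.2)  # per-RGB scattering radii
  sss.inputs["Scale"].default_value = 0.01              # overall scale for the radii

  links.new(sss.outputs["BSSRDF"], nodes["Material Output"].inputs["Surface"])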
There is also no GPU support yet; I will test if I can get that working later.
Node Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
Implementation notes:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Subsurface_Scattering
When using triangle primitives this fix enables 'closed tip'.
UVs and vertex colours are added when using triangle primitives for hair.
Two new preset modes have also been included to allow easy access to curves and triangle planes.
Added export of multiple UV coordinates and vertex colour attributes.
A debugging option to export the strands without using the cache has also been removed.
The curve segment primitive has been added. This includes an intersection function and changes to the BVH.
A few small errors in the line segment intersection routine are also fixed.
* Added a new option to choose the tile order.
In addition to the "Center" method, 4 new methods are available now, such as Top -> Bottom and Right -> Left.
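From Python this would look roughly like the sketch below (the enum identifiers are inferred from the UI labels and worth verifying):

  import bpy

  scene = bpy.context.scene
  scene.cycles.tile_order = 'TOP_TO_BOTTOM'  # or 'CENTER', 'RIGHT_TO_LEFT', ...
  scene.render.tile_x = scene.render.tile_y = 64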
Thanks to Sergey for code review and some tweaks!
These assumptions are now made:
- Internally, float buffers are always linear alpha-premul colors.
- Readers should take care to deliver float buffers that satisfy this
  assumption.
- There's an input image setting to say whether it's stored with
  straight/premul alpha on disk.
- Byte buffers are now assumed to have straight alpha, and readers should
  deliver straight alpha.
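The conversions implied by these assumptions are straightforward; a small sketch:

  def straight_to_premul(pixel):
      # Straight (unassociated) -> premultiplied: scale RGB by alpha.
      r, g, b, a = pixel
      return (r * a, g * a, b * a, a)

  def premul_to_straight(pixel):
      # Premultiplied (associated) -> straight: divide RGB by alpha.
      r, g, b, a = pixel
      if a == 0.0:
          return (0.0, 0.0, 0.0, 0.0)  # fully transparent, RGB is undefined
      return (r / a, g / a, b / a, a)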
Some implementation details:
- Removed the scene's color unpremultiply setting, which was very
  confusing and wrong for default settings.
  All renderers are now assumed to deliver premultiplied alpha.
- IMB_buffer_byte_from_float will now linearize alpha when
converting from buffer.
- Sequencer's effects were changed to assume bytes have straight alpha.
  Most effects will still work directly on bytes; for glow, however, it
  was trickier to avoid data loss, so there's a commented-out glow
  implementation which converts the byte buffer to floats first,
  operates on floats and returns bytes back. It's slower, and it's not
  clear whether it should actually be used -- who's using glow on alpha
  anyway?
- Sequencer modifiers should also be working nicely with straight
  bytes now.
- GLSL preview will predivide float textures to get nice shading;
  shading with byte textures already worked fine (GLSL was assuming
  straight alpha).
- Blender Internal will set alpha=1 for the whole sky. The same
  happens in Cycles and there's no way to avoid this -- sky is
  neither straight nor premul and doesn't fit the color pipeline well.
- Straight alpha mode for render result was also eliminated.
- Conversion to the correct alpha needs to be done before linearizing
  the float buffer.
- TIFF will now load and save files with proper alpha mode setting
in file meta data header.
- Removed Use Alpha from texture mapping and replaced it with an image
  datablock setting.
  This behaves much more predictably, is clearer from a code point of
  view, and solves possible regressions when non-premultiplied images
  were used as textures with the alpha channel ignored.
Patch [#33445] - Experimental Cycles Hair Rendering (CPU only)
This patch allows hair data to be exported to Cycles and introduces a new line segment primitive to render with.
The UI appears under the particle tab and there is a new Hair Info node available.
It is only available under the experimental feature set and for CPU rendering.
Fix issues with the Persistent Images option that was added for Cycles.
This fixes the case where the option is disabled. I moved the option to
Blender itself and made it keep the engine around only when it's enabled.
This also fixes possible issues when switching to another renderer.
This option keeps loaded images in memory in between renders.
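In current builds the option is exposed as a render setting; a one-line sketch (property name worth verifying for older versions):

  import bpy

  bpy.context.scene.render.use_persistent_data = True  # keep images between renders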
Implemented by keeping the render engine alive until the Render structure
is freed.
Cycles will free all data when the render finishes, optionally keeping the
image manager untouched. All shaders, meshes and objects will be re-allocated
the next time rendering happens.
The Cycles session and scene will be re-created from scratch if render/scene
parameters have changed.
This will also allow keeping compiled OSL shaders in memory without the need
to re-compile them.
P.S. The Performance panel could be cleaned up a bit; I'm not too happy with
its vertical alignment currently, but I'm not sure how to make it look
better.
P.P.S. Currently the only way to free images from the device is to disable
the Persistent Images option and start rendering.