This feature is also known as Samples Offset. It allows artists to
render an animation with a given number of samples N, then later render
more samples, starting from N and ending at M (where M > N), and merge
the renders together as if exactly M samples had been rendered.
Surely such an effect could be achieved by changing the Seed value, but
that has possible issues with correlation artifacts and requires
manually dealing with per-render-layer samples and such.
While we can't support all possible renderfarm-related features in
Cycles, it's nice to support the really commonly used ones.
Here's how to run Blender with the new feature enabled:
blender -- --cycles-resumable-num-chunks 24 --cycles-resumable-current-chunk 2
This command will divide the sample range into 24 chunks and render
chunk #2 (chunk numbers are 1-based).
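As a rough illustration of how a chunk's sample range could be derived
from those two arguments, here is a minimal sketch with made-up names,
not the actual Cycles code:

  #include <cstdio>

  struct SampleRange {
    int start, count;
  };

  SampleRange chunk_sample_range(int total_samples, int num_chunks, int current_chunk)
  {
    // current_chunk is 1-based, matching the command line above.
    const int samples_per_chunk = total_samples / num_chunks;
    SampleRange range;
    range.start = samples_per_chunk * (current_chunk - 1);
    range.count = samples_per_chunk;
    return range;
  }

  int main()
  {
    // 240 samples split into 24 chunks: chunk 2 covers samples 10..19.
    SampleRange r = chunk_sample_range(240, 24, 2);
    printf("start=%d count=%d\n", r.start, r.count);
    return 0;
  }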
This feature might change a bit after we do some tests with it here in
the studio.
Buffer params need to know the camera's border, otherwise a full buffer
will be created. There might still be some issues with the stereo
camera, but in the worst case it'll only update the camera twice as far
as I can tell. Not ideal, but better than no border render at all.
This is a new option for panorama cameras to render stereo that can be
used in virtual reality devices.
The option is available in the camera panel when Multi-View is enabled
(Views option in the Render Layers panel).
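To give an idea of the principle, here is a minimal sketch of the
per-ray eye offset behind spherical stereo, assuming a hardcoded
(0,0,1) up axis as mentioned in the limitations below. This is
illustrative only, not the actual Cycles camera code:

  #include <cmath>

  struct Vec3 { float x, y, z; };

  static Vec3 cross(const Vec3 &a, const Vec3 &b)
  {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
  }

  static Vec3 normalize(const Vec3 &v)
  {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
  }

  // Offset the ray origin sideways by half the interocular distance,
  // perpendicular to the view direction and to a hardcoded up axis (0,0,1).
  static Vec3 stereo_eye_position(const Vec3 &center, const Vec3 &view_dir,
                                  float interocular_distance, bool left_eye)
  {
    const Vec3 up = {0.0f, 0.0f, 1.0f};
    const Vec3 side = normalize(cross(view_dir, up));
    const float offset = (left_eye ? -0.5f : 0.5f) * interocular_distance;
    return {center.x + side.x * offset,
            center.y + side.y * offset,
            center.z + side.z * offset};
  }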
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a convergence distance really high to simulate this effect).
* Pivot was not supposed to affect the render, but it does. This has to be looked at; for now set it to CENTER.
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words, I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map. (There is an addon for this, developed separately; perhaps we could include it in master.)
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support to rotate the "Up Axis" of the stereo plane. Meaning, we hardcode 0,0,1 as UP and create the stereo pair relative to that. (Although we could take the camera local UP when rendering panoramas, this wouldn't work for perspective cameras.)
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
THIS NEEDS DOCS - both in 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and minor sponsors who helped me maintain spherical-stereo for 1 year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time what I was looking for was Fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less-specific use case; thus Multi-View started. (This was December 2012, during Siggraph Asia and a chat I had with Paul Bourke during the conference.) I don't have the original patch anymore, but you can find a rebased version of it from March 2013, right before I started the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
Now pass_filter is modified to have exactly the flags for the light components
that need to be baked, based on the shader type. This simplifies the logic.
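A hedged sketch of the idea, with made-up flag names rather than the
actual Cycles ones, could look like this:

  enum BakeFilterFlag {
    FILTER_DIRECT   = (1 << 0),
    FILTER_INDIRECT = (1 << 1),
    FILTER_COLOR    = (1 << 2),
    FILTER_DIFFUSE  = (1 << 3),
    FILTER_GLOSSY   = (1 << 4),
  };

  enum ShaderEvalType { EVAL_DIFFUSE, EVAL_GLOSSY };

  // Keep only the flags that make sense for the given shader type, so the
  // downstream baking code doesn't need per-type special cases.
  int pass_filter_for_type(ShaderEvalType type, int user_filter)
  {
    switch (type) {
      case EVAL_DIFFUSE:
        return user_filter & (FILTER_DIRECT | FILTER_INDIRECT | FILTER_COLOR | FILTER_DIFFUSE);
      case EVAL_GLOSSY:
        return user_filter & (FILTER_DIRECT | FILTER_INDIRECT | FILTER_COLOR | FILTER_GLOSSY);
    }
    return user_filter;
  }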
This way we avoid passing structures which could be up to a few hundred
bytes by value to the utility functions.
Ideally we'd also add a `const` qualifier to the majority of the calls,
but C++ RNA does not allow us to do that because it does not know
whether a function modifies the contents or not.
The combined pass is built from whichever contributions the user sees
fit. It is useful for lightmap baking, as well as for baking
non-view-dependent effects.
The manual will be updated once we get closer to the 2.77 release.
Meanwhile the new page can be found here:
http://dalaifelinto.com/blender-manual/render/cycles/baking.html
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D1674
Works very similarly to camera motion blur, and the majority of the
changes are related to just passing extra arguments to the sync()
functions.
A couple of things still to look into:
- Motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is being interpolated on projected coordinates, which might give
different results from constructing projection matrix from interpolated
field of view.
This could be good enough for us, but we need to consider improving it
at some point (see the sketch below).
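A small sketch of why the two strategies differ, using a perspective
scale factor as a stand-in for the full projection matrix. This is
illustrative math only, not the actual Cycles camera code:

  #include <cmath>
  #include <cstdio>

  // Perspective scale factor for a given vertical field of view (radians).
  static float projection_scale(float fov)
  {
    return 1.0f / std::tan(0.5f * fov);
  }

  int main()
  {
    const float fov_start = 0.6f, fov_end = 1.0f;  // fov at shutter open/close
    const float t = 0.5f;                          // shutter midpoint

    // Current approach: interpolate the projected result.
    const float interp_projected = (1.0f - t) * projection_scale(fov_start) +
                                   t * projection_scale(fov_end);

    // Alternative: interpolate the field of view, then project.
    const float project_interp = projection_scale((1.0f - t) * fov_start + t * fov_end);

    // The two differ because projection_scale() is non-linear in the fov.
    printf("%f vs %f\n", interp_projected, project_interp);
    return 0;
  }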
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
This commit implements point density texture for Cycles shading nodes.
It's done by creating a voxel texture at shader compilation time. This
is not totally memory efficient, but it avoids adding sampling code to
the kernel (which keeps render time as low as possible). In the future
this will be compensated for by using OpenVDB for more efficient
storage of sparse volume data.
Sampling of the voxel texture happens on the Blender side, and the same
code is used as for Blender Internal's renderer.
This texture is controlled only by the object, particle system and
radius. Linear falloff is used and there's no turbulence. This is
because falloff shaping is expected to happen via a Curve Mapping node,
and turbulence will be done as a distortion of the input coordinate.
It's already possible to fake it using noise textures, and in the
future we can add a proper turbulence distortion node, which could then
also be used for 2D texture mapping.
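For reference, a rough sketch of what the linear falloff amounts to
when accumulating point contributions. This is illustrative only; the
real code lives on the Blender side and is shared with Blender
Internal:

  #include <cmath>
  #include <vector>

  struct Point3 { float x, y, z; };

  // Density contribution drops linearly from 1 at the point to 0 at `radius`;
  // any further shaping is left to a Curve Mapping node afterwards.
  static float linear_falloff(float distance, float radius)
  {
    return (distance < radius) ? 1.0f - distance / radius : 0.0f;
  }

  static float point_density_at(const Point3 &sample,
                                const std::vector<Point3> &points, float radius)
  {
    float density = 0.0f;
    for (const Point3 &p : points) {
      const float dx = sample.x - p.x, dy = sample.y - p.y, dz = sample.z - p.z;
      density += linear_falloff(std::sqrt(dx * dx + dy * dy + dz * dz), radius);
    }
    return density;
  }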
Particle color support is done by Lukas, thanks!
This means render devices may now skip building baking kernels when
only actual render-related functionality is used.
For now it's only implemented for the OpenCL split kernel device and is
mainly needed to work around some compiler-specific bugs which crash
when building the kernel.
Using OpenCL for baking might still crash the driver, but at least
there is now a higher probability that the GPU will be usable to render
the scene.
The real fix should actually be done on the driver side.
TODO: We might want to refactor debug passes into PASS_DEBUG and some
debug_type (similar to the passes on Blender's side) to avoid the issue
of running out of bits.
Quite a straightforward implementation which includes both regular and
split kernel variants, though the split kernel still needs some work.
The pass is not exposed in the interface yet because it's currently not
really easy to have the same pass listed in the menu multiple times.
Combine all the highpoly pixel arrays into a single array with a lookup
object_id for each of the highpoly objects.
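A hypothetical sketch of what such a combined pixel layout could look
like (names are made up for illustration, not the actual Bake API
structs):

  #include <vector>

  // One flat pixel array for all highpoly objects; each pixel records which
  // highpoly object it came from via object_id.
  struct HighpolyPixel {
    int object_id;   // index into the list of highpoly objects
    int primitive;   // face hit on that object, -1 if nothing was hit
    float uv[2];     // barycentric coordinates of the hit on that face
  };

  struct HighpolyPixelArray {
    std::vector<HighpolyPixel> pixels;  // one entry per baked pixel
  };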
Note: this changes the Bake API; external engines should refer to
bake_api.c for the latest API.
Many thanks to Sergey Sharybin for the complete review, change
suggestions and feedback. (You rock!)
Reviewers: sergey
Subscribers: pildanovak, marcclintdion, monio, metalliandy, brecht
Maniphest Tasks: T41092
Differential Revision: https://developer.blender.org/D772
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't (or crashes) *when Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely a small todo, and may wait until we are sure none of the above is happening.
Apart from that there are those known issues:
* Compositor Image Node poorly working for Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause a merge PITA, it's whitespace changes only; Git should
be able to merge it nicely.
This flag is global for all the sessions and never changes, so it
doesn't really make sense to pass it around to all sessions and
synchronization routines.
It is now a static member of BlenderSession, but it's probably more
logical to introduce some sort of BlenderGlobals. That's currently not
worth the hassle for a single boolean flag though.
This patch makes Cycles ignore the time spent in BVH construction etc. when
estimating the remaining time. Considering that the remaining time is calculated
based on the average time per tile so far, as far as I understand it makes no
sense to include the preprocessing time.
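A minimal sketch of the estimate described above, assuming the elapsed
render time already excludes preprocessing (illustrative, not the
actual progress code):

  // `render_time_so_far` excludes BVH build and other preprocessing.
  double estimate_remaining_seconds(double render_time_so_far,
                                    int tiles_done, int tiles_total)
  {
    if (tiles_done == 0)
      return 0.0;  // nothing to extrapolate from yet
    const double average_per_tile = render_time_so_far / tiles_done;
    return average_per_tile * (tiles_total - tiles_done);
  }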
Reviewers: sergey, #cycles
Reviewed By: sergey, #cycles
Subscribers: sergey
Projects: #cycles
Differential Revision: https://developer.blender.org/D895
This commit enables the QBVH optimization structure automatically when
rendering with the CPU and SSE2 support is detected.
This brings the render time of the agent shot back to the speed it was
before the watertight intersections commit; single koro and sponza
scenes are about 7% faster here.
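The selection logic boils down to something like the following sketch
(enum and function names are illustrative):

  enum BVHLayout { BVH_LAYOUT_BVH2, BVH_LAYOUT_QBVH };

  // QBVH traversal relies on SIMD, so fall back to the regular BVH otherwise.
  BVHLayout choose_bvh_layout(bool is_cpu_device, bool has_sse2)
  {
    return (is_cpu_device && has_sse2) ? BVH_LAYOUT_QBVH : BVH_LAYOUT_BVH2;
  }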
This way CUDA errors are visible in the image info line, which makes
things behave the same across viewport and final rendering.
That's right, we've got errors reported via reports and the info line
now. This is based on feedback from our Gooseberry team.
This way when something goes wrong in Cycles (for example running out
of VRAM, hitting a time limit when launching the kernel, etc.) we'll
have a nice report in the Info space header.
Sure, it would be nice to also mention the error in the image editor's
information line, but that's for the future.
This fixes T42747: "CUDA error" appears only momentarily, then disappears
Currently only the summed number of traversal steps and intersections
used by the camera ray intersection is implemented, but in the future
we will support more debug passes which would help checking what makes
the scene slow.
Examples of such extra passes could be the number of bounces, time
spent on shader tree evaluation and so on.
The implementation on the Cycles side is pretty much straightforward;
worth mentioning only that it's a build-time option disabled by
default.
On the Blender side it's implemented as a PASS_DEBUG with several
possible subtypes. This way we don't need to create an extra DNA pass
type for each of the debug passes, saving us bits.
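A hedged sketch of how such counters could be accumulated during
traversal (names are illustrative, not the kernel's real ones):

  struct DebugData {
    int num_bvh_traversal_steps;
    int num_bvh_intersections;
  };

  // Called from the traversal loop of the camera ray intersection; the totals
  // end up in the debug pass for that pixel.
  inline void debug_count_step(DebugData *debug, bool hit_primitive)
  {
    debug->num_bvh_traversal_steps++;
    if (hit_primitive)
      debug->num_bvh_intersections++;
  }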
Reviewers: campbellbarton
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D813
There were several issues involved in triangle ribbon hair:
- Even for viewport rendering the Blender scene camera was used for
orientation. This made hair triangles oriented towards the scene
camera, not the viewport camera.
- Triangle orientation was actually assuming the camera is perspective.
Triangles weren't oriented properly for the orthographic camera,
resulting in different hair width across its length.
These issues are solved now, but there are some related TODOs:
- Rotating the viewport doesn't re-orient the triangles, so after
viewport navigation hair might not look correct. However, with this
fix, toggling viewport render (to force a hair sync) makes the viewport
render correct.
This isn't such a trivial fix; it would require making the BVH aware of
the dynamic triangle orientation, so triangles get properly oriented
without a full hair re-sync.
- Panorama camera behavior didn't change, but it looks like it should;
however, I'm not really sure atm what's the right thing to do here.
It now uses the tile size to split the job. For the CPU this may add
overhead, but for the GPU it is highly needed.
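Roughly, the splitting amounts to the following sketch (illustrative
only, not the actual baking code):

  #include <algorithm>

  // Calls do_chunk(offset, count) for each chunk of at most tile_size pixels,
  // so each GPU launch stays small enough.
  template<typename Func>
  void split_bake_job(int num_pixels, int tile_size, Func do_chunk)
  {
    for (int offset = 0; offset < num_pixels; offset += tile_size) {
      const int count = std::min(tile_size, num_pixels - offset);
      do_chunk(offset, count);
    }
  }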
Reviewers: sergey
Differential Revision: https://developer.blender.org/D690
(original patch by Sergey Sharybin)
Note: the RNA API can't use size_t at the moment. Once it does, this
patch can be tweaked a bit to fully benefit from size_t's larger range.
(Right now num_pixels is passed as an int.)
Reviewed By: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D688
Baking progress preview is not possible, in part due to the way the API
was designed. But at least you get to see the progress bar while
baking.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D656
Now baking does one AA sample at a time, just like the final render.
There is also some code for shader antialiasing that solves T40369, but
it is disabled for now because there may be unpredictable side effects.
Expand Cycles to use the new baking API in Blender.
It works on the selected object, and the panel can be accessed in the Render panel (similar to where it is for Blender Internal).
It bakes to the active texture of each material of the object. The active texture is currently defined as the active Image Texture node present in the material node tree. If you don't want the baking to override an existing material, make sure the active Image Texture node is not connected to the node tree. The active texture is also the texture shown in the viewport in rendered mode.
Remember to save your images after the baking is complete.
Note: Bake currently only works on the CPU.
Note: This is not supported by Cycles standalone because a lot of the work is done in Blender as part of the operator only, not in the engine (Cycles).
Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Bake
Supported Passes:
-----------------
Data Passes
* Normal
* UV
* Diffuse/Glossy/Transmission/Subsurface/Emit Color
Light Passes
* AO
* Combined
* Shadow
* Diffuse/Glossy/Transmission/Subsurface/Emit Direct/Indirect
* Environment
Review: D421
Reviewed by: Campbell Barton, Brecht van Lommel, Sergey Sharybin, Thomas Dinges
Original design by Brecht van Lommel.
The entire commit history can be found on the branch: bake-cycles
These can currently be accessed by adding an Attribute node and specifying one
of those three names. A Smoke/Fire node should be added at some point to make
this more convenient.
These values might still change before the release; in particular for
flame the meaning seems unclear, it's just values in the 0..1 range.
This is useful for color ramps, but it might be good if this was also
available as a temperature in kelvin so it could be plugged into the
blackbody node. But I couldn't figure out from the smoke code if or how
this corresponds to a physical unit.
Here's a (quite poor) example file for a fire + smoke setup:
http://www.pasteall.org/blend/27990
The issue was caused by wrong usage of the OCIO GLSL binding API. To
make it work properly on pre-GLSL-1.3 drivers, the shader is to be
enabled after the texture is bound to the OpenGL context. Otherwise it
wouldn't know the proper texture size.
This is actually a regression in 2.70 and is to be ported to 'a'.
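For clarity, the order of operations described above boils down to
something like this sketch. It uses plain GL calls for illustration;
the real code goes through the OCIO GLSL binding helpers, and the
texture/program handles here are hypothetical:

  #include <GL/glew.h>

  void display_with_ocio_shader(GLuint lut3d_texture, GLuint ocio_program)
  {
    // Bind the LUT texture to the GL context first, so that on pre-GLSL-1.3
    // drivers the shader can pick up the proper texture size...
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_3D, lut3d_texture);
    glActiveTexture(GL_TEXTURE0);

    // ...and only then enable the shader program.
    glUseProgram(ocio_program);

    // ... draw the image ...
  }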
Z, Index, normal, UV and vector passes are only affected by surfaces with alpha
transparency equal to or higher than this threshold. With value 0.0 the first
surface hit will always write to these passes, regardless of transparency. With
higher values surfaces that are mostly transparent can be skipped until an opaque
surface is encountered.
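Assuming the comparison is made against the surface alpha, the test
described above amounts to roughly the following sketch (illustrative
only):

  // Data passes (Z, Index, Normal, UV, Vector) are written for the first
  // surface whose alpha is at or above the threshold; with a threshold of 0.0
  // every first hit qualifies, higher values skip mostly-transparent surfaces.
  bool surface_writes_data_passes(float alpha, float pass_alpha_threshold)
  {
    return alpha >= pass_alpha_threshold;
  }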
Summary:
Mainly addresses an old TODO with the color management fallback to CPU
mode when displaying the render result during rendering.
That fallback was caused by the fact that the partial image update was
always acquiring the image buffer for the composite output and was only
modifying the display buffer directly.
This was a big issue for Cycles rendering, which renders layers one by
one and wants to display the progress of each individual layer. This
led to situations where the display buffer was based on what Cycles
passes via RenderResult and didn't take the layer/pass from the image
editor header into account.
Now the image buffer which the partial update operates on always
corresponds to what is set in the image editor header.
To make Cycles display the progress of all layers one by one,
image_rect_update now switches the image editor user to the newly
rendering render layer. This happens only once, when the render engine
starts rendering the next render layer, so it should not be annoying
for navigation during rendering.
An additional change to render engines was made so they're able to
merge the composite output into the final result without marking the
tile as done. This is done via a do_merge_result argument to the
end_result() callback. This argument is optional, so it should not
break script compatibility.
Additional changes:
- Partial display update for Blender Internal now happens from the
same thread as tile rendering. This means display conversion (which
could actually be pretty heavy) is done in separate threads. It also
gives better UI feedback when rendering an easy scene with small tiles.
- Avoid freeing/allocating the byte buffer for the render result if
it's owned by the image buffer. Only mark it as invalid for color
management.
This saves lots of buffer re-allocations when several image editors
are open with the render result. This change, in conjunction with the
rest of the patch, gave around a 50%-100% speedup of render time when
displaying a non-combined pass during rendering on my laptop.
- Partial display buffer update was wrong for buffers with a number of
channels different from 4.
- Remove unused window from RenderJob.
- Made image_buffer_rect_update static since it's only used in a
single file.
Reviewers: brecht
Reviewed By: brecht
CC: dingto
Differential Revision: http://developer.blender.org/D98