This is a new option for panorama cameras to render stereo images
that can be used in virtual reality devices.
The option is available in the camera panel when Multi-View is enabled (Views option in the Render Layers panel).
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a convergence distance really high to simulate this effect).
* Pivot was not supposed to affect the render, but it does. This has to be looked at; for now, set it to CENTER.
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words; I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map (there is an add-on for this, developed separately; perhaps we could include it in master).
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support to rotate the "Up Axis" of the stereo plane. Meaning, we hardcode (0, 0, 1) as up and create the stereo pair relative to that. (Although we could take the camera's local up when rendering panoramas, this wouldn't work for perspective cameras.)
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
THIS NEEDS DOCS - both in the 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and minor sponsors who helped me maintain spherical-stereo for 1 year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time I was looking for fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less-specific use case; thus Multi-View started. (This was December 2012, during SIGGRAPH Asia and a chat I had with Paul Bourke during the conference.) I don't have the original patch anymore, but you can find a rebased version of it from March 2013, right before I started the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
This re-enables the AA jittering, but with proper clamping so that u >= 0,
v >= 0 and u+v <= 1.
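For illustration, a minimal sketch of such a clamp, assuming the jitter is applied to barycentric coordinates on a triangle (the names are mine, not the actual kernel code):

    #include <math.h>

    static float clampf(float x, float lo, float hi)
    {
        return fmaxf(lo, fminf(x, hi));
    }

    /* Clamp a jittered barycentric sample back into the triangle
     * domain u >= 0, v >= 0, u + v <= 1. */
    static void jitter_clamp(float *u, float *v, float ju, float jv)
    {
        *u = clampf(*u + ju, 0.0f, 1.0f);
        *v = clampf(*v + jv, 0.0f, 1.0f);
        float excess = *u + *v - 1.0f;
        if (excess > 0.0f) {
            /* Step back perpendicular to the diagonal onto u + v = 1. */
            *u -= 0.5f * excess;
            *v -= 0.5f * excess;
        }
    }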
Differential Revision: https://developer.blender.org/D1254
Seems to be a compiler fault which leads to a wrong flag being used,
making it so the wrong number of samples is used for the background.
This should, in theory, fix the issue reported in T47213.
The issue here was actually somewhere else - the attached scene from the report used a light falloff node in a sunlamp (aka distant light).
However, since distant lamps set the ray length to FLT_MAX and the light falloff node squares this value, it overflows, and the
subsequent math produces a NaN weight, which propagates and leads to a NaN intensity, which is then clamped to zero and produces the black pixels.
To fix that issue, the smoothing part of the light falloff is just ignored if the smoothing term isn't finite (which makes sense since
the term should converge to 1 as the distance increases).
The reason for the different results on CPUs and GPUs is not perfectly clear, but probably can be explained with different handling of
Inf/NaN edge cases.
Also, to notice issues like these faster in the future, kernel_asserts were added that evaluate as false as soon as a non-finite intensity is produced.
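A minimal sketch of the guard, assuming the smoothing has the usual form strength *= t^2 / (smooth + t^2); the names are illustrative, not the actual kernel code:

    #include <math.h>

    static float light_falloff(float strength, float smooth, float t)
    {
        if (smooth > 0.0f) {
            /* For distant lamps t is FLT_MAX, so t * t overflows to Inf
             * and the term becomes NaN; skip smoothing in that case,
             * since the term converges to 1 as the distance grows. */
            float t2 = t * t;
            float term = t2 / (smooth + t2);
            if (isfinite(term))
                strength *= term;
        }
        return strength;
    }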
This was a hard decision, because moving to the newer CUDA toolkit makes
rendering up to 5% slower. But on the other hand, it solves major
speed regressions (up to 30%) with branched path tracing on
top-level cards.
Neither of those regressions has a meaningful and sane workaround
in the code itself.
Toolkit 6.5 can still be used, but it's no longer the recommended one.
When using multiple portals, scene areas behind one of the portals were rendered darker than they should be.
The reason for that is a pretty stupid mistake: since portals are only used at positions that aren't behind them,
only the portals that are used should be accounted for in the PDF calculation. That was actually the case, but the final
divide incorrectly divided by the total number of portals, not the number of visible ones.
Another issue with areas behind portals was the PDF evaluation function.
The new evaluation code is shorter, simpler and fixes this issue.
Also, the threshold for the distance check was increased to avoid artifacts where portals touch a surface.
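A minimal sketch of the corrected average; the helper names are hypothetical, not the actual kernel functions:

    typedef struct float3 { float x, y, z; } float3;

    /* Hypothetical stand-ins for the real portal queries. */
    int   portal_is_behind(float3 P, int portal);
    float portal_eval_pdf(float3 P, float3 D, int portal);

    float combined_portal_pdf(float3 P, float3 D, int num_portals)
    {
        float pdf = 0.0f;
        int num_possible = 0;
        for (int p = 0; p < num_portals; p++) {
            /* A portal the point is behind can never be sampled from
             * there, so it must not enter the average. */
            if (portal_is_behind(P, p))
                continue;
            pdf += portal_eval_pdf(P, D, p);
            num_possible++;
        }
        /* The bug was dividing by num_portals here. */
        return (num_possible > 0) ? pdf / num_possible : 0.0f;
    }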
Supports both smoke/fire and point density textures now.
Reduces the number of textures available for sm_20 and sm_21, but you have
to compromise somewhere on such limited hardware.
Currently limited to linear interpolation only, and decoupled ray
marching is not supported yet. I think those could be considered just
further improvements.
A quick example:
https://developer.blender.org/F282934
The code is minimal, and we can fully consider it a fix for the missing
support of 3D textures with CUDA.
Reviewers: lukasstockner97, brecht, juicyfruit, dingto
Reviewed By: brecht, juicyfruit, dingto
Subscribers: mib2berlin
Differential Revision: https://developer.blender.org/D1806
There are several fixes in here, which hopefully will make the shader
work correctly without too much magic in there.
First of all, this commit brings BURLEY_TRUNCATE down from 30 to 16,
which reduces noise a lot. It's still higher than Brecht's original
truncate, but this reduces the PDF value at the cutoff distance by an
order of magnitude (now it's 0.008387, previously it was 0.063521
for an albedo of 0.8 and radius of 1.0). This should converge to a
proper result faster and not have artifacts.
This kind of reverts the fix for T47356, but after additional thinking
I came to the conclusion that Burley is not about being totally smooth;
it is about giving less waxy results, which it's kind of doing in the file.
Second of all, this commit fixes burley_eval() to use the normalized
diffusion reflectance. This matches the way we calculate the CDF and
solves numeric instability close to 0, making the PDF profile look
closer to other SSS profiles:
https://developer.blender.org/F282355
https://developer.blender.org/F282356
https://developer.blender.org/F282357
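For reference, a sketch of the normalized Burley diffusion reflectance as I understand it (after Christensen & Burley); illustrative, not the actual burley_eval() code:

    #include <math.h>

    /* R(r) = s * (exp(-s*r) + exp(-s*r/3)) / (8*pi*r), with s the
     * scaling parameter fitted from the albedo; the integral of
     * 2*pi*r*R(r) over r in [0, inf) is exactly 1, which is what
     * makes the profile normalized. */
    static float burley_profile(float r, float s)
    {
        return s * (expf(-s * r) + expf(-s * r / 3.0f)) /
               (8.0f * (float)M_PI * r);
    }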
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1792
Basically the idea is to make the code robust against extending
enum options in the future by falling back to a known safe
default setting when RNA is set to something unknown.
While this approach solves issues similar to T47377,
it wouldn't really help when/if any of the RNA values
ever gets deprecated and removed. There'll be no simple
solution to that apart from defining an explicit mapping from
RNA values to Cycles ones.
Another part which isn't so great is that we now
have to have some enum guards and give explicit values
to the enum items, but we can live with that perhaps.
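A minimal sketch of the pattern, with hypothetical enum names:

    /* Explicit values plus a guard item make a range check possible. */
    typedef enum BVHType {
        BVH_TYPE_DYNAMIC = 0,
        BVH_TYPE_STATIC = 1,
        NUM_BVH_TYPES,
    } BVHType;

    static BVHType bvh_type_from_rna(int rna_value)
    {
        /* Fall back to a known safe default for any value added in a
         * future Blender version. */
        if (rna_value < 0 || rna_value >= NUM_BVH_TYPES)
            return BVH_TYPE_DYNAMIC;
        return (BVHType)rna_value;
    }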
Reviewers: dingto, juicyfruit, lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1785
After the clamping commit we need to bump the BURLEY_TRUNCATE
constant a bit, otherwise the mean free path does not really
match the disk radius needed for importance sampling.
Now pass_filter is modified to have exactly the flags for the light components
that need to be baked, based on the shader type. This simplifies the logic.
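A minimal sketch of the idea, with hypothetical flag and type names:

    enum { SHADER_EVAL_DIFFUSE, SHADER_EVAL_GLOSSY };

    enum {
        BAKE_FILTER_DIRECT   = 1 << 0,
        BAKE_FILTER_INDIRECT = 1 << 1,
        BAKE_FILTER_DIFFUSE  = 1 << 2,
        BAKE_FILTER_GLOSSY   = 1 << 3,
    };

    /* Keep only the light components this shader type can produce. */
    static int bake_pass_filter(int shader_type, int pass_filter)
    {
        switch (shader_type) {
            case SHADER_EVAL_DIFFUSE:
                return pass_filter & (BAKE_FILTER_DIFFUSE |
                                      BAKE_FILTER_DIRECT |
                                      BAKE_FILTER_INDIRECT);
            case SHADER_EVAL_GLOSSY:
                return pass_filter & (BAKE_FILTER_GLOSSY |
                                      BAKE_FILTER_DIRECT |
                                      BAKE_FILTER_INDIRECT);
            default:
                return pass_filter;
        }
    }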
The value was too high, causing a bad Newton iteration step.
The new value is still not ideal, but it stays within 9 iterations,
and such high iteration counts only happen for
approximately 1% of input values.
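For context, a generic sketch of this kind of Newton inversion; the constants are illustrative, not the tuned ones:

    #include <math.h>

    /* Solve cdf(x) = u by Newton iteration. A poor starting value (or
     * threshold) makes early steps overshoot and drives the iteration
     * count up. */
    static float invert_cdf(float (*cdf)(float), float (*pdf)(float),
                            float u, float x0)
    {
        float x = x0;
        for (int i = 0; i < 25; i++) {
            float f = cdf(x) - u;
            float d = pdf(x); /* the CDF's derivative */
            if (fabsf(f) < 1e-6f || d == 0.0f)
                break;
            x -= f / d;
        }
        return x;
    }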
The idea is simply to pre-compute the fitting and parameterization
in the bssrdf_setup() function and re-use the values in both
sample() and eval().
The only trick is where to store the pre-calculated values, and
the answer is inside ShaderClosure->custom{1,2,3}. There's
no memory bump here because we now simply re-use padding fields
for the pre-calculated values. We can do a similar trick for other
BSDFs.
Seems to give a nice speedup of up to 7% here on my desktop with
a Core i7 CPU and the SSE4.1 kernel.
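A minimal sketch of the pattern; the field and function names are hypothetical:

    typedef struct ShaderClosure {
        float radius, albedo;
        float custom1, custom2, custom3; /* former padding fields */
    } ShaderClosure;

    float burley_fit_d(float radius, float albedo); /* hypothetical */

    void bssrdf_burley_setup(ShaderClosure *sc)
    {
        /* Fit once here; sample() and eval() then simply read
         * sc->custom1 instead of re-computing the parameterization. */
        sc->custom1 = burley_fit_d(sc->radius, sc->albedo);
    }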
The idea is to switch from allocating separate buffers for shader data's
structure of arrays to allocating one huge memory block and doing some
index trickery to make it accessible as SOA.
This saves quite a reasonable number of lines of code in device_opencl and
also makes it possible to get rid of the special declaration of the
ShaderData structure.
As a side effect it also makes it easier to experiment with SOA vs. AOS
for the split kernel.
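A minimal sketch of the index trickery, assuming equally-sized fields (illustrative only):

    #include <stdlib.h>

    enum { NUM_FIELDS = 8 }; /* hypothetical field count */

    /* One allocation holds the whole SOA: field f of element i lives
     * at soa[f * num_elements + i], so every field is still its own
     * contiguous array without a separate buffer per field. */
    static float *soa_alloc(size_t num_elements)
    {
        return (float *)malloc(sizeof(float) * NUM_FIELDS * num_elements);
    }

    static float *soa_field(float *soa, size_t num_elements, int field)
    {
        return soa + (size_t)field * num_elements;
    }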
Works fine here on an NVIDIA GTX 580, Intel CPU and AMD Fiji cards.
Reviewers: #cycles, brecht, juicyfruit, dingto
Differential Revision: https://developer.blender.org/D1593
It was mainly unfinished code for volumes in the split kernel, which
should be done differently anyway to avoid such code copy-paste.
The code didn't really work, so likely nobody will cry.
Use KernelGlobals to access all the global arrays for the intermediate
storage instead of passing all these storage things explicitly.
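A minimal sketch of the pattern, with hypothetical members:

    typedef struct KernelGlobals {
        /* Intermediate storage arrays, formerly passed to every kernel
         * as separate arguments. */
        float *rng_state;
        int   *queue_data;
        char  *ray_state;
    } KernelGlobals;

    /* Kernels now receive only kg instead of a long argument list. */
    static void kernel_step(KernelGlobals *kg, int ray_index)
    {
        kg->ray_state[ray_index] = 0; /* e.g. mark the ray active */
    }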
Tested here with Intel OpenCL, an NVIDIA GTX 580 and AMD Fiji; didn't see
any artifacts, so I guess it's all good.
Reviewers: juicyfruit, dingto, lukasstockner97
Differential Revision: https://developer.blender.org/D1736
There are no function pointers in the OpenCL specification. For as long
as we want to support this platform, we should follow the specification.
While the code is not totally optimal now, it should not be that huge
of a performance issue on the CPU, since switches compile to jump tables
just nicely, so there's not that much extra computation here.
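A minimal sketch of the replacement, with hypothetical node names:

    enum NodeType { NODE_BSDF, NODE_TEX_IMAGE, NODE_GEOMETRY };

    /* Instead of calling through a table of function pointers (which
     * OpenCL forbids), dispatch with a switch; compilers lower dense
     * switches to jump tables, so the CPU overhead stays small. */
    static void svm_exec_node(int node_type)
    {
        switch (node_type) {
            case NODE_BSDF:      /* svm_node_closure_bsdf(...) */ break;
            case NODE_TEX_IMAGE: /* svm_node_tex_image(...) */    break;
            case NODE_GEOMETRY:  /* svm_node_geometry(...) */     break;
            default:             break;
        }
    }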
The combined pass is built with the contributions the user sees fit.
It is useful for lightmap baking, as well as for baking
non-view-dependent effects.
The manual will be updated once we get closer to the 2.77 release.
Meanwhile the new page can be found here:
http://dalaifelinto.com/blender-manual/render/cycles/baking.html
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D1674
This commit removes the experimental CUDA kernel, making SSS and CMJ
regular features.
Several improvements have been made in the past few
weeks (thanks Sergey!) which make SSS render several times faster (2-3x
compared to 2.76b) on the GPU, and the increased VRAM usage has also been
fixed. Therefore the experimental kernel is no longer needed.
Differential Revision: https://developer.blender.org/D1726
The manual has been updated too:
https://www.blender.org/manual/render/cycles/features.html
The goal is to make the Experimental kernel closer in performance to the
official kernel, avoiding spills and such.
There should not be a big impact on the official kernel; my own tests
showed a few percent performance drop on a laptop's GPU. The CPU was
always the same speed on the AVX, AVX2 and SSE4.1 CPUs I've been
testing here.
This seems to be the last essential step before we can get rid of
Experimental kernel and enable SSS officially on GPU without causing
some major performance issues.
Surely some more tweaks are possibly required, but those we can do
until the cows come home anyway.