This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause a merge PITA: it's whitespace changes only, so Git should
be able to merge it nicely.
The issue was caused by an accident in c8a9a56, which disabled not only
Glossy reflection when Glossy visibility is disabled, but Diffuse
reflection as well.
Quite safe and should go to the final release branch.
The issue was introduced in 01ee21f, where I didn't notice that the
*_setup() functions only do partial initialization, and some of the
parameters are expected to be initialized by the caller.
This was hitting only some setups, so tests with the benchmark scenes
didn't reveal the issue. Now it should all be fine.
This is to go to the 2.74 branch and we actually might re-AHOY.
Fix T44007: Cycles Volumetrics: block artifacts with overlapping volumes
The issue was caused by uninitialized parameters of some closures, which
led to unpredictable behavior of shader_merge_closures().
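Roughly, the shape of the problem in a minimal C sketch (hypothetical
names, not the actual Cycles code):

    #include <string.h>

    typedef struct {
      float N[3];          /* must be set by the caller before setup() */
      float roughness;     /* must be set by the caller before setup() */
      int type;            /* set by closure_setup() */
      float sample_weight; /* set by closure_setup() */
    } Closure;

    /* Partial initialization only: N and roughness are never touched. */
    static void closure_setup(Closure *c)
    {
      c->type = 1;
      c->sample_weight = 1.0f;
    }

    /* Merging compares the closures bytewise: any field the caller
     * forgot to initialize is read as garbage here, which makes the
     * merge behave unpredictably from sample to sample. */
    static int closures_mergeable(const Closure *a, const Closure *b)
    {
      return memcmp(a, b, sizeof(*a)) == 0;
    }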
The fix was really flaky: during speed benchmarks I had an abort() in
the fallback block to be sure it never runs in production scenes, but
that affected the optimization as well. Without this abort there's quite
a bad slowdown of 5-7% on renders, even though the Plücker fallback was
never run.
This is all weird, so for now I'm reverting the change, which affects
all the production scenes, and will look into alternative fixes for
the original issue with precision loss on huge planes.
This reverts commit 9489205c5c.
This merges consecutive empty steps in the decoupled record function,
which can lead to fewer iterations in the scatter functions.
It only helps slightly (about 1%), but it doesn't hurt to have this.
Differential Revision: https://developer.blender.org/D873
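The idea, as a minimal C sketch with hypothetical names (not the actual
decoupled record code):

    typedef struct {
      float t0, t1;  /* interval of the step along the ray */
      float sigma_t; /* extinction; 0.0f marks an empty step */
    } VolumeStep;

    /* Collapse runs of consecutive empty steps into one longer step,
     * returning the new step count; fewer steps means fewer iterations
     * later in the scatter functions. */
    static int merge_empty_steps(VolumeStep *steps, int num_steps)
    {
      int out = 0;
      for (int i = 0; i < num_steps; i++) {
        if (out > 0 && steps[i].sigma_t == 0.0f &&
            steps[out - 1].sigma_t == 0.0f)
        {
          steps[out - 1].t1 = steps[i].t1; /* extend previous empty step */
        }
        else {
          steps[out++] = steps[i];
        }
      }
      return out;
    }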
The issue was caused by numerical instability when the ray origin is close to a huge
triangle, which could cause the ray distance check to fail.
The watertight Woop intersection doesn't really address such cases; it deals with
small triangles far away from the ray origin instead, so it's a bit tricky to make
it work reliably.
Since we're quite close to the release, it's safer to do a check in Plücker coordinates
when the ray is close to a huge triangle. Likely this additional check, combined with
some other tweaks to the code, doesn't cause a measurable slowdown in the scenes tested here.
After the release we can play a bit more with this code in order to make it more
stable without the Plücker fallback.
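For reference, the classic Plücker sign test looks roughly like this (a
self-contained C sketch, not the actual fallback code):

    typedef struct { float x, y, z; } float3;

    static float3 f3_sub(float3 a, float3 b)
    { float3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
    static float3 f3_cross(float3 a, float3 b)
    { float3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; return r; }
    static float f3_dot(float3 a, float3 b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Plücker "side" of the ray (P, d) against the edge from a to b. */
    static float plucker_side(float3 P, float3 d, float3 a, float3 b)
    {
      float3 e = f3_sub(b, a);   /* edge direction */
      float3 m = f3_cross(a, b); /* edge moment */
      return f3_dot(d, m) + f3_dot(f3_cross(P, d), e);
    }

    /* The ray passes through the triangle when all three edge sides
     * agree in sign, which is why this can serve as a fallback check
     * for rays close to a huge triangle. */
    static int ray_crosses_triangle(float3 P, float3 d,
                                    float3 v0, float3 v1, float3 v2)
    {
      float s0 = plucker_side(P, d, v0, v1);
      float s1 = plucker_side(P, d, v1, v2);
      float s2 = plucker_side(P, d, v2, v0);
      return (s0 >= 0.0f && s1 >= 0.0f && s2 >= 0.0f) ||
             (s0 <= 0.0f && s1 <= 0.0f && s2 <= 0.0f);
    }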
Seems it's just another issue with the compiler, worked around by explicitly
telling it not to inline some function.
In theory we could unify this with the CPU code, but we're quite close to the
release, so better safe than sorry.
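For reference, the usual way to force a function out-of-line (a generic
sketch; the actual workaround may spell it differently):

    /* Toolchain-specific spellings of "do not inline this". */
    #if defined(__CUDACC__)
    #  define NOINLINE __noinline__
    #elif defined(_MSC_VER)
    #  define NOINLINE __declspec(noinline)
    #else
    #  define NOINLINE __attribute__((noinline))
    #endif

    /* Hypothetical stand-in for the function the compiler mis-handles. */
    static NOINLINE float problematic_function(float x)
    {
      return x * 0.5f;
    }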
The following commits were supposed to add anti-aliasing and help with OSL
baking:
7b16fda3791b92dfa961
However, they introduced other issues (mostly artifacts), see T43550.
Leaving the code ifdef'ed for now.
This attribute was missing derivatives calculation.
I'm not totally sure what the proper approach for algebraic derivative
calculation is, so I'm calculating them by definition. This isn't the
fastest way to do it in this case and could be replaced with some smarter
magic in the wireframe calculation loop.
At least the currently implemented approach is better than nothing.
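"By definition" here means re-evaluating the attribute at the neighbouring
pixel positions, roughly like this (a C sketch; wireframe_eval() is a
hypothetical stand-in):

    typedef struct { float x, y, z; } float3;

    static float3 f3_add(float3 a, float3 b)
    { float3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r; }

    /* Hypothetical: evaluates the wireframe attribute at position P. */
    static float wireframe_eval(float3 P);

    /* Derivatives by definition: three evaluations instead of one, so
     * not the fastest way, but correct. */
    static void wireframe_derivatives(float3 P, float3 dPdx, float3 dPdy,
                                      float *dfdx, float *dfdy)
    {
      float f = wireframe_eval(P);
      *dfdx = wireframe_eval(f3_add(P, dPdx)) - f;
      *dfdy = wireframe_eval(f3_add(P, dPdy)) - f;
    }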
It was complaining about an explicit __constant to __private memory conversion,
which is now worked around using an implicit conversion.
It's not a real fix, I'm afraid, and I'm still failing to build the OpenCL kernel
with the latest Linux drivers, but maybe it'll let someone else investigate
what causes the compiler to run out of memory.
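The change is roughly of this shape (an OpenCL C sketch with hypothetical
names, not the actual kernel code):

    /* Explicitly casting a __constant pointer to a __private one made
     * the compiler complain, e.g.:
     *   __private float4 *p = (__private float4 *)&table[i];
     * Copying by value lets the conversion happen implicitly instead: */
    __kernel void fetch(__constant float4 *table,
                        __global float4 *out,
                        int i)
    {
      float4 value = table[i]; /* implicit __constant -> __private copy */
      out[get_global_id(0)] = value;
    }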
The issue was caused by the changes in 7b16fda, which changed the initial
state for the RNG. This commit makes it so the same initial hash is used,
which solves the regression without distorting the anti-aliased image.
It also makes the OpenCL compiler happy about this code (before this
change it would complain about trying to cast a private variable to a
global one).
OpenCL doesn't let you take the address of vector components, which
is kind of annoying. On the other hand, maybe now the compiler will have
more chances to optimize something out.
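A minimal OpenCL C sketch of the workaround (hypothetical helper names):

    /* Hypothetical helper that needs a pointer to a scalar. */
    void modify(__private float *x) { *x += 1.0f; }

    float4 example(float4 v)
    {
      /* float *p = &v.x;  -- not allowed: OpenCL forbids taking the
       * address of a vector component. */
      float x = v.x; /* copy the component into a plain scalar */
      modify(&x);    /* scalar addresses are fine */
      v.x = x;       /* write the result back into the vector */
      return v;
    }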
It could have happened with really long rays and small steps.
The step size will be adjusted to the clamped number of steps in order
to preserve render result compatibility as much as possible.
We should probably reformulate this a bit so that it gives the
same-looking results without step tweaks, but this new behavior
should already be much better than it was before.
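The adjustment is roughly this (a C sketch, assuming a hypothetical
max_steps limit):

    /* Clamp the step count, then stretch the step size so the same
     * [0, ray_length] range is still covered; this keeps results as
     * close as possible to the unclamped behavior. */
    static void clamp_volume_steps(float ray_length, int max_steps,
                                   float *step_size, int *num_steps)
    {
      int n = (int)(ray_length / *step_size) + 1;
      if (n > max_steps) {
        n = max_steps;
        *step_size = ray_length / (float)n;
      }
      *num_steps = n;
    }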
This attribute indicates how "pointy" the geometry surface is, which allows doing
effects like dirt maps and wear-off effects on render geometry. The attribute is
calculated for the final mesh, which means no baking (which would imply a UV
unwrap) is needed. Apart from this, the behavior is quite close to how vertex
dirty colors work.
The new attribute is available as an output socket of the Geometry node.
There's no penalty for render time, only some delay on scene preparation
(the delay is linear in the mesh complexity).
Reviewers: brecht, juicyfruit
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1086
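A minimal sketch of one plausible per-vertex formulation (plain C with
hypothetical helpers; not the exact code from this patch):

    #include <math.h>

    typedef struct { float x, y, z; } float3;

    static float3 f3_sub(float3 a, float3 b)
    { float3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
    static float f3_dot(float3 a, float3 b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float3 f3_normalize(float3 a)
    {
      float len = sqrtf(f3_dot(a, a));
      float3 r = {a.x / len, a.y / len, a.z / len};
      return r;
    }

    /* Average the agreement between the vertex normal and the directions
     * to the neighbouring vertices, remapped so that in this sketch flat
     * geometry sits at 0.5, peaks above it and dents below it. */
    static float vertex_pointiness(float3 co, float3 normal,
                                   const float3 *neighbors, int num_neighbors)
    {
      float accum = 0.0f;
      for (int i = 0; i < num_neighbors; i++) {
        float3 dir = f3_normalize(f3_sub(neighbors[i], co));
        accum += f3_dot(dir, normal);
      }
      return 0.5f - 0.5f * accum / (float)num_neighbors;
    }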
From more investigation of the numeric failures in the kernel, it appears
the check was rather correct. But in theory it's also needed for the motion
triangles.
The issue was caused by a lack of a check for whether the fresnel term actually
gives total internal reflection in refraction BSDFs. This led to the usage of
an arbitrary vector of (0, 0, 0) as the reflection, giving numeric issues in
other areas of the kernel.
This gives some visual changes to sharp reflection, but it seems to be rather
proper now. It also now corresponds to rough glossy reflection with sharpness
set to 0.001 (previously it was totally different from a sharpness of 0.0, which
is just weird).
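The added check is roughly of this shape (a self-contained C sketch using
the standard refraction formula, not the actual BSDF code):

    #include <math.h>

    typedef struct { float x, y, z; } float3;

    static float f3_dot(float3 a, float3 b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static float3 f3_combine(float s, float3 a, float t, float3 b)
    { float3 r = {s*a.x + t*b.x, s*a.y + t*b.y, s*a.z + t*b.z}; return r; }

    /* Returns 0 on total internal reflection, in which case no refracted
     * direction exists and the caller must fall back to reflection
     * rather than use a zero vector. */
    static int refract_direction(float3 I, float3 N, float eta, float3 *T)
    {
      float cos_i = -f3_dot(I, N);
      float k = 1.0f - eta * eta * (1.0f - cos_i * cos_i);
      if (k < 0.0f)
        return 0; /* total internal reflection */
      *T = f3_combine(eta, I, eta * cos_i - sqrtf(k), N);
      return 1;
    }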
This is the same issue as T43475: the SSE4 code is more robust against
non-finite values in the ray origin/direction. So for now, added a check
before doing BVH traversal on pre-SSE4 CPUs.
For sure the actual root of the issue is a bit different and much more tricky
to solve, especially without disturbing the render results too much. Still
looking into this.
In any case, it's kind of fine to have such a check; we might later make it
a kernel_assert() instead of just a return.
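A minimal sketch of such a check (plain C; the actual kernel code may
differ):

    #include <math.h>

    typedef struct { float x, y, z; } float3;

    /* Reject rays with non-finite origin or direction before BVH
     * traversal; the SSE4 path happens to tolerate such values, the
     * scalar pre-SSE4 path does not. */
    static int ray_is_finite(float3 P, float3 d)
    {
      return isfinite(P.x) && isfinite(P.y) && isfinite(P.z) &&
             isfinite(d.x) && isfinite(d.y) && isfinite(d.z);
    }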
The previous fix didn't quite work. For some reason everything worked fine when
using native nvcc in a 32-bit environment, but when cross-compiling from a
64-bit platform it was still running out of memory.
For now, just made it so all the kernels are slower on 32-bit CUDA as a temporary
solution. Either it'll be solved in the next CUDA releases (by dropping 32-bit? =\)
or we'll find a better workaround.
The issue was caused by the way we shoot a ray to see which volumes we're
inside: it might start bouncing back and forth between two close-to-parallel
intersecting faces.
The real solution would be to record all the intersections when shooting the
ray, but that's kind of tricky on the GPU because of the sorting needed and
the uncertainty about how big the intersection array should be.
For now we'll just limit the number of steps in the check, so in the worst
case some samples won't be correct, which will be compensated by further
sampling. Shouldn't be an issue, since the probability of such a lock is
actually quite small.
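A minimal sketch of the capped loop (plain C with hypothetical helper
names, not the actual kernel code):

    typedef struct { float x, y, z; } float3;

    typedef struct {
      float3 P;     /* hit position */
      int backface; /* non-zero when leaving a closed volume */
    } Intersection;

    /* Hypothetical stand-ins for the real kernel functions. */
    int scene_intersect(float3 P, float3 d, Intersection *isect);
    float3 ray_offset(float3 P, float3 d);

    enum { VOLUME_INSIDE_MAX_STEPS = 32 }; /* assumed cap, not the real value */

    /* Count enter/exit events along the ray with a hard cap on the
     * number of steps, so near-parallel intersecting faces can't trap
     * the loop.  If the cap is hit, this sample may be wrong; further
     * samples average the error out. */
    static int volume_inside(float3 P, float3 d)
    {
      int enclosed = 0;
      for (int step = 0; step < VOLUME_INSIDE_MAX_STEPS; step++) {
        Intersection isect;
        if (!scene_intersect(P, d, &isect))
          break;
        enclosed += isect.backface ? -1 : 1;
        P = ray_offset(isect.P, d); /* continue past the hit point */
      }
      return enclosed > 0;
    }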