Seems set_intersection() requires passing an explicit comparator if a
non-default one is used for the sets. A bit weird, but can't really find
another explanation for what's going on here.
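For illustration, a minimal sketch of the C++ behavior in question (using a
simplified stand-in type, not the actual Cycles code): std::set_intersection()
does not pick up the sets' comparator type automatically, so a set ordered by
a custom comparator needs that comparator passed explicitly.

  #include <algorithm>
  #include <iterator>
  #include <set>
  #include <vector>

  struct Node { int id; };  /* simplified stand-in for ShaderNode */

  struct NodeIdLess {
    bool operator()(const Node *a, const Node *b) const { return a->id < b->id; }
  };

  typedef std::set<Node *, NodeIdLess> NodeSet;

  static void intersect(const NodeSet &a, const NodeSet &b, std::vector<Node *> &result)
  {
    std::set_intersection(a.begin(), a.end(),
                          b.begin(), b.end(),
                          std::back_inserter(result),
                          NodeIdLess());  /* without this, pointer operator< is used */
  }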
The issue was caused by a non-optimal graph traversal for gathering node
dependencies, which could have exponential complexity with long tree branches
connected by multiple links between them.
Now we optimize the depth-first traversal and perform an early out if the node
was already traversed.
Please note that this adds some limitations to the use of the SVM compiler's
find_dependencies() in cases when skip_node is not NULL and one wants to
perform dependency gathering sequentially with the same set. This doesn't
happen in the code, but one should be aware of it.
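A minimal sketch of the early-out traversal (using a simplified node type
rather than the real ShaderNode/ShaderInput classes):

  #include <cstddef>
  #include <set>
  #include <vector>

  struct Node {
    int id;
    std::vector<Node *> inputs;  /* nodes this one depends on */
  };

  static void find_dependencies(std::set<Node *> &dependencies,
                                Node *node,
                                Node *skip_node)
  {
    if (node == NULL || node == skip_node)
      return;
    /* Early out: if the node is already in the set, everything below it has
     * been gathered before, so shared branches are not re-walked. This is
     * also the source of the limitation mentioned above when re-using the
     * same set with a non-NULL skip_node. */
    if (!dependencies.insert(node).second)
      return;
    for (Node *input : node->inputs)
      find_dependencies(dependencies, input, skip_node);
  }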
The issue was that node dependencies were stored as set<ShaderNode*>, which
uses a so-called "strict weak ordering", meaning the order of nodes in the
set is strictly defined, but based on the ShaderNode pointer. This means
that between different render invocations the order of the original nodes
could be different due to different pointers being allocated for the
ShaderNodes.
This commit makes it so dependencies and maps used for ShaderNodes are based
on node->id, which has a much more predictable order. It's still possible
to trick the system by doing some crazy edits during viewport render and
cause differences between the viewport and final render stacks.
Reviewers: brecht
Reviewed By: brecht
Subscribers: LazyDodo
Differential Revision: https://developer.blender.org/D1630
This is sort of an extension of the existing Use Environment option which now
allows disabling AO on a per render layer basis.
Useful in cases like disabling AO for the background because it might make it
look too flat.
Reviewers: juicyfruit, dingto, brecht
Reviewed By: brecht
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1633
This gives a few percent of extra memory savings for the CUDA kernel when
using regular path tracing.
Still more of an experiment, but it will be handy in the future.
Summary: By calculating the Camera-to-Screen-Matrix first, one inversion can be saved in the Camera sync.
It won't really improve speed or precision; it's mainly a small cleanup.
Reviewers: sergey, dingto
Basically we can not use the sharp closure as a substitute when filter glossy
is used. This is because we can not blur a sharp reflection/refraction.
This is a quite quick and not really clean implementation. Not really happy
with the manual handling of the original settings, but this is as good as we
can do in a quick patch. It's a good acknowledgment, and we can now re-consider
some aspects of graph simplification to make such cases supported more
natively.
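A rough sketch of the resulting guard (simplified, names are illustrative
rather than the actual code): the sharp fallback is only taken when filter
glossy is disabled.

  /* The fallback to a sharp closure is only safe when filter glossy is
   * disabled, since a perfectly sharp reflection/refraction can not be
   * blurred afterwards. */
  static bool can_use_sharp_closure(bool roughness_connected,
                                    float roughness,
                                    float filter_glossy)
  {
    return !roughness_connected && roughness <= 1e-4f && filter_glossy == 0.0f;
  }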
P.S. This failure would have been caught by our regression tests, so please
bother a bit to run Cycles' test sweep before doing such optimizations.
This commit adds the Blackman-Harris window function as a pixel filter to Cycles. In some cases, such as wireframes or high-frequency textures,
Blackman-Harris can give subtle but noticeable improvements over the Gaussian window.
Also, the Gaussian window was truncated too early, which degraded quality a bit; therefore, the evaluation region is now three times as wide.
To avoid artifacts caused by the wider curve, the filter table size is increased to 1024.
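For reference, a minimal sketch of the standard 4-term Blackman-Harris window
(the actual filter table construction in the kernel may scale and normalize it
differently):

  #include <math.h>

  /* Standard 4-term Blackman-Harris window for x in [0, 1]. */
  static float blackman_harris(float x)
  {
    const float a0 = 0.35875f, a1 = 0.48829f, a2 = 0.14128f, a3 = 0.01168f;
    return a0 - a1 * cosf(2.0f * (float)M_PI * x)
              + a2 * cosf(4.0f * (float)M_PI * x)
              - a3 * cosf(6.0f * (float)M_PI * x);
  }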
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D1453
We now fall back to Sharp closures for Glossy, Glass and Refraction nodes, in case the Roughness input is disconnected and 0 (< 1e-4f to be exact).
This way we gain a few percent of performance, in case the user did not manually set the closure type to "Sharp" in the UI.
Sharp will probably be removed from the UI as a follow-up; it is not needed anymore with this internal optimization.
Original idea by Lukas Stockner (Differential Revision: https://developer.blender.org/D1439), code implementation by myself.
Previously the shutter was opening instantly, staying open for the shutter
time and then closing instantly. This isn't quite how real cameras work,
where the shutter opens following some curve. Now it is possible to define
a user curve for how much the shutter is opened across the sampling period
of time.
This could be used, for example, to make motion blur trails softer.
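One way such a curve can be used (a sketch under the assumption that the curve
is tabulated into openness values; not necessarily how the patch implements
it): build a CDF over the table and invert it, so more time samples land where
the shutter is more open.

  #include <cstddef>
  #include <vector>

  /* Map a uniform random number in [0, 1) to a time in [0, 1] within the
   * shutter interval, proportionally to the tabulated openness. Assumes
   * non-negative values that are not all zero. */
  static float sample_shutter_time(const std::vector<float> &openness, float rand)
  {
    std::vector<float> cdf(openness.size() + 1, 0.0f);
    for (size_t i = 0; i < openness.size(); i++)
      cdf[i + 1] = cdf[i] + openness[i];
    const float target = rand * cdf.back();
    size_t i = 0;
    while (i + 1 < openness.size() && cdf[i + 1] < target)
      i++;
    const float width = cdf[i + 1] - cdf[i];
    const float t = width > 0.0f ? (target - cdf[i]) / width : 0.0f;
    return (i + t) / openness.size();
  }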
This adds an option to control at what time, relative to the current frame,
the shutter is fully opened. Supported options are (sketched below):
- Shutter starts opening at the current frame
- Shutter is fully opened at the current frame
- Shutter is fully closed at the current frame
A custom shutter time offset is possible, same as a custom curve for the
shutter openness, but those are considered nice things to have rather than
something crucial.
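For illustration, how these options could map to the sampled time interval,
assuming an instant open/close shutter for simplicity (enum and function names
are illustrative, not the actual API):

  enum MotionPosition { MOTION_POSITION_START, MOTION_POSITION_CENTER, MOTION_POSITION_END };

  /* The current frame sits at the start, middle or end of the shutter
   * interval [*t0, *t1] (times measured in frames). */
  static void shutter_interval(MotionPosition position,
                               float frame, float shuttertime,
                               float *t0, float *t1)
  {
    switch (position) {
      case MOTION_POSITION_START:  *t0 = frame;                      break;
      case MOTION_POSITION_CENTER: *t0 = frame - 0.5f * shuttertime; break;
      case MOTION_POSITION_END:    *t0 = frame - shuttertime;        break;
    }
    *t1 = *t0 + shuttertime;
  }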
Reviewers: juicyfruit, dingto
Subscribers: venomgfx, hjalti
Differential Revision: https://developer.blender.org/D1380
The issue was caused by non-continuous tangent space calculated for triangles.
This commit adds a Tangent input to the Hair BSDF node which can be used to
hook up a Tangent calculated from UV as an input to the node, in order to make
sure the tangent space is continuous.
Doing this as an input instead of using the default tangent layer from UV for
several reasons:
- This way it's really easy to preserve compatibility with existing setups.
- The default UV map does not necessarily give a continuous space; one might
want to use other tangent space sources or distort the space for some artistic
decision.
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1428
Currently only image loading benefits from this and will give a magenta color
when the image manager detects it's running out of memory.
This isn't an ideal solution and can't handle all cases. For example, the OOM
killer might kill the process before it realizes it has run out of memory, but
in other cases this could prevent some crashes.
Reviewers: juicyfruit, dingto
Differential Revision: https://developer.blender.org/D1502
Currently OpenCL devices pack images into a single texture, which means
the number of textures is technically not limited here.
Now OpenCL will use the same number of textures as the CPU. If we want
to bump the number of textures further, these values are to be modified
in sync.
NOTE: OpenCL still does not support float textures.
Original patch from a guy called bliblubli in the tracker, with
some modifications of my own.
Reviewers: brecht, dingto, sergey
Differential Revision: https://developer.blender.org/D1530
This commit exposes the interpolation parameter for environment textures (requested by DolpheenDream on IRC), just as it already is for image textures.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D1544
Notes:
- There is still some BVH cache code, but that is from the engine's initial commit; we might clean this up further or keep it.
- Changes in util_cache.h/.c are kept; this might be re-used in the future.
It can't simply be removed in cases when it's connected to an input which is
different from Normal. This is because the input wouldn't be connected to the
default Normal geometry input, possibly breaking the shading setup.
The fix is not really ideal, but it should work at least.
This fixes skin having too much glossy reflection in the file from T46013.
Technically it was all wrong and it should have been called Extend instead
of Clip. Got confused by the naming in different libraries.
More options are still to come.
Currently only two mappings are supported by the API: Repeat (the old behavior)
and the new Clip behavior. Internally this extension is converted to a periodic
flag, which was already supported but wasn't exposed.
There's no support for OpenCL yet because of the way we pack images into a
single texture.
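A minimal sketch of that internal mapping (enum simplified to the two values
mentioned here, not the full API):

  enum ExtensionType { EXTENSION_REPEAT, EXTENSION_CLIP };

  /* Only Repeat maps to the already supported periodic behavior of the
   * texture; Clip turns it off. */
  static bool extension_is_periodic(ExtensionType extension)
  {
    return extension == EXTENSION_REPEAT;
  }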
Those settings are not exposed in the UI or anywhere else, and there should be
no functional changes so far.
Works quite similarly to camera motion blur, and the majority of the changes
are related to just passing extra arguments to the sync() functions.
Couple of things still to look into:
- Motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is being interpolated on the projected coordinates, which might give
different results than constructing the projection matrix from an interpolated
field of view (see the sketch below).
This could be good enough for us, but we need to consider improving this
at some point.
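To illustrate that last point (not code from the patch): the perspective scale
is 1 / tan(fov / 2), which is non-linear in fov, so interpolating projected
results differs from rebuilding the projection from an interpolated field of
view.

  #include <math.h>

  static float lerp(float a, float b, float t) { return a + t * (b - a); }

  /* Difference between interpolating the projection scale directly and
   * rebuilding it from an interpolated field of view; non-zero in general. */
  static float interpolation_difference(float fov_a, float fov_b, float t)
  {
    const float scale_from_matrices = lerp(1.0f / tanf(0.5f * fov_a),
                                           1.0f / tanf(0.5f * fov_b), t);
    const float scale_from_fov = 1.0f / tanf(0.5f * lerp(fov_a, fov_b, t));
    return scale_from_matrices - scale_from_fov;
  }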
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
The idea of this node is to sample 3D voxels at a given coordinate,
supporting different mapping strategies (world space mapping, object
local space, etc.).
Currently not in use; it's a preparation step for supporting point density
textures.
This means render devices might now skip building baking kernels in cases when
only the actual render-related functionality is used.
For now it's only implemented for the OpenCL split kernel device and is mainly
needed to work around some compiler-specific bugs which crash when building the
kernel.
Using OpenCL for baking might still crash the driver, but at least there is now
a higher probability that the GPU will be usable to render the scene.
The real fix should actually be done on the driver side.
From the artist's perspective it makes sense to always apply displacement in
local space.
TODO: Double-check that the BVH is being packed properly. From quick tests it
seems all fine, but it might still be missing some obvious failure.
This is mainly useful for render farm output, where logging might stop at
the "Path Tracing Tile N/N" string, which makes it a bit difficult to follow
what exactly is happening (node going crazy, hardware issues or just the last
tile being too heavy).
This is more of an experiment and might be changed in the future.
TODO: We might want to refactor debug passes into PASS_DEBUG and some
debug_type (similar to the passes on Blender's side) to avoid the issue of
running out of bits.