Technically it was all wrong: it should have been called Extend instead
of Clip. I got confused by the naming in different libraries.
More options are still to come.
The previous fix didn't work well enough because on Windows Python has a
different environment than Blender, and setting variables there had no
effect from Blender's point of view.
This was basically not an issue with interpolation, but rather with missing
wrapping options: periodic wrapping was always used.
It's still a bit questionable why certain graphics cards were doing clamping in
the file from the report; that's not something that is expected to happen given
the texture settings being passed to the GPU. I still didn't manage to reproduce
this issue on any of the available GPUs, so it might be related to a driver
glitch or similar.
In any case, the CPU should now behave just fine; for the rest of the issues
we'll need to be able to reproduce them first.
Currently only two mappings are supported by the API: Repeat (the old behavior)
and the new Clip behavior. Internally the extension is converted to the periodic
flag, which was already supported but wasn't exposed.
There's no OpenCL support yet because of the way we pack images into a
single texture.
These settings are not exposed in the UI or anywhere else yet, so there should
be no functional changes so far.
This works much the same way as camera motion blur; the majority of the changes
are about just passing extra arguments to the sync() functions.
A couple of things still to look into:
- The motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is interpolated on projected coordinates, which might give
different results than constructing the projection matrix from an
interpolated field of view.
This could be good enough for us, but we should consider improving it
at some point.
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
Fix T45381: Crash Blender 2.75 in Win7 x64 AMD card
The issue is basically caused by a graphics card driver which crashes when
querying OpenCL platforms. This isn't something we can really solve from
the CLEW side, because opencl.dll does exist in the old driver and even has
all the needed symbols, but the first ever call to clGetPlatformIDs crashes.
While the rest of Blender works fine with those older ATI/AMD cards, the
crashes during OpenCL device enumeration really need to be solved.
The solution here is to force-disable OpenCL platforms if we detect that the
display card is using an old ATI/AMD driver. It's not really a proper solution,
so it's done on the Python side where tweaks are easy. The reasoning behind
this change is:
- If one uses a really old driver, it's likely the latest one they can
ever install (because of discontinued support from AMD).
- If an old card is used, the system is unlikely to have a dedicated GPU
for rendering.
- Even if there's a dedicated GPU, device enumeration is likely to crash
because of the attempt to query OpenCL from the old card.
Some tweaks are likely still needed, but this commit should make some of the
configurations work.
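A hypothetical Python sketch of the kind of check described above; the real
addon code, the way the GL strings are obtained, and the version cutoff are
all assumptions made purely for illustration:

    import re

    def opencl_enumeration_is_safe(gl_vendor, gl_version):
        """Illustration only: decide whether querying OpenCL platforms looks
        safe, given the display card's OpenGL vendor and version strings
        (e.g. as returned by bgl.glGetString)."""
        if "ATI" not in gl_vendor and "AMD" not in gl_vendor:
            return True
        # Pull a driver version out of the GL version string and reject
        # anything older than an assumed, purely illustrative cutoff.
        match = re.search(r"(\d+)\.(\d+)", gl_version)
        if match is None:
            return False  # unknown driver: play it safe and skip OpenCL
        return int(match.group(1)) >= 4  # placeholder cutoff, not the real one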
This commit implements a point density texture for Cycles shading nodes.
It's done by creating a voxel texture at shader compilation time. This is not
totally memory efficient, but it avoids adding sampling code to the kernel
(which keeps render time as low as possible). In the future this will
be compensated by using OpenVDB for more efficient storage of sparse
volume data.
Sampling of the voxel texture happens on the Blender side, using the same
code as the Blender Internal renderer.
The texture is controlled only by object, particle system and radius.
Linear falloff is used and there's no turbulence. This is because
falloff is expected to be done with a Curve Mapping node, and turbulence
will be done as a distortion of the input coordinate. It's already
possible to fake it using noise textures, and in the future we can add
a proper turbulence distortion node, which could then also be used
for 2D texture mapping.
Particle color support was done by Lukas, thanks!
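A minimal bpy sketch of such a setup, assuming an emitter object named
"Emitter" with at least one particle system (the names are only for
illustration):

    import bpy

    mat = bpy.data.materials.new("PointDensityMaterial")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    emitter = bpy.data.objects["Emitter"]  # assumed object with a particle system

    density = nodes.new("ShaderNodeTexPointDensity")
    density.point_source = 'PARTICLE_SYSTEM'
    density.object = emitter
    density.particle_system = emitter.particle_systems[0]
    density.radius = 0.2

    # Falloff is expected to be shaped with a curve mapping node.
    curves = nodes.new("ShaderNodeRGBCurve")
    links.new(density.outputs["Density"], curves.inputs["Color"])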
The idea is to give artists a simpler way to control memory usage in scenes
such as grass fields by doing automatic object culling based on whether an
object is visible in the frame or not.
This is controlled on a per-object level. In order to use this option a few
steps are required (see the sketch after this list):
- Enable Simplify in the scene settings
- Enable the Camera Cull option in the Simplify panel
- Set the camera cull margin (measured relative to the render resolution)
This setting is used to avoid possible flickering caused by changes in shadows
cast by objects outside of the frame.
- Enable Camera Cull for the objects which are to be culled
(the object culling option can be found in the Options panel in the object buttons).
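The same steps expressed in bpy (the "Grass" name filter is only an
illustration; the property names are those exposed by the Cycles add-on):

    import bpy

    scene = bpy.context.scene
    scene.render.use_simplify = True       # Enable Simplify
    scene.cycles.use_camera_cull = True    # Enable Camera Cull in the Simplify panel
    scene.cycles.camera_cull_margin = 0.1  # margin relative to the render resolution

    # Enable culling for the objects which are allowed to be dropped.
    for obj in scene.objects:
        if obj.name.startswith("Grass"):   # illustrative name filter
            obj.cycles.use_camera_cull = True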
There is still room for improvement, but this worked quite well during the
Gooseberry open movie project, so I think it's a useful feature even in its
current non-ideal state.
This means render devices may now skip building baking kernels in cases when
only actual render-related functionality is used.
For now this is only implemented for the OpenCL split kernel device and is
mainly needed to work around some compiler-specific bugs which crash when
building the kernel.
Using OpenCL for baking might still crash the driver, but at least there is now
a higher probability that the GPU will be usable to render the scene.
The real fix should actually be done on the driver side.
When using MIS, the world is treated as a regular light, and in this case
we can now also limit the maximum number of bounces the background light
will contribute to the scene.
This can improve performance in cases where it's e.g. sufficient to
only have a contribution on the first 1-2 bounces.
Examples can be found in the differential.
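A minimal bpy sketch, assuming the options are exposed on the world's Cycles
settings as sample_as_light and max_bounces:

    import bpy

    world = bpy.context.scene.world
    world.cycles.sample_as_light = True  # treat the background as a MIS light
    world.cycles.max_bounces = 2         # background contributes to the first two bounces only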
Differential revision: https://developer.blender.org/D1399
This was a mistake in the original code from D1079.
With the current way the mirror ball direction works, the camera should look
in the negative Y direction. Corrected this in the camera matrix synchronization,
so no extra calculations are needed at render time.
That's a bit annoying, but we'd better port it to the release branch, otherwise
we'll end up with files created with the wrong camera mapping after the
2.75 release.
The idea is to make it possible to control motion blur of linked duplicated
objects from the scene file without needing to override the linked object
settings. Currently this is only supported for dupligroup duplication:
if the duplicator object has motion blur disabled, that setting is now
inherited by all the duplicated objects.
There should be no regressions/changes in the look of existing files because
objects have motion blur enabled by default.
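A minimal bpy sketch, assuming a dupligroup instance named "GrassDupli" and
the object-level toggle exposed by the Cycles add-on:

    import bpy

    duplicator = bpy.data.objects["GrassDupli"]  # assumed group duplicator
    duplicator.cycles.use_motion_blur = False    # now inherited by all duplicated objects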
TODO: We might want to refactor debug passes into PASS_DEBUG plus some
debug_type (similar to the passes on Blender's side) to avoid running
out of bits.
This way it is now possible to select exactly which debug pass is to be used
by the render engine. Accessible from the Passes panel.
Currently there can only be one debug pass; in the future we can make menus
and image users smarter and support multiple passes of the same type.
Quite a straightforward implementation, but it still needs some work for the
split kernel. Includes both regular and split kernel implementations.
The pass is not exposed in the interface yet because it's currently not really
easy to have the same pass listed in the menu multiple times.
For animations, you often want an animated render seed (noise pattern).
This could already be done by e.g. setting a driver on the seed value;
now it's a little checkbox that can be enabled.
The animated seed is based on the current Blender frame and
the seed value itself. Simply enabling it will already result in an animated
seed (different on each Blender frame), but it can be randomized further
by setting a different seed value.
Disabled by default, so no backward compatibility break.
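In bpy terms, a minimal sketch of enabling it:

    import bpy

    scene = bpy.context.scene
    scene.cycles.use_animated_seed = True  # seed now varies with the current frame
    scene.cycles.seed = 42                 # base value, mixed into the per-frame seed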
Differential Revision: https://developer.blender.org/D1285
This way it is possible to bump viewport simplification all the way up,
making the viewport really responsive, while still having the final render
use the highest subdivision possible.
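A minimal bpy sketch of the intended setup:

    import bpy

    render = bpy.context.scene.render
    render.use_simplify = True
    render.simplify_subdivision = 1         # cheap subdivision for the viewport
    render.simplify_subdivision_render = 6  # full quality for the final render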
Reviewers: lukastoenne, campbellbarton, dingto
Reviewed By: campbellbarton, dingto
Subscribers: dingto, nutel, eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1273
This patch adds support for light portals: objects that help with sampling the
environment light, thereby improving convergence. Using them for other
lights in a unidirectional path tracer is virtually useless.
The sampling is done with the area-preserving code already used for area lamps.
MIS is used both for combining different portals and for combining portal
and envmap sampling.
The direction of portals is taken into account: they aren't used if the
sampling point is behind them.
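A minimal bpy sketch of marking an area lamp as a portal (the object name is
only an illustration):

    import bpy

    lamp = bpy.data.objects["WindowPortal"].data  # an area lamp placed in an opening
    lamp.cycles.is_portal = True  # guides environment sampling instead of emitting light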
Reviewers: sergey, dingto, #cycles
Reviewed By: dingto, #cycles
Subscribers: Lapineige, nutel, jtheninja, dsisco11, januz, vitorbalbio, candreacchio, TARDISMaker, lichtwerk, ace_dragon, marcog, mib2berlin, Tunge, lopataasdf, lordodin, sergey, dingto
Differential Revision: https://developer.blender.org/D1133
Combine all the highpoly pixel arrays into a single array with a lookup
object_id for each of the highpoly objects.
Note: This changes the Bake API; external engines should refer to
bake_api.c for the latest API.
Many thanks to Sergey Sharybin for the complete review, change
suggestions and feedback. (You rock!)
Reviewers: sergey
Subscribers: pildanovak, marcclintdion, monio, metalliandy, brecht
Maniphest Tasks: T41092
Differential Revision: https://developer.blender.org/D772
There were two major problems with the interactivity of material previews:
- Beckmann tables were re-generated on every material tweak.
This is because the preview scene is not set to be persistent, so re-triggering
the render leads to a full scene re-sync.
- Images could take a rather noticeable time to load from disk with OIIO
on every tweak.
This patch addresses these two issues in the following way:
- Beckmann tables are now static in CPU memory.
They're only a couple of hundred kilobytes, so I wouldn't expect this to be
an issue, and they're needed for almost every render anyway.
This actually also makes the blackbody table static, but it's even smaller
than the Beckmann table.
Not totally happy with this approach, but the alternatives seem to complicate
things quite a bit with all the render engine lifetime handling and so on.
- For preview rendering all images are considered to be built-in. This means
that instead of OIIO, which re-loads images on every re-render, they come from
the ImBuf cache, which is fully manageable from the Blender side, and unused
images get freed later.
This makes it impossible to have mipmapping with OSL for now, but we'll be
working on that later anyway, and I don't think mipmaps are really that crucial
for the material preview.
This seems to be a better alternative to making the preview scene persistent,
because of the much better memory control from the Blender side.
Reviewers: brecht, juicyfruit, campbellbarton, dingto
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D1132
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera (a bpy sketch follows this feature list)
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
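A minimal bpy sketch of enabling stereo rendering and configuring the builtin
stereo camera listed above:

    import bpy

    scene = bpy.context.scene
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'

    cam = scene.camera.data
    cam.stereo.convergence_mode = 'OFFAXIS'
    cam.stereo.interocular_distance = 0.065  # 65mm, roughly human eye distance
    cam.stereo.convergence_distance = 1.95
    cam.stereo.pivot = 'CENTER'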
Missing Bits
============
First rule of Multi-View bug reporting: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't (or crashes) when *Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely small todos and may wait until we are sure none of the above is happening.
Apart from that, there are these known issues:
* Compositor Image Node works poorly for Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting the camera from Multi-View while looking through the camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
It is still possible to free a bit more memory by detecting built-in images
which are not used by shaders, but that's not going to improve memory usage
enough to bother with it now.
This change brings peak memory usage from 4.1GB down to 3.4GB when rendering
the 01_01_01_D layout scene from the Gooseberry project, mainly because of
freeing the memory used by a rather huge environment map in the viewport.
Reviewers: campbellbarton, juicyfruit
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D1215
This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause merge PITA; it's whitespace changes only, so Git
should be able to merge it nicely.
The issue was caused by cycles in the shader graph confusing its
simplification stage. Now we ignore links which are marked as
invalid on the Blender side, so we don't run into such cycles and
the graph code stays simple.