Based on user feedback, the previous implementation providing
decoupled X and Y speeds didn't work in production at all: there
is no way to combine these speeds into a usable vector.
So now we provide a speed vector output instead, which gives the
speed in exactly the way the Vector Blur node expects it:
the first two components are the speed from the past, the last two
components define the speed towards the future.
The old behavior can be achieved by separating the speed output
into RGBA channels and using the first two components.
Now this speed gives pretty much the same results as a speed pass,
with the only difference that the track position speed uses a "shutter"
of 1 while the pass uses a shutter of 0.5 (and there seems to be no way
to affect that).
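For illustration, here is a hedged bpy sketch of recovering the old per-axis speeds via Separate RGBA (the node identifiers `CompositorNodeTrackPos` and `CompositorNodeSepRGBA` and the socket names are assumptions about the Python API, not something defined by this commit):

```
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Track Position node: a MovieClip and track still have to be assigned on it.
track = tree.nodes.new("CompositorNodeTrackPos")
sep = tree.nodes.new("CompositorNodeSepRGBA")

# The Speed output packs (past.x, past.y, future.x, future.y);
# R and G therefore give the old decoupled X/Y speeds.
tree.links.new(track.outputs["Speed"], sep.inputs["Image"])
old_x = sep.outputs["R"]
old_y = sep.outputs["G"]
```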
Velocity is measured in pixels per frame. It is basically the difference
between the track coordinate at the current frame and at the previous one
(no future prediction happens).
It's not really the most intuitive place for such a thing, but historically
the node has been called this way.
Track velocity can be used to fake effects like motion blur by piping it
into the Vector Blur node.
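For example, a hedged bpy sketch of that motion-blur setup (again assuming the usual compositor identifiers such as `CompositorNodeVecBlur`; the exact names are illustrative, not taken from this patch):

```
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

layers = tree.nodes.new("CompositorNodeRLayers")
track = tree.nodes.new("CompositorNodeTrackPos")   # needs a clip + track assigned
vblur = tree.nodes.new("CompositorNodeVecBlur")    # Vector Blur

tree.links.new(layers.outputs["Image"], vblur.inputs["Image"])
tree.links.new(track.outputs["Speed"], vblur.inputs["Speed"])
# Optionally hook a depth pass into vblur.inputs["Z"] as well.
```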
Reviewers: campbellbarton
Reviewed By: campbellbarton
Subscribers: hype, sebastian_k
Differential Revision: https://developer.blender.org/D1591
Nodes have a feature for moving existing links to unoccupied sockets when connecting
to an already used input. This is based on the standard legacy socket types (value/float,
vector, color/rgba) and works reasonably well for shader, compositor and texture nodes.
For new pynode systems, however, the hardcoded nature of that feature has major drawbacks:
* It does not take different type systems into account, leading to meaningless connections
when sockets are swapped and making the feature useless or outright debilitating.
* Advanced socket behaviors would be possible with a registerable callback, e.g. creating
extensible input lists that move existing connections down to make room for a new link.
Now any handling of new links is done via the 'insert_links' callback, which can also be
registered through the RNA API. For the legacy shader/compo/tex nodes the behavior is the
same, using a C callback.
Note on the 'use_swap' flag: this has been removed because it was meaningless:
it was disabled only for the insert-node-on-link feature, which works only for
completely unconnected nodes, so there would be nothing to swap in the first place.
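To illustrate the direction this opens up, here is a hedged sketch of what such a callback could look like on a custom pynode; the callback name `insert_link`, its signature and the "grow an input list" policy are assumptions for illustration, not a specification of the RNA API:

```
import bpy
from bpy.types import Node, NodeTree


class MyCustomTree(NodeTree):
    bl_idname = "MyCustomTreeType"
    bl_label = "My Custom Tree"
    bl_icon = "NODETREE"


class MyCustomNode(Node):
    bl_idname = "MyCustomNodeType"
    bl_label = "My Custom Node"

    @classmethod
    def poll(cls, ntree):
        return ntree.bl_idname == "MyCustomTreeType"

    def init(self, context):
        self.inputs.new("NodeSocketFloat", "Value")
        self.outputs.new("NodeSocketFloat", "Value")

    # Hypothetical hook, called when a new link to/from this node is made.
    # Instead of the hardcoded swap, the node itself decides what happens
    # to an already occupied input.
    def insert_link(self, link):
        if link.to_node is self and link.to_socket.is_linked:
            # Illustrative policy: extend the input list so the existing
            # connection stays in place and the new link gets its own socket.
            self.inputs.new("NodeSocketFloat", "Value")


def register():
    bpy.utils.register_class(MyCustomTree)
    bpy.utils.register_class(MyCustomNode)
```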
- Add blentranslation `BLT_*` module.
- Moved & split `BLF_translation.h` into (`BLT_translation.h`, `BLT_lang.h`).
- Moved `BLF_*_unifont` functions from `blf_translation.c` to new source file `blf_font_i18n.c`.
This commit fixes issues with the wrong socket type being added to the Cycles
debug pass compositor operation, which led to crashes with non-value pass types.
This commit also reverts the socket renaming because, while it behaved fine at
runtime, file reload might have lost the links, which is annoying.
Currently this only works correctly with a single float output; RGBA and vector
are not supported yet, so anyone who needs those passes will have to wait a bit longer.
It is coming, don't worry.
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
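As a usage sketch, adding extra views from Python might look like this (property names such as `use_multiview`, `views_format` and `views.new()` follow the usual RNA naming and are assumptions here, not guaranteed by this log):

```
import bpy

scene = bpy.context.scene
scene.render.use_multiview = True
scene.render.views_format = 'MULTIVIEW'   # arbitrary views instead of the stereo pair

view = scene.render.views.new("left_45deg")
view.camera_suffix = "_L45"   # camera named <scene camera name>_L45 is used for this view
view.file_suffix = "_L45"     # appended to the output file names
```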
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't work (or crashes) when *Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely small todos, and may wait until we are sure none of the above is happening.
Apart from that, there are these known issues:
* Compositor Image Node poorly working for Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch on GitHub:
* Alexey Akishin
* Gabriel Caraballo
This way re-mapping scene nodes to EXR files becomes much easier;
no extra trickery with separate RGBA setups is needed.
Plus it makes things more consistent with regular EXR files.
This uses the RGBA pass to get the alpha from.
Quite a straightforward change, and in theory we can even try supporting motion
blur for the Corner Pin node (which is tricky because the coordinates actually
come from sockets, but with some black magic it should be doable).
This allows adding a "fake" sun beam effect, simulating crepuscular rays
from light being scattered in a medium like the atmosphere or deep water.
Such effects can also be created by renderers using volumetric lighting,
but the compositor feature is a lot cheaper and independent of 3D
rendering. This makes it ideally suited for motion graphics.
The implementation uses an optimized accumulation method for gathering
color values along a line segment. The inner buffer loop uses fixed
offset increments to avoid unnecessary multiplications and avoids
variables by using compile-time specialization (see inline comments
for further details).
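The gathering step can be pictured with a small Python sketch of accumulating samples along a line segment with a fixed per-step offset; this is a simplified stand-in for the optimized C++ inner loop, not the actual implementation:

```
def accumulate_ray(image, x0, y0, x1, y1, num_steps):
    """Average the colors along the segment (x0, y0) -> (x1, y1).

    image is indexed as image[y][x] -> (r, g, b).  A real implementation
    clamps coordinates and weights the samples; this only shows the
    fixed-increment walk described above.
    """
    dx = (x1 - x0) / num_steps   # constant offset per step, so no per-sample
    dy = (y1 - y0) / num_steps   # multiplication is needed inside the loop

    r = g = b = 0.0
    x, y = x0, y0
    for _ in range(num_steps):
        sr, sg, sb = image[int(y)][int(x)]
        r += sr
        g += sg
        b += sb
        x += dx
        y += dy

    return (r / num_steps, g / num_steps, b / num_steps)
```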
trees for localization (previews and viewer evaluation).
This is handled entirely by the compositor already. Doing this during
localization is redundant and risks divergent behavior.
render layer nodes in a pinned tree from a different scene.
The way these updates work is a nasty legacy hack:
https://developer.blender.org/diffusion/B/browse/master/source/blender/nodes/composite/node_composite_tree.c$277
This function is called //very frequently// by the get_from_context
method. However, this does not get called for pinned node trees, so
when showing a different scene's compositing nodes in the editor they
may not get updated correctly.
This update call has now been moved out of get_from_context so it happens in
any case. It will be called no more frequently than before (on every redraw).
Eventually the depsgraph should handle this more precisely, it's just a
simple ID dependency anyway ...
This was suggested by Christopher Barrett (terrachild). Corner pin is a common feature in compositing.
The corners for the plane warp can be defined via vector node inputs, which allows perspective plane transformations without having to go through the MovieClip editor tracking data.
Uses the same math as the PlaneTrack node, but without the link to MovieClip and Object.
{F78199}
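A minimal bpy sketch of hooking the node up (the `CompositorNodeCornerPin` identifier and the corner socket names are assumed here for illustration):

```
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

image = tree.nodes.new("CompositorNodeImage")
pin = tree.nodes.new("CompositorNodeCornerPin")

tree.links.new(image.outputs["Image"], pin.inputs["Image"])

# Corners are plain vector inputs in normalized (0..1) image space, so they
# can be set directly, animated, or driven by any vector node.
pin.inputs["Upper Left"].default_value = (0.1, 0.9, 0.0)
pin.inputs["Upper Right"].default_value = (0.9, 0.8, 0.0)
pin.inputs["Lower Left"].default_value = (0.0, 0.1, 0.0)
pin.inputs["Lower Right"].default_value = (1.0, 0.2, 0.0)
```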
The code for PlaneTrack operations has been restructured a bit to share it with the CornerPin node.
* PlaneDistortCommonOperation.h/.cpp: Shared generic code for warping images based on 4 plane corners and a perspective matrix generated from these. Contains operation base classes for both the WarpImage and Mask operations.
* PlaneTrackOperation.h/.cpp: Current plane track node operations, based on the common code above. These add pointers to MovieClip and Object which define the track data from which to read the corners.
* PlaneCornerPinOperation.h/.cpp: New corner pin variant, using explicit input sockets for the plane corners.
One downside of the current compositor design is that there is no concept of invariables (constants) that don't vary over the image space. This has already been an issue for Blur nodes (size input is usually constant except when "variable size" is enabled) and a few others. For the corner pin node it is necessary that the corner input sockets are also invariant. They have to be evaluated for each tile now, otherwise the data is not available. This in turn makes it necessary to make the operation "complex" and request full input buffers, which adds unnecessary overhead.
As discussed in T38340 the solution is to use the current scene from
context whenever feasible.
Composite does not use node->id at all now, the scene which owns the
compositing node tree is retrieved from context instead.
Defocus node->id is made editable by the user. By default it is not set,
which also will make it use the contextual scene and camera info.
The node->id pointer in Defocus is **not** cleared in older blend files.
This is done for backward compatibility: the node will then behave as
before in untouched scenes.
File Output nodes also don't store the scene in node->id. The scene is only
needed when creating a new node, to initialize the file format.
Reviewers: brecht, jbakker, mdewanchand
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D290
Moved the hide preview logic to a method on bNodeTreeType. This way node.c stays clean, but the logic can still be shared.
Implementing this per node could lead to future errors.
mimics the previous behavior when settings were shared by both modes (but not equivalent).
NOTE: Due to the use of an additional sRGB conversion in the LGG mode, the result is not entirely accurate; this should perhaps be fixed.
Settings for each mode are nevertheless kept in their own color values; this avoids potential problems with float precision.
Scene/Render "spaces" are actually absolute values; they do not use the input X/Y scale factors, so hide these factors in this case.
Thanks to Lukas for review and improvement!
scale and rotation in the mapping node, there would be shearing, and the only way
to avoid that was to add two mapping nodes. This is because, to transform the
texture, the inverse transform needs to be done on the texture coordinate.
Now the mapping node has Texture/Point/Vector/Normal types to transform the
vector for a particular purpose. Point is the existing behavior, Texture is
the new default that behaves more like you might expect.
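In other words, the Point type applies the node's transform to the incoming vector, while the Texture type applies the inverse transform (with the order of the component transforms reversed) to the texture coordinate. A toy 2D Python sketch of the difference, purely illustrative and not the node's actual transform order:

```
import math


def rotate(v, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])


def scale(v, f):
    return (v[0] * f[0], v[1] * f[1])


def map_point(v, angle, f):
    # "Point": transform the vector itself (scale, then rotate).
    return rotate(scale(v, f), angle)


def map_texture(v, angle, f):
    # "Texture": apply the inverse transform to the texture coordinate so the
    # texture itself appears rotated/scaled; note the reversed order.
    return scale(rotate(v, -angle), (1.0 / f[0], 1.0 / f[1]))


uv = (0.3, 0.7)
print(map_point(uv, math.radians(45.0), (2.0, 1.0)))
print(map_texture(uv, math.radians(45.0), (2.0, 1.0)))
```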