BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non-license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading,
especially when files are split.
It's more useful to use git-blame to find out who developed the code.
See P901 for a script to perform these edits.
See this page for motivation and description of concepts:
https://github.com/Ichthyostega/blender/wiki
See this video for a UI explanation and demonstration of usage:
http://vimeo.com/blenderHack/stabilizerdemo
This proposal attempts to improve usability of Blender's image stabilization
feature for real-world footage, especially with a moving and panning camera.
It builds upon feature tracking to get a measurement of 2D image movement.
- Use a weighted average of movement contributions (instead of a median); see the sketch after this list.
- Allow for rotation compensation and zoom (image scale) compensation.
- Allow picking a different set of tracks for translation and for
rotation/zoom.
- Treat translation / rotation / zoom contributions systematically in a
similar way.
- Improve handling of partial tracking data with gaps and varying
start / end points.
- Have a user-definable anchor frame and interpolate/extrapolate data to
avoid jumping back to the "neutral" position when no tracking data is available.
- Support for travelling and panning shots by including an //intended//
position/rotation/zoom ("target position"). The idea is for these parameters
to be //animated// by the user, in order to supply a smooth, intended
camera movement. This way, we can keep the image content roughly in frame
even when moving completely away from the initial view.
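As an illustration of the weighted-average idea from the first point, here is a
minimal sketch with hypothetical names (not the actual Blender API); the real
implementation also handles rotation and zoom contributions:

  #include <vector>

  struct TrackSample {
    float offset_x, offset_y; /* Measured 2D movement of this track. */
    float weight;             /* User-assigned influence of this track. */
  };

  /* Average the contributions of all tracks that have data on this frame,
   * weighting each track by its influence instead of taking a median. */
  static void average_translation(const std::vector<TrackSample> &samples,
                                  float r_offset[2])
  {
    float sum_x = 0.0f, sum_y = 0.0f, weight_sum = 0.0f;
    for (const TrackSample &sample : samples) {
      sum_x += sample.weight * sample.offset_x;
      sum_y += sample.weight * sample.offset_y;
      weight_sum += sample.weight;
    }
    if (weight_sum > 0.0f) {
      r_offset[0] = sum_x / weight_sum;
      r_offset[1] = sum_y / weight_sum;
    }
    else {
      r_offset[0] = r_offset[1] = 0.0f;
    }
  }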
A known shortcoming is that the pivot point for rotation compensation is set to
the translation compensated image center. This can produce spurious rotation on
travelling shots, which needs to be compensated manually (by animating the
target rotation parameter). There are several possible ways to address that
problem, yet all of them are considered beyond the scope of this improvement
proposal for now.
Own modifications:
- Restrict line length; it's really handy for split-view editing.
- In motion tracking we prefer fully human-readable comments, meaning we
don't use doxygen with its weird markup, and comments are supposed to
start with a capital letter and end with a full stop.
- Add explicit comparison of pointers to NULL (see the example after this list).
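A small made-up example of the comment and NULL-comparison style described
above (the marker type and flag are hypothetical, not actual tracking code):

  #include <cstddef>

  struct Marker {
    int flag;
  };

  #define MARKER_DISABLED (1 << 0)

  /* Comments are full sentences: they start with a capital letter and end
   * with a full stop. Pointers are compared to NULL explicitly. */
  static bool marker_is_usable(const Marker *marker)
  {
    if (marker == NULL) {
      return false;
    }
    return (marker->flag & MARKER_DISABLED) == 0;
  }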
Reviewers: sergey
Subscribers: kusi, kdawg, forest-house, mardy, Samoth, plasmasolutions, willolis, sebastian_k, hype, enetheru, sunboy, jta, leon_cheung
Maniphest Tasks: T49036
Differential Revision: https://developer.blender.org/D583
Many parts of the compositor are unnecessarily complicated. This patch
aims at reducing the complexity of writing nodes and making the code
more transparent.
== Separating Nodes and Operations ==
Currently these are both mixed in the same graph, even though they have
very different purposes and are used at distinct stages in the
compositing process. The patch introduces dedicated graph classes for
nodes and for operations.
This removes the need for a lot of special-case checks (isOperation etc.)
and explicit type casts. It simplifies the code since it becomes clear
at every stage what type of node we are dealing with. The compiler can
use static typing to avoid common bugs from mixing up these types, and
fewer runtime sanity checks are needed.
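Roughly, the split looks like this (a heavily simplified sketch; class names
are indicative only, not copied from the patch):

  #include <vector>

  class Node {};          /* Wraps a node from the DNA node tree. */
  class NodeOperation {}; /* An executable compositor operation. */

  /* Graph of input nodes only, built straight from the DNA node tree. */
  class NodeGraph {
   public:
    void add_node(Node *node) { nodes_.push_back(node); }
   private:
    std::vector<Node *> nodes_;
  };

  /* Graph of operations only: the result of converting the node graph.
   * Because the two graphs have distinct static types, no isOperation()
   * checks or explicit casts are needed when walking either of them. */
  class OperationsGraph {
   public:
    void add_operation(NodeOperation *operation)
    {
      operations_.push_back(operation);
    }
   private:
    std::vector<NodeOperation *> operations_;
  };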
== Simplified Node Conversion ==
Converting nodes to operations was previously based on "relinking", i.e.
nodes would start by mirroring links in the Blender DNA node trees,
then add operations and redirect these links to them. This was very hard
to follow in many cases and required a lot of attention to avoid invalid
states.
Now there is a helper class called the NodeConverter, which is passed to
nodes and implements a much simpler API for this process. Nodes can add
operations and explicit connections as before, but defining "external"
links to the inputs/outputs of the original node now uses mapping
instead of directly modifying link data. Input data (node graph) and
result (operations graph) are cleanly separated.
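For illustration, a sketch of what such a converter interface amounts to
(method names are indicative of the approach, not necessarily the exact
signatures in the patch):

  class NodeOperation;
  class NodeOperationInput;
  class NodeOperationOutput;
  class NodeInput;
  class NodeOutput;

  class NodeConverter {
   public:
    /* Register a newly created operation in the operations graph. */
    void addOperation(NodeOperation *operation);
    /* Connect two operation sockets directly. */
    void addLink(NodeOperationOutput *from, NodeOperationInput *to);
    /* Map sockets of the original node onto operation sockets, instead of
     * redirecting the mirrored DNA links in place. */
    void mapInputSocket(NodeInput *node_input, NodeOperationInput *operation_input);
    void mapOutputSocket(NodeOutput *node_output, NodeOperationOutput *operation_output);
  };

A node's conversion code then only creates its operations, registers them and
declares the socket mapping; the builder resolves the external links
afterwards, leaving the node graph untouched.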
== Removed Redundant Data Structures ==
A few redundant data structures have been removed, notably the
SocketConnection. These are only needed temporarily during graph
construction. For executing the compositor operations it is perfectly
sufficient to store only the direct input link pointers. A common
pointer indirection is avoided this way (which might also give a little
performance improvement).
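In effect, each operation input now just keeps a pointer to the output it
reads from (sketch with hypothetical names):

  class NodeOperationOutput;

  class NodeOperationInput {
   public:
    void set_link(NodeOperationOutput *link) { link_ = link; }
    NodeOperationOutput *get_link() const { return link_; }
   private:
    NodeOperationOutput *link_ = nullptr; /* Direct pointer to the linked output. */
  };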
== Avoid virtual recursive functions ==
Recursive virtual functions are evil. They are very hard to follow
during debugging. At least in the parts this patch is concerned with
these functions have been replaced by a non-virtual recursive core
function (which might then call virtual non-recursive functions if
needed). See for example NodeOperationBuilder::group_operations.
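The pattern in a minimal, generic form (hypothetical names; see
NodeOperationBuilder::group_operations for the real case):

  #include <vector>

  class Operation;

  class OperationVisitor {
   public:
    virtual ~OperationVisitor() {}

    /* Non-virtual recursive core: easy to step through in a debugger. */
    void visit_recursive(Operation *operation);

   protected:
    /* Virtual but never recursive: subclasses customize per-node behaviour. */
    virtual void visit(Operation *operation) = 0;
  };

  class Operation {
   public:
    std::vector<Operation *> inputs;
  };

  void OperationVisitor::visit_recursive(Operation *operation)
  {
    for (Operation *input : operation->inputs) {
      visit_recursive(input);
    }
    visit(operation);
  }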
A pretty much straightforward change, made in the same way as the
texture input node.
There should not be any regressions or crashes when mixing usage
of 2.66 and current trunk.
Replace the old color pipeline, which supported only linear/sRGB color spaces,
with an OpenColorIO-based pipeline.
This introduces two configurable color spaces:
- Input color space for images and movie clips. This space is used to convert
images/movies from color space in which file is saved to Blender's linear
space (for float images, byte images are not internally converted, only input
space is stored for such images and used later).
This setting could be found in image/clip data block settings.
- Display color space, which defines the space in which a particular display works.
This setting can be found in the scene's Color Management panel.
When the render result is being displayed on the screen, apart from converting
the image to the display space, some additional conversions can happen.
These conversions are (see the sketch after this list):
- View, which defines a tone curve applied before the display transformation.
These are different ways to view the image on the same display device.
For example, it can be used to emulate a film view on an sRGB display.
- Exposure, which affects image exposure before the tone map is applied.
- Gamma, a post-display gamma correction that can be used to match a particular
display's gamma.
- RGB curves, user-defined curves applied before the display transformation,
which can be used for different purposes.
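Conceptually the chain for a linear float pixel looks like the sketch below
(helper names are hypothetical and given as declarations only; the relative
order of exposure and RGB curves is only indicative):

  struct float3 { float r, g, b; };

  float3 apply_exposure(float3 pixel, float exposure);   /* Before the tone map. */
  float3 apply_rgb_curves(float3 pixel);                 /* Before the display transform. */
  float3 apply_view_transform(float3 pixel);             /* Tone curve, e.g. a film view. */
  float3 apply_display_transform(float3 pixel);          /* Into the display space. */
  float3 apply_display_gamma(float3 pixel, float gamma); /* Post-display correction. */

  float3 linear_to_display(float3 pixel, float exposure, float gamma)
  {
    pixel = apply_exposure(pixel, exposure);
    pixel = apply_rgb_curves(pixel);
    pixel = apply_view_transform(pixel);
    pixel = apply_display_transform(pixel);
    return apply_display_gamma(pixel, gamma);
  }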
By default all these settings apply only to the render result and do not
affect other images. If a particular image needs to be affected by these
transformations, the "View as Render" setting of the image data block should
be enabled. Movie clips are always affected by all display transformations.
This commit also introduces a configurable color space in which the sequencer
works. This setting can be found in the scene's Color Management panel and
should be used if things such as grading need to be done in a color space
different from sRGB (e.g. when the Film view on an sRGB display is used, using
the VD16 space as the sequencer's internal space would make grading work in a
space which is close to the space used for display).
Some technical notes:
- The image buffer's float buffer is now always in linear space, even if it was
created from 16-bit byte images.
- The space of the byte buffer is stored in the image buffer's rect_colorspace property.
- The profile of the image buffer was removed since it's no longer meaningful.
- OpenGL and GLSL are supposed to always work in sRGB space. It is possible
to support other spaces, but that is quite a large project which isn't so
important right now.
- Disabling the legacy Color Management option is emulated by using the None display.
This could have some regressions, but there's no clear way to avoid them.
- If OpenColorIO is disabled at build time, Blender should behave
in the same way as the previous release with color management enabled.
More details can be found on this page (more details will be added soon):
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
--
Thanks to Xavier Thomas and Lukas Toene for the initial work on OpenColorIO
integration, and to Brecht van Lommel for further development and code/
use-case review!
The issue was caused by getting the image from the compositor node conversion
code; now it checks whether rendering is happening and, if so, the frame isn't
stored in the cache.
This possibly fixes #32465: Memory leak when rendering
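In essence the check amounts to something like this (a hypothetical sketch
with made-up names, not the exact code):

  struct MovieClip;
  struct ImBuf;

  void cache_put_frame(MovieClip *clip, ImBuf *ibuf); /* Hypothetical cache call. */

  void handle_frame(MovieClip *clip, ImBuf *ibuf, bool is_rendering)
  {
    if (is_rendering == false) {
      /* Only cache when not rendering, so render-time accesses from the
       * compositor do not grow the cache. */
      cache_put_frame(clip, ibuf);
    }
  }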
- Re-arrange functions in the headers and implementation file so they are
grouped by the entity they operate on. Also, the order of functions
in the implementation file should match the order in the header for
easier navigation.
- Rename some functions to match the naming conventions for public functions.
- Some code de-duplication; still some room for improvement though.
- Split the main 2D tracking functions into smaller steps to make them clearer.
OpenMP was accidentally disabled in some of the previous commits; re-enable it.
merge to TRUNK!
* The old compositor is still available (Debug Menu: 200)
This commit was brought to you by:
Developers:
* Monique Dewanchand
* Jeroen Bakker
* Dalai Felinto
* Lukas Tönne
Review:
* Brecht van Lommel
Testers:
* Nate Wiebe
* Wolfgang Faehnle
* Carlo Andreacchio
* Daniel Salazar
* Artur Mag
* Christian Krupa
* Francesco Siddi
* Dan McGrath
* Bassam Kurdali
But mostly by the community:
Gold:
Joshua Faulkner
Michael Tiemann
Francesco Paglia
Blender Guru
Blender Developers Fund
Silver:
Pablo Vazquez
Joel Heethaar
Amrein Olivier
Ilias Karasavvidis
Thomas Kumlehn
Sebastian Koenig
Hannu Hoffrén
Benjamin Dansie
Fred M'ule
Michel Vilain
Bradley Cathey
Gianmichele Mariani
Gottfried Hofmann
Bjørnar Frøyse
Valentijn Bruning
Paul Holmes
Clemens Rudolph
Juris Graphix
David Strebel
Ronan Zeegers
François Tarlier
Felipe Andres Esquivel Reed
Olaf Beckman
Jesus Alberto Olmos Linares
Kajimba
Maria Figueiredo
Alexandr Galperin
Francesco Siddi
Julio Iglesias Lopez
Kjartan Tysdal
Thomas Torfs
Film Works
Teruyuki Nakamura
Roger Luethi
Benoit Bolsee
Stefan Abrahamsen
Andreas Mattijat
Xavier Bouchoux
Blender 3D Graphics and Animation
Henk Vostermans
Daniel Blanco Delgado
BlenderDay/2011
Bradley Cathey
Matthieu Dupont de Dinechin
Gianmichele Mariani
Jérôme Scaillet
Bronze (Ivo Grigull, Dylan Urquidi, Philippe Derungs, Phil Beauchamp, Bruce Parrott, Mathieu Quiblier, Daniel Martinez, Leandro Inocencio, Lluc Romaní Brasó,
Jonathan Williamson, Michael Ehlen, Karlis Stigis, Dreamsteep, Martin Lindelöf, Filippo Saracino, Douwe van der Veen, Olli Äkräs, Bruno D'Arcangeli,
Francisco Sedrez Warmling, Watchmike.ca, peter lener, Matteo Novellino, Martin Kirsch, Austars Schnore, KC Elliott, Massimiliano Puliero, Karl Stein,
Wood Design Studios, Omer Khan, Jyrki Kanto, Michał Krupa, Lars Brubaker, Neil Richmond, Adam Kalisz, Robert Garlington, Ian Wilson, Carlo Andreacchio,
Jeremias Boos, Robert Holcomb, Gabriel Zöller, Robert Cude, Natibel de Leon, Nathan Turnage, Nicolas Vergnes, Philipp Kleinhenz, Norman Hartig, Louis Kreusel,
Christopher Taylor, Giovanni Remondini, Daniel Rentzsch, Nico Partipilo, Thomas Ventresco, Johannes Schwarz, Александр Коротеев, Brendon Harvey,
Marcelo G. Malheiros, Marius Giurgi, Richard Burns, Perttu Iso-Metsälä, Steve Bazin, Radoslav Borisov, Yoshiyuki Shida, Julien Guigner, Andrew Hunter,
Philipp Oeser, Daniel Thul, Thobias Johansson, Mauro Bonecchi, Georg Piorczynski, Sebastian Michailidis, L M Weedy, Gen X, Stefan Hinze, Nicolò Zubbini,
Erik Pusch, Rob Scott, Florian Koch, Charles Razack, Adrian Baker, Oliver Villar Diz, David Revoy, Julio Iglesias Lopez, Coen Spoor, Carlos Folch,
Joseph Christie, Victor Hernández García, David Mcsween, James Finnerty, Cory Kruckenberg, Giacomo Graziosi, Olivier Saraja, Lars Brubaker, Eric Hudson,
Johannes Schwarz, David Elguea, Marcus Schulderinsky, Karel De Bruijn, Lucas van Wijngaarden, Stefano Ciarrocchi, Mehmet Eribol, Thomas Berglund, Zuofei Song,
Dylan Urquidi )