Not all file formats/calls are supported yet; this will be extended.
From now on, please use BLI_fopen and the other BLI_* functions for file manipulation.
On non-Windows systems BLI_fopen simply calls fopen.
On Windows, the UTF-8 string is converted to a UTF-16 string in order to call the wide-character version of the function.
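Roughly, the wrapper does something like the sketch below (only an illustration of the approach; the helper name is made up and the real BLI implementation handles buffer sizes and errors differently):

#include <stdio.h>

#ifdef WIN32
#  include <windows.h>

FILE *utf8_fopen(const char *filename, const char *mode)
{
    wchar_t wfilename[FILENAME_MAX];
    wchar_t wmode[16];

    /* Convert UTF-8 arguments to UTF-16 so the wide-character CRT call
     * handles non-ASCII paths correctly. */
    MultiByteToWideChar(CP_UTF8, 0, filename, -1, wfilename, FILENAME_MAX);
    MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16);

    return _wfopen(wfilename, wmode);
}
#else

FILE *utf8_fopen(const char *filename, const char *mode)
{
    /* The narrow call is enough here, the system accepts UTF-8 paths. */
    return fopen(filename, mode);
}
#endif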
- spelling - turns out we had tessellation spelt wrong all over.
- use \directive for doxy (not @directive)
- remove BLI_sparsemap.h - was from the bmesh merge IIRC, but the entire file was commented out and unused.
1) The old CMP_NODE_OUTPUT_FILE and CMP_NODE_OUTPUT_MULTI_FILE have been merged;
only CMP_NODE_OUTPUT_FILE remains. All functions have been renamed accordingly.
2) do_versions code for converting single-file output nodes into multi-file
output nodes. If a Z buffer input is used, the node is turned into a multilayer
EXR node with two inputs (see below). It also re-identifies multi-file output
nodes with the CMP_NODE_OUTPUT_FILE type.
3) "Global" format is stored in node now. By default this overrides any
per-socket settings.
4) Multilayer EXR output implemented. When M.EXR format is selected for node
format, all socket format details are ignored. Socket names are used for layer
names.
5) Input buffer types are used as-is where possible, i.e. stored as B/W, RGB or
RGBA. For regular file output the format dictates the number of actual channels,
so the CompBuf is typechecked to the right type first. For multilayer EXR the
number of channels is more flexible, so an input buffer will store only the
channels it actually uses.
6) The editor socket type is updated from linked sockets as an indicator of the
actual data written to files. This may not be entirely accurate for regular file
output though, due to restrictions of the format setting.
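As a rough illustration of point 4, this is how a socket name could map to multilayer EXR channel names; the "<layer>.<channel>" naming and the helper below are assumptions for illustration only, not the actual EXR writing code:

#include <stdio.h>

static void print_layer_channels(const char *socket_name, int num_channels)
{
    /* Single-channel inputs map to one value channel; RGB(A) inputs keep
     * only the channels they actually use (see point 5). */
    static const char *rgba[] = {"R", "G", "B", "A"};

    if (num_channels == 1) {
        printf("%s.V\n", socket_name);
    }
    else {
        for (int i = 0; i < num_channels && i < 4; i++) {
            printf("%s.%s\n", socket_name, rgba[i]);
        }
    }
}

int main(void)
{
    print_layer_channels("Compo", 4);  /* RGBA input -> Compo.R, Compo.G, Compo.B, Compo.A */
    print_layer_channels("Depth", 1);  /* B/W input  -> Depth.V */
    return 0;
}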
Unlike the existing file output node this node has an arbitrary number of
possible input slots. It has a base path string that can be set to a general
base folder. Every input socket then uses its name as an extension of the base
path for file organization. This can include further subfolders on top of the
base path. Example:
Base path: '/home/user/myproject'
Input 1: 'Compo'
Input 2: 'Diffuse/'
Input 3: 'details/Normals'
would create output files
in /home/user/myproject: Compo0001.png, Compo0002.png, ...
in /home/user/myproject/Diffuse: 0001.png, 0002.png, ... (no filename base
given)
in /home/user/myproject/details: Normals0001.png, Normals0002.png, ...
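In other words, the output path boils down to base path + per-socket sub-path/name + frame number + extension; a small standalone sketch of that construction (the helper name is made up, this is not the node code):

#include <stdio.h>

static void build_output_path(char *r_path, size_t maxlen,
                              const char *base, const char *socket_path,
                              int frame, const char *ext)
{
    /* The socket name acts as a sub-path and/or filename prefix on top
     * of the base path. */
    snprintf(r_path, maxlen, "%s/%s%04d.%s", base, socket_path, frame, ext);
}

int main(void)
{
    char path[1024];

    build_output_path(path, sizeof(path), "/home/user/myproject", "details/Normals", 1, "png");
    printf("%s\n", path);  /* /home/user/myproject/details/Normals0001.png */

    build_output_path(path, sizeof(path), "/home/user/myproject", "Diffuse/", 1, "png");
    printf("%s\n", path);  /* /home/user/myproject/Diffuse/0001.png (no filename base) */

    return 0;
}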
Most settings for the node can be found in the sidebar (NKEY). New input sockets
can be added with the "Add Input" button. There is a list of input sockets, and
below it the details for each socket can be changed, including the sub-path and
filename. Sockets can be removed here as well. By default each socket uses the
render settings' file output format, but each one can use its own format if
necessary.
To my knowledge this is the first node making use of such dynamic sockets in
trunk, so this is also a design test; other nodes might use this in the future.
Adding operator buttons on top of a node is a bit unwieldy atm, because node
operators generally work on the selected and/or active node(s). The operator
button would therefore either have to make sure the node is activated before the
operator is called (block callback maybe?), or it would have to store the node
name (risky, weak reference). For now the operator is only used in the sidebar,
where only the active node's buttons are displayed.
Also adds a new struct_type value to bNodeSocket, in order to distinguish
different socket types with the same data type (file inputs are SOCK_RGBA color
sockets). It would be nicer to use the data type only for actual data evaluation,
but it is used in too many places; this works OK for now.
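A toy illustration of the struct_type idea (names and values below are assumptions, not the actual DNA definitions):

#include <stdio.h>

enum { SOCK_RGBA_EXAMPLE = 4 };                              /* data type, value assumed */
enum { SOCK_STRUCT_NONE = 0, SOCK_STRUCT_OUTPUT_FILE = 1 };  /* discriminator, names assumed */

struct ExampleSocket {
    int type;        /* data type, used for evaluation and linking */
    int struct_type; /* tells apart sockets that share a data type */
};

int main(void)
{
    struct ExampleSocket color_input = {SOCK_RGBA_EXAMPLE, SOCK_STRUCT_NONE};
    struct ExampleSocket file_slot   = {SOCK_RGBA_EXAMPLE, SOCK_STRUCT_OUTPUT_FILE};

    printf("same data type: %d\n", color_input.type == file_slot.type);            /* 1 */
    printf("distinguishable: %d\n", color_input.struct_type != file_slot.struct_type); /* 1 */
    return 0;
}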
Added an AnimData block to the MovieClip datablock, which allows animating different properties of the clip.
Currently only animation of the stabilization influence is supported.
--
svn merge -r44129:44130 ^/branches/soc-2011-tomato
=========================
Documentation: http://wiki.blender.org/index.php/User:Psy-Fi/UV_Tools
Major features include:
*16 bit image support in viewport
*Subsurf aware unwrapping
*Smart Stitch (snap/rotate islands, preview, midpoint/endpoint stitching)
*Seams from islands tool (marks seams and sharp, depending on settings)
*UV Sculpting (Grab/Pinch/Rotate)
All tools are complete apart from stitching, which is considered stable but has an extra edge mode under development (it will be in soc-2011-onion-uv-tools).
This commit implements the basic stuff needed for object tracking; the use case
isn't perfect yet, and the interface should also be cleaned up a bit.
- Added a list of objects to be tracked. By default there's only one object called
"Camera", which is used for solving camera motion. Other objects can be added,
and each of them will have its own list of tracks. Only one object can be used
for camera solving at the moment.
- Added a new constraint called "Object Tracking" which makes the constrained
object move in the same way as the solved object motion.
- Scene orientation tools can be used for orienting object to bundles.
- All tools which work with the list of tracks or reconstruction data now
get those lists from the active editing object.
- All objects and their tracking data are available via the Python API.
===========================
The rest of the changes from the camera tracking GSoC project.
This commit includes:
- New compositor nodes:
* Movie Clip input node
* Movie Undistortion node
* Transformation node
* 2D stabilization node
- Slight changes to existing nodes to prevent code duplication
===========================
Committing the camera tracking integration GSoC project into trunk.
This commit includes:
- Bundled version of the libmv library (with some changes against the official
repo; will re-sync with the libmv repo a bit later)
- New ID datatype called MovieClip which is optimized for working with movie
clips (both movie files and image sequences) and for doing camera/motion
tracking operations.
- New editor called Clip Editor which is currently used for motion/tracking
stuff only, but which can be easily extended to work with masks too.
This editor supports:
* Loading movie files/image sequences
* Building proxies of different sizes for the loaded movie clip; also supports
building undistorted proxies to increase playback speed in
undistorted mode.
* Manual lens distortion mode calibration using grid and grease pencil
* Supervised 2D tracking using two different algorithms: KLT and SAD.
* Basic algorithm for feature detection
* Camera motion solving and scene orientation
- New constraints to "link" scene objects with solved motions from clip:
* Follow Track (make object follow 2D motion of track with given name
or parent object to reconstructed 3D position of track)
* Camera Solver to make camera moving in the same way as reconstructed camera
This commit does NOT include these changes from the tomato branch:
- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
(need to do more tests and gather more feedback)
- Code cleanup in libmv-capi. It's not a critical cleanup, it just improves the
readability and understandability of the code. Better to make this change
when Keir finishes his current patch.
More details about this project can be found on this page:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011
Further development of small features will be done in trunk; bigger/experimental
features will first be implemented in the tomato branch.
* removed struct for SpaceType and all usages
* SPACE_IMASEL needs to be kept in the enum to identify it in old files
* it is replaced with SPACE_EMPTY on load, which is then overridden by SPACE_INFO, which has the same struct members
* also removed theme settings
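A toy sketch of that load-time mapping; the enum values and struct here are illustrative assumptions, the real code lives in the file reading/versioning:

#include <stdio.h>

/* Illustrative only: enum values and struct layout are assumptions. */
enum { SPACE_EMPTY = 0, SPACE_INFO = 7, SPACE_IMASEL = 10 };

typedef struct SpaceLinkExample {
    int spacetype;
} SpaceLinkExample;

void do_versions_space_example(SpaceLinkExample *sl)
{
    if (sl->spacetype == SPACE_IMASEL) {
        /* Deprecated space: the enum value is only kept so old files can be
         * recognized; on load the space becomes SPACE_EMPTY, which is then
         * overridden by SPACE_INFO since the struct members are the same. */
        sl->spacetype = SPACE_EMPTY;
    }
}

int main(void)
{
    SpaceLinkExample sl = {SPACE_IMASEL};  /* space as stored in an old file */
    do_versions_space_example(&sl);
    printf("spacetype after load: %d\n", sl.spacetype);  /* 0 (SPACE_EMPTY) */
    return 0;
}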