This commit merges the object tracking implementation from the tomato branch.
Summary of changes from the branch:
- Added a list of objects to be tracked. By default there's only one object, called
"Camera", which is used for solving camera motion. Other objects can be added,
and each of them has its own list of tracks. Only one object can be used
for camera solving at the moment (see the data-model sketch after this list).
- Added a new constraint called "Object Tracking" which makes the constrained
object move in the same way as the solved object motion.
- Scene orientation tools can be used for orienting an object to its bundles.
- Objects have a scale property to define "depth" in camera space.
- All tools which work with the list of tracks or reconstruction data now
get those lists from the active editing object.
- All objects and their tracking data are available via the Python API.
- Improvements to the witness camera workflow.
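A rough data-model sketch of the object list described above, assuming simple linked
lists; the struct names, fields and flag below are illustrative assumptions, not the
actual Blender DNA definitions:

#include <stddef.h>

/* Each tracking object owns its own list of tracks; exactly one object
 * (the default "Camera") is flagged as the one used for camera solving,
 * the rest are solved as object motion. */

typedef struct TrackingTrack {
    struct TrackingTrack *next, *prev;
    char name[64];
    /* markers, bundle position, ... */
} TrackingTrack;

#define TRACKING_OBJECT_CAMERA (1 << 0)

typedef struct TrackingObject {
    struct TrackingObject *next, *prev;
    char name[64];
    int flag;                    /* e.g. TRACKING_OBJECT_CAMERA */
    float scale;                 /* "depth" of the object in camera space */
    TrackingTrack *tracks_first; /* per-object list of tracks */
} TrackingObject;

/* Return the single object used for camera solving. */
static TrackingObject *tracking_camera_object(TrackingObject *objects_first)
{
    TrackingObject *object;

    for (object = objects_first; object; object = object->next) {
        if (object->flag & TRACKING_OBJECT_CAMERA)
            return object;
    }

    return NULL;
}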
For premultiplied alpha images, this makes any color space conversion for the image
or render output work on the color without the alpha multiplied in.
This is typically useful to avoid fringing when the image was or will be composited
over a light background. If, on the other hand, the image will be composited over a
black background, leaving this option off will give correct results.
In an ideal world, there should never be any color space conversion on images with
alpha, since it's undefined what to do then, but in practice it's useful to have
this option.
Patch by Troy Sobotka, with changes by me.
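A minimal sketch of that idea, assuming a generic per-channel transform
(convert_color() below is a placeholder, not an actual Blender function): for a
premultiplied pixel, the color is divided by alpha before the conversion and
multiplied back in afterwards, so the transform operates on straight color.

static void convert_pixel_predivide(float pixel[4],
                                    void (*convert_color)(float color[3]))
{
    const float alpha = pixel[3];

    if (alpha > 0.0f && alpha < 1.0f) {
        /* un-premultiply, convert, re-premultiply */
        const float inv_alpha = 1.0f / alpha;

        pixel[0] *= inv_alpha;
        pixel[1] *= inv_alpha;
        pixel[2] *= inv_alpha;

        convert_color(pixel);

        pixel[0] *= alpha;
        pixel[1] *= alpha;
        pixel[2] *= alpha;
    }
    else {
        /* alpha of 1: premultiplied equals straight color;
         * alpha of 0: straight color is undefined, so convert as-is */
        convert_color(pixel);
    }
}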
---
Locking only redistributes or restricts weights when using bone groups.
So, in addition to adding a NULL check to my last bit of code, I made
has_locked_group() check for bone groups.
- Enable bicubic filtering for the image displayed in the track preview
- Option to show grayscale content of the track preview
- When some channels are disabled, display exactly the same content of
the preview image which is sent to the tracker library (see the sketch below).
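A rough sketch of the disabled-channels behaviour (the function name and luminance
weights are assumptions, not the actual tracking code): disabled channels contribute
nothing, and the enabled channels are combined into the same grayscale value the
tracker library receives.

static void preview_pixel_grayscale(float out[3], const float rgb[3],
                                    int use_red, int use_green, int use_blue)
{
    /* standard luminance weights; a disabled channel simply contributes 0 */
    const float gray = 0.2126f * (use_red   ? rgb[0] : 0.0f) +
                       0.7152f * (use_green ? rgb[1] : 0.0f) +
                       0.0722f * (use_blue  ? rgb[2] : 0.0f);

    out[0] = out[1] = out[2] = gray;
}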
Merged from tomato branch using command:
svn merge -r42382:42383 -r42384:42385 -r42394:42395 \
-r42397:42398 -r42398:42399 -r42406:42407 \
-r42410:42411 -r42417:42418 -r42471:42472 \
^/branches/soc-2011-tomato
* the accumulated blur weight now takes into account how far verts are from the brush, giving more even results (see the sketch after these notes)
* verts where the weight wasn't found were being ignored, now treat them as zero weight verts.
also use math functions for calc_vp_strength(), and project the vertices as floats rather than ints to get better accuracy, otherwise no functional changes.
* this loop is called multiple times per vertex (not addressed in this commit)
* functions like brush_use_size_pressure() called unified_settings() twice when they didn't need to.
looks like this code can't work right with multiple scenes, added a comment on this - but at least avoid calling unified_settings() multiple times in single functions.
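A sketch of the accumulation described above (the function name and the linear
falloff are assumptions, not the actual vertex paint code): each neighbour
contributes its weight scaled by the brush falloff at its distance, and neighbours
without a defined weight count as zero instead of being skipped.

#include <math.h>

static float blur_vertex_weight(const float neighbour_co[][2],
                                const float neighbour_weight[],
                                const char neighbour_has_weight[],
                                int totneighbour,
                                const float brush_co[2],
                                float brush_radius)
{
    float weight_accum = 0.0f;
    float falloff_accum = 0.0f;
    int i;

    for (i = 0; i < totneighbour; i++) {
        const float dx = neighbour_co[i][0] - brush_co[0];
        const float dy = neighbour_co[i][1] - brush_co[1];
        const float dist = sqrtf(dx * dx + dy * dy);
        float falloff, w;

        if (dist > brush_radius)
            continue;

        /* simple linear falloff as a stand-in for the real brush curve */
        falloff = 1.0f - (dist / brush_radius);

        /* verts with no weight defined are treated as 0.0, not ignored */
        w = neighbour_has_weight[i] ? neighbour_weight[i] : 0.0f;

        weight_accum += w * falloff;
        falloff_accum += falloff;
    }

    return (falloff_accum > 0.0f) ? (weight_accum / falloff_accum) : 0.0f;
}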
It is just a limitation of the multires baker, which doesn't deal correctly with
baking to subdivision level 0. It is supposed to work with the levels at which
the sculpt data affects the mesh, so that interpolation between grids works correctly.
Fully accurate baking in this case would need raycasting, which would make
it much slower and remove the main benefit of the regular baker -- speed and
low memory usage.
Another option would be to make multires apply the sculpting data on level 0,
but that's not related to baking at all and has its own difficulties.
Centralized byte => float, float => float and byte => byte conversions with profile,
dither and predivide; previously the code for this was spread out too much.
There should be no functional changes; this is so the predivide/table/dither
patches can work correctly.
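As an illustration of the kind of per-pixel helpers being centralized (the names
below are assumptions, not the actual imbuf API): byte => float with an sRGB-to-linear
profile conversion, plus quantization back to bytes with a small dither offset to hide
banding; predivide would unpremultiply the color around the profile conversion, as in
the alpha sketch earlier.

#include <math.h>
#include <stdlib.h>

static float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? (c / 12.92f) : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* byte (0..255) to float (0..1), optionally applying the sRGB profile */
static void pixel_float_from_byte(float f[4], const unsigned char b[4], int profile_srgb)
{
    int i;

    for (i = 0; i < 4; i++)
        f[i] = b[i] * (1.0f / 255.0f);

    if (profile_srgb) {
        for (i = 0; i < 3; i++)
            f[i] = srgb_to_linear(f[i]);
    }
}

/* float (0..1) back to a byte, with an optional dither offset added before
 * quantization (as used by byte => byte conversions that go through float
 * internally) */
static unsigned char pixel_byte_from_float(float f, float dither)
{
    float v = f * 255.0f + dither * ((float)rand() / (float)RAND_MAX - 0.5f);

    if (v < 0.0f) v = 0.0f;
    if (v > 255.0f) v = 255.0f;

    return (unsigned char)(v + 0.5f);
}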