This is part of a code cleanup to make ED_view3d_project_short/ED_view3d_project_int/ED_view3d_project_float interchangeable. Currently they behave quite differently in ways that are confusing (and the cause of bugs in Blender that remain uncorrected) - fixes coming.
There are also cases where ED_view3d_project_short() is used and the values are then converted from shorts into ints, because ED_view3d_project_int() behaves differently; the behavior of these functions will be unified after this commit.
- rather than clip/noclip versions, pass flags (for bound-box clip, window clip).
- rather than storing the invalid clip value, return success (or an error value: clip_near, clip_bb, clip_win, overflow) - see the usage sketch below.
- remove local copies of project functions from drawobject.c: view3d_project_short_clip, view3d_project_short_noclip, view3d_project_short_clip_persmat.
add functions:
- ED_view3d_project_short_global(): global-space projection.
- ED_view3d_project_short_object(): object-space projection.
- ED_view3d_project_short_ex(): takes the perspective matrix and a local-space option as arguments.
- ED_view3d_project_base() - special function to set the Object 'Base' screen coords (sx, sy), since this is a common enough operation.
- make view3d project names more consistent.
- remove apply_project_float(), it's not needed.
- update comments referencing an old function name.
- move doxygen docs into the C file; prefer to keep them there so they don't get out of sync with the code.
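
A minimal usage sketch of the new API, following the description above; the
exact flag and return-value identifiers (V3D_PROJ_TEST_*, V3D_PROJ_RET_*) and
the argument order are assumptions, not a definitive signature:

    static void draw_vert_sketch(ARegion *ar, const float world_co[3])
    {
        short co[2];

        if (ED_view3d_project_short_global(ar, world_co, co,
                                           V3D_PROJ_TEST_CLIP_BB | V3D_PROJ_TEST_CLIP_WIN)
            == V3D_PROJ_RET_OK)
        {
            /* co[] now holds valid region-space coordinates */
        }
        else {
            /* the point was clipped (near plane, bound-box or window)
             * or overflowed the short range */
        }
    }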
Replace the old color pipeline, which supported only linear/sRGB color spaces,
with an OpenColorIO-based pipeline.
This introduces two configurable color spaces:
- Input color space for images and movie clips. This space is used to convert
images/movies from the color space the file is saved in to Blender's linear
space (for float images; byte images are not converted internally, only their
input space is stored and used later).
This setting can be found in the image/clip data block settings.
- Display color space, which defines the space a particular display device
works in. This setting can be found in the scene's Color Management panel.
When the render result is displayed on screen, some additional conversions can
happen apart from converting the image to display space.
These conversions, in the order sketched below, are:
- View, which defines a tone curve applied before the display transformation.
Views are different ways to view the image on the same display device;
for example, a view can be used to emulate a film look on an sRGB display.
- Exposure, which adjusts the image exposure before the tone map is applied.
- Gamma, a post-display gamma correction which can be used to match a
particular display's gamma.
- RGB curves, user-defined curves applied before the display transformation,
which can be used for various purposes.
By default all these settings apply only to the render result and do not
affect other images. If a particular image needs to be affected by these
transformations, the "View as Render" setting of its image data block should
be enabled. Movie clips are always affected by all display transformations.
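
For illustration, here is the per-pixel order of the conversions above as a
small C sketch. This is not Blender's implementation: the helper names are
made up and a plain sRGB transfer curve stands in for the actual OCIO
view/display transform; where the description above leaves the relative
ordering ambiguous, the order shown is an assumption.

    #include <math.h>

    /* standard sRGB transfer curve, standing in for the display transform */
    static float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

    static void display_pixel_sketch(float rgb[3], float exposure, float display_gamma)
    {
        int i;
        for (i = 0; i < 3; i++) {
            /* 1) user RGB curves would be applied here, still in scene linear space */
            /* 2) exposure (in stops), applied before the view/tone transform */
            rgb[i] *= powf(2.0f, exposure);
            /* 3) view (tone curve) and conversion to the display space,
             *    shown here as a plain sRGB transform for simplicity */
            rgb[i] = linear_to_srgb(rgb[i]);
            /* 4) post-display gamma correction */
            rgb[i] = powf(rgb[i], 1.0f / display_gamma);
        }
    }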
This commit also introduces a configurable color space for the sequencer to
work in. This setting can be found in the scene's Color Management panel and
should be used when things like grading need to be done in a color space other
than sRGB (e.g. when the Film view is used on an sRGB display, using the VD16
space as the sequencer's internal space makes grading happen in a space close
to the one used for display).
Some technical notes:
- The image buffer's float buffer is now always in linear space, even if it
was created from 16-bit byte images.
- The color space of the byte buffer is stored in the image buffer's
rect_colorspace property.
- The image buffer's profile was removed since it's no longer meaningful.
- OpenGL and GLSL are assumed to always work in sRGB space. It is possible
to support other spaces, but that would be quite a large project and is not
that important.
- The legacy behavior with the Color Management option disabled is emulated by
using the None display. This could have some regressions, but there's no clear
way to avoid them.
- If OpenColorIO is disabled at build time, Blender should behave the same way
as the previous release with color management enabled.
More details can be found on this page (more will be added soon):
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
--
Thanks to Xavier Thomas and Lukas Toene for the initial work on OpenColorIO
integration, and to Brecht van Lommel for further development and
code/use-case review!
dist_squared_to_line_segment_v2() was returning the sqrt'd value in some cases.
Also use ints for edge_inside_circle() rather than shorts, since it was doing int/float/short conversions and we're now using ints for screen variables in more places.
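
For reference, a standalone sketch (not the Blender implementation) of a
squared distance-to-segment computation that never takes a square root, which
is the behavior the fix restores:

    static float dist_squared_to_line_segment_v2_sketch(const float p[2],
                                                        const float l1[2],
                                                        const float l2[2])
    {
        const float seg_x = l2[0] - l1[0], seg_y = l2[1] - l1[1];
        const float to_p_x = p[0] - l1[0], to_p_y = p[1] - l1[1];
        const float seg_len_sq = seg_x * seg_x + seg_y * seg_y;
        float lambda, dx, dy;

        if (seg_len_sq == 0.0f) {
            /* degenerate segment: squared distance to l1 */
            return to_p_x * to_p_x + to_p_y * to_p_y;
        }

        /* project p onto the segment and clamp to the end points */
        lambda = (to_p_x * seg_x + to_p_y * seg_y) / seg_len_sq;
        if      (lambda < 0.0f) lambda = 0.0f;
        else if (lambda > 1.0f) lambda = 1.0f;

        /* squared distance from p to the closest point, no sqrt anywhere */
        dx = l1[0] + lambda * seg_x - p[0];
        dy = l1[1] + lambda * seg_y - p[1];
        return dx * dx + dy * dy;
    }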
when the source and destination vectors were the same pointer, the X value would get overwritten.
now the rip tool uses the best side to grab as in trunk.
[#30454] perspective_matrix not update in real time with bpy.ops.view3d.zoom
This is so you can modify the view settings and then get the view matrix without waiting for a redraw.
- spelling - turns out we had tessellation spelt wrong all over.
- use \directive for doxy (not @directive)
- remove BLI_sparsemap.h - it was from the bmesh merge IIRC, but the entire file was commented out and not used.
Added some missing functions too - they are not used yet but should be there for API completeness.
* CDDM_set_mloop
* CDDM_set_mpoly
* BLI_mempool_count
* made bmesh_structure.h function names more consistent.
* remove unused code in bmesh_structure.c
* removed the 'Edge Flip' operator (missing from bmesh; looked into the trunk feature and don't think it's worth keeping).
* tagged some BMESH_TODO's
* remove 'select' and 'hide' from BMLoop
* remove BMesh.update
* add BMesh.normal_update(skip_hidden=False)
* add BMElemSet.index_update(), eg: bm.verts.index_update()
bmesh api
* BM_mesh_normals_update() now takes skip_hidden as an argument
(previously this was the default behavior); however, that isn't good when
using BMesh modifiers, where you want all normals to be recalculated
(see the example below).
* add bm_iter_itype_htype_map[], to get the iter type from a BMesh
iterator.
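
A minimal example of the changed call (the TRUE/FALSE defines are the ones
already used elsewhere in the code base):

    /* edit-mode drawing: hidden faces can be skipped */
    BM_mesh_normals_update(bm, TRUE);

    /* modifier evaluation: every normal must be recalculated */
    BM_mesh_normals_update(bm, FALSE);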
Store the region size and projection matrix in operator properties, so operators
which depend on such things can easily save these settings in invoke and then
reuse them in the exec callback.
This will fix: #24767: Knife tool last operations panel doesn't cause changes even though F6 pop-up does.
#27129: Problem with knife cuts/midpoint type in quad view
Usage is pretty simple:
- From the operator's template declaration function, call
ED_view3d_operator_properties_viewmat() to register all the needed properties
on the operator.
- From the invoke callback, call ED_view3d_operator_properties_viewmat_set()
to store all the needed settings in the operator properties.
- To access these settings from the exec callback, use
ED_view3d_operator_properties_viewmat_get() (see the sketch below).
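
A minimal sketch of this pattern; the operator name is made up and the exact
argument lists of the _set/_get helpers are assumptions based on the steps
above:

    static int example_exec(bContext *C, wmOperator *op)
    {
        int winx, winy;
        float persmat[4][4];

        /* read the view settings saved at invoke time, no redraw needed */
        ED_view3d_operator_properties_viewmat_get(op, &winx, &winy, persmat);
        /* ... project/unproject coordinates using persmat and winx/winy ... */
        return OPERATOR_FINISHED;
    }

    static int example_invoke(bContext *C, wmOperator *op, wmEvent *event)
    {
        /* store the current region size and perspective matrix as properties */
        ED_view3d_operator_properties_viewmat_set(C, op);
        return example_exec(C, op);
    }

    void VIEW3D_OT_example(wmOperatorType *ot)
    {
        ot->invoke = example_invoke;
        ot->exec = example_exec;

        /* registers the hidden properties used by the helpers above */
        ED_view3d_operator_properties_viewmat(ot);
    }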
Additional change:
added a function apply_project_float() which does the same as project_float()
but accepts explicit values for the viewport width/height and the projection
matrix.
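
A rough sketch of what such a projection helper does; the real signature and
edge-case handling may differ:

    /* project a 3D coordinate to 2D region space, given an explicit
     * perspective matrix and viewport size */
    static void apply_project_float_sketch(float persmat[4][4], int winx, int winy,
                                           const float co[3], float r_co[2])
    {
        float v[4];

        v[0] = co[0]; v[1] = co[1]; v[2] = co[2]; v[3] = 1.0f;
        mul_m4_v4(persmat, v);  /* to clip space */

        if (v[3] > 0.0f) {
            /* perspective divide, then map from [-1, 1] to region pixels */
            r_co[0] = (winx / 2.0f) * (1.0f + v[0] / v[3]);
            r_co[1] = (winy / 2.0f) * (1.0f + v[1] / v[3]);
        }
    }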
=========================
Documentation: http://wiki.blender.org/index.php/User:Psy-Fi/UV_Tools
Major features include:
* 16-bit image support in the viewport
* Subsurf-aware unwrapping
* Smart Stitch (snap/rotate islands, preview, midpoint/endpoint stitching)
* Seams from islands tool (marks seams and sharp, depending on settings)
* UV Sculpting (Grab/Pinch/Rotate)
All tools are complete apart from stitching, which is considered stable but has an extra edge mode under development (it will be in soc-2011-onion-uv-tools).
One function converts bounding boxes to screen space, the other
converts a screen-space rectangle to 3D clipping planes.
Also const-ified some parameters in the ED_view3d API.
===========================
Committing the camera tracking integration GSoC project into trunk.
This commit includes:
- Bundled version of the libmv library (with some changes against the official
repo; will re-sync with the libmv repo a bit later)
- A new ID datatype called MovieClip which is optimized for working with movie
clips (both movie files and image sequences) and doing camera/motion
tracking operations.
- A new editor called the Clip Editor which is currently used for
motion/tracking only, but which can easily be extended to work with masks too.
This editor supports:
* Loading movie files/image sequences
* Building proxies of different sizes for the loaded movie clip; also supports
building undistorted proxies to increase playback speed in
undistorted mode.
* Manual lens distortion calibration using a grid and grease pencil
* Supervised 2D tracking using two different algorithms: KLT and SAD
* A basic algorithm for feature detection
* Camera motion solving and scene orientation
- New constraints to "link" scene objects with solved motions from clip:
* Follow Track (make object follow 2D motion of track with given name
or parent object to reconstructed 3D position of track)
* Camera Solver to make camera moving in the same way as reconstructed camera
This commit does NOT include these changes from the tomato branch:
- New nodes (they'll be committed as a separate patch)
- Automatic image offset guessing for the image input node and image editor
(need to do more tests and gather more feedback)
- Code cleanup in libmv-capi. It's not a critical cleanup, just improves the
readability and understandability of the code. Better to make this change
once Keir has finished his current patch.
More details about this project can be found on this page:
http://wiki.blender.org/index.php/User:Nazg-gul/GSoC-2011
Further development of small features will be done in trunk; bigger and more
experimental features will first be implemented in the tomato branch.