* The symmetrize operation makes the input mesh elements symmetrical,
but unlike mirroring it only copies in one direction. The edges and
faces that cross the plane of symmetry are split as needed to
enforce symmetry.
* The symmetrize operator can be controlled with the "direction"
property, which combines the choice of symmetry plane with the copy
direction (positive-to-negative or negative-to-positive). The enum for
this is BMO_SymmDirection.
* Added menu items in the top-level Mesh menu and the WKEY specials
menu.
* Documentation:
http://wiki.blender.org/index.php/User:Nicholasbishop/Symmetrize
* Reviewed by Brecht:
https://codereview.appspot.com/6618059
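For scripted use, a minimal sketch (assuming the operator is exposed to Python
as bpy.ops.mesh.symmetrize and that the Python-level direction enum uses values
such as 'NEGATIVE_X'):

    # Hedged sketch: symmetrize the whole active mesh from -X onto +X.
    # Assumes bpy.ops.mesh.symmetrize and the 'NEGATIVE_X' enum value.
    import bpy

    obj = bpy.context.active_object
    if obj and obj.type == 'MESH':
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.symmetrize(direction='NEGATIVE_X')
        bpy.ops.object.mode_set(mode='OBJECT')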
This operator used to be called "Jump to Frame". It basically takes the midpoint
(frame number and/or value) of selected keyframes, and positions the current
frame (or 2D cursor in the Graph Editor) at this point.
The hotkey for this is now Ctrl-G (as it's similar to a "Goto Frame"
feature). It is also now in the Key menu instead of in the relatively obscure
View menu, even though it doesn't actually result in any keyframe edits taking
place.
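A rough Python equivalent of what the operator does (a conceptual sketch only,
not the actual implementation; it averages the frames of the selected keyframes
on the active object's action):

    # Conceptual sketch: average the frames of selected keyframes on the
    # active object's action and move the current frame there.
    import bpy

    obj = bpy.context.active_object
    act = obj.animation_data.action if obj and obj.animation_data else None
    frames = [kp.co.x
              for fcu in act.fcurves
              for kp in fcu.keyframe_points
              if kp.select_control_point] if act else []
    if frames:
        bpy.context.scene.frame_current = int(round(sum(frames) / len(frames)))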
(Also fixed a typo/grammar issue with the Remove Bone Group operator.)
The Ctrl-G menu for managing Bone Groups has always been a bit clunky,
especially when compared to the Hooks menu (Ctrl-H). This was because the old
menu was more data-orientated (Bone Group Management, Membership to these
groups) whereas this new arrangement should be a bit more task-orientated (Add
to new group, Add to active group, Remove from all groups, Remove active group).
Alt+V will fill the area in between the ripped faces - a bit like extrude.
Faces are flipped to match the existing geometry, and customdata (UVs, vcols, etc.) is copied from the surrounding geometry too.
This operator (Ctrl-F) allows you to flip the lattice coordinates without
inverting the normals of meshes deformed by the lattice (or the lattice's
deformation space for that matter). Unlike the traditional mirror tool, this
tool is aware of the fact that the vertex order for lattice control points
matters, and that simply mirroring the coordinates will only cause the lattice
to have an inverted deform along a particular axis (i.e. it will be scaling by a
negative scaling factor along that axis).
The problems (as I discovered the other day) with having such an inverted
deformation space are that:
- the normals of meshes/objects inside it will be incorrect/flipped (and will
disappear in GLSL shading mode, for instance)
- transforming objects deformed by the lattices will become really tricky and
counter-intuitive (e.g. rotate in opposite direction by asymmetric amounts to
get desired result)
- it is not always immediately obvious that problems have occurred
Specific use cases this operator is meant to solve:
1) You've created a lattice-based deformer for one cartoonish eye. Now you want
to make the second eye, but would like to avoid spending time crafting that
same basic shape again, just mirrored.
2) You've got an even more finely crafted setup for stretchy-rigs, and now need
to apply it to other parts of the rig.
Notes:
* I've implemented a separate operator for this vs extending/patching the
mirror transform tool, as it's easier to implement this way, but also because
there are still some cases where the old mirroring seems valid (i.e. you
explicitly want this sort of distortion effect).
* Currently this doesn't take selections into account, as it doesn't seem useful
to do so.
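A hedged scripting example (assuming the operator is exposed as
bpy.ops.lattice.flip with a 'U'/'V'/'W' axis enum):

    # Sketch: flip the active lattice along its U (X) axis without inverting
    # its deformation space. Assumes the lattice.flip operator id and the
    # 'U'/'V'/'W' axis values.
    import bpy

    lat = bpy.context.active_object
    if lat and lat.type == 'LATTICE':
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.lattice.flip(axis='U')
        bpy.ops.object.mode_set(mode='OBJECT')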
The Limit Distance constraint is now allowed to use the owner/target space
settings. Previously this wasn't exposed, as it didn't seem sensible/useful.
However, it can be useful when dealing with dependencies between bones when
the armature gets scaled.
Usage notes:
- It is advised to select the same space for both owner and target, otherwise
the comparisons are meaningless
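For example, a sketch of setting this up from Python (bone and target names
are placeholders):

    # Sketch: Limit Distance on a pose bone with both spaces set to local,
    # so armature scaling does not skew the measured distance.
    # "hand.L" and "target.L" are hypothetical bone names.
    import bpy

    arm = bpy.context.active_object        # an armature object
    pbone = arm.pose.bones["hand.L"]
    con = pbone.constraints.new('LIMIT_DISTANCE')
    con.target = arm
    con.subtarget = "target.L"
    con.owner_space = 'LOCAL'
    con.target_space = 'LOCAL'             # keep both spaces consistent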
Documentation & Test blend files:
------------------
http://wiki.blender.org/index.php/User:MiikaH/GSoC-2012-Smoke-Simulator-Improvements
Credits:
------------------
Miika Hamalainen (MiikaH): Student / Main programmer
Daniel Genrich (Genscher): Mentor / Programmer of merged patches from Smoke2 branch
Google: For Google Summer of Code 2012
Added a filtering option for the animation editors to show only F-Curves/drivers
with errors.
This filtering option is useful when rigging and you want to figure out if any
of your drivers are not functioning, and/or which one(s) are not, so that you
can go through fixing them. It saves you from having to check on each one
individually, or going into the console to try to infer which ones are not
working.
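A hedged sketch of toggling this from Python, assuming the filter is exposed as
the dope sheet's show_only_errors flag:

    # Sketch: enable the errors-only filter in any open Graph Editor.
    # Assumes the RNA property DopeSheet.show_only_errors.
    import bpy

    for area in bpy.context.screen.areas:
        if area.type == 'GRAPH_EDITOR':
            area.spaces.active.dopesheet.show_only_errors = True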
Patch from Kesten Broughton (kestion).
Usage: In Weight Paint mode, select the mesh whose weights should be culled. Click on the "Limit Weights" button. A sub-panel, "Limit Number of Vertex Weights", will appear with a slider field "Limit" which you can set to the appropriate level. The default level is 4, and the operator is executed upon pressing "Limit Weights", so you will need to undo if your max bone limit is above 4. The checkbox "All Deform Weights" will consider all vertex weights, not just bone deform weights.
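The same thing from Python, assuming the button maps to the
vertex_group_limit_total operator (extra options such as the deform-weights
toggle are left at their defaults here):

    # Sketch: cap every vertex of the active mesh at 4 vertex-group weights.
    import bpy

    bpy.ops.object.vertex_group_limit_total(limit=4)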
By default, this is enabled, so that newbie users who are most likely to be
caught short by this will get the benefits of this option, while seasoned
animators are likely to know where to go to turn things off (i.e. the scratch-
an-itch urge is quite a powerful motivating force...)
Added a convenience operator to the Follow Path constraint which adds an
F-Curve for the path (or the constraint's "fixed position" value if no path
is assigned), with options for setting the start frame and length of motion.
This makes it easier for users to quickly set up a follow-path animation,
e.g. a camera flying around a set over a certain number of frames.
A key advantage of this is that it takes care of the underlying math required
for setting up the generator curve accordingly (I've got some plans for making
this a bit friendlier to use later). Now, animating the paths is a one-click
operation, with the start and length properties able to be controlled using the
operator properties.
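A hedged Python sketch (assuming the operator id
constraint.followpath_path_animate with frame_start/length properties;
"CameraPath" is a placeholder curve name):

    # Sketch: add a Follow Path constraint to the active object and animate
    # the path with the new helper.
    import bpy

    cam = bpy.context.active_object
    con = cam.constraints.new('FOLLOW_PATH')
    con.target = bpy.data.objects["CameraPath"]
    bpy.ops.constraint.followpath_path_animate(constraint=con.name,
                                               frame_start=1, length=100)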
* Get rid of ED_object_add_generic_invoke() and all invoke callbacks using it; it was doing nothing that exec() callbacks would not do. In fact, its only action (setting part of the common add ops' properties, like loc, layers, etc.) was also needed by the direct exec call, so it was done twice when using invoke()!
* Replace custom invoke code for metaballs by WM_menu_invoke helper (as already used by lamps).
* Add a new OBJECT_OT_empty_add op, to allow direct addition of empties of a given drawtype (see the sketch after this list).
* And some general code cleanup (like trailing spaces, empty lines, ...).
Did quite a bunch of tests/verifications, but obviously could not tackle all possible scenarios... Anyway, if any, bugs should arise quite quickly (but I don't expect any! :p).
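Example of the new empties op from Python (the draw type and location here are
arbitrary):

    # Add an empty with a specific draw type directly via OBJECT_OT_empty_add.
    import bpy

    bpy.ops.object.empty_add(type='PLAIN_AXES', location=(0.0, 0.0, 1.0))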
On second thought, perhaps it is more convenient/natural if this were shown in
both places, given that many people may only find the motion paths options
through the UI now.
There were strange context changes in the Add menu... Now everything uses the EXEC_REGION_WIN one (no need to invoke here, and metaballs had a strange specific invoke func...). This fixes the problem when using the Add menu from a 3D view. Obviously, it still doesn't work when used from the Info window's header, but that can't be helped for now (and never worked for any kind of object).
Anyway, imho all this "add object" code could use some review/cleanup, both on py menu and C ops side, but this is obviously postponed to after 2.64!
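For reference, the py-menu side of such a fix looks roughly like this
(INFO_MT_my_add is a made-up example menu; the real Add menu lives in the
bundled UI scripts):

    # Sketch: a menu whose operator buttons execute in the window region
    # instead of invoking, mirroring the EXEC_REGION_WIN change above.
    import bpy

    class INFO_MT_my_add(bpy.types.Menu):
        bl_label = "Add (example)"

        def draw(self, context):
            layout = self.layout
            layout.operator_context = 'EXEC_REGION_WIN'
            layout.operator("mesh.primitive_cube_add", text="Cube")

    bpy.utils.register_class(INFO_MT_my_add)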
Settings that were previously only available from the special NDOF menu are now
exposed in the user preferences as well. Also made it do a proper version patch
for conversion from old user preferences, and changed the turntable choice from
a boolean to an enum for consistency.
The main problem was in the py UI code (it has to set the context to INVOKE_REGION_PREVIEW for the shortcut lookup to succeed).
Also moved the N properties item into SequencerCommon keymap, and removed the View Selected menu entry from preview-only mode View menu (thx to Ejner Fergo for pointing this out).
Updating the data was only being done on the active object, but sticky was being calculated for the entire selection.
Split this into 2 operators: one that works on the selection and another that operates on the active object - so we can have a button in the mesh panels that calculates sticky.
Also note that there was no way to calculate sticky from the UI - perhaps this feature should die a quiet death?
Anyway, it works better than it used to for now.
Currently this can remove sticky/mask/skin vertex layers.
Regarding the skin layer: while adding and removing the modifier normally works fine, it's not 100% reliable, since the mesh may be linked into another scene, or be a linked duplicate with the object holding the modifier deleted.
Replace the old color pipeline, which supported only linear and sRGB color
spaces, with an OpenColorIO-based pipeline.
This introduces two configurable color spaces:
- Input color space for images and movie clips. This space is used to convert
images/movies from the color space in which the file is saved to Blender's
linear space (for float images; byte images are not converted internally, only
the input space is stored for them and used later).
This setting can be found in the image/clip datablock settings.
- Display color space, which defines the space in which a particular display
works. This setting can be found in the scene's Color Management panel.
When the render result is displayed on the screen, apart from converting the
image to the display space, some additional conversions can happen.
These conversions are:
- View, which defines a tone curve applied before the display transformation.
These are different ways to view the image on the same display device.
For example, it can be used to emulate a film view on an sRGB display.
- Exposure affects the image exposure before the tone map is applied.
- Gamma is a post-display gamma correction, which can be used to match a
particular display's gamma.
- RGB curves are user-defined curves applied before the display transformation,
and can be used for different purposes.
By default, all these settings apply only to the render result and do not
affect other images. If a particular image needs to be affected by these
transformations, the "View as Render" setting of the image datablock should be
enabled. Movie clips are always affected by all display transformations.
This commit also introduces a configurable color space in which the sequencer
works. This setting can be found in the scene's Color Management panel, and it
should be used if things like grading need to be done in a color space
different from sRGB (e.g. when the Film view is used on an sRGB display, using
the VD16 space as the sequencer's internal space makes grading happen in a
space which is close to the space used for display).
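For reference, a hedged sketch of where these settings live in the Python API
(RNA property names assumed from the current API; space/view names such as
'sRGB', 'Film' and 'VD16' depend on the OpenColorIO configuration in use):

    # Sketch only: property names assumed, values depend on the OCIO config.
    import bpy

    scene = bpy.context.scene
    scene.display_settings.display_device = 'sRGB'     # display color space
    scene.view_settings.view_transform = 'Film'        # "View" tone curve
    scene.view_settings.exposure = 0.5                 # pre-tonemap exposure
    scene.view_settings.gamma = 1.0                    # post-display gamma

    img = bpy.data.images.new("cm_example", 32, 32)
    img.colorspace_settings.name = 'sRGB'              # input color space
    img.use_view_as_render = True                      # apply display transform

    scene.sequencer_colorspace_settings.name = 'VD16'  # sequencer working space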
Some technical notes:
- The image buffer's float buffer is now always in linear space, even if it
was created from 16-bit byte images.
- The color space of the byte buffer is stored in the image buffer's
rect_colorspace property.
- The image buffer's profile was removed since it's no longer meaningful.
- OpenGL and GLSL are supposed to always work in sRGB space. It is possible
to support other spaces, but that's quite a large project which isn't so
important right now.
- Having the legacy Color Management option disabled is emulated by using the
None display. This could have some regressions, but there's no clear way to
avoid them.
- If OpenColorIO is disabled at build time, Blender should behave in the same
way as the previous release with color management enabled.
More details can be found on this page (more will be added soon):
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
--
Thanks to Xavier Thomas and Lukas Toene for initial work on OpenColorIO
integration, and to Brecht van Lommel for some further development and
code/use-case review!