This makes the code closer to the id_override/asset-engine branches, which
introduce a new type of linked data, and hence reserve
ID_IS_LINKED_DATABLOCK for real linked datablocks.
* Alt-G, Alt-R, Alt-S --> These no longer clear delta transforms by default,
making it possible to use them to properly store a "rest" pose.
* Alt-Shift-G, Alt-Shift-R, Alt-Shift-S --> These WILL clear both the normal
transform and the delta, should you need to do so.
The idea is to replace the hard-to-track (id->lib != NULL) 'is linked datablock' check used everywhere in Blender
with a macro doing the same thing. This makes those checks easy to spot in the future and, more importantly,
easy to change (see work done in the asset-engine branch).
Note: readfile.c was not touched, since there the 'id->lib' check usually concerns the pointer itself,
not whether the ID is linked or not. Will have a closer look at it later.
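For reference, a minimal sketch of what the macro amounts to, assuming it sits
next to the ID type declarations (the cast is only there so it works on any ID
subtype):

    #define ID_IS_LINKED_DATABLOCK(_id) (((ID *)(_id))->lib != NULL)

    /* Call sites then become easy to grep for: */
    if (ID_IS_LINKED_DATABLOCK(ob)) {
        /* refuse to edit linked data, etc. */
    }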
Reviewers: campbellbarton, brecht, sergey
Differential Revision: https://developer.blender.org/D2082
This is purely internal sanitizing/cleanup; no change in behavior is expected at all.
This change was also needed because we were getting short on ID flags, and a
future enhancement of the 'user_one' ID behavior requires two new ones.
id->flag remains for persistent data (fakeuser only, so far!), which also gives us
100% backward & forward compatibility.
The new id->tag is used for most flags. Though written to .blend files, its content
is cleared at read time.
Note that the .blend file version was bumped so that we can clear runtime flags from
old .blends, which is important in case we add new persistent flags in the future.
Also, the behavior of each tag (whether it is a status tag, and whether it needs to
be cleared before/after use) is documented in a comment at its declaration.
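A minimal sketch of the resulting split, assuming the LIB_FAKEUSER / LIB_TAG_*
naming described above (the exact bit values here are illustrative):

    /* id->flag: persistent data, stored in .blend files. */
    enum {
        LIB_FAKEUSER = 1 << 9,
    };

    /* id->tag: runtime status; written to .blend files but cleared at read time. */
    enum {
        LIB_TAG_EXTERN   = 1 << 0,   /* status tag: directly linked data */
        LIB_TAG_INDIRECT = 1 << 1,   /* status tag: indirectly linked data */
        LIB_TAG_DOIT     = 1 << 10,  /* generic tag: callers must clear before use */
    };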
Reviewers: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D1683
It is now possible to use "Apply Scale" for Empties. While Empties don't exactly
have any Object data attached to them which could support a "true"
apply scale (i.e. with non-uniform scaling), they do have a drawsize value which
controls how large the empties are drawn (before scaling). This works by taking
the scale factor on the most-scaled axis and combining it with the existing
empty drawsize to maintain the correct dimensions on that axis at least (see
the sketch after the assumptions below).
Core Assumptions:
1) Most scaled empties have uniform scaling anyway (e.g. most empties used for
bone shapes)
2) On balance, losing the non-uniform scaling of empties when applying scale is
less bad than not being able to apply scale at all
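A minimal sketch of the computation, assuming Blender's BLI math helpers
(max_fff, copy_v3_fl) and the Object fields ob->size / ob->empty_drawsize:

    /* Fold the largest per-axis scale factor into the draw size,
     * then reset the object's scale to identity. */
    const float max_scale = max_fff(fabsf(ob->size[0]),
                                    fabsf(ob->size[1]),
                                    fabsf(ob->size[2]));
    ob->empty_drawsize *= max_scale;
    copy_v3_fl(ob->size, 1.0f);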
This means that when you've got a reconstructed scene assigned to a
3D camera (via the Camera Solver constraint) and apply scale on
this camera from the Ctrl-A menu, the scale will be applied to the
reconstructed scene and the camera size reset to identity.
This is a very useful feature for scene orientation: you can
simply scale the camera with S in the viewport to match the bundles
to some points in space, and then easily give the camera
identity scale (which is needed for properly working motion blur
and other things mentioned by Sebastian :) without losing the
scale of the bundles themselves.
The behavior of apply scale for cameras without a clip assigned
does not change at all.
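An illustrative sketch of the idea, assuming the MovieTracking track list with
its bundle_pos field and a uniform camera scale (error handling omitted):

    /* Move the camera's scale onto the reconstructed bundles,
     * then reset the camera object's scale to identity. */
    const float scale = ob->size[0];
    for (MovieTrackingTrack *track = tracking->tracks.first;
         track;
         track = track->next)
    {
        mul_v3_fl(track->bundle_pos, scale);
    }
    copy_v3_fl(ob->size, 1.0f);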
No functional changes are expected, besides improved performance in some cases.
* DAG_scene_sort is now removed and replaced by DAG_relations_tag_update in
most cases. This will clear the dependency graph and only rebuild it right
before it's needed again, when the scene is re-evaluated.
This is done because DAG_scene_sort is slow when called many times from
Python operators. Further, the scene argument is not needed because most
operations can potentially affect more than the current scene.
* DAG_scene_relations_update will now rebuild the dependency graph if it's not
there yet, and DAG_scene_relations_rebuild will force a rebuild for the rare
cases that need it.
* Remove various places where ob->recalc was set manually. This should go
through DAG_id_tag_update() in nearly all cases instead, since this is now
a fast operation. Also removed DAG_ids_flush_update, which goes along with
such manual tagging of ob->recalc (see the sketch after this list).
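A before/after sketch of a typical call site under this change (bmain, scene
and ob taken from the usual operator context):

    /* Before: set recalc flags by hand, then sort and flush immediately. */
    ob->recalc |= OB_RECALC_OB;
    DAG_scene_sort(bmain, scene);
    DAG_ids_flush_update(bmain, 0);

    /* After: just tag; the graph is rebuilt lazily on the next evaluation. */
    DAG_id_tag_update(&ob->id, OB_RECALC_OB);
    DAG_relations_tag_update(bmain);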
This uses the weighted average of polygon centroids based on area.
It works well in most cases but will be slightly wrong when polygons have
many vertices.
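A minimal sketch of this computation, assuming Blender's
BKE_mesh_calc_poly_area / BKE_mesh_calc_poly_center helpers and a Mesh *me:

    /* Area-weighted average of polygon centroids. */
    float center[3], poly_center[3];
    float total_area = 0.0f;
    zero_v3(center);
    for (int i = 0; i < me->totpoly; i++) {
        const MPoly *mp = &me->mpoly[i];
        const float area = BKE_mesh_calc_poly_area(
                mp, &me->mloop[mp->loopstart], me->mvert);
        BKE_mesh_calc_poly_center(
                mp, &me->mloop[mp->loopstart], me->mvert, poly_center);
        madd_v3_v3fl(center, poly_center, area);
        total_area += area;
    }
    if (total_area != 0.0f) {
        mul_v3_fl(center, 1.0f / total_area);
    }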