Better to use higher-level code from common ID management when possible.
This helps de-duplicate logic, and reduces outside usage of the more
'dangerous' functions.
Note that we could get rid of many of those `BKE_<id_type>_add`
functions now, but on the other hand several of those take extra
parameters and perform additional actions, so I think we can keep them
all for now as 'non-standard ID-specific creation functions'.
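For illustration, a rough sketch of the 'higher-level' path (header names and exact signatures may differ between Blender versions; `Camera`/`ID_CA` is just picked as an example ID type):

```c
/* Sketch only: header locations and signatures are approximate, error
 * handling is omitted. */
#include "BKE_lib_id.h"
#include "DNA_camera_types.h"

static Camera *add_camera_via_generic_id_management(Main *bmain)
{
  /* The generic helper allocates the ID, inserts it into Main and handles
   * unique naming, instead of each BKE_<id_type>_add rolling its own. */
  Camera *camera = BKE_id_new(bmain, ID_CA, "MyCamera");
  return camera;
}
```

The remaining `BKE_<id_type>_add` functions would then only do their extra, type-specific setup on top of such a generic call.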
This adds color tagging to collections. There are 8 color
options which are themeable in the user preferences, with an additional
option for no color tag by default.
This adds a new filled collection icon and 8 colored variants of the
icon that can be themed in the user preferences.
In this commit the only interface for setting the color tags is through
Python, and there is nowhere in the interface where the collections are
shown colored. Setting and viewing the color tags from the outliner will
follow.
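For reference, a minimal sketch of what the stored tag can look like on the DNA side (the actual enum and field names may differ slightly from what ends up in `DNA_collection_types.h`):

```c
/* Sketch of the color tag storage: eight theme-able colors plus a
 * 'no color' default. Actual names may differ. */
typedef enum CollectionColorTag {
  COLLECTION_COLOR_NONE = -1,
  COLLECTION_COLOR_01 = 0,
  COLLECTION_COLOR_02,
  COLLECTION_COLOR_03,
  COLLECTION_COLOR_04,
  COLLECTION_COLOR_05,
  COLLECTION_COLOR_06,
  COLLECTION_COLOR_07,
  COLLECTION_COLOR_08,

  COLLECTION_COLOR_TOT,
} CollectionColorTag;
```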
Manifest Task: https://developer.blender.org/T77777
Differential Revision: https://developer.blender.org/D8622
This is part of T76372.
It adds the `blend_write`, `blend_read_data`, `blend_read_lib`
and `blend_read_expand` callbacks, which correspond to the various
steps when reading and writing .blend files.
Having these callbacks allows us to decentralize the blenloader
code a lot more. This has the effect that code related to any
specific ID type is less scattered.
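As a rough sketch of how an ID type wires these up (the real `IDTypeInfo` has many more fields, and the `collection_blend_*` callback names below are only illustrative):

```c
/* Abbreviated sketch; most IDTypeInfo fields are omitted, and the
 * collection_blend_* callbacks are illustrative names. */
IDTypeInfo IDType_ID_GR = {
    .id_code = ID_GR,
    .name = "Collection",
    /* ... */

    /* Write the ID to a .blend file. */
    .blend_write = collection_blend_write,
    /* Read the ID's own data back from a .blend file. */
    .blend_read_data = collection_blend_read_data,
    /* Fix up pointers to other IDs once everything has been read. */
    .blend_read_lib = collection_blend_read_lib,
    /* Expand dependencies when linking/appending. */
    .blend_read_expand = collection_blend_read_expand,
};
```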
Reviewers: mont29
Differential Revision: https://developer.blender.org/D8670
Code dealing with object copies of the master collection was buggy in case
one of the new object copies got a name sorting before the original
object's, leading to the new copy being inserted before the original one
in the lists.
Maniphest Tasks: T79931
Differential Revision: https://developer.blender.org/D8656
Old and new collections are the same data in the Master collection case
here, so we cannot consider the `gobject` listbase of `collection_old`
as always immutable.
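A hedged sketch of the kind of defensive iteration this implies (not the literal fix): snapshot the original members first, so that wherever the new, name-sorted copies get inserted cannot affect the walk over the originals.

```c
/* Sketch only: snapshot the original members of collection_old->gobject
 * before duplicating, since copies may be inserted back into that very
 * list (sorted by name, possibly before the originals). */
ListBase originals = {NULL, NULL};
LISTBASE_FOREACH (CollectionObject *, cob, &collection_old->gobject) {
  BLI_addtail(&originals, BLI_genericNodeN(cob->ob));
}
LISTBASE_FOREACH (LinkData *, link, &originals) {
  Object *ob_old = link->data;
  /* ... duplicate ob_old and add the copy to the new collection ... */
}
BLI_freelistN(&originals);
```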
That project cannot be opened correctly anyway, it has recursive
collections instantiating themselves...
But at least now we have a check at startup to detect and 'fix' those
nasty cycles in collections.
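A hedged sketch of such a cycle check (the real code in BKE collection handling differs in its details and also has to 'fix' the offending links, not just detect them):

```c
/* Sketch only: would keeping `child` under `collection` create a cycle,
 * i.e. is `collection` reachable from `child`? */
static bool collection_creates_cycle(Collection *collection, Collection *child)
{
  if (child == collection) {
    return true;
  }
  LISTBASE_FOREACH (CollectionChild *, child_iter, &child->children) {
    if (collection_creates_cycle(collection, child_iter->collection)) {
      return true;
    }
  }
  return false;
}
```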
Note that this behavior is enforced on user level for now, but on code
side it is controlled with a flag, which should make it easy to refine
that behavior if needed.
The only exception is when we duplicate a linked ID directly (then we
assume the user wants a local deep copy of that linked data, so we always
also duplicate linked sub-data-blocks).
Note that this commit also slightly refactors the handling of actions of
animdata, by simplifying `BKE_animdata_copy_id_action()` and adding an
explicit new `BKE_animdata_duplicate_id_action()` to be used during ID
duplication (deep copy).
This also allows us to get rid of the special case for liboverrides.
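A hedged sketch of the resulting split (signatures approximate, derived only from the names above):

```c
/* Sketch only, signatures approximate. */

/* Plain copy: give `id` its own copy of the action(s) it uses. */
void BKE_animdata_copy_id_action(Main *bmain, ID *id);

/* Duplicate (deep copy) variant: used during ID duplication, honoring the
 * duplicate flags coming from the user preferences (i.e. whether actions
 * should be duplicated at all). */
void BKE_animdata_duplicate_id_action(Main *bmain, ID *id, const uint duplicate_flags);
```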
Now that we have a uniform, consistent behavior in all our ID duplicate
functions, we can easily factorize it greatly. Code gets cleaner,
smaller, and less error-prone.
Note that ultimately, this duplicate/deep copy behavior could be added
as a callback of IDTypeInfo.
We could also rethink the duplicate flags (some data, even some obdata,
like Lattice, are not covered currently).
And so on. But at least code should now be much more easily maintainable
and extendable.
Previously, object (and sub-data) actions would be controlled by the
user preferences flag, collection actions would never be duplicated, and
scene actions were always duplicated...
Now they all follow the user preferences settings.
The main change from the user side, besides that all pointers should now
be properly remapped to new IDs, is that linked objects are no longer
preserved when doing a full copy of scenes.
Will open a task to check whether we actually still want that behavior
(and re-code it in a more correct way then).
This is the main part of the work done here; it aims at uniformizing and
sanitizing the 'deep copy' process for supported IDs (currently scenes,
collections and objects).
Note that there will be more follow-up commits after this one, but this
should be the riskiest and most far-reaching one.
Using the enum type itself in implementations, and uint in headers (as
using enum types in headers is a pain when the enum is not defined and
used in a single header file...).
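A small illustration of the pattern (all names are made up):

```c
/* some_header.h - plain uint, so the header does not need the enum
 * definition to be visible. */
void BKE_thing_process(struct ID *id, const uint flags);

/* some_file.c - the enum type itself is used internally, for readability
 * and better compiler checks. */
typedef enum eThingFlag {
  THING_FLAG_FIRST = 1 << 0,
  THING_FLAG_SECOND = 1 << 1,
} eThingFlag;

static void thing_process_impl(struct ID *id, const eThingFlag flags)
{
  /* ... */
}

void BKE_thing_process(struct ID *id, const uint flags)
{
  thing_process_impl(id, (eThingFlag)flags);
}
```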
It makes no sense to deep-copy a collection and not also copy its
child collections... The parameter was not used anymore anyway.
So now this duplicate function will always at least deep-duplicate all
of its child collections, recursively.
We want to get rid of those for all ID types ultimately, but that one
was only used in one place, being the only one calling
`BKE_collection_duplicate` without hierarchical duplicate and parent
collection pointer, effectively using the full power of the complex deep
duplication code for a mere `BKE_id_copy` call...
This will allow for further cleanup in duplicate code.
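A hedged sketch of the simplification in that single caller (context simplified):

```c
/* Before (sketch): the full deep-duplication machinery, for what is
 * effectively a shallow copy. */
/* collection_new = BKE_collection_duplicate(bmain, NULL, collection_old, ...); */

/* After (sketch): a plain generic ID copy is enough here. */
Collection *collection_new = (Collection *)BKE_id_copy(bmain, &collection_old->id);
```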
Now that `BKE_main_collections_parent_relations_rebuild()` is called
from readfile code, we need to make it resilient to potential NULL
master collection pointer in scenes.
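A hedged sketch of the kind of guard added (simplified; not the literal code):

```c
/* Sketch only: scenes coming straight from readfile may not have their
 * master collection pointer set yet, so skip them instead of crashing. */
for (Scene *scene = bmain->scenes.first; scene != NULL; scene = scene->id.next) {
  if (scene->master_collection == NULL) {
    continue;
  }
  /* ... rebuild parent relations starting from scene->master_collection ... */
}
```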
Override collections do not support having objects added to them directly;
add proper checks in the BKE code adding objects to collections.
Also try to find a suitable collection in the parents in that case.
Note that this is enforced at the 'public' API level; internal code can
still bypass those checks if needed. Exposing this possibility in the
public API should not be needed.
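A hedged sketch of the kind of check added at that 'public' API level (the helper name is hypothetical):

```c
/* Sketch only, helper name is hypothetical: refuse to add objects directly
 * to an override collection, and fall back to a suitable parent collection
 * we are actually allowed to modify. */
static Collection *collection_editable_or_parent(Collection *collection)
{
  if (!ID_IS_OVERRIDE_LIBRARY(&collection->id)) {
    return collection;
  }
  /* ... walk collection->parents looking for a non-override, editable
   *     collection; return NULL if none is found ... */
  return NULL;
}
```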
Those are then assumed already duplicated, and not touched. However, all
of their objects and sub-collections can still be processed as with any
other regular collection...
This commit enables basic copy of overrides on generic ID level, as well
as from (deep) copy operators for objects and collections.
So e.g. if your linked, overridden character is in a collection, you can
now (from the outliner) Duplicate that override collection to get a new
overriding copy of the character.
We still need operators (new ones, or modified existing ones) to handle
that from e.g. the 3D View.
Note that deep copy code for objects/collections (and incidentally
animdata) had to be modified to avoid duplicating/making local the IDs
that should remain linked and are used by the overrides.
When using the "Make Library Override" operator on instance collections, keep
the overriden collection in the parent collection of the instance empty.
Previoulsy the collection would be added to the scene collection, which was
confusing and not what users expected. It was placed there for a reason after
all.
Part of T76555.
Reviewed by: Andy Goralczyk, Bastien Montagne.
Differential Revision: https://developer.blender.org/D7626
Note this only changes cases where the variable was declared inside
the for loop. To handle it outside as well is a different challenge.
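Assuming this refers to converting manual ListBase loops to the `LISTBASE_FOREACH` macro (which declares its own loop variable), the qualifying case looks roughly like this:

```c
/* Before: the loop variable is declared inside the for statement, so the
 * conversion is mechanical. */
for (Base *base = view_layer->object_bases.first; base; base = base->next) {
  /* ... */
}

/* After. */
LISTBASE_FOREACH (Base *, base, &view_layer->object_bases) {
  /* ... */
}

/* Not handled here: a variable declared (and possibly reused) outside the
 * loop, which is the different challenge mentioned above. */
```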
Differential Revision: https://developer.blender.org/D7320
'Private' can be a rather confusing term, especially when considering
its meaning in programming languages.
So now root node trees and master collections are 'embedded' IDs
instead.
Instead of using anonymous boolean flags, this also allows us to keep the
same behavior in all cases, without needing special handling from calling
code for our beloved oddball object proxies...
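A hedged sketch of how generic code can recognize those embedded IDs through a flag rather than per-case boolean parameters (the flag name follows the rename described above; the usage is simplified):

```c
/* Sketch only: embedded IDs (root node trees, master collections) are
 * owned by another ID and not stored in Main, so generic code checks the
 * flag instead of taking special boolean parameters. */
if (id->flag & LIB_EMBEDDED_DATA) {
  /* Handle through the owner ID, not through Main. */
}
```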
Once again, I am not exactly sure why that was working before, and not
anymore - but in any case, doing that kind of update here is not only
useless (since we have to do it at the end of the whole
collections/objects duplication and remapping anyway), it is also rather
dangerous, as collections are currently in rather invalid states at that
point of the code...
Note that in an ideal world, `BKE_main_collection_sync()` & co would be
lazy (setting only a flag, then code actually needing this to be valid
again should call some sort of `BKE_main_collection_sync_ensure()`).
Then we would not have to worry about such things (and we'd get nice
performance improvements in some cases, also in main remapping code,
etc.).
Food for some refactoring, some day...
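A hedged sketch of that hypothetical lazy variant (none of this exists as-is; the dirty flag and both helper names are made up, only `BKE_main_collection_sync()` is real):

```c
/* Hypothetical sketch of lazy syncing: only tag on change... */
void BKE_main_collection_sync_tag(Main *bmain) /* hypothetical */
{
  bmain->is_collection_sync_dirty = true; /* hypothetical field */
}

/* ...and resync on demand, from code that actually needs valid data. */
void BKE_main_collection_sync_ensure(Main *bmain) /* hypothetical */
{
  if (bmain->is_collection_sync_dirty) {
    BKE_main_collection_sync(bmain);
    bmain->is_collection_sync_dirty = false;
  }
}
```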
That would lead to crashes and other issues. The solution is not elegant
though, it involves resyncing all the collections again.
Differential Revision: https://developer.blender.org/D6043
Side-reported in T70505.
Code did not ensure the deleted object was removed from the RBW constraints
collection, leading to an invalid state (object in the constraints
collection but without relevant constraints data).
Also fixed another issue - code deleting RBW objects would try to remove
any constraint object using it as a target, in a very bad and broken way,
since you cannot iterate over the objects of a collection while removing
some... Now we instead just NULLify the relevant pointers... I hope this
works, otherwise we'll have to take a different approach.
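A hedged sketch of the NULLifying approach (not the literal code; context variables are approximate):

```c
/* Sketch only: instead of removing constraint objects from the collection
 * while iterating over it, just clear the pointers that referenced the
 * deleted object. */
LISTBASE_FOREACH (CollectionObject *, cob, &rbw->constraints->gobject) {
  RigidBodyCon *rbc = cob->ob->rigidbody_constraint;
  if (rbc != NULL) {
    if (rbc->ob1 == ob_deleted) {
      rbc->ob1 = NULL;
    }
    if (rbc->ob2 == ob_deleted) {
      rbc->ob2 = NULL;
    }
  }
}
```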
Needless to stress again how weak the whole RBW code is in general, and
regarding the same object being used by RBW in more than one scene in
particular, which is a known broken situation anyway.
Now that we 'properly' support private ID data in lib management, there
is no reason anymore to have that custom func, which was badly named and
by-passed the whole generic ID management code.
Same issue here as with root nodetrees, those are private ID data owned
by another ID, and not in Main DB. This requires special handling.
There are still quite a few things to do here, like getting rid of the
special code for the master collection (regular ID copying should handle
that just as it already does for root nodetrees), cleanup in ID copying
code, etc.