Loading library data is not well suited to RNA, so this is a native Python API.
This uses:
bpy.data.libraries.load(filepath, link=False, relative=False)
However, the return value needs to be used with Python's context manager. This means the library loading is confined to a block of code, and Python can't leave a half-loaded library state.
eg, load a single scene we know the name of:
with bpy.data.libraries.load(filepath) as (data_from, data_to):
    data_to.scenes = ["Scene"]
eg, load all scenes:
with bpy.data.libraries.load(filepath) as (data_from, data_to):
    data_to.scenes = data_from.scenes
eg, load all objects starting with 'A':
with bpy.data.libraries.load(filepath) as (data_from, data_to):
    data_to.objects = [name for name in data_from.objects if name.startswith("A")]
As you can see, this gives two objects that work like 'bpy.data', but contain lists of data-block names (strings) which can be moved from one into the other (see the example below).
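After the 'with' block exits, the names assigned to data_to are replaced by the loaded data-blocks themselves, so they can be used directly. A minimal sketch (the filepath here is a placeholder):

import bpy

filepath = "/path/to/library.blend"  # placeholder path

with bpy.data.libraries.load(filepath) as (data_from, data_to):
    # request every mesh from the library by name
    data_to.meshes = data_from.meshes

# once the block exits, data_to.meshes holds the loaded mesh
# data-blocks (not strings), ready to use
for mesh in data_to.meshes:
    print(mesh.name)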
Fix for a crash when iterating over a collection which allocates the collection data and frees it when finished.
The ability for BPy_StructRNA to hold a reference to other PyObjects was added to support this.
Previously the API just converted the collection to a list and returned that list's iterator.
The new approach has the advantage of using minimal memory on large collections, where before it would build an array. Though the main reason for this change is to support a bugfix for collections which free their memory when they are done: this currently crashes the Python API, since once the list is built the data is freed, and in some cases (dynamic enums, for example) that data is still used by the list items. A sketch of the idea follows.
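A minimal Python sketch of the idea (illustrative only, the real fix lives in Blender's C code): the iterator keeps a reference to the collection it came from, so anything the collection allocated stays alive until iteration is finished.

class CollectionIterator:
    def __init__(self, collection):
        # holding this reference keeps the collection (and the data
        # it allocated) alive for the iterator's whole lifetime
        self._collection = collection
        self._index = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._index >= len(self._collection):
            raise StopIteration
        item = self._collection[self._index]
        self._index += 1
        return item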
Second method for not having Python crash Blender on invalid access (ifdef'd out ATM, so no functional change).
This uses a weakref list per ID, and invalidates all members of that list when the ID is freed.
The list is not stored in the ID itself but in a hash table, since storing Python data in DNA is not acceptable.
This is more correct than the previous method, but it slows down execution of scripts significantly, since it's always adding to and removing from lists as data is created and freed.
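A minimal Python sketch of the weakref method (illustrative only; id_weakref_map, track and invalidate are hypothetical names, the real implementation is C code in the RNA API):

import weakref

# hash table mapping an ID to the weakrefs of its Python wrappers,
# kept outside the ID itself so no Python data lives in DNA
id_weakref_map = {}

def track(id_key, py_obj):
    # called every time a Python wrapper for this ID is created
    id_weakref_map.setdefault(id_key, []).append(weakref.ref(py_obj))

def invalidate(id_key):
    # called when the ID is freed: mark every live wrapper invalid
    for ref in id_weakref_map.pop(id_key, []):
        obj = ref()
        if obj is not None:
            obj.is_valid = False  # hypothetical invalidation flag

The per-creation and per-free bookkeeping in track() and invalidate() is where the script slowdown comes from.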
This uses Python's GC, so there is no overhead during runtime, but it makes removing IDs slower.
Commented the definition 'USE_PYRNA_INVALIDATE_GC' for now, so no functional change.
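A rough Python analogy of the GC method (illustrative only; the real code walks CPython's GC object list from C): no bookkeeping happens while scripts run, and the full scan is only paid when an ID is removed, which is why removal gets slower.

import gc

def invalidate_by_gc(id_key):
    # scan every object the GC tracks and invalidate any wrapper
    # still pointing at the freed ID; nothing is recorded during
    # normal execution, the whole cost is paid here
    for obj in gc.get_objects():
        if getattr(obj, "id_key", None) is id_key:
            obj.is_valid = False  # hypothetical invalidation flag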
The length check was running sequence checks on every number, which would always fail; small speedup by avoiding this.
Should eventually get this working faster by reading the values once into an allocated array, as sketched below.
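A hedged sketch of that suggested optimization (pure illustration, not the actual RNA code): allocate the output once from the known length, then convert each element as a plain number with no per-item sequence check.

def read_into_array(seq):
    # allocate once, using the length known up front
    arr = [0.0] * len(seq)
    for i, value in enumerate(seq):
        # plain numeric conversion, no sequence check per item
        arr[i] = float(value)
    return arr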
Note: BPY_class_validate() could come in handy later if we need to check classes for properties/functions, but for now there is no point in keeping it in.