Make sure we don't reallocate arrays in the pointcache when not needed;
the size of a memory allocation can be slightly bigger than the
requested size.
Also, use a consistent check for shared caches in the copy and free
functions.
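A minimal sketch of the idea, with hypothetical names (not the actual
pointcache code): track the reserved capacity next to the pointer and
only reallocate when the requested element count exceeds it, since the
allocator may hand back a slightly larger block than was asked for.

    #include <stdlib.h>

    /* Hypothetical example: keep the allocated capacity so we can skip
     * realloc when the existing block is already big enough. */
    typedef struct PointArray {
      float *data;
      size_t alloc_elems; /* capacity actually reserved */
      size_t num_elems;   /* elements in use */
    } PointArray;

    static int point_array_resize(PointArray *pa, size_t num_elems)
    {
      if (num_elems > pa->alloc_elems) {
        float *data = realloc(pa->data, num_elems * sizeof(float));
        if (data == NULL) {
          return 0;
        }
        pa->data = data;
        pa->alloc_elems = num_elems;
      }
      /* Shrinking or equal size: keep the existing allocation. */
      pa->num_elems = num_elems;
      return 1;
    }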
The convention was not to, but after discussion on 918941483f we agree
it's best to change the convention.
Names now mostly follow RNA.
Some exceptions:
- Use 'nodetrees' instead of 'nodegroups'
since the struct is called NodeTree.
- Use 'gpencils' instead of 'grease_pencil'
since 'gpencil' is a common abbreviation in the C code.
Other exceptions:
- Leave 'wm' as it's a list of one.
- Leave 'ipo' as is for versioning.
We already have different storage for cddata of verts, edges etc.;
'simply' do the same for the mask flags we use all around Blender code
to request some data, or limit some operation to some layers, etc.
The reason we need this is that some cddata types (like Normals) are
actually shared between verts/polys/loops, and we don't want to
generate clnors every time we request vnors!
As a side note, this also provides the final fix for T59338, which was
the trigger for this patch (the need to request computed loop normals
for a mesh other than the evaluated one).
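A rough sketch of the per-element masks this introduces; the names here
are illustrative, not necessarily the exact struct from the patch:

    #include <stdint.h>

    /* Illustrative: instead of one combined CustomData mask, keep a
     * separate request mask per mesh element type, so e.g. asking for
     * vertex normals does not also force computing custom loop normals. */
    typedef struct MeshDataMasks {
      uint64_t vert_mask;
      uint64_t edge_mask;
      uint64_t face_mask;
      uint64_t loop_mask;
      uint64_t poly_mask;
    } MeshDataMasks;

    /* Merging two requests is then a per-field OR. */
    static void mesh_data_masks_merge(MeshDataMasks *dst,
                                      const MeshDataMasks *src)
    {
      dst->vert_mask |= src->vert_mask;
      dst->edge_mask |= src->edge_mask;
      dst->face_mask |= src->face_mask;
      dst->loop_mask |= src->loop_mask;
      dst->poly_mask |= src->poly_mask;
    }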
Reviewers: brecht, campbellbarton, sergey
Differential Revision: https://developer.blender.org/D4407
Do not force indirectly linked collections to be linked into the
current scene; that is usually not desired. Note that the user can
always add this link manually if they want.
All this 'implicit instantiation' post-linking process is rather hairy
to get correct; hopefully this time it doesn't break something else...
Use 'size' instead of 'len' to represent the size of data in bytes;
'len' is used for the result of 'strlen' or the length of an array in
some parts of 'makesdna.c' & 'dna_genfile.c'.
Also clarify comments and some variable names; no functional changes.
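A small hypothetical snippet to illustrate the convention (not code
from the patch):

    #include <string.h>

    /* 'len' counts elements (characters, array items),
     * 'size' counts bytes. */
    static void convention_example(const char *name,
                                   const float *values, int values_len)
    {
      const int name_len = (int)strlen(name);      /* element count */
      const int name_size = name_len + 1;          /* bytes, incl. '\0' */
      const int values_size = values_len * (int)sizeof(float); /* bytes */
      (void)values;
      (void)name_size;
      (void)values_size;
    }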
Now always refresh when the material changes. The depsgraph tag is
moved out of the refresh function, since that function gets called on
depsgraph update, which should not trigger a second depsgraph update.
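A minimal sketch of that split, with hypothetical function names: the
refresh itself never tags the depsgraph, because it is also run as part
of a depsgraph update; only the user-edit code path tags.

    struct Material;
    struct Depsgraph;

    /* Rebuild preview data only; no depsgraph tagging here, since this
     * is also called from the depsgraph update itself. */
    void material_preview_refresh(struct Material *ma)
    {
      (void)ma;
    }

    /* User edit: tag for update first, then refresh. The resulting
     * depsgraph update calls material_preview_refresh() again without
     * re-tagging, so no second update is triggered. */
    void material_changed_by_user(struct Material *ma,
                                  struct Depsgraph *depsgraph)
    {
      (void)depsgraph; /* tag 'ma' for depsgraph update here */
      material_preview_refresh(ma);
    }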
Delay loading all DATA sections of the blend file until they're needed.
Loading all data-blocks caused high peak memory usage, especially with
libraries, since a lot of data may exist that isn't used directly.
In one test (Spring project: 10_010_A.anim.blend), memory usage peaked
at ~12.5 GB, dropping back to ~2.5 GB once loaded. With this change,
peak memory usage reaches ~2.7 GB while loading.
Besides this, there are some minor gains from not having to read data
from the file-system, and we can skip an alloc + memcpy when reading
data written with the same version of Blender.
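A simplified sketch of the deferred-read pattern (illustrative names,
not the actual readfile code): remember where each data block lives in
the file and read the payload only when it is first requested.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct LazyBlock {
      long file_offset;
      size_t size;
      void *data; /* NULL until actually needed */
    } LazyBlock;

    static void *lazy_block_get(LazyBlock *block, FILE *fp)
    {
      if (block->data == NULL) {
        block->data = malloc(block->size);
        if (block->data == NULL) {
          return NULL;
        }
        fseek(fp, block->file_offset, SEEK_SET);
        if (fread(block->data, 1, block->size, fp) != block->size) {
          free(block->data);
          block->data = NULL;
        }
      }
      return block->data;
    }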
Now that we loop over all image users that were previously ignored, it
shows that some scene pointers are invalid. Always clear them on load,
and don't keep the scene permanently in the image user except for the
image editor. Otherwise the pointer can go out of date.
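Roughly, the policy looks like this (illustrative struct and function
names only):

    #include <stddef.h>

    struct Scene;

    /* The image user only keeps a scene pointer for the image editor;
     * everywhere else the scene is looked up from context when needed. */
    typedef struct ImageUserExample {
      struct Scene *scene; /* only set for the image editor */
      int frame_current;
    } ImageUserExample;

    /* On file load, any stale scene pointer is simply cleared. */
    static void image_user_clear_scene_on_load(ImageUserExample *iuser)
    {
      iuser->scene = NULL;
    }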
Do not instance linked objects immediately in the scene; this was never
a good idea and is doomed to fail nowadays, with complex relations
between objects, collections and scenes.
Instead, this commit refactors the linking code a bit to add loose
objects to the current scene *after* everything has been imported and
ID pointers have been properly remapped to the new ones - i.e. once the
new linked data is supposed to be fully valid, just like we were
already doing with collections.
As a bonus, it means we do not have to pass around scene, view3d etc. to
`BLO_library_link_named_part_ex()` and co.
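A coarse sketch of the two-step approach, with hypothetical names (not
the real BLO_* API): link and remap everything first, and only then
instantiate the loose objects into the current scene.

    struct Main;
    struct Scene;
    struct Object;

    /* Hypothetical outline: linking and ID-pointer remapping complete
     * first, scene instantiation happens afterwards. */
    void link_objects_from_library_example(struct Main *bmain,
                                           struct Scene *scene,
                                           struct Object **linked_objects,
                                           int linked_objects_num)
    {
      /* Phase 1: read the linked IDs and remap all ID pointers so the
       * new data is fully valid (done by the real linking code). */
      (void)bmain;

      /* Phase 2: only now add the loose objects to the current scene. */
      for (int i = 0; i < linked_objects_num; i++) {
        struct Object *ob = linked_objects[i];
        (void)scene; /* e.g. add 'ob' to the scene's master collection */
        (void)ob;
      }
    }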