This allows much finer control over how we copy data-blocks, from a
full copy in the Main database to "lighter" ones (out of Main, into an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid the inconsistencies and missing bits we had
all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in the ID itself, which allows handling it correctly when
freeing. This needs to be treated with caution when doing unusual
things with ID copying and/or allocation!
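A minimal sketch of that idea, with hypothetical flag and struct names (this
is not Blender's actual API): the generic free code branches on the
allocation type stored in the ID.

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical allocation-type flags, for illustration only. */
  enum {
    ID_ALLOC_IN_MAIN      = 0,       /* regular datablock, registered in Main */
    ID_ALLOC_OUT_OF_MAIN  = 1 << 0,  /* allocated by ID code, but kept out of Main */
    ID_ALLOC_CALLER_OWNED = 1 << 1,  /* memory owned by the caller, not by ID code */
  };

  typedef struct ID {
    int alloc_flag;  /* set when the copy is made, read when it is freed */
    char name[66];
  } ID;

  static void id_free(ID *id)
  {
    if (id->alloc_flag & ID_ALLOC_CALLER_OWNED) {
      /* Only release internal data; the struct itself belongs to the caller. */
      printf("%s: freeing internal data only\n", id->name);
      return;
    }
    if (id->alloc_flag & ID_ALLOC_OUT_OF_MAIN) {
      printf("%s: freeing, was never registered in Main\n", id->name);
    }
    else {
      printf("%s: unregistering from Main, then freeing\n", id->name);
    }
    free(id);
  }

  int main(void)
  {
    ID *id = calloc(1, sizeof(ID));
    snprintf(id->name, sizeof(id->name), "OBCube_copy");
    id->alloc_flag = ID_ALLOC_OUT_OF_MAIN;
    id_free(id);
    return 0;
  }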
As a final note, while rather noisy, this commit will hopefully not
break too many existing branches; the old 'API' has been kept for the
most part as a wrapper around the new code. Cleaning it up will happen later.
Design task: T51804
Phab Diff: D2714
Noisy change, but safe, and better to do it sooner rather than later if
we are to rework the copying code. Also, the previous commit shows this
*is* useful to catch some mistakes.
Was internally a no-op operation, which only caused extra work
to be done during depsgraph traversal and evaluation, without
making any measurable improvement.
The thing I'm really starting to hate is the requirement to specify both
the operation code and the node type. They seem to be duplicated enums
without any real need for it.
Bug T46099 no longer applies since the addition of `dist_squared_to_projected_aabb_simple`.
Comments have also been added about an occlusion bug with the ruler. I'll investigate this.
The previous solution used arbitrary values to determine whether the mouse was near the Bound Box or not (it simply scaled the Bound Box).
Now the same function that computes the distance from the BVHTree nodes to the mouse is also used for the Bound Box.
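For illustration only (this is not the actual Blender function, just the
underlying math): the squared distance from a 2D point, such as the mouse
position, to a screen-space AABB.

  #include <stdio.h>

  static float dist_squared_point_to_aabb_2d(const float co[2],
                                             const float min[2],
                                             const float max[2])
  {
    float d = 0.0f;
    for (int i = 0; i < 2; i++) {
      if (co[i] < min[i]) {
        const float diff = min[i] - co[i];
        d += diff * diff;
      }
      else if (co[i] > max[i]) {
        const float diff = co[i] - max[i];
        d += diff * diff;
      }
      /* Inside on this axis: contributes nothing. */
    }
    return d;
  }

  int main(void)
  {
    const float mouse[2] = {120.0f, 80.0f};
    const float bb_min[2] = {100.0f, 100.0f};
    const float bb_max[2] = {200.0f, 180.0f};
    printf("dist^2 = %f\n", dist_squared_point_to_aabb_2d(mouse, bb_min, bb_max));
    return 0;
  }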
This allows appending an entire scene from another blend file into this one,
even when that blend file contains proxified armatures.
This replaces the approach from commit 1cdc54dc7d.
Thanks @sergey for the help.
Turns out most BKE_foo_make_local datablock-specific functions are actually
doing exactly the same thing; only two currently need special additional
operations (the object and brush ones). So added a BKE_id_make_local_generic
instead of copying the same code over and over.
Also changed a bit how make_local works in case we are localizing a whole library:
we need to do the 'remap' step (from the old linked ID to the new local one) in the
second loop, otherwise we miss some dependencies. This fixes the main part of T48907.
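A rough sketch of that factorization, with hypothetical, heavily simplified
names (not the real BKE API): one generic routine does the common work, and
only the few types that need it add extra steps on top.

  #include <stdbool.h>
  #include <stdio.h>

  typedef struct ID { short type; bool is_linked; bool is_local; char name[66]; } ID;

  enum { ID_OB = 1, ID_BR = 2, ID_ME = 3 };

  static void id_make_local_generic(ID *id)
  {
    if (!id->is_linked) {
      return;  /* already local, nothing to do */
    }
    id->is_linked = false;
    id->is_local = true;
    printf("%s is now local\n", id->name);
  }

  static void id_make_local(ID *id)
  {
    id_make_local_generic(id);

    /* Only a few types need additional, type-specific operations. */
    switch (id->type) {
      case ID_OB:
        printf("extra object-specific localization (proxies, etc.)\n");
        break;
      case ID_BR:
        printf("extra brush-specific localization\n");
        break;
      default:
        break;  /* everything else is fully covered by the generic path */
    }
  }

  int main(void)
  {
    ID brush = {.type = ID_BR, .is_linked = true, .is_local = false, .name = "BRsmear"};
    id_make_local(&brush);
    return 0;
  }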
This reverts commit 47d0d9cca4.
Reverting the commit. Not only did it not solve all cases of proxy popping,
it also broke real cases with a single proxy involved.
Now using modern features from the libquery/libremap areas.
Also, it should handle much better the cases where the localized ID was also indirectly
used by non-refcounting users (typical case: an object used as a modifier/constraint/whatever
target by another linked object; previous code would not take those into account and just
localize the original object instead of making a local copy, which would result in a local
object used by a linked one, partially 'undone' on the next file reload... Crappy behavior).
And it fixes some obvious errors too (nullifying all proxy pointers unconditionally,
some missing refcounted usage cases in extern_local_object(), etc.).
This commit changes a lot of how IDs are handled internally, especially the unlinking/freeing
processes. So far this was very fuzzy; to summarize, cleanly deleting or replacing a datablock
was pretty much impossible, except for a few special cases.
Also, unlinking was handled by each datatype in a rather messy and error-prone way (quite
a few ID usages were missed or wrongly handled that way).
One of the main goals of the id-remap branch was to clean this up and factorize ID links
handling by using the library_query utils to allow generic handling of those, which is now
the case (now, generic ID links handling is only "known" to readfile.c and library_query.c).
This commit also adds backends to allow live replacement and deletion of datablocks in Blender
(the so-called 'remapping' process, where we replace all usages of a given ID pointer by a new
one, or by NULL in case of unlinking).
This will allow nice new features, like the ability to easily reload or relocate libraries,
real immediate deletion of datablocks in Blender, replacement of one datablock by another, etc.
Some of those are for next commits.
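A very reduced sketch of the remapping idea mentioned above (hypothetical
code, not the real libremap API): every slot that stores an ID pointer and
still references the old datablock is redirected to the new one, or to NULL
for plain unlinking, keeping user counts consistent.

  #include <stdio.h>

  typedef struct ID { int users; char name[66]; } ID;

  /* One "usage slot": somewhere in another datablock that stores an ID pointer. */
  static void remap_slot(ID **slot, ID *old_id, ID *new_id)
  {
    if (*slot != old_id) {
      return;
    }
    old_id->users--;
    if (new_id != NULL) {
      new_id->users++;
    }
    *slot = new_id;  /* NULL here means plain unlinking */
  }

  int main(void)
  {
    ID old_mat = {.users = 2, .name = "MAold"};
    ID new_mat = {.users = 0, .name = "MAnew"};
    ID *slot_a = &old_mat, *slot_b = &old_mat;

    remap_slot(&slot_a, &old_mat, &new_mat);  /* replace */
    remap_slot(&slot_b, &old_mat, NULL);      /* unlink */

    printf("%s users: %d, %s users: %d\n",
           old_mat.name, old_mat.users, new_mat.name, new_mat.users);
    return 0;
  }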
A word of warning: this commit is highly risky, because it potentially affects a lot of
Blender's core. Though it was tested rather deeply, it is totally impossible to check all
possible ID usage cases, so it's likely there are some remaining issues and bugs in the
new code... Please report them! ;)
Review task: D2027 (https://developer.blender.org/D2027).
Reviewed by campbellbarton, thanks a bunch.
Makes the behavior of the proxy_from backlink work similarly to the old dependency graph.
It's nasty, but needed here in the studio to get proxy fixes ASAP.
As in the title. In the smoke version there was also an extra
'for_render' parameter that wasn't used by the function or its callers,
so it was removed altogether.
Reviewers: brecht
Reviewed By: brecht
Subscribers: brecht
Differential Revision: https://developer.blender.org/D1718
The idea is, instead of completely ignoring missing linked datablocks, to
create void placeholders for them.
That way, you can work on your file, save it, and find your missing data again once
the library becomes available again. Or you can edit the missing library's path (in
the Outliner), save and reload the file, and you are done.
Also, the Outliner now shows broken libraries (and placeholders) with a 'broken lib' icon.
Future plans include being able to relocate missing libs and reload them at runtime.
Code notes:
- A placeholder ID is just a regular datablock of the same type as the expected linked one,
with 'default' data and a LIB_MISSING bitflag set.
- To allow creation of such datablocks, datablock creation in BKE was split into two steps:
+ Allocation of the memory itself.
+ Setting of all internal data to default values.
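As a sketch of what that split enables (simplified, hypothetical names, one
hard-coded type instead of the generic ID code): a placeholder is just
"allocate + init defaults", plus the missing-library tag.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  enum { LIB_TAG_MISSING = 1 << 0 };  /* stands in for the LIB_MISSING bitflag */

  typedef struct Mesh { int tag; char name[66]; int totvert; } Mesh;

  static Mesh *mesh_alloc(const char *name)  /* step 1: memory only */
  {
    Mesh *me = calloc(1, sizeof(Mesh));
    strncpy(me->name, name, sizeof(me->name) - 1);
    return me;
  }

  static void mesh_init_default(Mesh *me)  /* step 2: default data */
  {
    me->totvert = 0;
  }

  static Mesh *mesh_new_placeholder(const char *name)
  {
    Mesh *me = mesh_alloc(name);
    mesh_init_default(me);
    me->tag |= LIB_TAG_MISSING;  /* mark as stand-in for a missing linked datablock */
    return me;
  }

  int main(void)
  {
    Mesh *me = mesh_new_placeholder("MEmissing");
    printf("%s missing=%d\n", me->name, (me->tag & LIB_TAG_MISSING) != 0);
    free(me);
    return 0;
  }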
See also the design task (T43351).
Reviewed by @campbellbarton, thanks a bunch!
Differential Revision: https://developer.blender.org/D1394
Duplicate wasn't updating links,
so duplicating objects would still point to the originals for curve-taper, texmesh, and drivers.
Use generic id-looper to handle replacing data.
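A minimal sketch of that id-looper pattern (hypothetical names and a single
hard-coded type; the real code is generic over all ID types): each datablock
walks its own ID pointers through one callback, and the duplicate code
retargets them inside that callback.

  #include <stdio.h>

  typedef struct ID { char name[66]; } ID;
  typedef void (*IDWalkFunc)(ID **id_ptr, void *user_data);

  typedef struct Curve {
    ID id;
    ID *taper_object;  /* examples of ID pointers a curve may hold */
    ID *bevel_object;
  } Curve;

  static void curve_foreach_id(Curve *cu, IDWalkFunc walk, void *user_data)
  {
    walk(&cu->taper_object, user_data);
    walk(&cu->bevel_object, user_data);
  }

  /* Duplicate-time callback: redirect pointers from originals to their copies. */
  typedef struct RemapPair { ID *from, *to; } RemapPair;

  static void remap_cb(ID **id_ptr, void *user_data)
  {
    RemapPair *pair = user_data;
    if (*id_ptr == pair->from) {
      *id_ptr = pair->to;
    }
  }

  int main(void)
  {
    ID taper = {"OBtaper"}, taper_copy = {"OBtaper.001"};
    Curve cu = {{"CUcurve"}, &taper, NULL};
    RemapPair pair = {&taper, &taper_copy};
    curve_foreach_id(&cu, remap_cb, &pair);
    printf("taper now points to %s\n", cu.taper_object->name);
    return 0;
  }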
Added a helper that ensures a bbox has some non-zero dimension along all its axes.
Also fixed some (rather unlikely) NULL dereference cases (though it should not happen
in this context, `BKE_object_boundbox_get()` can return NULL).
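A sketch of what such a helper does (hypothetical name and epsilon, not the
actual function): inflate any axis whose extent is below a small epsilon, so
flat boxes keep a usable dimension on all three axes.

  #include <stdio.h>

  typedef struct BoundBox3 { float min[3], max[3]; } BoundBox3;

  static void boundbox_ensure_minimum_dimensions(BoundBox3 *bb, float epsilon)
  {
    for (int i = 0; i < 3; i++) {
      if (bb->max[i] - bb->min[i] < epsilon) {
        const float center = 0.5f * (bb->min[i] + bb->max[i]);
        bb->min[i] = center - 0.5f * epsilon;
        bb->max[i] = center + 0.5f * epsilon;
      }
    }
  }

  int main(void)
  {
    BoundBox3 bb = {{0.0f, 0.0f, 1.0f}, {2.0f, 3.0f, 1.0f}};  /* flat along Z */
    boundbox_ensure_minimum_dimensions(&bb, 1e-3f);
    printf("z extent: %f\n", bb.max[2] - bb.min[2]);
    return 0;
  }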
This commit only adds callbacks which will later be used by the major dependency
graph commit, keeping that upcoming commit cleaner to follow.
There should still be no functional changes so far.
Currently it is just moving existing functions into a new file,
but in the future those new files will grow much more due to
the upcoming, more granular scene updates.
There should be no functional changes.
The issue this commit addresses is that the particle system and particle modifier
contain caches once a derived mesh has been requested, and this cached data is
never freed.
This can easily lead to unwanted memory peaks during the synchronization stage
of rendering.
The idea is to have an RNA function on the object which frees caches that can't
be freed otherwise. This function is not intended to deal with the final derived
mesh, since that might be used by other objects (for example by an object with a
boolean modifier).
This cache freeing only happens for background rendering and locked-interface
rendering.
From quick tests with the victor file, this change reduces peak memory usage of
command line rendering by around 6% (1780MB vs. 1883MB). For rendering from
the interface it's about 12% (1763MB vs. 1998MB).
Reviewers: campbellbarton, lukastoenne
Differential Revision: https://developer.blender.org/D1121
The issue is a regression since threaded object update and is caused
by the fact that some objects might share the same proxy object.
That is all fine, but object_handle_update() will call update for
the proxy object, which screws up threaded update.
The thing is, the proxy object is marked as depending on a scene
object, and such a call means the child object gets updated.
This is really bad; the depsgraph should take full responsibility
for updating the proxy objects.
So for now use a simple solution (which is safe to backport
to 'a'): skip the proxy update if the scene update is
threaded and based on the DAG traversal.
There are still some areas which call object update directly,
and in those cases the proxy object is still updated from
object_handle_update().
It was actually intended to work using the particle cache's stack index,
but this index might have been calculated incorrectly in a special case:
* With the default cube scene, add a particle system to the cube
* Enable disk cache for the particle system
* Save the file and reload it
* Add another particle system and enable disk cache
This would lead to two point caches with the same stack index of zero.
This happened because the point cache indices list wasn't stored in the
.blend file, so once you reload your file Blender doesn't know anything
about the number of point caches used.
What was even more confusing is that the point cache indices list was
being read from the file, but this failed because it wasn't in the file.
This commit solves the root of the issue, which is the ability to produce a
.blend file with two point caches using the same disk cache. This is
done by making sure the point cache indices list is stored in the
.blend file. Also, disabling the disk cache now tags the cache to
recalculate its stack index.
Old broken files won't magically start working, but fixing them manually
is rather simple: toggle the Disk Cache option.
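For illustration, a simplified sketch of what keeping the indices list around
makes possible (hypothetical code, not the real point cache implementation):
with the already-used stack indices known, a newly enabled disk cache can
always be given the first free index instead of defaulting to zero.

  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_PTCACHES 64

  /* Return the lowest stack index not used by any existing cache. */
  static int ptcache_first_free_index(const int *used, int count)
  {
    bool taken[MAX_PTCACHES] = {false};
    for (int i = 0; i < count; i++) {
      if (used[i] >= 0 && used[i] < MAX_PTCACHES) {
        taken[used[i]] = true;
      }
    }
    for (int i = 0; i < MAX_PTCACHES; i++) {
      if (!taken[i]) {
        return i;
      }
    }
    return -1;
  }

  int main(void)
  {
    /* Indices already present in the file; if this list had not been saved,
     * a second system would wrongly get index 0 again. */
    const int used[] = {0};
    printf("next index: %d\n", ptcache_first_free_index(used, 1));
    return 0;
  }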
Reviewers: lukastoenne, brecht
CC: sergof
Differential Revision: https://developer.blender.org/D286
The issue is caused by the start point of the ray used to detect faces under the mouse
being set rather far away in orthographic 3D views.
The resulting loss of precision in the ray location can lead to face snapping failures.
The solution is to do the raycasting with a temporary start point, much closer to the
object we check, and once detection is done add the difference to the real start point
back to the found distance (as we need all hit distances from all tested objects to be
relative to a common point!).
Note this commit only addresses the "face snapping on mesh" case; other kinds of snapping
do not seem to suffer from this issue.
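A heavily simplified, one-dimensional sketch of the trick (hypothetical code,
not the actual snapping implementation): cast from a temporary start point
near the object, then add the offset back so the hit distance is still
measured from the original, common start point.

  #include <stdio.h>

  /* Stand-in for the real local raycast; here the "object" is a plane at x = 10. */
  static int raycast_local(const float start, const float dir, float *r_dist)
  {
    if (dir <= 0.0f) return 0;
    *r_dist = (10.0f - start) / dir;
    return *r_dist >= 0.0f;
  }

  int main(void)
  {
    const float real_start = -100000.0f;  /* far away start, poor float precision */
    const float dir = 1.0f;

    /* Move the start point near the object before casting. */
    const float local_start = 0.0f;
    const float start_offset = local_start - real_start;  /* distance moved forward */

    float dist;
    if (raycast_local(local_start, dir, &dist)) {
      /* Re-express the hit distance relative to the real start point. */
      dist += start_offset;
      printf("hit at distance %f from the common start point\n", dist);
    }
    return 0;
  }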
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D268
This adds an ImageUser to such empties with all the typical settings.
Reviewed By: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D108
It doesn't make any sense anymore with the current depsgraph and probably hasn't been
useful for a long time; it's just a leftover from the pre-2.04 game engine.
Summary:
Made object updates happen from multiple threads. It is a task-based
scheduling system which uses the current dependency graph for spawning new
tasks. This means threading happens at the object level, but the system is
flexible enough for higher granularity.
Technical details:
- Uses the task scheduler which was recently committed to trunk
(the one Brecht ported from Cycles).
- Added two utility functions to dependency graph:
* DAG_threaded_update_begin, which is called to initialize the threaded
object update. It will also schedule the root DAG node to the queue,
hence starting the evaluation process.
Initialization will calculate how many parents need to be evaluated
before the current DAG node can be scheduled. This value is used by
task threads to quickly detect which nodes may be scheduled.
* DAG_threaded_update_handle_node_updated, which is called from the task
thread function when a node has been fully handled.
This function decreases num_pending_parents of the node's children and
schedules the children whose count reaches zero (see the sketch after this list).
As it might have become clear, a task thread receives DAG nodes and
decides which callback to call for them.
Currently only BKE_object_handle_update is called for object nodes.
In the future it'll call node->callback() from Ali's new DAG.
- This required adding some workarounds to the render pipeline,
mainly to stop using get_object_dm() from the modifiers' apply callback.
Such a call was only a workaround for a dependency graph glitch when
rendering a scene with, say, boolean modifiers before displaying
this scene.
This change moves the workaround from one place to another, so the overall
hackentropy remains the same.
- Added the paradigm of an EvaluationContext. Currently it's just a
more reliable replacement for G.is_rendering, which fails in some
circumstances.
The future idea for this context is to also store all the local data needed
for object evaluation, such as local time, Copy-on-Write data and so on.
There are two types of EvaluationContext:
* Context used for viewport updates and owned by Main. In the future
this context might easily be moved to the Window or Screen to allow
per-window/per-screen local time.
* Context used by render engines to evaluate objects for render purposes.
The render engine is the owner of this context.
This context is passed to all object update routines.
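Below is a highly simplified, single-threaded sketch of the scheduling logic
described by the two utility functions above (hypothetical structures, no real
task scheduler): every node counts its pending parents; when a node finishes,
its children's counters are decreased and any child reaching zero is scheduled.

  #include <stdio.h>

  #define MAX_NODES 8

  typedef struct DagNodeSketch {
    const char *name;
    int num_pending_parents;
    int children[MAX_NODES];
    int totchild;
  } DagNodeSketch;

  static int queue[MAX_NODES], queue_head = 0, queue_tail = 0;

  static void schedule(int node) { queue[queue_tail++] = node; }

  static void run(DagNodeSketch *nodes, int totnode)
  {
    /* "DAG_threaded_update_begin": schedule every root (no pending parents). */
    for (int i = 0; i < totnode; i++) {
      if (nodes[i].num_pending_parents == 0) {
        schedule(i);
      }
    }
    /* In Blender this loop body runs from task threads; here it is sequential. */
    while (queue_head < queue_tail) {
      DagNodeSketch *node = &nodes[queue[queue_head++]];
      printf("update %s\n", node->name);  /* e.g. BKE_object_handle_update */

      /* "handle_node_updated": unlock children whose parents are all done. */
      for (int i = 0; i < node->totchild; i++) {
        DagNodeSketch *child = &nodes[node->children[i]];
        if (--child->num_pending_parents == 0) {
          schedule(node->children[i]);
        }
      }
    }
  }

  int main(void)
  {
    /* scene -> {armature, mesh}, armature -> mesh (mesh waits for two parents) */
    DagNodeSketch nodes[] = {
        {"scene", 0, {1, 2}, 2},
        {"armature", 1, {2}, 1},
        {"mesh", 2, {0}, 0},
    };
    run(nodes, 3);
    return 0;
  }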
Reviewers: brecht, campbellbarton
Reviewed By: brecht
CC: lukastoenne
Differential Revision: https://developer.blender.org/D94
Summary:
The issue was caused by access to the pchan->custom object from the channel
free function when freeing all objects from Main. The order in which objects
are freed is not defined, and such an access might easily end up accessing
freed memory.
We don't need to do user counter stuff when freeing Main, so added
_ex functions with a do_id_user flag, which is used when freeing Main.
We had the same issue with other datablocks, so now it should be
easier to support a proper user counter.
This issue was caused by the fix for T36391, so perhaps it's indeed
high time to do real user counting.
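A simplified sketch of that pattern (hypothetical names, not the real
pose-channel code): the _ex variant takes a do_id_user flag, and the old
entry point stays as a thin wrapper.

  #include <stdbool.h>
  #include <stdio.h>

  typedef struct ID { int users; char name[66]; } ID;

  typedef struct PoseChannel { ID *custom; } PoseChannel;

  static void pose_channel_free_ex(PoseChannel *pchan, bool do_id_user)
  {
    if (do_id_user && pchan->custom) {
      /* Normal path: decrement the user count of the custom-shape object. */
      pchan->custom->users--;
    }
    /* When freeing Main, do_id_user is false: the custom object may already
     * have been freed, so we must not touch it at all. */
    pchan->custom = NULL;
  }

  static void pose_channel_free(PoseChannel *pchan)
  {
    pose_channel_free_ex(pchan, true);  /* old behavior, kept as a wrapper */
  }

  int main(void)
  {
    ID custom = {.users = 1, .name = "OBcustom_shape"};
    PoseChannel pchan = {.custom = &custom};
    pose_channel_free(&pchan);
    printf("%s users: %d\n", custom.name, custom.users);
    return 0;
  }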
Reviewers: brecht, campbellbarton
Reviewed By: campbellbarton
Maniphest Tasks: T37709
Differential Revision: https://developer.blender.org/D137
Levels of detail can be added and modified in the object panel. The object
panel also contains new tools for generating levels of detail, setting up
levels of detail based on object names (useful for importing), and
clearing an object's level of detail settings. This is meant as a game
engine feature, though the level of detail settings can be previewed in
the viewport.
Reviewed By: moguri, nexyon, brecht
Differential Revision: http://developer.blender.org/D109