The issue is caused by the evaluation flags getter being called with
a NULL depsgraph. It happens on direct object update from the
transform code after duplicating the curve.
The proper solution is probably to make sure the depsgraph is rebuilt
after duplication, but for now it's better to prevent crashes.
The issue was caused by the evaluation flags getter function polluting
the DAG. It needs to use dag_find_node() instead.
Still need to double-check that exporting objects with curve deform
works properly. At first thought it should, but that might be
wrong again.
This adds an ImageUser to such empties with all the typical settings.
Reviewed By: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D108
Graph traversal based on counting parents which are still
to be updated fails when there are cycles in the graph.
If there are cyclic dependencies in the scene, all the objects from
the cycles are now updated in a single thread, one by one. This
makes Blender behave the same way as it did before the multi-threaded
DAG landed in master.
This needed a small depsgraph tweak: dag_check_cycle() now records in
the is_acyclic field of the DAG forest whether there are cycles in the graph.
TODO: It might be possible to save some evaluation time when
all the tagged objects were already updated by the multi-threaded DAG
traversal.
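A rough self-contained sketch of the fallback idea (stand-in types and names, not the actual DAG structures): nodes trapped in a cycle never reach a zero pending-parent count during the threaded pass, so whatever is left over afterwards is evaluated serially.

  #include <stdbool.h>
  #include <stdio.h>

  typedef struct Node {
    const char *name;
    int num_pending_parents;  /* stays > 0 for nodes blocked by a cycle */
    bool updated;             /* set by the threaded pass */
  } Node;

  static void evaluate_node(Node *node)
  {
    printf("evaluating %s\n", node->name);
    node->updated = true;
  }

  /* Runs after the threaded traversal; is_acyclic is what dag_check_cycle()
   * would have recorded for the graph. */
  static void update_cycle_leftovers(Node *nodes, int count, bool is_acyclic)
  {
    if (is_acyclic) {
      return;  /* nothing was blocked, the threaded pass handled everything */
    }
    for (int i = 0; i < count; i++) {
      if (!nodes[i].updated) {
        evaluate_node(&nodes[i]);  /* single thread, one by one */
      }
    }
  }

  int main(void)
  {
    /* B and C depend on each other, so the threaded pass never reached them. */
    Node nodes[] = {{"A", 0, true}, {"B", 1, false}, {"C", 1, false}};
    update_cycle_leftovers(nodes, 3, false);
    return 0;
  }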
This is a regression since the threaded dependency graph landed in master.
The root of the issue is the large amount of graph preparation being done
even when there's nothing to update.
The idea of this change is to use the ID type recalc bits to determine
whether there are objects to be updated. Generally speaking, we now
check object and object data datablocks with DAG_id_type_tagged(),
and if no such IDs are tagged we skip the whole task pool creation
and so on.
The only tricky aspect was that in some circumstances it was possible
to have tagged objects but nothing set in the ID recalc bit fields.
There were several circumstances in which this was possible:
* When the object->recalc flag is assigned directly, DAG flush didn't
  set the corresponding ID recalc bits. This is partially fixed
  by making the flush set the bitfield, but for object
  types there's also no reason to assign the recalc flag directly. Using
  the generic DAG_id_type_tag is almost as fast as direct
  assignment, ensures all the bitflags are set properly, and in the
  long run seems to be what we actually want.
* DAG_on_visible_update() didn't set recalc bits at all.
* Some areas were checking for object->recalc != 0, however it was
  possible for the object recalc flag to contain PSYS_RECALC_CHILD, which
  was never cleared from there.
  No idea why we would need to assign such a flag when enabling
  scene simplification; this is to be investigated separately.
* It is possible that scene_update_post and frame_update_post handlers
  will modify objects. The issue is that DAG_ids_clear_recalc is called
  just after the callbacks, which leaves objects with recalc flags but no
  corresponding bit in the ID recalc bitfield. This leads to a kind of
  regression, compared with the legacy DAG, when using the ID type tag
  fields to check whether there are objects to be updated.
  For now let's have a workaround which preserves the tag for ID_OB
  if there are objects with OB_RECALC_ALL bits set. This keeps the behavior
  unchanged compared with the 2.69 release.
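A minimal sketch of the early-out idea, using stand-in names rather than the real DAG_id_type_tagged()/DAG_id_type_tag() signatures: keep one recalc bit per ID type and skip graph preparation and task pool creation entirely when neither objects nor object data are tagged.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  enum { IDTYPE_OBJECT, IDTYPE_OBJECT_DATA };

  static uint32_t id_type_recalc_bits = 0;

  static void id_type_tag(int id_type)
  {
    id_type_recalc_bits |= (1u << id_type);
  }

  static bool id_type_tagged(int id_type)
  {
    return (id_type_recalc_bits & (1u << id_type)) != 0;
  }

  static void scene_update_objects(void)
  {
    /* Early out: nothing relevant is tagged, so skip task pool creation
     * and the rest of the graph preparation entirely. */
    if (!id_type_tagged(IDTYPE_OBJECT) && !id_type_tagged(IDTYPE_OBJECT_DATA)) {
      return;
    }
    printf("preparing graph and spawning update tasks\n");
  }

  int main(void)
  {
    scene_update_objects();      /* skipped, nothing tagged */
    id_type_tag(IDTYPE_OBJECT);  /* e.g. set by the flush when an object is tagged */
    scene_update_objects();      /* now the real update runs */
    return 0;
  }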
This solves a threading conflict which happens when
multiple objects use the Curve Deform modifier with the same
curve datablock. The conflict was caused by the fact that
curve_deform_verts() used to temporarily override the curve's
flags to make sure its path is there.
Actually, it was temporarily setting the CU_FOLLOW flag, which
was only used by where_on_path() (in the sense that this
temporary assignment only affected that function), but that
usage has been commented out for a while, so there's no reason
to set this flag temporarily. If it's ever to be done, we'll need
to pass the flags as an additional function argument.
For the path creation I've extended the DegNode structure,
which now holds extra bits indicating what additional
data, depending on the graph topology, is to be evaluated.
Currently this is only used to indicate that a curve needs
its path to be evaluated regardless of the cu->flag state. This
is so the Curve Deform modifier is always happy.
In the future this flag might also be used to indicate
whether BMesh verts need to be updated (see the recent commit for
the 3-vertex parent crash fix) or to indicate that the object
is the motherball, etc.
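A small illustrative sketch of that idea, with made-up names and layouts (not the real DegNode/Curve definitions): the graph node carries topology-derived eval flags, so deciding whether to build the path no longer requires temporarily poking flags on the shared Curve datablock.

  #include <stdbool.h>

  enum { NODE_EVAL_NEED_CURVE_PATH = (1 << 0) };  /* set while building relations */
  enum { CURVE_FLAG_PATH = (1 << 0) };            /* stand-in for the cu->flag bit */

  typedef struct GraphNode { int eval_flags; } GraphNode;
  typedef struct Curve { int flag; } Curve;

  static bool curve_needs_path(const Curve *cu, const GraphNode *node)
  {
    /* Build the path either because the user enabled it on the curve or
     * because some Curve Deform user in the graph topology requires it. */
    return (cu->flag & CURVE_FLAG_PATH) != 0 ||
           (node->eval_flags & NODE_EVAL_NEED_CURVE_PATH) != 0;
  }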
This goes back to the ancient era again; such a call isn't
safe for threading, and really the DAG should make sure the display
list for dependencies is always there.
It was some kind of workaround for a DAG glitch in 2009
(commit hash 8c5c7ebb0) and, according to the comment,
was needed to make the select outline show immediately.
After some tests it appears the DAG behaves almost fine now
(it just needed the layer to be flushed properly to
the set scene), so there's no reason to keep this rather
confusing call in the code.
DAG node tagging was rather an experiment to make derived render work.
However, it opened a whole can of worms and needs to be reconsidered.
It is likely that the regular object update tagging and scene update routines
should be used for this.
Meanwhile there's no need to keep the extra field in the DAG node. This saves
us a whole byte in the struct, which we can use for other purposes in the meantime.
The issue was caused by a missing objects update for temporary
Freestyle objects. This happened because such objects don't
have any relations, i.e. they correspond to root nodes in the DAG.
This situation wasn't handled by DAG_threaded_update_begin(),
which assumed there's only one root node in the DAG.
Summary:
Made the objects update run from multiple threads. It is a task-based
scheduling system which uses the current dependency graph for spawning new
tasks. This means threading happens at the object level, but the system is
flexible enough for finer granularity.
Technical details:
- Uses the task scheduler which was recently committed to trunk
  (the one Brecht ported from Cycles).
- Added two utility functions to the dependency graph:
  * DAG_threaded_update_begin, which is called to initialize the threaded
    objects update. It will also schedule the root DAG node to the queue,
    hence starting the evaluation process.
    Initialization calculates how many parents need to be evaluated
    before the current DAG node can be scheduled. This value is used by the
    task threads to quickly detect which nodes may be scheduled.
  * DAG_threaded_update_handle_node_updated, which is called from the task
    thread function when a node has been fully handled.
    This function decreases num_pending_parents of the node's children and
    schedules children whose count reaches zero (a simplified sketch of this
    counting scheme follows below, after the technical details).
  As might have become clear, a task thread receives DAG nodes and
  decides which callback to call for them.
  Currently only BKE_object_handle_update is called for object nodes.
  In the future it'll call node->callback() from Ali's new DAG.
- This required adding some workarounds to the render pipeline,
  mainly to stop using get_object_dm() from the modifiers' apply callback.
  That call was only a workaround for a dependency graph glitch when
  rendering a scene with, say, boolean modifiers before displaying
  that scene.
  This change moves the workaround from one place to another, so the
  overall hackentropy remains the same.
- Added the paradigm of an EvaluationContext. Currently it's more like just a
  more reliable replacement for G.is_rendering, which fails in some
  circumstances.
  The future idea of this context is to also store all the local data needed
  for objects evaluation, such as local time, Copy-on-Write data and so on.
  There are two types of EvaluationContext:
  * Context used for viewport updates and owned by Main. In the future
    this context might easily be moved to Window or Screen to allow
    per-window/per-screen local time.
  * Context used by render engines to evaluate objects for render purposes.
    The render engine is the owner of this context.
  This context is passed to all object update routines.
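Below is a self-contained, single-threaded sketch of the pending-parents counting scheme described above; the structs and names are stand-ins for the real DAG node / task pool code, and child "scheduling" simply recurses instead of pushing tasks to the scheduler.

  #include <stdio.h>

  #define MAX_CHILDREN 4

  typedef struct Node {
    const char *name;
    int num_pending_parents;  /* parents still waiting to be evaluated */
    struct Node *children[MAX_CHILDREN];
    int num_children;
  } Node;

  /* Stand-in for the per-node callback; in the real code object nodes
   * end up calling BKE_object_handle_update. */
  static void evaluate_node(Node *node)
  {
    printf("evaluating %s\n", node->name);
  }

  /* Counterpart of the "node updated" handler: decrease the pending-parent
   * count of every child and schedule (here: recurse into) children that
   * reach zero. */
  static void node_updated(Node *node)
  {
    for (int i = 0; i < node->num_children; i++) {
      Node *child = node->children[i];
      if (--child->num_pending_parents == 0) {
        evaluate_node(child);
        node_updated(child);
      }
    }
  }

  int main(void)
  {
    /* A -> B, A -> C, B -> C: C waits for two parents before it runs. */
    Node a = {"A", 0, {NULL}, 0};
    Node b = {"B", 1, {NULL}, 0};
    Node c = {"C", 2, {NULL}, 0};
    a.children[a.num_children++] = &b;
    a.children[a.num_children++] = &c;
    b.children[b.num_children++] = &c;

    /* Root nodes have no pending parents and are scheduled first. */
    evaluate_node(&a);
    node_updated(&a);
    return 0;
  }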
Reviewers: brecht, campbellbarton
Reviewed By: brecht
CC: lukastoenne
Differential Revision: https://developer.blender.org/D94
I know it's not so nice to have these guys hanging
around in the general Object datablock; ideally they'd be
wrapped into a structure like DerivedMesh or
something similar. But this is pure runtime-only stuff and
we could re-wrap it later.
The main purpose of this is making curves more thread-safe,
so separate threads will never start freeing the same path
or the same bevel list.
It also makes sense because the path and bevel shall include
deformation coming from modifiers applied at the pre-tessellation
stage, and different objects could have different sets of
modifiers. This used to be really confusing in the past; now
data which depends on the object is stored in the object,
making things clearer to understand.
This doesn't make the curve code fully thread-safe, since
pre-tessellation modifiers still modify the actual nurbs and a
lock is still needed in makeDispListsCurveTypes, but this
change makes usage of paths safe for threading.
Once modifiers stop modifying the actual nurbs, curves
will be fully safe for threading.
Actually, this commit also wraps the runtime curve
members into their own structure.
This allows easier assignment on file loading, keeps curve-
specific runtime data grouped, and saves a couple of bytes in
Object for non-curve types.
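A structural sketch only (member names are illustrative, not the actual struct added here): each object owns its own runtime cache, so two objects deformed by the same Curve never free or rebuild each other's path or bevel data.

  /* Per-object runtime data for curve evaluation; opaque pointers stand in
   * for the real path and bevel list types. */
  typedef struct CurveRuntimeSketch {
    void *path;      /* deformed path, includes pre-tessellation modifiers */
    void *bev_list;  /* bevel list computed for this object */
  } CurveRuntimeSketch;

  /* The object owns the cache; the shared Curve datablock stays read-only
   * during threaded evaluation. */
  typedef struct ObjectSketch {
    struct CurveRuntimeSketch *curve_runtime;  /* NULL for non-curve objects */
  } ObjectSketch;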
--
svn merge -r57938:57939 ^/branches/soc-2013-depsgraph_mt
svn merge -r57957:57958 ^/branches/soc-2013-depsgraph_mt
On file save the mesh gets loaded from the editmesh, but the derived mesh caches
were not cleared. This usually happens through the depsgraph, but it needs to be
done manually here. Most changes are some refactoring to deduplicate the derived
mesh freeing code.
Because our release is coming soon, the feature has been added behind the Debug Menu:
CTRL+ALT+D and set it to -1, or on the command line, --debug-value -1.
When the debug value is set to -1, you can put the viewport into 'render' mode, just like
for Cycles. Notes for testers (and please, no bugs in the tracker for this :)
- It renders without AA, MBlur, Panorama, Sequence, Composite
- Only the active render layer gets rendered. Selecting another layer will re-render.
- But yes: it works for FreeStyle renders!
- Also works great for local view.
- BI is not well suited for incremental renders on view changes. This only
  works for non-raytrace scenes, zooming in ortho or camera mode, or for
  material changes. In most cases a full re-render is done.
- ESC works to stop the preview render.
- Borders render as well. (CTRL+B)
- Force a refresh with the left/right arrow keys. A lot of settings don't trigger
  a re-render yet.
Tech notes:
- FreeStyle adds a lot of temp objects/meshes to the Main database. This
  caused the DepsGraph to trigger changes (and redraws). I've prefixed the names
  of these temp objects with character number 27 (ESC), and made these names be
  ignored when checking for update tags.
- Fixed some bugs that were noticeable with such excessive re-renders, like
  opening the file window or quitting during renders.
besides performance in some cases.
* DAG_scene_sort is now removed and replaced by DAG_relations_tag_update in
most cases. This will clear the dependency graph, and only rebuild it right
before it's needed again when the scene is re-evaluated.
This is done because DAG_scene_sort is slow when called many times from
Python operators. Further, the scene argument is not needed because most
operations can potentially affect more than the current scene.
* DAG_scene_relations_update will now rebuild the dependency graph if it's not
there yet, and DAG_scene_relations_rebuild will force a rebuild for the rare
cases that need it.
* Removed various places where ob->recalc was set manually. This should go
through DAG_id_tag_update() in nearly all cases instead, since this is now
a fast operation. Also removed DAG_ids_flush_update, which goes along with
such manual tagging of ob->recalc.
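A self-contained sketch of the "tag now, rebuild lazily" pattern this API change follows; the names below are illustrative stand-ins, not the actual DAG_relations_tag_update / DAG_scene_relations_update signatures.

  #include <stdbool.h>
  #include <stdio.h>

  static bool graph_exists = false;
  static bool relations_dirty = true;

  /* Cheap call used by operators: nothing is sorted or rebuilt here. */
  static void relations_tag_update(void)
  {
    relations_dirty = true;
  }

  /* Called right before the scene is evaluated: rebuild only when needed. */
  static void relations_update(void)
  {
    if (relations_dirty || !graph_exists) {
      printf("rebuilding dependency graph\n");
      graph_exists = true;
      relations_dirty = false;
    }
  }

  int main(void)
  {
    relations_tag_update();  /* many operators may tag in a row... */
    relations_tag_update();
    relations_update();      /* ...but the graph is rebuilt only once, lazily */
    relations_update();      /* no-op, nothing changed since the rebuild */
    return 0;
  }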
This was caused by multiple instances of the same basic problem. The
rigidbody handling code often assumed that "scene" pointers referred to the
scene where an object participating in the sim resided (and where the rigidbody
world for that sim lived). However, when dealing with background sets, "scene"
often only refers to the active scene, and not the set that the object actually
came from. Hence, the rigidbody code would often (wrongly) conclude that there
was nothing to do.
For example, we may have the following background set/scene chaining scenario:
"active" <-- ... <-- set i (rigidbody objects live here) <-- ... <-- set n
The fix here has multiple parts:
1) Moved sim-world calculation from BKE_scene_update_newframe() to
scene_update_tagged_recursive()
+ This is currently the only way that rigidbody sims in background sets will
get calculated, as part of the recursion
- These checks will get run on each update. <--- FIXME!!!
2) Tweaked depsgraph code so that when checking if there are any time-dependent
features on objects to tag for updating, the checking is done relative to the
scene that the object actually resides in (and not the active scene). Otherwise,
even if we recalculate the sim, the affected objects won't get tagged for
updating. This tagging is needed to actually flush the transforms out of the
RigidBodyObject structs (written by the sim/cache) and into the Object
transforms (obmat's)
3) Removed the requirement for rigidbody world to actually exist before we can
flush rigidbody transforms. In many cases, it should be sufficient to assume
that because the object with rigidbody data attached has been tagged for
updates, it should have updates to perform. Of course, we still check on this
data if we've got it, but that's only if the sim is in the active scene.
- TODO: if we have further problems, we should investigate passing the
"actual" scene down alongside the "active" scene for BKE_object_handle_update().
This is just the basic structure; the simulation isn't hooked up yet.
Scenes get a pointer to a rigid body world that holds rigid body objects.
Objects get a pointer to a rigid body object.
Neither the rigid body world nor the rigid body objects are used directly in
the simulation; they only hold the information needed to create the actual
physics objects.
Physics objects are created when rigid body objects are validated.
In order to keep Blender and Bullet objects in sync, care has to be taken
to either call the appropriate set functions or flag objects for validation.
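A rough structural sketch of that layout with stand-in names and members (not the actual DNA structs): the scene points to a rigid body world, the world lists participating objects, each object carries its own rigid body settings, and the Bullet-side handles are only created on validation.

  typedef struct RigidBodySettingsSketch {
    float mass;
    int flag;              /* e.g. a needs-validation flag kept in sync */
    void *physics_object;  /* opaque handle created on validation */
  } RigidBodySettingsSketch;

  typedef struct RigidObjectSketch {
    struct RigidBodySettingsSketch *rigidbody;  /* NULL if not simulated */
  } RigidObjectSketch;

  typedef struct RigidBodyWorldSketch {
    struct RigidObjectSketch **objects;  /* objects participating in the sim */
    int num_objects;
    void *physics_world;                 /* opaque handle to the Bullet world */
  } RigidBodyWorldSketch;

  typedef struct SceneSketch {
    struct RigidBodyWorldSketch *rigidbody_world;  /* NULL if unused */
  } SceneSketch;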
Part of GSoC 2010 and 2012.
Authors: Joshua Leung (aligorith), Sergej Reich (sergof)
Setup: 2 windows, 2 scenes, shared objects and groups.
Errors:
- editing in one window didn't correctly update shared stuff in the other
  (like child-parent relations)
- deleting group members in one scene could crash the other.
Fixes:
- On load, only a depsgraph was created for the "active" scene. Now it makes
depsgraphs for all visible scenes.
- "DAG ID flushes" were only working on active scenes too, they now take
the other visible into account as well.
- Delete object - notifier was only sent to the active scene.
All in all it's a real depsgraph fix (for once!) :) Using multi-window and
multi-scene setups is now more useful.
Documentation & Test blend files:
------------------
http://wiki.blender.org/index.php/User:MiikaH/GSoC-2012-Smoke-Simulator-Improvements
Credits:
------------------
Miika Hamalainen (MiikaH): Student / Main programmer
Daniel Genrich (Genscher): Mentor / Programmer of merged patches from Smoke2 branch
Google: For Google Summer of Code 2012
- don't overwrite the font path with "<builtin>" when the font file can't be found. It caused bad problems when loading files on someone else's system: when paths couldn't be found, Blender would silently clobber them (tsk tsk).
- when fonts are freed, their temp data is now freed too.
- assigning a new filepath to a font now refreshes the object data.
When initially coding this functionality, I was aware of the potential for
infinite recursion here, just not how frequently such setups are actually
used/created out in the wild (nodetree.ma_node -> ma -> ma.nodetree is all too
common, and often even with several levels of indirection!).
However, the best fix for these problems was not immediately clear. Alternatives
considered included...
1) checking for common recursive cases. This was the solution employed for one
of the early patches committed to try and get around this. However, it's all too
easy to defeat these measures (with all the possible combinations of indirection
node groups bring).
2) arbitrarily restricting recursion to only go down 2/3 levels? Has the risk
of missing some deeply chained/nested drivers, but at least we're guaranteed
that things won't get too bad. (Plus, who creates such setups anyway ;)
*3) using the generic LIB_DOIT flag (check for tagged items and don't recurse
into them; see the sketch after this list). Not as future-proof if some new code
suddenly decides to start adding these tags to materials along the way, but it is
the easiest to add and should be flexible enough to catch most cases, since we
only care that at some point those drivers will be evaluated if they're attached
to stuff we're interested in.
4) introducing a separate flag for Materials indicating they've been checked
already. Similar to 3) and solves the future-proofing, but this leads to...
5) why bother with remembering to clear flags before traversing for drivers to
evaluate, when they should be tagged for evaluation like everything else?
Downside - requires a depsgraph refactor so that we can actually track the fact
that there are dependencies to/from the material datablock, and not just to the
object using said material. (i.e. currently infeasible)
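A minimal sketch of option 3, using a stand-in "visited" member in place of the generic LIB_DOIT tag: tag each material the first time it is visited so recursive material/node-group chains terminate.

  #include <stddef.h>
  #include <stdbool.h>

  typedef struct MaterialSketch {
    bool visited;                          /* stand-in for the LIB_DOIT tag */
    struct MaterialSketch *node_material;  /* material referenced from the node tree */
  } MaterialSketch;

  static void evaluate_material_drivers(MaterialSketch *ma)
  {
    if (ma == NULL || ma->visited) {
      return;  /* already handled somewhere up the chain, stop recursing */
    }
    ma->visited = true;

    /* ... evaluate drivers on this material's node tree here ... */

    evaluate_material_drivers(ma->node_material);
  }

  int main(void)
  {
    /* ma <-> node material cycle: the tag makes the recursion terminate. */
    MaterialSketch a = {false, NULL}, b = {false, NULL};
    a.node_material = &b;
    b.node_material = &a;
    evaluate_material_drivers(&a);
    return 0;
  }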
It used to be a dependency cycle, which led to incorrect or
missed tessellation in some circumstances.
Seems to have been introduced in rev41627.
This commit seems to behave properly in simple cases, but could
probably fail in some other cases, so it needs to be
checked further.
Discovered while looking into:
#32034: Metaball used as render object (group) for particle will display wire only.