The issue is a regression since the threaded object update, caused by
the fact that some objects might share the same proxy object.
That is fine in itself, but object_handle_update() will call update
for the proxy object, which breaks the threaded update.
The thing is, the proxy object is marked as depending on a scene
object, and such a call makes the child object get updated as well.
This is really bad; the depsgraph should take all responsibility
for updating proxy objects.
So for now a simple solution is used (which is safe to backport
to 'a'): skip the proxy update if the scene update is threaded
and based on the DAG traversal.
There are still some areas which call object update directly, and
in those cases the proxy object is still updated from
object_handle_update().
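For illustration, the workaround amounts to something like this (minimal
model types and a hypothetical update_proxy_object() helper, not the
actual Blender code):

  #include <stdbool.h>

  /* Minimal model, not Blender's real structs: just enough to show the
   * skip-when-threaded check. */
  typedef struct Object {
      struct Object *proxy;
      /* ... */
  } Object;

  static void update_proxy_object(Object *proxy)
  {
      (void)proxy;  /* hypothetical proxy update */
  }

  static void handle_update_proxy(Object *ob, bool threaded_dag_update)
  {
      /* With the threaded DAG-based update the depsgraph owns proxy updates,
       * so only update the proxy here for direct (non-threaded) calls. */
      if (ob->proxy && !threaded_dag_update) {
          update_proxy_object(ob->proxy);
      }
  }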
Located in the topology panel.
To use it, just click the button, then click on the mesh.
The operator uses the dimensions of the triangles below the cursor
to set the constant detail setting.
Also replaced the scale/detail size pair with a nice separate float
percentage value.
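As a rough illustration of the idea (not the actual operator code), the
constant detail can be derived from the triangle under the cursor by
averaging its edge lengths:

  #include <math.h>

  static float edge_len(const float a[3], const float b[3])
  {
      const float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
      return sqrtf(dx * dx + dy * dy + dz * dz);
  }

  /* v0/v1/v2 are the corners of the triangle hit below the cursor. */
  static float detail_size_from_triangle(const float v0[3], const float v1[3],
                                         const float v2[3])
  {
      return (edge_len(v0, v1) + edge_len(v1, v2) + edge_len(v2, v0)) / 3.0f;
  }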
Nothing spectacular here, fill tools are easy. Just take the dyntopo
code and repeat until there is nothing more to do.
The tool can be found in the dyntopo panel when the constant detail
option is on.
Also added a scale factor for constant detail. This may change when
detail sampling gets in; I am not very happy with having two numbers
here, but it gives some more control for now.
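The core loop is essentially the following sketch, where
refine_topology_once() is a hypothetical stand-in for one dyntopo
subdivide/collapse pass that reports whether it changed anything:

  #include <stdbool.h>

  struct Mesh;

  /* Hypothetical: one subdivide/collapse pass, true while topology changes. */
  static bool refine_topology_once(struct Mesh *mesh, float detail_size);

  static void detail_flood_fill(struct Mesh *mesh, float detail_size)
  {
      /* repeat until nothing more to do */
      while (refine_topology_once(mesh, detail_size)) {
          /* keep refining */
      }
  }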
This commit introduces support for a number of new interpolation types
which are useful for motion-graphics work. These define a number of
"easing equations" (basically, equations which define some preset
ways that one keyframe transitions to another) which reduce the amount
of manual work (inserting and tweaking keyframes) needed to achieve
certain common effects, for example snappy movements, or fake physics
such as bouncing/springing effects.
The additional interpolation types introduced in this commit can be found
in many packages and toolkits (notably Qt and all modern web browsers).
For more info and a few live demos, see [1] and [2].
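To illustrate what an "easing equation" looks like, here is a sketch of
the classic Penner-style quadratic ease-in/out (illustrative only, not
necessarily the exact formulation used by the new interpolation modes):

  /* t: time since the start of the segment, d: segment duration,
   * b: start value, c: total change in value.
   * Accelerates over the first half, decelerates over the second. */
  float ease_quad_in_out(float t, float b, float c, float d)
  {
      t /= d * 0.5f;
      if (t < 1.0f) {
          return c * 0.5f * t * t + b;
      }
      t -= 1.0f;
      return -c * 0.5f * (t * (t - 2.0f) - 1.0f) + b;
  }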
Credits:
* Dan Eicher (dna) - Original patch
* Thomas Beck (plasmasolutions) - Porting/updating patch to 2.70 codebase
* Joshua Leung (aligorith) - Code review and a few polishing tweaks
Additional Resources:
[1] http://easings.net
[2] http://www.robertpenner.com/easing/
as well.
These were already doing the same thing, just not as nicely. The only
difference is the do_action argument (false for BKE_libblock_copy_nolib),
but this is of no consequence because the function is only called for
trees nested inside materials, scenes, etc., which never have their own
actions.
depsgraph updates.
Material datablocks were localized by first making a regular datablock
copy, which always gets inserted into the bmain list, and then removing
it again from bmain.
The problem is that this localization happens in preview threads, which
can run while the depsgraph is also updating GPU materials. If copying
the material takes any amount of time, this can cause the depsgraph call
to material_changed to use an invalid, localized material and access
invalid GPUMaterial lists which have already been freed for the actual
material.
The solution is to not add localized datablocks to the bmain lists in
the first place; bmain should be completely immutable while preview or
render threads are running.
Yet another attempt at fixing the problems here. This time, I've added a
new version of the binary search utility so that we can pass in custom
thresholds (note: this ability is currently only used for evaluation,
with everything else going through a wrapper which still uses the old
default threshold), making it OK to start trusting the "exact" parameter.
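The pattern is roughly the following (illustrative names and threshold
value, not the actual function signatures):

  #include <stdbool.h>

  #define BINSEARCH_THRESH_DEFAULT 0.01f  /* stand-in for the old default */

  /* New _ex version taking an explicit threshold for "exact" matches. */
  static int binsearch_index_ex(const float *frames, int len, float frame,
                                float threshold, bool *r_exact);

  /* Wrapper keeping the old default, so existing callers are unaffected;
   * only F-Curve evaluation passes its own (coarser) threshold. */
  static int binsearch_index(const float *frames, int len, float frame,
                             bool *r_exact)
  {
      return binsearch_index_ex(frames, len, frame,
                                BINSEARCH_THRESH_DEFAULT, r_exact);
  }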
Basically the issue was caused by the fact that the strip for the proxy
had been post-processed, while the proxy files were considered
pre-processed. This led to post-processing being applied twice.
This commit attempts to fix some of the FCurve evaluation regressions arising from
an earlier commit to speed up the process using binary search. Further tweaks may still
be needed though to get this to an acceptable level of reliability (namely, tuning the
threshold defining which keyframes get considered "close together"). Since we're still
in an early stage of the 2.71 dev cycle, for now it's still worth trying to get this
working instead of simply reverting this (which can still be done later if it proves too
problematic).
Specific fixes:
* The previous code was somewhat dangerous in that it allowed out-of-bounds
memory access when a == 0. It turns out this was more common than originally
anticipated (the assert I added here ended up failing in the "action_bug.blend"
file in the report).
* Tweaked the code used to test for closely-spaced points so that the "Clive.blend"
example for driver curves won't fail. The downside of this approach is that,
since "exact" now uses a much coarser threshold for equality, there may be some
precision loss causing backwards-compatibility issues (namely with closely
spaced keyframes, or for certain NLA strips).
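For reference, the shape of the guard involved in the first fix is roughly
this (a simplified model with a plain Key struct and linear interpolation,
not the actual BezTriple evaluation code):

  #include <math.h>

  typedef struct Key { float x, y; } Key;  /* stand-in for BezTriple */

  static float eval_keys(const Key *keys, int totkey, float evaltime,
                         float threshold)
  {
      /* binary search: first index a with keys[a].x >= evaltime */
      int lo = 0, hi = totkey;
      while (lo < hi) {
          const int mid = (lo + hi) / 2;
          if (keys[mid].x < evaltime) lo = mid + 1;
          else hi = mid;
      }
      const int a = lo;

      if (a < totkey && fabsf(keys[a].x - evaltime) < threshold) {
          return keys[a].y;           /* "exact" hit within the coarse threshold */
      }
      if (a == 0) {
          return keys[0].y;           /* before the first key: keys[a - 1] is out of bounds */
      }
      if (a == totkey) {
          return keys[totkey - 1].y;  /* after the last key */
      }
      /* now both keys[a - 1] and keys[a] are safe to read */
      const float t = (evaltime - keys[a - 1].x) / (keys[a].x - keys[a - 1].x);
      return keys[a - 1].y + t * (keys[a].y - keys[a - 1].y);
  }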
For now, I've left in some debug prints that can be enabled by running Blender in debug
mode (i.e. "blender -d"), which can provide some useful tuning info should we need to
look more into our approach here.
Fluid sims have a very nasty feature for interaction, in which a psys
can directly update the bvhtree for //another object's psys//. This
breaks with threaded depsgraph evaluation and is generally a no-go.
To avoid crashes for now, use a global mutex to avoid concurrent writes
to an object psys' bvhtree.
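A minimal sketch of the locking (using pthreads here for illustration;
names are not Blender's actual API):

  #include <pthread.h>

  struct ParticleSystem;

  /* Hypothetical: (re)builds psys->bvhtree for fluid interaction queries. */
  static void fluid_psys_rebuild_bvhtree(struct ParticleSystem *psys);

  /* One global mutex so two depsgraph threads can never write to the same
   * (or another object's) psys bvhtree concurrently. */
  static pthread_mutex_t fluid_bvh_mutex = PTHREAD_MUTEX_INITIALIZER;

  static void fluid_psys_update_bvhtree_locked(struct ParticleSystem *psys)
  {
      pthread_mutex_lock(&fluid_bvh_mutex);
      fluid_psys_rebuild_bvhtree(psys);
      pthread_mutex_unlock(&fluid_bvh_mutex);
  }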
This speeds up evaluation of long F-Curves in fcurve_eval_keyframes()
by using the pre-existing binarysearch_bezt_index() function (used for keyframe
insertion) to find the relevant BezTriple on the FCurve at the current evaltime.
The old code looped over all BezTriples (sometimes not even breaking from the
loop after cvalue had been evaluated).
Reviewer Notes:
- Unlike in the original patch, we use the old/existing logic instead of
checking that (exact == true). See comments in code and also on the tracker
entry for this patch for more details.
Patch By: Josh Wedlake
This is a failure of direct viewport displist creation, caused by an
existing curve_cache pointer with empty content.
Made it so that if the curve isn't evaluated, its curve_cache is NULL.
This is just another regression to be ported to the release.
Fluid particles use the particle system's bvhtree structure, which is a
runtime BVH tree. This was not reset properly when copying objects/psys,
which led to concurrent access in threaded depsgraph updates and memory
corruption.
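The fix boils down to clearing the runtime pointers on copy so each copy
rebuilds its own tree, roughly like this (illustrative struct, not the
real DNA):

  #include <stddef.h>

  struct BVHTree;

  /* Minimal model of the runtime part of a particle system. */
  typedef struct PsysRuntime {
      struct BVHTree *bvhtree;  /* runtime acceleration structure */
      int bvhtree_frame;        /* frame the tree was built for */
  } PsysRuntime;

  static void psys_copy_reset_runtime(PsysRuntime *dst)
  {
      /* never share the tree with the original; rebuild on demand */
      dst->bvhtree = NULL;
      dst->bvhtree_frame = -1;
  }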
This was suggested by Christopher Barrett (terrachild). Corner pin is a common feature in compositing.
The corners for the plane warping can be defined by using vector node inputs to allow using perspective plane transformations without having to go via the MovieClip editor tracking data.
Uses the same math as the PlaneTrack node, but without the link to MovieClip and Object.
{F78199}
The code for PlaneTrack operations has been restructured a bit to share it with the CornerPin node.
* PlaneDistortCommonOperation.h/.cpp: Shared generic code for warping images based on 4 plane corners and a perspective matrix generated from these. Contains operation base classes for both the WarpImage and Mask operations.
* PlaneTrackOperation.h/.cpp: Current plane track node operations, based on the common code above. These add pointers to MovieClip and Object which define the track data from which to read the corners.
* PlaneCornerPinOperation.h/.cpp: New corner pin variant, using explicit input sockets for the plane corners.
One downside of the current compositor design is that there is no concept of invariables (constants) that don't vary over the image space. This has already been an issue for Blur nodes (size input is usually constant except when "variable size" is enabled) and a few others. For the corner pin node it is necessary that the corner input sockets are also invariant. They have to be evaluated for each tile now, otherwise the data is not available. This in turn makes it necessary to make the operation "complex" and request full input buffers, which adds unnecessary overhead.
option after reentering a paint mode.
Solution by Bastien with modifications, thanks!
The Show Brush flag need not always be re-enabled, but make sure it is
at least enabled once on paint initialization.
When the keyframes at either end of the source curve don't lie on exact frame boundaries,
this caused problems with the Cycles F-Modifier, as part of the cycle would get chopped
off.
This was caused by a float -> integer truncation, since one variable
was of the wrong type. The problem wasn't discovered until now (it showed up
thanks to gcc's invalid-type warnings on printf's), as in standard usage we can
safely assume that all keyframes lie strictly on frame boundaries.
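The bug class is easy to reproduce in isolation:

  #include <stdio.h>

  int main(void)
  {
      const float keyframe_time = 12.5f;     /* keyframe not on a frame boundary */
      const int   truncated = keyframe_time; /* wrong type: fraction chopped off */
      const float kept = keyframe_time;      /* correct type: 12.5 survives */

      printf("truncated: %d, kept: %f\n", truncated, kept);
      return 0;
  }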
dyntopo
The Layer brush would not invalidate the layer_disp arrays in dyntopo mode,
checking only for the existence of the array. This meant that if a tool
resized the node due to topology changes, the layer brush code could
index (and write!) out of bounds in the array. The solution is to invalidate
the layer data prior to each stroke in dyntopo.
in threaded depgraph updates and effector list construction.
Gathering effectors during depgraph updates will call the
psys_check_enabled function. This in turn contained a DNA alloc call
for the psys->frand RNG arrays, which is really bad because data must be
immutable during these effector constructions.
To avoid such allocs, the frand array is now global for all particle
systems. To avoid correlation of the pseudo-random numbers, the psys->seed
value is combined with a random offset and multiplier when indexing the
actual float array. This is not ideal, but works sufficiently well (given
that the random numbers were already quite limited and show repetition
easily for particle counts > PSYS_FRAND_COUNT).
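Conceptually the lookup now works like this (constants and names are
illustrative, not the exact Blender code):

  #define PSYS_FRAND_COUNT 1024

  /* One shared table of pseudo-random floats in [0, 1), filled once at
   * startup instead of being allocated per particle system. */
  static float psys_frand_base[PSYS_FRAND_COUNT];

  /* A per-system random offset and multiplier (derived from psys->seed)
   * decorrelate the sequences of different particle systems that share
   * the same table. */
  static float psys_frand_sketch(unsigned int frand_offset,
                                 unsigned int frand_multiplier,
                                 unsigned int index)
  {
      return psys_frand_base[(frand_offset + index * frand_multiplier) %
                             PSYS_FRAND_COUNT];
  }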
Basically the proxy color space handling didn't work well enough.
It is still a bit weird, mainly:
- Proxies for image sequences are built in the image color space.
- Proxies for movies are built in the movie color space.
This could be unified, but it would need some work in the proxy build
code so that it doesn't just pipe frames from one FFmpeg context to
another but also applies OCIO conversions.