In cases where the subsurf modifier is the last one in the stack and
only deformation modifiers precede it, we can skip the full original
vertex lookup.
This is a rather common situation here in the animatic.
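A hedged sketch of the check this enables; is_deform_only() is a
hypothetical predicate standing in for the actual topology test:

    /* If every modifier before the last one only deforms (never changes
     * topology) and the last one is Subsurf, each subdivided vertex still
     * maps trivially back to an original vertex. */
    static bool subsurf_can_skip_orig_lookup(const ModifierData *first)
    {
        const ModifierData *md = first;
        if (md == NULL)
            return false;
        for (; md->next; md = md->next) {
            if (!is_deform_only(md))  /* hypothetical helper */
                return false;
        }
        return md->type == eModifierType_Subsurf;
    }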
Vertex parenting was using the original, unmodified NURBS list, simply
because there was nothing else to operate on.
Now the curve cache contains NURBS deformed by the pre-tessellation
modifiers, which can be used by the vertex parent.
The root of the issue is that bevel list calculation might drop points
that sit at the same position, which made the spline length calculation
go wrong.
Now the length of each bevel segment is stored in the bevel list, so the
values are always reliable.
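A minimal sketch of the idea, assuming a seglen array on the bevel list
(names follow the description above, not necessarily the exact code):

    /* Record each segment's length while building the bevel list, before
     * coincident points are dropped, so later spline-length queries no
     * longer depend on the possibly reduced point array. */
    for (int i = 0; i < bl->nr - 1; i++) {
        bl->seglen[i] = len_v3v3(bevp[i].vec, bevp[i + 1].vec);
    }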
Initial patch by Lukas Treyer with some tweaks from me.
Fix T39286: Display percentage ignored in Cycles viewport.
The threaded depsgraph update changes included a cleanup of the global
is_rendering flag, which was replaced by a general EvalContext being
passed to dupli functions.
The problem is that the global flag was previously true for viewport
duplis (an ugly hack), and this was used as the check for generating
dupli orco/UV from mesh data layers. The new flag is stricter and only
true for actual renders, which disables these attributes and breaks the
Cycles Texture Coordinates and UVMap nodes.
The solution is to extend the simple for_render boolean to an enum:
* VIEWPORT: OpenGL viewport drawing (dupli tex coords omitted)
* PREVIEW: Viewport preview render (simplified modifiers)
* RENDER: Full render with all details and attributes
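Sketched out, the context looks roughly like this:

    typedef enum eEvaluationMode {
        DAG_EVAL_VIEWPORT = 0,  /* viewport drawing, dupli tex coords omitted */
        DAG_EVAL_PREVIEW  = 1,  /* viewport preview render, simplified modifiers */
        DAG_EVAL_RENDER   = 2,  /* full render, all details and attributes */
    } eEvaluationMode;

    typedef struct EvaluationContext {
        int mode;  /* eEvaluationMode */
    } EvaluationContext;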
There are still some areas that need to be examined; in particular,
modifiers seem to totally ignore the EvaluationContext!
Instead they generally execute without render params from the depsgraph
(BKE_object_handle_update_ex) and are built with render settings
explicitly.
Differential Revision: https://developer.blender.org/D613
Skip doing particle update in object_handle_update if the object is in
edit mode.
The object will be re-evaluated on exiting edit mode anyway, so this is
_expected_ to be a safe change.
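Sketched, the guard amounts to something like this (do_particle_update()
is a placeholder for the actual psys update call):

    /* Particles are skipped while editing; the object is re-evaluated
     * on exiting edit mode anyway. */
    if ((ob->mode & OB_MODE_EDIT) == 0) {
        do_particle_update(scene, ob);  /* placeholder */
    }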
The helper function `make_freestyle_edge_mark_hash()` was referring to the
original mesh to determine Freestyle edge marks for individual derived mesh edges.
This is no longer necessary now that derived meshes deliver CD_FREESTYLE_EDGE
and CD_FREESTYLE_FACE layers of their own. The reference to the original
mesh was also inappropriate, since the edges coming from one of the
operands of a boolean modifier don't have proper CD_ORIGINDEX values,
only ORIGINDEX_NONE.
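A sketch of the direct lookup this enables, using the standard
CustomData API:

    /* Read the edge mark straight from the derived mesh's own layer,
     * with no detour through the original mesh via CD_ORIGINDEX. */
    static bool dm_edge_is_freestyle_marked(DerivedMesh *dm, int edge_index)
    {
        FreestyleEdge *fed = CustomData_get_layer(&dm->edgeData, CD_FREESTYLE_EDGE);
        return fed && (fed[edge_index].flag & FREESTYLE_EDGE_MARK) != 0;
    }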
Many thanks to Sergey Sharybin for patch contributions and discussions.
The issue is a regression since the threaded object update, caused by
the fact that some objects might share the same proxy object.
That is fine in itself, but object_handle_update() will call update on
the proxy object, which screws up the threaded update.
The thing is, the proxy object is marked as depending on a scene object,
and such a call makes the child object get updated as well.
This is really bad; the depsgraph should take full responsibility for
updating the proxy objects.
So for now a simple solution is used (which is safe to backport to 'a'):
skip the proxy update if the scene update is threaded and based on the
DAG traversal, as sketched below.
There are still some areas which call object update directly, and in
those cases the proxy object is still updated from
object_handle_update().
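A sketch of the workaround; the parameter name is illustrative:

    /* Threaded DAG traversal passes do_proxy_update = false: there the
     * proxy is its own node and must only be evaluated by the scheduler. */
    void object_handle_update(Scene *scene, Object *ob, const bool do_proxy_update)
    {
        /* ... regular object evaluation ... */
        if (do_proxy_update && ob->proxy) {
            object_handle_update(scene, ob->proxy, do_proxy_update);
        }
    }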
This is a failure of direct viewport displist creation caused by an
existing curve_cache pointer with empty content.
Made it so that if the curve isn't evaluated, its curve_cache is NULL.
This is just another regression to be ported to the release.
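A sketch of the invariant; the evaluated flag is illustrative:

    /* curve_cache != NULL now implies the curve has actually been
     * evaluated; an unevaluated curve keeps a NULL cache. */
    if (!evaluated && ob->curve_cache) {
        BKE_displist_free(&ob->curve_cache->disp);
        MEM_freeN(ob->curve_cache);
        ob->curve_cache = NULL;
    }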
Fluid particles use the particle system's bvhtree structure, which is a
runtime BVH tree. This was not reset properly on copying objects/psys,
which led to concurrent access in threaded depsgraph updates and memory
corruption.
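A sketch of the reset; the helper name is illustrative, the fields are
the ParticleSystem runtime ones:

    /* Runtime pointers must not survive a copy: clearing them forces the
     * copy to build its own BVH tree instead of racing on the original's. */
    static void psys_copy_reset_runtime(ParticleSystem *psys)
    {
        psys->bvhtree = NULL;
        psys->bvhtree_frame = -9999;  /* force rebuild on first lookup */
    }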
Fix unsafe memory allocation in threaded depsgraph updates and effector
list construction.
Gathering effectors during depsgraph updates calls the
psys_check_enabled function. This in turn contained a DNA alloc call for
the psys->frand RNG arrays, which is really bad because the data must be
immutable during these effector constructions.
To avoid such allocs, the frand array is now global for all particle
systems. To avoid correlation of the pseudo-random numbers, the
psys->seed value is complemented with a random offset and multiplier for
the actual float array. This is not ideal, but it works sufficiently
well (given that the random numbers were already really limited and show
repetition quite easily for particle counts > PSYS_FRAND_COUNT).
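A self-contained sketch of the scheme; the names and the RNG used here
are illustrative, not the actual Blender internals:

    #include <stdlib.h>

    #define PSYS_FRAND_COUNT 1024

    /* One shared table, filled once at startup, so evaluation never
     * has to allocate. */
    static float frand_base[PSYS_FRAND_COUNT];
    static unsigned int frand_offset[PSYS_FRAND_COUNT];
    static unsigned int frand_multiplier[PSYS_FRAND_COUNT];

    void psys_init_frand(void)
    {
        srand(5831);  /* fixed seed: same table every run */
        for (int i = 0; i < PSYS_FRAND_COUNT; i++) {
            frand_base[i] = (float)rand() / (float)RAND_MAX;
            frand_offset[i] = (unsigned int)rand();
            frand_multiplier[i] = (unsigned int)rand();
        }
    }

    /* The system's seed picks a pseudo-random offset and multiplier,
     * which decorrelate different systems' sequences without any
     * per-system allocation. */
    float psys_frand(unsigned int psys_seed, unsigned int index)
    {
        unsigned int offset = frand_offset[psys_seed % PSYS_FRAND_COUNT];
        unsigned int multiplier = frand_multiplier[psys_seed % PSYS_FRAND_COUNT];
        return frand_base[(offset + index * multiplier) % PSYS_FRAND_COUNT];
    }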
It was actually intended to work using the particle cache's stack
index, but this index might have been calculated incorrectly in a
special case:
* With the default cube scene, add a particle system to the cube
* Enable disk cache for the particle system
* Save the file and reload it
* Add another particle system and enable disk cache for it
This would lead to two point caches with the same stack index of zero.
This happened because the point cache indices list wasn't stored in the
.blend file, so once you reloaded the file Blender didn't know anything
about the number of point caches used.
What was even more confusing is that the point cache indices list was
still being read from the file, which failed because it was never
written there.
This commit solves the root of the issue, which is the ability to
produce a .blend file with two point caches using the same disk cache.
This is done by making sure the point cache indices list is stored in
the .blend file. Also made it so disabling disk cache tags the cache to
recalculate its stack index, as sketched below.
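A sketch of the tagging half (the flag and field follow the point cache
DNA; the recalculation itself happens elsewhere):

    /* When disk cache is turned off, invalidate the stack index so it
     * gets recalculated instead of possibly clashing with another cache. */
    if ((cache->flag & PTCACHE_DISK_CACHE) == 0) {
        cache->index = -1;
    }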
Old broken files won't magically start working, but fixing them
manually is rather simple: toggle the Disk Cache option.
Reviewers: lukastoenne, brecht
CC: sergof
Differential Revision: https://developer.blender.org/D286