Solves these security issues from T52924:
CVE-2017-12102
CVE-2017-12103
CVE-2017-12104
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
The RenderResult struct still has a listbase of RenderLayer, but that's ok
since this is strictly for rendering.
* Subversion bump (to 2.80.2)
* DNA low-level doversion (renames) - only for .blend files created since 2.80 started
Note: We can't use DNA_struct_elem_find or get file version in init_structDNA,
so we are manually iterating over the array of the SDNA elements instead.
Note 2: This doversion change with renames can be reverted in a few months. But
so far it's required for 2.8 files created between October 2016 and now.
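A minimal sketch of that manual iteration, assuming a simplified SDNA layout (the struct and field names below are illustrative stand-ins for the real makesdna structures):

    #include <string.h>

    /* Simplified stand-in for the SDNA data: a flat array of member-name
     * strings as parsed from the .blend file's DNA block. */
    typedef struct SketchSDNA {
        const char **names;  /* e.g. "*act_base", "sizex", ... */
        int nr_names;
    } SketchSDNA;

    /* Rename a struct member in-place; DNA_struct_elem_find() and the file
     * version are not available this early, so compare raw strings instead. */
    static void sketch_sdna_patch_name(SketchSDNA *sdna, const char *old_name, const char *new_name)
    {
        for (int i = 0; i < sdna->nr_names; i++) {
            if (strcmp(sdna->names[i], old_name) == 0) {
                sdna->names[i] = new_name;  /* new_name must outlive the SDNA */
            }
        }
    }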
Reviewers: campbellbarton, sergey
Differential Revision: https://developer.blender.org/D2927
The 2.8x branch added a bContext arg in many places;
pass an eval-context instead, since it's not simple to reason about
what nested functions do when they can access and change almost anything.
Also use const to prevent unexpected modifications.
This fixes a crash loading files with shadows,
since off-screen buffers use a NULL context for rendering.
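A minimal sketch of the signature change, with illustrative stand-in types (the real evaluation-context type lives in the depsgraph headers):

    #include <stdio.h>

    /* Small, read-only evaluation state instead of a full bContext. */
    typedef struct SketchEvalContext {
        int for_render;  /* viewport (0) vs. render (1) evaluation */
        float ctime;     /* evaluation time */
    } SketchEvalContext;

    typedef struct SketchObject {
        char name[64];
    } SketchObject;

    /* The const pointer makes it clear that nested calls can only read the
     * evaluation state; they cannot reach back into the UI and change it,
     * which is also what lets off-screen rendering pass no context at all. */
    static void sketch_object_handle_update(const SketchEvalContext *eval_ctx, SketchObject *ob)
    {
        printf("update %s at frame %.1f (render=%d)\n",
               ob->name, (double)eval_ctx->ctime, eval_ctx->for_render);
    }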
This will allow much finer control over how we copy data-blocks, from
a full copy in the Main database, to "lighter" ones (out of Main, inside an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid the inconsistencies and missing bits we had
all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in the ID, which allows it to be handled correctly when
freeing. This needs to be taken care of with caution when doing unusual
things with ID copying and/or allocation!
As a final note, while rather noisy, this commit will hopefully not
break too many existing branches; the old 'API' has been kept for the most
part, as a wrapper around the new code. Cleaning it up will happen later.
Design task: T51804
Phab Diff: D2714
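A hedged sketch of the allocation-tag idea described above; the flag and field names are illustrative, not the actual BKE_library API:

    #include <stdlib.h>

    /* How the duplicate is allocated is passed as flags to the copy call... */
    enum {
        SKETCH_ID_COPY_IN_MAIN     = 0,        /* regular copy, registered in Main */
        SKETCH_ID_COPY_NO_MAIN     = (1 << 0), /* allocated, but kept out of Main */
        SKETCH_ID_COPY_NO_ALLOCATE = (1 << 1), /* caller owns the memory */
    };

    typedef struct SketchID {
        char name[66];
        int alloc_tag;  /* which flags were used when this copy was created */
    } SketchID;

    /* ...and remembered in the ID itself, so freeing can do the right thing. */
    static void sketch_id_free(SketchID *id)
    {
        if (id->alloc_tag & SKETCH_ID_COPY_NO_ALLOCATE) {
            return;  /* caller-provided memory: only clear contents, never free */
        }
        if (!(id->alloc_tag & SKETCH_ID_COPY_NO_MAIN)) {
            /* regular datablock: would also be unlinked from Main here */
        }
        free(id);
    }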
Note that some little parts of code have been disabled because eval_ctx
was not available there. This should be resolved once DerivedMesh is
replaced.
Previously, tagging particle settings for update would iterate over all objects and
all their particle systems to see whether something needed an update or not. Now we
add ParticleSettings as an ID in the dependency graph, so tagging it for update
will nicely flush updates to all dependent particle systems.
The current downside of this is that, due to limitations of the flush routines, it will cause
some extra particle system re-evaluation when it is technically not needed, and, more
annoyingly, it will currently discard point caches more often.
However, this is a good and simple demonstration case for improving the tagging/flushing
system to accommodate such cases (similar issues happen with CoW and shading
components). So let's try to find some generic solution to the problem!
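A hedged sketch of the new tagging path: DEG_id_tag_update() is the generic depsgraph tagging entry point, while the zero flag and the surrounding function name are illustrative:

    #include "DEG_depsgraph.h"
    #include "DNA_particle_types.h"

    /* Before: iterate over every object and every particle system, comparing
     * psys->part against the changed settings. Now: tag the ParticleSettings
     * ID itself and let the dependency graph flush the update to its users. */
    static void sketch_particle_settings_changed(ParticleSettings *part)
    {
        DEG_id_tag_update(&part->id, 0);
    }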
Noisy change, but safe, and better done sooner than later if we are to
rework the copying code. Also, the previous commit shows this *is* useful to
catch some mistakes.
There were two issues here. One is that the fix done originally for this
bug only checks for colliding with the same face as the single preceding
hit. If the particle hits an edge or vertex of the collider, it in fact
hits two or more faces, so the loop ends up cycling between the first two
of them and reaches the max collision limit.
The fix is to disable the collider for the sim step once a permeability
roll succeeds, by adding it to a skip list. Skipping just one face causes
some particles to bounce at odd angles in case of partial permeability.
The second problem was that the collider bounced back a small percentage
of particles, and the cause seemed to be that the code was set to flip
the velocity if the particle was just past the collider but still within
collision distance. Inverting both values causes a half-permeable collider
to stop particles, so it seems that this if branch shouldn't bounce at all.
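A hedged sketch of the skip list: once a permeability roll succeeds, the whole collider object is ignored for the rest of the simulation step. The structure, fixed array size, and names are illustrative:

    #define SKETCH_MAX_SKIP 16

    typedef struct SketchColliderSkip {
        const void *colliders[SKETCH_MAX_SKIP];  /* collider objects to ignore this step */
        int count;
    } SketchColliderSkip;

    static int sketch_collider_skipped(const SketchColliderSkip *skip, const void *collider)
    {
        for (int i = 0; i < skip->count; i++) {
            if (skip->colliders[i] == collider) {
                return 1;
            }
        }
        return 0;
    }

    /* Called after a successful permeability roll: skip the whole collider,
     * not just the one face that was hit, to avoid odd-angle bounces. */
    static void sketch_collider_skip_add(SketchColliderSkip *skip, const void *collider)
    {
        if (skip->count < SKETCH_MAX_SKIP) {
            skip->colliders[skip->count++] = collider;
        }
    }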
Test file: {F327322}
Reviewers: lukastoenne, brecht
Reviewed By: brecht
Subscribers: brecht, #physics
Maniphest Tasks: T26658
Differential Revision: https://developer.blender.org/D2120
It turns out most BKE_foo_make_local datablock-specific functions are actually doing
exactly the same thing; only two (the object and brush ones) currently need special
additional operations. So a BKE_id_make_local_generic has been added instead
of copying the same code over and over.
Also, changed a bit how make_local works in case we are localizing a whole library.
We need to do the 'remap' step (from old linked ID to new local one) in the second loop,
otherwise we miss some dependencies. This fixes the main part of T48907.
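A hedged sketch of the shared logic the per-type functions were duplicating; the types below are simplified stand-ins, with BKE_id_make_local_generic being the real function:

    typedef struct SketchLocalID {
        struct SketchLibrary *lib;  /* NULL means the datablock is already local */
        int users_local;            /* users from local data */
        int users_linked;           /* users from other linked datablocks */
    } SketchLocalID;

    static void sketch_id_make_local_generic(SketchLocalID *id)
    {
        if (id->lib == NULL) {
            return;  /* already local, nothing to do */
        }
        if (id->users_linked == 0) {
            /* Only local users: the datablock itself can simply become local. */
            id->lib = NULL;
        }
        else if (id->users_local != 0) {
            /* Mixed users: duplicate the ID; remapping local users from the
             * linked original to the new copy happens in the second loop
             * mentioned above, once all IDs have been processed. */
        }
    }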
Replaces `G.is_rendering` with a `use_render_params` argument.
This is needed for Cycles, which attempts to restore render-preview settings from particles
after it gets its own particle data, but fails to restore them because
`G.is_rendering` was being checked in psys_cache_paths (and other places).
It also fixes another issue (crash) related to symmetric editing.
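A hedged sketch of the argument change; the function and step names are loose stand-ins for the particle path-cache code:

    /* Before: the render/viewport decision was read from the global flag,
     *     const bool for_render = (G.is_rendering != 0);
     * Now it is an explicit parameter, so Cycles can request render-quality
     * paths regardless of Blender's global state. */
    static int sketch_path_segments(int draw_step, int ren_step, const int use_render_params)
    {
        const int steps = use_render_params ? ren_step : draw_step;
        return 1 << steps;  /* number of segments per cached path */
    }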
Quite involved: we (try to!) fix the completely broken logic of parts of the particle code, which would use a poly index
as a tessface one (or vice versa). The issue most probably goes back to BMesh integration time...
This patch mostly fixes particle editing mode:
- Adding/removing particles when using generative modifiers (like subsurf) should now work.
- Adding/removing particles with a non-tessellated mesh (i.e. one having ngons) should also mostly work.
- X-axis-mirror-editing particles over ngons does not really work; not sure why currently.
- All this in both 'modes' (with or without using the modifier stack for particles).
Tech side:
- Store a deformed-only DM in particle modifier data.
- Rename the existing DM to make it clear it's a final one.
- Use the deformed-only DM's tessface2poly mapping to 'solve' poly/tessface mismatches.
- Make (part of) the mirror-editing code able to use a DM instead of the raw mesh, so that we can mirror based on the final DM
  when editing particles using the modifier stack (mandatory, since there is currently no way to find the orig tessface
  from a final DM tessface index).
Note that this patch is not really nice and clean (current particles are beyond hope on this side anyway),
it's more like an emergency bandage. The whole thing needs a complete rewrite anyway;
BMesh's polygons make it really hard to work with the current system (and looptri would not help much here).
Also, not everything possibly affected by those changes was tested, so it needs some users' testing & validation too.
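A hedged sketch of using that tessface2poly mapping to convert indices explicitly; the bare array here is illustrative (in the real code the mapping comes from the DM's origindex custom data):

    /* Particle code stores a face index that some callers treated as a polygon
     * index and others as a tessellated-face index. With the deformed-only
     * DM's tessface-to-poly table, the conversion can be made explicit. */
    static int sketch_tessface_to_poly(const int *tessface2poly, int totface, int tessface_index)
    {
        if (tessface_index < 0 || tessface_index >= totface) {
            return -1;  /* stale/corrupt index: caller must handle this */
        }
        return tessface2poly[tessface_index];
    }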
Reviewers: psy-fi
Subscribers: dfelinto, eyecandy
Maniphest Tasks: T47038
Differential Revision: https://developer.blender.org/D1685
Note that the collision modifier doesn't have any use for loop indices,
so to avoid duplicating the loop array too,
MVertTri has been added, which simply stores vertex indices (runtime only).
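The runtime-only type is essentially just the three vertex indices of a triangle (sketch mirroring the DNA declaration):

    typedef struct MVertTri {
        unsigned int tri[3];  /* indices into the mesh's vertex array */
    } MVertTri;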
This commit only adds callbacks, which will later be used by the major dependency
graph commit, keeping that upcoming commit cleaner to follow.
There should still be no functional changes so far.
Added some guards to prevent clumping to non-existing particles. Also, adjusted threaded child path evaluation so each child is evaluated once - previously, virtual parents were done twice.
Particle textures always override timing information of particles.
Previously particle times could be scripted, but now these changes are
discarded by the texture evaluation function.
The patch disables texture overriding when no textures are defined; this
way at least some old scripts can keep working.
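A hedged sketch of that guard; the slot check loosely follows ParticleSettings' MTex slots, and the function name and caller behaviour are illustrative:

    #include "DNA_texture_types.h"

    /* Only let texture evaluation overwrite particle timing when at least one
     * texture slot is actually assigned; otherwise scripted times are kept. */
    static int sketch_particle_has_textures(MTex *mtex[], int totslot)
    {
        for (int i = 0; i < totslot; i++) {
            if (mtex[i] && mtex[i]->tex) {
                return 1;
            }
        }
        return 0;
    }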
This adds another level of clumping on child hairs. When enabled, child
hairs choose a secondary clumping target using a Voronoi pattern. This
adds visual detail on a smaller scale, which is useful particularly when
the number of parents is relatively small.
Natural fibres behave in a similar way when they become sticky and
intertwined. Hairs close to each other form a first twisted strand, then
combine into larger strands. Similar features can be found in ropes:
http://en.wikipedia.org/wiki/Hair_twists
http://en.wikipedia.org/wiki/Rope
Conflicts:
source/blender/blenloader/intern/versioning_270.c
This is an alternative method to the current fixed function with a
clump factor and "shape" parameter. This function is quite limited and
does not give the desired result in many cases (e.g. long, parallel
rasta strands are problematic). So rather than trying to add more
parameters, there is now a fully user-defined optional curve for setting
the tapering shape.
This contains a few pieces of code for a future "modifier" system that
would allow more flexible combination of effects. Eventually a node
system is the way to go, but the current code makes that impossible.
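A hedged sketch of how the optional curve is consulted; it assumes Blender's curve-mapping helper from BKE_colortools.h, with the surrounding function and the fallback behaviour being illustrative:

    #include "BKE_colortools.h"

    /* 'time' is the normalized position along the hair (0 = root, 1 = tip). */
    static float sketch_clump_shape(struct CurveMapping *clumpcurve, float time, float clumpfac)
    {
        if (clumpcurve != NULL) {
            /* Fully user-defined tapering shape. */
            return clumpfac * curvemapping_evaluateF(clumpcurve, 0, time);
        }
        /* Old behaviour: the fixed clump factor / "shape" falloff would go here. */
        return clumpfac;
    }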
to prevent double-freeing/invalid mem access.
This can happen with the "virtual parents" feature, which generates both
parent and child paths. Each task free function also freed the shared
context, leading to double freeing.
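A hedged sketch of the ownership fix, with illustrative names: the per-task free callback releases only per-task data, and the shared context is freed exactly once after all tasks complete:

    #include <stdlib.h>

    typedef struct SketchTaskCtx {
        void *shared;     /* shared by every task in the pool */
        void *task_data;  /* owned by this task alone */
    } SketchTaskCtx;

    static void sketch_task_free(SketchTaskCtx *ctx)
    {
        free(ctx->task_data);
        /* Do NOT free ctx->shared here: with virtual parents enabled, both the
         * parent-path and child-path tasks point at the same shared context,
         * and freeing it per task is exactly the double free being fixed. */
        free(ctx);
    }

    static void sketch_all_tasks_done(void *shared)
    {
        free(shared);  /* single owner, freed once after the whole pool finishes */
    }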
easier.
This code is badly broken and needs to be replaced, but at least having
a workable code structure might help with quick hacks to fix the worst
cases.
distribution and path caching for child particles.
This gives a significant improvement in viewport playback performance
with higher child particle counts. Particles previously used their own
threads and had a rather high limit for threading. Also, threading
apparently was disabled because only 1 thread was being used ...
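A hedged sketch of the threading change using Blender's generic task pool (BLI_task); exact signatures varied a bit across versions, and the task body and range handling here are illustrative:

    #include <stdbool.h>
    #include "BLI_task.h"

    static void sketch_child_path_task(TaskPool *pool, void *taskdata, int threadid)
    {
        (void)pool; (void)threadid;
        /* Evaluate the range of child particles described by taskdata. */
        (void)taskdata;
    }

    static void sketch_cache_child_paths_threaded(void *psys_ctx, void **task_ranges, int num_tasks)
    {
        TaskScheduler *scheduler = BLI_task_scheduler_get();
        TaskPool *pool = BLI_task_pool_create(scheduler, psys_ctx);
        for (int i = 0; i < num_tasks; i++) {
            BLI_task_pool_push(pool, sketch_child_path_task, task_ranges[i], false, TASK_PRIORITY_LOW);
        }
        BLI_task_pool_work_and_wait(pool);
        BLI_task_pool_free(pool);
    }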