The conditionals in particle code are... some sort of madness... I'm not
even sure what the correct behavior should be from looking at it.
In this case the path cache generation was being skipped in edit mode.
Now all the fine-tuning happens via a parallel range settings structure,
which avoids passing long lists of arguments, allows the fine-tuning to be
extended further, and avoids having lots of functions which basically do the same thing.
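A minimal standalone sketch of that settings-struct pattern (all names here are illustrative stand-ins, not necessarily the actual BLI_task API):
```
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical settings structure bundling the fine-tuning knobs, so callers
 * don't need ever-growing argument lists or near-duplicate entry points. */
typedef struct ParallelRangeSettings {
  bool use_threading;        /* allow falling back to serial for tiny ranges */
  int min_iter_per_thread;   /* scheduling grain size */
  void *userdata_chunk;      /* optional per-thread storage */
  size_t userdata_chunk_size;
} ParallelRangeSettings;

static void parallel_range_settings_defaults(ParallelRangeSettings *settings)
{
  memset(settings, 0, sizeof(*settings));
  settings->use_threading = true;
}

/* Stand-in for the real parallel-for entry point: runs serially here, but
 * shows the call shape, with the settings struct as the last argument. */
static void parallel_range(int start, int stop, void *userdata,
                           void (*func)(void *userdata, int iter),
                           const ParallelRangeSettings *settings)
{
  (void)settings;
  for (int i = start; i < stop; i++) {
    func(userdata, i);
  }
}

static void accumulate(void *userdata, int iter)
{
  *(double *)userdata += (double)iter;
}

int main(void)
{
  double sum = 0.0;
  ParallelRangeSettings settings;
  parallel_range_settings_defaults(&settings);
  settings.use_threading = false; /* tune only what matters for this call */
  parallel_range(0, 1000, &sum, accumulate, &settings);
  return (sum > 0.0) ? 0 : 1;
}
```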
The RenderResult struct still has a listbase of RenderLayer, but that's ok
since this is strictly for rendering.
* Subversion bump (to 2.80.2)
* DNA low level doversion (renames) - only for .blend created since 2.80 started
Note: We can't use DNA_struct_elem_find or get file version in init_structDNA,
so we are manually iterating over the array of the SDNA elements instead.
Note 2: This doversion change with renames can be reverted in a few months. But
so far it's required for 2.8 files created between October 2016 and now.
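A rough sketch of what such a manual rename pass over the SDNA name table could look like (the structure and names here are simplified stand-ins, not the actual SDNA layout):
```
#include <string.h>

/* Simplified stand-in for the SDNA data: just the table of member names as
 * read from the .blend file's DNA block. */
typedef struct SDNASketch {
  int names_len;
  const char **names;
} SDNASketch;

/* Walk the whole name table and patch every occurrence of the old member
 * name; the real code would also have to deal with allocation/ownership. */
static void dna_sketch_rename_member(SDNASketch *sdna, const char *old_name,
                                     const char *new_name)
{
  for (int i = 0; i < sdna->names_len; i++) {
    if (strcmp(sdna->names[i], old_name) == 0) {
      sdna->names[i] = new_name;
    }
  }
}
```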
Reviewers: campbellbarton, sergey
Differential Revision: https://developer.blender.org/D2927
The 2.8x branch added a bContext arg in many places;
pass an eval-context instead, since it's not simple to reason about
what nested functions do when they can access and change almost anything.
Also use const to prevent unexpected modifications.
This fixes a crash when loading files with shadows,
since off-screen buffers use a NULL context for rendering.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
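Schematically, the change amounts to skipping flagged particles while building the mesh, along these lines (the flag and struct names are placeholders, not the actual particle code):
```
/* Placeholder flag marking particles that were culled (e.g. by the density
 * texture) and never fully constructed. */
#define SKETCH_PARS_UNEXIST (1 << 0)

typedef struct ParticleSketch {
  int flag;
  float co[3];
} ParticleSketch;

/* Count only the particles that should contribute to the generated mesh. */
static int count_existing_particles(const ParticleSketch *particles, int totpart)
{
  int valid = 0;
  for (int p = 0; p < totpart; p++) {
    if (particles[p].flag & SKETCH_PARS_UNEXIST) {
      continue; /* would otherwise feed invalid data to cloth */
    }
    valid++;
  }
  return valid;
}
```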
Note that some small parts of the code have been disabled because eval_ctx
was not available there. This should be resolved once DerivedMesh is
replaced.
Previously, tagging particle settings for update would iterate over all objects and
all their particle systems to see whether something needs an update or not. Now we
add ParticleSettings as an ID to the dependency graph, so tagging it for update
will nicely flush updates to all dependent particle systems.
The current downside of this is that, due to limitations of the flush routines, it will cause
some extra particle system re-evaluation when it is technically not needed, and, more
annoyingly, it will currently discard point caches more often.
However, this is a good and simple demonstration case for improving the tagging/flushing
system to accommodate such cases (similar issues happen with CoW and shading
components). So let's try to find some generic solution to the problem!
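As a rough illustration of the tag-and-flush idea (all names are invented for this sketch, not the actual depsgraph API):
```
#include <stdbool.h>

/* Invented node type: each node knows which nodes depend on it. */
typedef struct DepsNodeSketch {
  bool need_update;
  int totchildren;
  struct DepsNodeSketch **children;
} DepsNodeSketch;

/* Tagging a datablock node (e.g. a ParticleSettings ID) marks every node
 * that depends on it, such as all particle systems using those settings. */
static void flush_update(DepsNodeSketch *node)
{
  node->need_update = true;
  for (int i = 0; i < node->totchildren; i++) {
    flush_update(node->children[i]);
  }
}
```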
Design Documents
----------------
* https://wiki.blender.org/index.php/Dev:2.8/Source/Layers
* https://wiki.blender.org/index.php/Dev:2.8/Source/DataDesignRevised
User Commit Log
---------------
* New Layer and Collection system to replace render layers and viewport layers.
* A layer is a set of collections of objects (and their drawing options) required for specific tasks.
* A collection is a set of objects, equivalent of the old layers in Blender. A collection can be shared across multiple layers.
* All Scenes have a master collection that all other collections are children of.
* New collection "context" tab (in Properties Editor)
* New temporary viewport "collections" panel to control per-collection
visibility
Missing User Features
---------------------
* Collection "Filter"
Option to add objects based on their names
* Collection Manager operators
The existing buttons are placeholders
* Collection Manager drawing
The editor main region is empty
* Collection Override
* Per-Collection engine settings
This will come as a separate commit, as part of the clay-engine branch
Dev Commit Log
--------------
* New DNA file (DNA_layer_types.h) with the new structs
We are replacing Base by a new extended Base while keeping it backward
compatible with some legacy settings (i.e., lay, flag_legacy).
Renamed all Base to BaseLegacy to make clear which areas of code
still need to be converted.
Note: manual changes were required in deg_builder_nodes.h, rna_object.c, KX_Light.cpp
* Unit tests for the main synchronization requirements
- (read, write, add/copy/remove objects, copy scene, collection
link/unlinking, context)
* New Editor: Collection Manager
Based on patch by Julian Eisel
This is extracted from the layer-manager branch. With the following changes:
- Renamed references of layer manager to collections manager
- It doesn't include the editors/space_collections/ draw and util files
- The drawing code itself will be implemented separately by Julian
* Base / Object:
A little note about them. Original Blender code would try to keep them
in sync through the code, juggling flags back and forth. This will now
be handled by Depsgraph, keeping Object and Bases more separated
throughout the non-rendering code.
Scene.base is being cleared in doversion, and the old viewport drawing
code was only minimally converted to use the new bases until the new viewport
code gets merged and replaces the old one.
Python API Changes
------------------
```
- scene.layers
+ # no longer exists
- scene.objects
+ scene.scene_layers.active.objects
- scene.objects.active
+ scene.render_layers.active.objects.active
- bpy.context.scene.objects.link()
+ bpy.context.scene_collection.objects.link()
- bpy_extras.object_utils.object_data_add(context, obdata, operator=None, use_active_layer=True, name=None)
+ bpy_extras.object_utils.object_data_add(context, obdata, operator=None, name=None)
- bpy.context.object.select
+ bpy.context.object.select = True
+ bpy.context.object.select = False
+ bpy.context.object.select_get()
+ bpy.context.object.select_set(action='SELECT')
+ bpy.context.object.select_set(action='DESELECT')
- AddObjectHelper.layers
+ # no longer exists
```
Better to have a clear way to tell whether a flag is a parameter for
BKE_library_foreach_ID_link(), a parameter for its callback function, or a
return value from that callback function.
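For illustration, the kind of separation meant here, with three distinct flag namespaces (names are made up for the sketch, not the actual BKE_library API):
```
/* Flags controlling the behavior of the foreach function itself. */
enum {
  SKETCH_FOREACH_READONLY = (1 << 0),
  SKETCH_FOREACH_RECURSE = (1 << 1),
};

/* Flags describing the ID pointer handed to the callback. */
enum {
  SKETCH_CB_NEVER_NULL = (1 << 0),
  SKETCH_CB_INDIRECT_USAGE = (1 << 1),
};

/* Values the callback may return to control iteration. */
enum {
  SKETCH_RET_NOP = 0,
  SKETCH_RET_STOP_ITER = (1 << 0),
};
```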
Those 'never null' ID pointers are really a PITA to handle... luckily we don't have many of those around!
Found by Sybren, thanks.
Should be backported to 2.78.
Point cache read code contains checks designed to prevent it reading
stale data when the relevant simulation code should instead compute
the next frame from the previous one. However in some situations like
motion blur subframes the simulation can't possibly do it and just
exits. This causes completely incorrect motion blur at or after the
last cached frame.
To fix, add a parameter that tells the cache code whether it should
apply the checks and exit, or read what it can even if stale (true
means exactly the same as the old behavior).
Doing this in the cache code, rather than clamping the frame number in
the caller, better handles the case of an incomplete cache that stops
before the official last frame.
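A minimal sketch of the added parameter's effect (names and structure are illustrative only):
```
#include <stdbool.h>

typedef struct PointCacheSketch {
  int last_cached_frame;
} PointCacheSketch;

/* When 'exact' is true, keep the old checks: refuse stale data so the caller
 * simulates the step itself. When false (e.g. motion blur subframes, where
 * simulating is impossible), read whatever the cache has, even if stale. */
static bool cache_read_sketch(const PointCacheSketch *cache, float cfra, bool exact)
{
  if (exact && cfra > (float)cache->last_cached_frame) {
    return false; /* old behavior: bail out and let the simulation run */
  }
  return true; /* clamp to what exists and read it */
}
```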
Reviewed By: mont29, lukastoenne
Maniphest Tasks: T49004
Differential Revision: https://developer.blender.org/D2144
There were two issues here. One is that the fix done originally for this
bug only checks for collision with the same face as the single preceding
hit. If the particle hits an edge or vertex of the collider, it in fact
hits two or more faces, so the loop ends up cycling between the first two
of them and reaches the max collision limit.
The fix is to disable the collider for the sim step once a permeability
roll succeeds, by adding it to a skip list. Skipping just one face causes
some particles to bounce at odd angles in case of partial permeability.
The second problem was that the collider bounced back a small percentage
of particles, and the cause seemed to be that the code was set to flip
the velocity if the particle was just past the collider but still within
collision distance. Inverting both values causes a half permeable collider
to stop particles, so it seems that this if branch shouldn't bounce at all.
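A schematic sketch of the skip-list part of the fix (types and limits are invented for the example):
```
#include <stdbool.h>

#define SKETCH_MAX_SKIP 16

/* Colliders a particle has already rolled through this simulation step. */
typedef struct ColliderSkipList {
  const void *colliders[SKETCH_MAX_SKIP];
  int count;
} ColliderSkipList;

static bool skip_list_contains(const ColliderSkipList *list, const void *collider)
{
  for (int i = 0; i < list->count; i++) {
    if (list->colliders[i] == collider) {
      return true;
    }
  }
  return false;
}

/* Called once a permeability roll succeeds: ignoring the whole collider for
 * the rest of the step avoids cycling between adjacent faces on edge hits. */
static void skip_list_add(ColliderSkipList *list, const void *collider)
{
  if (list->count < SKETCH_MAX_SKIP && !skip_list_contains(list, collider)) {
    list->colliders[list->count++] = collider;
  }
}
```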
Test file: {F327322}
Reviewers: lukastoenne, brecht
Reviewed By: brecht
Subscribers: brecht, #physics
Maniphest Tasks: T26658
Differential Revision: https://developer.blender.org/D2120
The cause seems to be that despite dt_frac being computed as
1/(subframes+1) with integer subframes value, it doesn't always
add up to exactly 1.0 due to precision limitations. If the sum
is similar to 1.00000???, the last subframe is skipped, and all
particles that were supposed to be emitted in that interval are
emitted next frame, with the code working incorrectly due to
skewed time range.
To fix, extract the code that adjusts the last subframe length out of the
dynamic timestep feature into its own function, and use it even when
dynamic timestep is disabled.
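A small standalone program demonstrating the accumulation problem (the loop shape is simplified; only the float arithmetic is the point):
```
#include <stdio.h>

/* Repeatedly adding dt_frac = 1/(subframes+1) in single precision does not
 * always land exactly on 1.0; when the running sum overshoots 1.0 slightly,
 * a loop expecting the last step to satisfy "t <= 1.0f" skips that subframe. */
int main(void)
{
  for (int subframes = 1; subframes <= 32; subframes++) {
    const float dt_frac = 1.0f / (float)(subframes + 1);
    int steps = 0;
    for (float t = dt_frac; t <= 1.0f; t += dt_frac) {
      steps++;
    }
    if (steps != subframes + 1) {
      printf("subframes=%2d: expected %2d steps, ran %2d\n",
             subframes, subframes + 1, steps);
    }
  }
  return 0;
}
```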
Replaces `G.is_rendering` with a `use_render_params` argument.
This is needed for Cycles, which attempts to restore render-preview settings from particles
after it gets its own particle data, but fails to do so because
`G.is_rendering` was being checked in psys_cache_paths (and other places).
Point-cached particles (those using simulations) would not update at all outside of the
first frame, due to the PSYS_RECALC_RESET flag being ignored in `system_step()`...
For some mysterious reason, the update is still not fully functional outside of the start frame
(e.g. changing face distribution between random and jittered), but at least when choosing
'Vertices' you get particles from verts and not faces!
Problem was, during initialization of boids particles in `dynamics_step()`,
the psys of target objects was not obtained with the generic `psys_get_target_system()`
as is done later in the code, which could lead to some uninitialized `psys->tree` usage...
Think it's safe enough for 2.77, though not a regression.
Static schedule was responsible here...
Also, made a minor optimization in case adaptive (auto) subframes are enabled,
which gives a few percent of speedup here.
The threaded code is twice as fast now (from an average of 20ms/frame down to 10ms/frame
while baking 10000 particles here, e.g.)!
Think this is mostly due to the usage of the 'dynamic' scheduler in the OMP code though;
from my experience so far this tends to have dramatic effects on performance,
the static scheduler usually being much, much more efficient.
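For context, a standalone way to compare the two OMP scheduling modes on the same loop (not the particle code itself); which schedule wins depends on how uniform the per-iteration cost is and on the scheduling overhead:
```
#include <omp.h>
#include <stdio.h>

/* Cheap, roughly uniform fake workload per iteration. */
static double fake_work(int i)
{
  double x = 0.0;
  for (int k = 0; k < 200; k++) {
    x += (double)((i + k) % 7) * 1e-9;
  }
  return x;
}

int main(void)
{
  const int n = 1000000;
  double sink = 0.0;

  double t0 = omp_get_wtime();
#pragma omp parallel for schedule(static) reduction(+:sink)
  for (int i = 0; i < n; i++) {
    sink += fake_work(i);
  }
  double t1 = omp_get_wtime();

#pragma omp parallel for schedule(dynamic) reduction(+:sink)
  for (int i = 0; i < n; i++) {
    sink += fake_work(i);
  }
  double t2 = omp_get_wtime();

  printf("static: %.3fs  dynamic: %.3fs  (checksum %g)\n",
         t1 - t0, t2 - t1, sink);
  return 0;
}
```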
It also fixes another issue (crash) related to symmetric editing.
Quite involved: we (try to!) fix the completely broken logic of parts of the particle code, which would use a poly index
as a tessface one (or vice versa). The issue most probably goes back to BMesh integration times...
This patch mostly fixes particle editing mode:
- Adding/removing particles when using generative modifiers (like subsurf) should now work.
- Adding/removing particles with a non-tessellated mesh (i.e. one having ngons) should also mostly work.
- X-axis-mirror-editing particles over ngons does not really work, not sure why currently.
- All this in both 'modes' (with or without using modifier stack for particles).
Tech side:
- Store a deformed-only DM in particle modifier data.
- Rename existing DM to make it clear it's a final one.
- Use deformed-only DM's tessface2poly mapping to 'solve' poly/tessface mismatches (see the sketch after this list).
- Make (part of) mirror-editing code able to use a DM instead of the raw mesh, so that we can mirror based on the final DM
when editing particles using the modifier stack (mandatory, since there is currently no way to find the orig tessface
from a final DM tessface index).
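Schematically, the mapping lookup involved looks like this (names are hypothetical, not the actual DerivedMesh API):
```
/* Hypothetical per-tessface index of the original polygon it came from. */
typedef struct TessMapSketch {
  int totface;                 /* number of tessellated faces */
  const int *tessface_to_poly; /* one poly index per tessface */
} TessMapSketch;

static int poly_index_from_tessface(const TessMapSketch *map, int tessface_index)
{
  if (tessface_index < 0 || tessface_index >= map->totface) {
    return -1; /* invalid lookup rather than silently mixing index spaces */
  }
  return map->tessface_to_poly[tessface_index];
}
```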
Note that this patch is not really nice and clean (current particle code is beyond hope on this side anyway),
it's more like an emergency bandage. The whole crap needs a complete rewrite anyway;
BMesh's polygons make it really hard to work with the current system (and looptri would not help much here).
Also, did not test everything possibly affected by those changes, so it needs some users' testing & validation too.
Reviewers: psy-fi
Subscribers: dfelinto, eyecandy
Maniphest Tasks: T47038
Differential Revision: https://developer.blender.org/D1685