This fixes a bug where occluders at the edge of the screen occlude more than they should.
Grouped the texture fetches together and clamped the ray at the border of the screen.
Also added a few utility functions.
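A minimal sketch of that clamping step, with assumed names and a [0, 1] screen-space convention (not the actual EEVEE code):

```c
#include <math.h>

/* Hypothetical sketch: shorten a screen-space ray so its endpoint stays
 * inside the [0, 1] viewport, so samples never read texels outside the
 * screen (which is what caused the over-occlusion at the edges). */
static float ray_clamp_to_screen(const float org[2], const float dir[2],
                                 float max_t)
{
  for (int i = 0; i < 2; i++) {
    if (dir[i] > 0.0f) {
      max_t = fminf(max_t, (1.0f - org[i]) / dir[i]);
    }
    else if (dir[i] < 0.0f) {
      max_t = fminf(max_t, -org[i] / dir[i]);
    }
  }
  return max_t;
}
```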
This should fix the error message about a stall occurring because of a busy mipmap.
This happened during min/max buffer generation and introduced a random 0.2ms latency.
I'm not sure what exactly was happening, though.
Creating ngons with multiple axis-aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands:
- Islands needed to be sorted on the Y axis when X was aligned
  (see the comparator sketch after this list).
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
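A minimal illustration of that tie-breaking sort, with hypothetical names (not Blender's actual code):

```c
#include <stdlib.h>

/* Illustrative island type: only the bounding-box minimum matters here. */
struct Island {
  float min[2]; /* x, y */
};

/* qsort callback: sort islands by X, but when X is aligned (equal),
 * fall back to Y so the order stays deterministic. */
static int island_sort_cb(const void *a_p, const void *b_p)
{
  const struct Island *a = a_p;
  const struct Island *b = b_p;
  if (a->min[0] < b->min[0]) return -1;
  if (a->min[0] > b->min[0]) return  1;
  /* X is aligned: tie-break on the Y axis. */
  if (a->min[1] < b->min[1]) return -1;
  if (a->min[1] > b->min[1]) return  1;
  return 0;
}

/* usage: qsort(islands, islands_len, sizeof(struct Island), island_sort_cb); */
```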
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe, not safe enough for 2.79 at
least, unless we get much more testing on it. It also depends on the three
previous commits.
Note that using a global lock here is far from ideal; we should rather
have a lock per DM, but that will do for now, since the whole DM thing is
doomed to oblivion in 2.8 anyway.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... probably because cached
looptris are never copied across DMs?
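For reference, a minimal sketch of that global-lock pattern, using plain pthreads and hypothetical names:

```c
#include <pthread.h>
#include <stddef.h>

struct DerivedMesh; /* opaque here */

/* Hypothetical accessors standing in for the real DM API. */
extern void *dm_cached_looptris(struct DerivedMesh *dm);
extern void dm_recalc_looptris(struct DerivedMesh *dm);

/* One global mutex instead of a per-DM one; coarse, but enough to make
 * lazy computation of the cached array thread-safe. */
static pthread_mutex_t looptris_mutex = PTHREAD_MUTEX_INITIALIZER;

void *dm_get_looptris_threadsafe(struct DerivedMesh *dm)
{
  void *looptris = dm_cached_looptris(dm);
  if (looptris == NULL) {
    pthread_mutex_lock(&looptris_mutex);
    /* Re-check inside the lock: another thread may have filled the
     * cache while we waited. (A real implementation would also need
     * properly ordered access to the cache pointer.) */
    looptris = dm_cached_looptris(dm);
    if (looptris == NULL) {
      dm_recalc_looptris(dm);
      looptris = dm_cached_looptris(dm);
    }
    pthread_mutex_unlock(&looptris_mutex);
  }
  return looptris;
}
```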
This was... horribly wrong: CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether the array
pointer is NULL.
Also, remove `DM_get_looptri_array`; that one is currently useless, since
`dm->getLoopTriArray` will always return the cached array (computing it if
needed).
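A minimal sketch of the corrected pattern, with hypothetical names:

```c
#include <stdlib.h>

struct DerivedMesh; /* opaque */
struct MVert;

/* Hypothetical getter: returns the cached array when one exists, and
 * only allocates a temporary copy when it must, reporting the
 * allocation via r_alloc. */
extern const struct MVert *dm_get_vert_array(struct DerivedMesh *dm,
                                             struct MVert **r_alloc);

void example_use_vert_array(struct DerivedMesh *dm)
{
  struct MVert *vert_alloc = NULL;
  const struct MVert *verts = dm_get_vert_array(dm, &vert_alloc);

  /* ... use verts ... */

  /* Free only what was actually allocated; a NULL pointer means the
   * data came straight from CDDM's internal cache. */
  if (vert_alloc != NULL) {
    free(vert_alloc);
  }
}
```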
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
To be ported to 2.79.
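Purely illustrative, with made-up names and values, this is the kind of collision described above and its fix:

```c
/* Before: both flags accidentally shared the same bit, so testing one
 * also matched the other. */
enum {
  EXAMPLE_TWEAKMODE      = (1 << 3),
  /* was also (1 << 3); now moved to its own, unused bit: */
  EXAMPLE_SELECTED_CHANS = (1 << 4),
};

/* With distinct values, this test no longer fires for selected channels: */
/* if (flag & EXAMPLE_TWEAKMODE) { ... } */
```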
Old bevel 'Clamp overlap' code was very naive: it just limited the amount
to half the edge length. This uses more accurate (but not perfect)
calculations for the maximum amount before (many) geometry collisions
happen. This is not a backward-compatible change: meshes that
have modifiers with 'Clamp overlap' enabled will likely have larger allowed
bevel widths now. But that can be fixed by turning off clamp overlap
and setting the amount to the desired value.
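For contrast, the old naive clamp amounted to roughly this (hypothetical helper):

```c
#include <math.h>

/* The previous behaviour, simplified: cap the bevel amount at half the
 * adjacent edge length, regardless of the surrounding geometry. */
static float bevel_clamp_naive(float amount, float edge_len)
{
  return fminf(amount, 0.5f * edge_len);
}
```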
Own previous fix (rBd5d626df236b) was not valid; curves are actually
supported by SoftBodies. It was rather a mere UI bug, which was not
including Surface and Font object types among those valid for the softbody UI.
Thanks to @brecht for the heads up!
Also, this fix is safe for 2.79, btw.
* The 255 maximum seems excessive for F-Curve handle vertices; now reduced to 100
* Vertex Size is no longer restricted to the old 10px maximum size limit
  (used because Windows limited the maximum vertex size that drivers needed
  to support)
It's more important that there is some form of feedback that the strips
are muted (i.e. dotted borders) than that those dotted borders
may have slightly rounded corners. So, just use a regular sharp-cornered
rect when drawing muted strips.
There's no reason to manually iterate over items in a DLRBT_Tree,
as the structure is designed so that it can be safely cast down
to a ListBase and ListBase-like nodes.
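A simplified, illustrative layout (not the exact DNA definitions) showing why the cast is safe:

```c
/* The tree and its nodes begin with the same pointers as ListBase/Link,
 * so the red-black tree can be walked as a plain doubly linked list. */
typedef struct DLRBT_Node {
  struct DLRBT_Node *next, *prev;   /* matches Link */
  struct DLRBT_Node *left, *right;  /* tree-only data follows */
  /* ... */
} DLRBT_Node;

typedef struct DLRBT_Tree {
  DLRBT_Node *first, *last;         /* matches ListBase */
  DLRBT_Node *root;
} DLRBT_Tree;

void walk_tree_as_list(DLRBT_Tree *tree)
{
  for (DLRBT_Node *node = tree->first; node; node = node->next) {
    /* process node as if it were a ListBase item */
  }
}
```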
Adding structs would check for duplicates,
causing approximately 75k string comparisons on startup.
While the overall speedup is minimal,
Python access to `bpy.types` will now use a hash lookup
instead of a full linked-list search.
See D2774
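A rough before/after sketch with hypothetical, simplified names:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified type: the real registry lives in RNA. */
struct StructRNA {
  struct StructRNA *next;
  const char *identifier;
};

/* Before: a linear walk over the registered structs; with thousands of
 * types, the duplicate checks at startup added up to ~75k strcmp calls. */
struct StructRNA *srna_find_linear(struct StructRNA *first, const char *name)
{
  for (struct StructRNA *srna = first; srna; srna = srna->next) {
    if (strcmp(srna->identifier, name) == 0) {
      return srna;
    }
  }
  return NULL;
}

/* After: the same lookup becomes one probe into a string-keyed hash
 * table built once at registration time (Blender has its own GHash;
 * any string hash map fits the sketch). */
extern struct StructRNA *srna_hash_lookup(const char *name); /* hypothetical */
```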
Changes for 2.8x to use EvaluationContext caused some confusion:
- Would use the scene layer passed from the snap context.
- Would generate duplis from the Main eval context.
- Would take a context argument and use it to create another eval context.
Adding context args all over and filling in a new eval-context
for every ray-cast test isn't ideal either.
Remove the context argument since the purpose of
SnapObjectContext is to avoid this kind of confusion.
Store the EvaluationContext once and re-use it.
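A simplified sketch of the resulting shape; members and signatures are illustrative:

```c
/* Stand-in for the real evaluation context. */
struct EvaluationContext {
  int mode; /* simplified */
};

struct SnapObjectContext {
  /* Captured once when the snap context is created. */
  struct EvaluationContext eval_ctx;
  /* ... snap settings, BVH caches, etc. ... */
};

/* Before: bool snap_ray_cast(const bContext *C, SnapObjectContext *sctx, ...);
 * After:  bool snap_ray_cast(SnapObjectContext *sctx, ...);
 * Every ray-cast test re-uses the stored eval_ctx instead of taking a
 * context argument and building a new one. */
```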
This effectively reduces firefly bleeding all over the place.
We still need the clamp in the resolve pass because level 0 has not been clamped.
NOTE: I did not clamp each sample individually, for performance, BUT I did not profile it to know how much that would cost.
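A sketch of the kind of clamp involved, assuming a luminance-based cap (the exact formula used here is an assumption):

```c
#include <math.h>

/* Illustrative firefly clamp: cap the luminance of a resolved color so
 * a single very bright texel cannot bleed everywhere. */
static void clamp_firefly(float rgb[3], float max_lum)
{
  /* Rec. 709 luminance weights. */
  const float lum = 0.2126f * rgb[0] + 0.7152f * rgb[1] + 0.0722f * rgb[2];
  if (lum > max_lum) {
    const float fac = max_lum / lum;
    rgb[0] *= fac;
    rgb[1] *= fac;
    rgb[2] *= fac;
  }
}
```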
This was surely caused by float overflow. Limit the roughness in this case to limit the BRDF intensity.
Also compute VH faster.
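Illustrative only; the minimum value is an assumption, not the one actually used:

```c
#include <math.h>

/* Floor the roughness before evaluating the BRDF so the specular term
 * cannot overflow to infinity as roughness approaches zero. */
static float brdf_roughness_safe(float roughness)
{
  const float min_roughness = 1e-3f; /* assumed value */
  return fmaxf(roughness, min_roughness);
}
```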
Add a sanitizer to the SSR pass for investigating where NaNs come from. Play with the roughness until you see where the black pixel is / comes from.
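A minimal C sketch of such a sanitizer (the real one is shader code; the marker color is an assumption):

```c
#include <math.h>

/* Debug helper: replace any non-finite sample with a loud magenta
 * marker so the offending pixel stands out while varying roughness. */
static void ssr_sanitize(float color[4])
{
  for (int i = 0; i < 4; i++) {
    if (!isfinite(color[i])) {
      color[0] = 1.0f; /* R */
      color[1] = 0.0f; /* G */
      color[2] = 1.0f; /* B */
      color[3] = 1.0f; /* A */
      return;
    }
  }
}
```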
We're adding some bias by default, which I now think is the right thing
to do from a usability point of view, since you really need to use those
options anyway to get clean renders in a practical amount of time.
Differential Revision: https://developer.blender.org/D2769