changes in BLI_kdopbvh:
- `BLI_bvhtree_find_nearest_to_ray` now takes `is_ray_normalized` and `scale` arguments.
- `BLI_bvhtree_find_nearest_to_ray_angle` has been added (use for perspective view).
changes in BLI_bvhutils:
- `bvhtree_from_editmesh_edges_ex` was added.
changes in math_geom:
- `dist_squared_ray_to_seg_v3` was added.
other changes:
- `do_ray_start_correction` is no longer necessary to snap to verts.
- the depth test that was previously done internally is now simulated in the callbacks.
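For illustration, the simulated depth test in a callback might look roughly like this (a minimal sketch; the callback signature and struct fields are assumptions, not the exact BLI_kdopbvh API):

  /* Hypothetical user callback: keep the hit nearest along the ray,
   * emulating the depth test that used to happen internally. */
  typedef struct NearestRayData {
    float depth;  /* smallest distance along the ray so far */
    int index;    /* element index of the best hit */
  } NearestRayData;

  static void nearest_to_ray_cb(void *userdata, int index, float ray_depth)
  {
    NearestRayData *data = userdata;
    if (ray_depth < data->depth) {
      data->depth = ray_depth;
      data->index = index;
    }
  }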
So the error seems to be in cubic_tangent_factor_circle_v3(),
which was introduced with D2001.
I've tweaked the most obvious culprit here - the epsilon factor.
It used to be 10^-7, but I've relaxed it to 10^-5,
and it's looking a lot more stable now :)
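The change amounts to loosening the tolerance used to detect the degenerate tangent cases, roughly along these lines (a simplified sketch of the relevant part, not the full function body):

  /* Near-aligned or near-opposed tangents need special handling;
   * 1e-7f proved too tight in practice, 1e-5f is more stable. */
  const float eps = 1e-5f;  /* was 1e-7f */
  const float tan_dot = dot_v3v3(tan_l, tan_r);

  if (tan_dot > 1.0f - eps) {
    /* tangents (almost) aligned: use a fixed fallback factor */
  }
  else if (tan_dot < -1.0f + eps) {
    /* tangents (almost) opposed: half-circle case */
  }
  else {
    /* general case: compute the factor from the half-angle */
  }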
---------
BTW, about the derivation of the magic 0.390464 factor I briefly subbed back
as a workaround for this bug, see:
http://www.whizkidtech.redprince.net/bezier/circle/
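For reference, the core derivation on that page (background from the link, not part of this commit): take a cubic Bezier over a unit quarter circle with control points P0 = (0,1), P1 = (k,1), P2 = (1,k), P3 = (1,0), and require the curve midpoint to lie on the circle:

  B(1/2) = (P0 + 3*P1 + 3*P2 + P3) / 8 = ((3k + 4)/8, (3k + 4)/8)
  |B(1/2)| = 1  =>  ((3k + 4)/8) * sqrt(2) = 1  =>  k = 4*(sqrt(2) - 1)/3 ~= 0.5523

If I recall the page correctly, it then refines this to k ~= 0.551915 by minimizing the radial error over the whole arc.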
Regression from recent rB2c5dc66d5effd4072f438afb: if the last item of the last chunk of a mempool was valid,
it would not be returned by the mempool iterator step, which would always return NULL in that case.
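A minimal sketch of the iterator-step logic in question (simplified and hypothetical, not the actual BLI_mempool internals):

  typedef struct Chunk {
    void **items;
    struct Chunk *next;  /* NULL for the last chunk */
  } Chunk;

  typedef struct Iter {
    Chunk *chunk;
    int index, per_chunk;
  } Iter;

  /* Step to the next used element, skipping free slots. The bug was
   * leaving the last chunk before its final item had been checked. */
  static void *mempool_iter_step(Iter *it)
  {
    while (it->chunk) {
      while (it->index < it->per_chunk) {
        void *item = it->chunk->items[it->index++];
        if (item != NULL) {
          return item;
        }
      }
      it->chunk = it->chunk->next;
      it->index = 0;
    }
    return NULL;  /* iteration finished */
  }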
Together with the extended loop callback and userdata_chunk, this allows performing
cumulative tasks (like aggregation) in a lock-free way, using a local userdata_chunk to store temp data
and, once all workers have finished, merging those userdata_chunks in the finalize callback
(from the calling thread, so no need to lock there either).
Note that this changes how userdata_chunk is handled: it is now managed fully from the 'main' thread,
which means a given worker thread will always get the same userdata_chunk, without
it being re-initialized to the init value at the start of each iter chunk anymore.
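The resulting pattern looks roughly like this (a sketch in the BLI_task style; the callback signatures here are assumptions, not the exact API):

  typedef struct SumData {
    const float *values;
    double total;  /* written only by the finalize callback */
  } SumData;

  typedef struct SumChunk {
    double sum;  /* per-worker accumulator, no locking needed */
  } SumChunk;

  static void sum_func(void *userdata, void *userdata_chunk, int iter)
  {
    const SumData *data = userdata;
    SumChunk *chunk = userdata_chunk;
    chunk->sum += data->values[iter];  /* lock-free: chunk is worker-local */
  }

  static void sum_finalize(void *userdata, void *userdata_chunk)
  {
    SumData *data = userdata;
    const SumChunk *chunk = userdata_chunk;
    data->total += chunk->sum;  /* calling thread only, so no lock either */
  }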
The new code is actually much, much better than the first version; using a 'fetch_and_add' atomic op
here allows us to get rid of the loop etc.
The broken CAS issue remains on Windows, to be investigated...
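Roughly, the simplification (field and helper names are assumptions modeled on Blender's atomic_ops naming):

  /* Old: CAS loop to claim the next chunk of iterations. */
  uint32_t prev, next;
  do {
    prev = state->iter;
    next = prev + chunk_size;
  } while (atomic_cas_uint32(&state->iter, prev, next) != prev);
  /* this worker owns [prev, prev + chunk_size) */

  /* New: a single fetch_and_add, no loop needed. */
  const uint32_t start = atomic_fetch_and_add_uint32(&state->iter, chunk_size);
  /* this worker owns [start, start + chunk_size) */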
There are some serious issues under Windows, causing deadlocks somehow (not reproducible under Linux so far).
Until further investigation of why this happens, better to revert to the previous
spin-locked behavior.
This reverts commits a83bc4f597 and 98123ae916.
Reading the shared state->iter value after storing it in the 'reference' var could in theory
lead to a race condition setting state->iter above state->stop, which would be 'deadly'.
This **may** be the cause of T48422, though I have not been able to reproduce that issue so far.
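In other words, something like this (a simplified sketch of the hazard, not the literal code):

  typedef struct State { uint32_t iter, stop; } State;

  /* Racy: the bounds check re-reads the shared counter, which another
   * worker may have advanced in the meantime. */
  static bool next_iter_racy(State *state, uint32_t *iter)
  {
    *iter = atomic_fetch_and_add_uint32(&state->iter, 1);
    return (state->iter < state->stop);  /* WRONG: shared re-read */
  }

  /* Safe: test the snapshot the atomic op itself returned. */
  static bool next_iter_safe(State *state, uint32_t *iter)
  {
    const uint32_t previter = atomic_fetch_and_add_uint32(&state->iter, 1);
    *iter = previter;
    return (previter < state->stop);
  }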
Pass a distance argument so it's possible to limit the range we get all hits from (see the sketch below).
Other changes:
- Use boundbox test before calling callback, avoids redundant calls.
- Remove meaningless return value.
- Add doc string, explaining purpose of this function.
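Assuming this refers to the 'ray cast all' query, usage would look something like this (the call shape is modeled on BLI_bvhtree_ray_cast and is an assumption, not a verified signature; collect_hit_cb and hits are a hypothetical user callback and accumulator):

  /* Only hits closer than max_dist are reported to the callback. */
  const float ray_start[3] = {0.0f, 0.0f, 10.0f};
  const float ray_dir[3]   = {0.0f, 0.0f, -1.0f};
  const float max_dist = 5.0f;  /* the new distance limit */

  BLI_bvhtree_ray_cast_all(tree, ray_start, ray_dir, 0.0f, max_dist,
                           collect_hit_cb, &hits);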
This commit makes use of the new taskpool feature (instead of allocating its own tasks),
and removes the spinlock used to generate chunks (using atomic ops instead).
In the best cases (a dynamically scheduled loop with a light processing func callback) we
get a few percent of speedup; in most cases there is no measurable improvement.
It appears the mutex was guaranteeing that the number of tasks is not modified at moments
when that's not expected. Removing those mutexes resulted in some hard-to-catch
deadlocks where worker threads were waiting for work while all the tasks were already
done.
This reverts commit a1d8fe052c.
Brain melt here, the intention was to reduce the number of tasks when we don't have many chunks of data to loop over,
not to increase it!
Note that this only affected dynamic scheduling.
This commit implements a new function BLI_task_pool_push_from_thread()
whose main goal is to put less parasitic load on the CPU by avoiding
memory allocations as much as possible, making task pushing cheaper.
This function expects a thread ID, which must be 0 for the thread the
pool was created from (and from which wait_work() is called); for other
threads it must be the ID which was sent to the thread working
function (see the calling sketch below).
This reduces allocations quite a bit in the new dependency graph,
hopefully gaining some visible speedup on fewzillion-core machines
(on my own machine I can only see the benefit in the profiler, which shows
a significant reduction of time wasted in memory allocation).
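A sketch of the intended calling pattern (the parameter order is an assumption, not copied from the header):

  /* Worker function: re-using the thread_id the scheduler passed in lets
   * follow-up tasks go to this thread's local queue without allocation. */
  static void task_run_fn(TaskPool *pool, void *taskdata, int thread_id)
  {
    /* ... do some work, then possibly spawn a follow-up task ... */
    BLI_task_pool_push_from_thread(pool, task_run_fn, taskdata, false,
                                   TASK_PRIORITY_HIGH, thread_id);
  }

  /* From the thread that created the pool (and that calls wait_work()),
   * thread_id must be 0: */
  BLI_task_pool_push_from_thread(pool, task_run_fn, taskdata, false,
                                 TASK_PRIORITY_HIGH, 0);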