The `SNAP_FORCED` setting applies to the operation, not to the snap
status.
Therefore, this option should not be cleared along with the other
statuses when snapping is reset.
So move this setting to `TransInfo::modifiers`.
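Roughly, the intended behavior as a minimal sketch (`MOD_SNAP_FORCED` is
an illustrative flag name; `resetSnapping` follows the transform code's
naming, the rest is not the actual code):

```cpp
/* Sketch only: flag and helper names are illustrative. */
t->modifiers |= MOD_SNAP_FORCED; /* Operation-level flag. */

resetSnapping(t); /* Clears `t->tsnap.status` but leaves `t->modifiers`
                   * untouched, so the forced-snap state survives. */
```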
BGL deprecation calls used to be reported on each use. As bgl calls
are typically part of a handler that is triggered on every refresh,
this could lead to a flood of messages and slow systems down when
the terminal/console had to be refreshed as well.
This patch only reports the first 100 bgl deprecation calls. This
gives the developer enough feedback that changes need to be made,
while still providing good responsiveness for users who have
such an add-on enabled. Only the first frames can have a slowdown.
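A minimal sketch of the capping approach, assuming a simple global
counter; the reporting function and names are illustrative, not the
actual implementation:

```cpp
#include <cstdio>

/* Illustrative: report only the first 100 deprecated calls. */
static int g_deprecation_reports = 0;
constexpr int MAX_DEPRECATION_REPORTS = 100;

static void report_deprecated_call(const char *function_name)
{
  if (g_deprecation_reports >= MAX_DEPRECATION_REPORTS) {
    /* Stay silent after the cap so handlers that run on every redraw
     * don't flood the console and stall the UI. */
    return;
  }
  g_deprecation_reports++;
  fprintf(stderr, "bgl.%s is deprecated and will be removed\n", function_name);
}
```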
The merge down operator was sometimes copying the wrong frame, which altered the animation.
While merging the layers, it is sometimes necessary to duplicate a
keyframe: when the lowest layer does not have a keyframe but the highest
layer does. Instead of duplicating the previous keyframe of the lowest
layer, the code was actually duplicating the layer's active frame, i.e.
the frame at the current position in the timeline.
This patch fixes the issue by setting the previous keyframe of the layer as its active frame before duplication.
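A hedged sketch of the fix: `BKE_gpencil_layer_frame_get` with
`GP_GETFRAME_USE_PREV` is the existing lookup helper, while the
duplication helper is a hypothetical stand-in for the merge code:

```cpp
/* Make the previous keyframe the active frame before the duplication
 * step copies `actframe`. `duplicate_active_frame` is hypothetical. */
gpl->actframe = BKE_gpencil_layer_frame_get(gpl, frame_number, GP_GETFRAME_USE_PREV);
duplicate_active_frame(gpl);
```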
Related issue: T104371.
Differential Revision: https://developer.blender.org/D17214
There were two errors in the function used to convert face sets
to the legacy mesh format to maintain forward compatibility:
- It was moved before `CustomData_blend_write_prepare`, so it
operated on an empty span.
- It modified the mesh when it is only supposed to change the copy
of the layers written to the file.
Differential Revision: https://developer.blender.org/D17210
This implements two optimizations:
* If the duplication count is constant, the offsets array can be
filled directly in parallel.
* Otherwise, extracting the counts from the virtual array is parallelized,
but there is still a serial loop over all elements at the end to compute
the offsets.
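A condensed sketch of the two paths, assuming Blender-style
`threading::parallel_for` and a virtual array of counts; identifiers are
illustrative, not the actual code:

```cpp
/* Sketch only; simplified from the idea described above. */
if (counts.is_single()) {
  /* Constant count: each offset is just `i * count`, fully parallel. */
  const int count = counts.get_internal_single();
  threading::parallel_for(offsets.index_range(), 4096, [&](const IndexRange range) {
    for (const int i : range) {
      offsets[i] = i * count;
    }
  });
}
else {
  /* Varying counts: copy them out of the virtual array in parallel... */
  threading::parallel_for(counts.index_range(), 4096, [&](const IndexRange range) {
    for (const int i : range) {
      offsets[i] = counts[i];
    }
  });
  /* ...then accumulate serially, since each offset depends on the
   * previous one. */
  int total = 0;
  for (const int i : offsets.index_range()) {
    const int count_i = offsets[i];
    offsets[i] = total;
    total += count_i;
  }
}
```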
In the unlikely case an area could not be created, OPERATOR_CANCELLED
was returned. This has the same value as WM_HANDLER_HANDLED; however,
`break` is the logical choice in this situation, and both flags work.
If `GetFileAttributesW` returns an error, only debug-assert if the
reason is that the file was not found.
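A hedged sketch of the check as described above; the surrounding code is
illustrative (see D17204 for the real change):

```cpp
/* Sketch only: among all failure reasons, only "file not found"
 * indicates a caller bug; access-denied and similar errors can occur
 * in normal use and should not trigger the assert. */
const DWORD attr = GetFileAttributesW(path_16);
if (attr == INVALID_FILE_ATTRIBUTES) {
  BLI_assert(GetLastError() != ERROR_FILE_NOT_FOUND);
  return 0;
}
```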
See D17204 for more details.
Differential Revision: https://developer.blender.org/D17204
Reviewed by Ray Molenkamp
Allow the preview region to change the cursor according to the selected tool
Reviewed by: campbellbarton, ISS
Differential Revision: https://developer.blender.org/D16878
This makes the code more readable.
This commit also moves the `int curr_side_unclamp` member to
`EdgeSlideParams`, as it is a value common to all "Containers".
In some cases panels without headers were drawn too wide in sidebars
with region overlap.
Instead of always using the maximum width the view allows, headerless
panels now also use the width calculated in
`panel_draw_width_from_max_width_get`. This function already returns the
correct width in all cases and also takes care of insetting the panels
when their backdrop needs to be drawn, which is necessary for headerless
panels too when there is region overlap.
Reviewed By: Hans Goudey
Differential Revision: https://developer.blender.org/D17194
Own mistake in d204830107.
For some buttons the type is changed after construction, which means the button
has to be reconstructed. For example, number buttons can be turned into number
slider buttons this way. The new code was unintentionally overriding the button
type after reconstruction with the old type again.
Use the `SharedCache` concept introduced in D16204 to share lazily
calculated evaluated data between original and multiple evaluated
curves data-blocks. Combined with D14139, this should basically remove
most costs associated with copying large curves data-blocks (though
they add slightly higher constant overhead). The caches should
interact well with undo steps, limiting recalculations on undo/redo.
Options for avoiding the new overhead associated with the shared
caches are described in T104327 and can be addressed separately.
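For context, a condensed sketch of the `SharedCache` pattern from
D16204; this is simplified, not a verbatim copy of the actual
implementation:

```cpp
#include <functional>
#include <memory>
#include <mutex>

template<typename T> class SharedCache {
  struct CacheData {
    T data;
    std::mutex mutex;
    bool dirty = true;
  };
  /* Copying the owning data-block copies only this pointer, so the
   * original and its evaluated copies share one computed cache. */
  std::shared_ptr<CacheData> cache_ = std::make_shared<CacheData>();

 public:
  /* Compute the value lazily; later callers (and copies) reuse it. */
  const T &ensure(const std::function<void(T &)> &compute)
  {
    std::lock_guard<std::mutex> lock(cache_->mutex);
    if (cache_->dirty) {
      compute(cache_->data);
      cache_->dirty = false;
    }
    return cache_->data;
  }

  /* Invalidate this owner only: pointing at a fresh cache leaves any
   * sharers with their already-computed data intact. */
  void tag_dirty()
  {
    cache_ = std::make_shared<CacheData>();
  }
};
```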
Simple situations affected by this change include using any of the following
data on an evaluated curves data-block without first invalidating it:
- Evaluated offsets (size of evaluated curves)
- Evaluated positions
- Evaluated tangents
- Evaluated normals
- Evaluated lengths (spline parameter node)
- Internal Bezier and NURBS caches
In a test with 4m points and 170k curves, using curve normals in a
procedural setup that didn't change positions procedurally gave 5x
faster playback speeds. Avoiding recalculating the offsets on every
update saved about 3 ms for every sculpt update for brushes that don't
change topology.
Differential Revision: https://developer.blender.org/D17134
This adds a proper modal keymap for the node link drag operator.
It allows the user to customize the keys used to start dragging and so on.
It also gets rid of the custom status bar message.
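A hedged sketch of the registration: `WM_modalkeymap_ensure` and
`WM_modalkeymap_assign` are the existing window-manager helpers, but the
modal items here are illustrative, not the operator's actual ones:

```cpp
/* Sketch only: item names/values are illustrative. */
static const EnumPropertyItem node_link_modal_items[] = {
    {1, "BEGIN", 0, "Start Dragging", ""},
    {2, "CONFIRM", 0, "Confirm Link", ""},
    {3, "CANCEL", 0, "Cancel", ""},
    {0, nullptr, 0, nullptr, nullptr},
};

/* The modal keymap is created once and shown in the keymap editor,
 * where users can rebind the events mapped to each item. */
wmKeyMap *keymap = WM_modalkeymap_ensure(keyconf, "Node Link Modal Map", node_link_modal_items);
WM_modalkeymap_assign(keymap, "NODE_OT_link");
```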
Differential Revision: https://developer.blender.org/D17190
No user-visible changes expected.
Essentially, this makes it possible to use C++ types like `std::function`
inside `uiBut`. This has plenty of benefits; for example, it should help
significantly reduce unsafe `void *` use (since a `std::function` can hold
arbitrary data while preserving types).
----
I wanted to use a non-trivially-constructible C++ type (`std::function`) inside
`uiBut`. But this would mean we can't use `MEM_cnew()`-like allocation anymore.
Rather than writing worse code, allow non-trivial construction for `uiBut`.
Member-initializing all members is annoying since there are so many, but better
safe than sorry. As we use more C++ types (e.g. convert callbacks to use
`std::function`), this should become less of an issue since they initialize
properly on default construction.
Also use proper C++ inheritance for `uiBut` subtypes; the old way of allocating
based on size doesn't work anymore.
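To illustrate why, a minimal sketch with simplified stand-ins for
`uiBut` and a subtype (not the real definitions):

```cpp
#include <functional>

struct uiBut {
  /* A `std::function` member makes the type non-trivially constructible:
   * zeroed memory from a calloc-style allocator would leave it invalid. */
  std::function<void(uiBut &)> apply_func;
  int flag = 0;
  virtual ~uiBut() = default;
};

/* Subtype data via real inheritance, not by over-allocating `uiBut`
 * based on a size parameter. */
struct uiButNumberSlider : public uiBut {
  float step_size = 0.0f;
};

uiBut *button_new()
{
  /* Runs the constructors, unlike a `MEM_cnew()`-style allocation,
   * which only zeroes the bytes. */
  return new uiButNumberSlider();
}
```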
Differential Revision: https://developer.blender.org/D17164
Reviewed by: Hans Goudey
Currently this conversion (which happens when using modifiers in edit
mode, for example) is completely single threaded. It's harder than some
other areas to multithread because BMesh elements don't always know
their indices (and vice versa), and because the dynamic AoS format
used by BMesh makes some typical solutions not helpful.
This patch proposes to split the operation into two steps. The first
updates the indices of BMesh elements and builds tables for easy
iteration later. It also checks if some optional mesh attributes
should be added. The second uses parallel loops over all elements,
copying attribute values and building the Mesh topology.
Both steps process different domains in separate threads (though the
first has to combine faces and loops). Though this isn't proper data
parallelism, it's quite helpful because each domain doesn't affect the
others.
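A schematic sketch of the two steps; the helper names are illustrative,
not the actual functions in the conversion code:

```cpp
/* Sketch only; `threading::parallel_invoke`/`parallel_for` follow
 * Blender's task API, the helpers are hypothetical. */
void bm_to_mesh_sketch(BMesh &bm, Mesh &mesh)
{
  /* Step 1 (serial, cheap): assign indices to BMesh elements and gather
   * them into flat tables so step 2 can loop by index. */
  Vector<BMVert *> vert_table = index_and_gather_verts(bm);
  Vector<BMFace *> face_table = index_and_gather_faces(bm);

  /* Step 2: process the domains in separate tasks, with a parallel loop
   * over the elements inside each domain. */
  threading::parallel_invoke(
      [&]() {
        threading::parallel_for(vert_table.index_range(), 1024, [&](const IndexRange range) {
          for (const int i : range) {
            copy_vert_attributes_to_mesh(*vert_table[i], mesh, i);
          }
        });
      },
      [&]() {
        /* Edges, faces, and loops are handled the same way. */
        copy_face_domain_to_mesh(face_table, mesh);
      });
}
```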
**Timings**
I tested this on a Ryzen 7950x with a 1 million face grid, with no
extra attributes and then with several color attributes and vertex
groups.
| File | Before | After |
| Simple | 101.6 ms | 59.6 ms |
| More Attributes | 149.2 ms | 65.6 ms |
The optimization scales better with more attributes on the BMesh. The
speedup isn't as linear as with multithreading other operations, which
indicates added overhead. I think this is worth it though, because the
user is usually actively interacting with a mesh in edit mode.
See the differential revision for more timing information.
Differential Revision: https://developer.blender.org/D16249
Whenever a transform operation is activated by a gizmo, the gizmo
stays modal, but its drawing remains the same even if the
transform mode or constraint is changed.
So update the gizmo according to the mode or constraint set.
NOTE: Currently only the 3D Viewport gizmo is affected
Implements T103858: in the OBJ importer and exporter, and in the Collada
exporter, present axis choices as a dropdown instead of an inline button
row.
Reviewed By: Bastien Montagne
Differential Revision: https://developer.blender.org/D17186
The recursion depth was checked for equality with the maximum depth,
allowing leaves with more primitives once that depth was reached.
However, a single leaf must always use the same material (including
`set_smooth`), so if a leaf contained multiple materials it was split
anyway. This meant that in the next recursion step the depth was
larger than the cutoff value, and the code would go back to recursing
until the number of primitives was small enough, ignoring the recursion
depth for the rest of the process.
In certain edge cases this could lead to a stack overflow.
Even with the check changed from 'equal' to 'larger or equal',
this could still fail in the pathological case where every primitive
has a different material. But that can't be helped, and it wouldn't
realistically happen either.
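As an illustration of the corrected cutoff (hypothetical names, not the
actual BVH code):

```cpp
/* Once a material split pushes the depth past the cutoff, an equality
 * test can never fire again, so test with ">=" instead. */
if (depth >= max_depth && all_same_material(prims)) { /* was: depth == max_depth */
  make_leaf(prims);
  return;
}
```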
Differential Revision: https://developer.blender.org/D17188