This was because `stroke_id` was not using `vertex_start`.
But since `vertex_start` is no longer 1-based as it used to be, we need to
add 1 to it to avoid a fragment depth of `0.0`, which would be equal to the
background and not render.
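A minimal sketch of the idea, assuming depth is derived from the stroke's
starting vertex index (not the actual grease pencil shader code; names are
illustrative):

```
float stroke_fragment_depth(int vertex_start, int total_vertices)
{
  /* +1 because `vertex_start` is 0-based now; without the offset the first
   * stroke would get a fragment depth of 0.0, identical to the background,
   * and would not render. */
  return (float)(vertex_start + 1) / (float)(total_vertices + 1);
}
```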
There were two problems:
* The stroke was deleted if the last point was selected. Now
the stroke is flipped instead, because it is faster.
* If the second point was selected, the first point was removed
because the internal API removed one-point strokes by default.
This was done because the tools that used this API did not need
one-point strokes as a result. Now this is optional and
one-point strokes are kept.
This allows using drawcalls with a non-default vertex range.
These calls will be culled like any other instance by the GPU culling
pipeline, but they will not be batched together since the vertex range
is part of the group.
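For illustration, a call that only draws a sub-range of a batch's vertices
(treat the exact signature as an assumption about the draw manager API
rather than a reference):

```
DRW_shgroup_call_range(shgroup, ob, geom, vertex_start, vertex_count);
/* Still GPU-culled per instance, but two calls with different ranges land
 * in different command groups, so they are not merged into one multi-draw. */
```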
Regression in [0]: accessing the path from the file selector relied on
BLI_join_dirfile adding a trailing "/" when the filename was empty.
[0]: 9f6a045e23
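A minimal illustration of the assumption that broke (paths are made up;
`BLI_join_dirfile` is the existing path utility):

```
char path[FILE_MAX];
/* The file selector joins the directory with an empty file name and relied
 * on the result keeping a trailing slash: */
BLI_join_dirfile(path, sizeof(path), "/tmp/renders", "");
/* Expected by the caller: "/tmp/renders/". After [0] the joined path no
 * longer ended with the separator, breaking that assumption. */
```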
There was a problem with the hash table, which was
not created as expected.
Also fixed an unreported memory leak in the Grab tool; it is not
related to this crash but was detected during debugging.
* External engines do not use the PBVH and need slower depsgraph updates.
* Final depsgraph tag after the stroke finishes was missing for sculpt color
  painting, causing missing updates for other viewports as well as any
  modifiers or nodes on other objects using the colors.
Updates for the cursor could cause the paint data to be continuously refreshed,
which is pretty cheap by itself, but not when it starts tagging the depsgraph.
The paint slot refresh code ideally should not be doing depsgraph tags at all,
but checking if there were changes at least avoids continuous updates.
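A minimal sketch of that check, assuming a hypothetical refresh helper (the
actual paint slot code differs):

```
static void paint_slots_refresh(Main *bmain, Material *ma)
{
  /* Hypothetical helper that rebuilds the slots and reports whether
   * anything actually changed. */
  const bool changed = paint_slots_rebuild(bmain, ma);
  if (changed) {
    /* Only tag the depsgraph on a real change, so per-cursor-update
     * refreshes stay cheap. */
    DEG_id_tag_update(&ma->id, ID_RECALC_SHADING);
  }
}
```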
Recent refactoring to use uint relied on indirect includes and precompiled
headers for uint to be defined. Explicitly include BLI_sys_types where this
type is used now.
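The pattern of the fix, with made-up surrounding code:

```
#include "BLI_sys_types.h" /* for uint; no longer relying on indirect includes */

static uint vertex_count = 0;
```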
Followup to rBfb424db2b7bb. Found some more candidates.
UI colors should use PROP_COLOR_GAMMA to avoid being affected by scene
color management (clarification by @brecht).
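A hedged sketch of what such a change looks like in RNA (the property name
and strings are illustrative):

```
prop = RNA_def_property(srna, "color", PROP_FLOAT, PROP_COLOR_GAMMA);
RNA_def_property_array(prop, 3);
RNA_def_property_range(prop, 0.0f, 1.0f);
RNA_def_property_ui_text(prop, "Color",
                         "Interface color, not affected by scene color management");
```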
Differential Revision: https://developer.blender.org/D16337
This changes the attribute binding scheme to something similar to the
curves object. Attributes are now buffer textures sampled per point.
The actual geometry is now rendered using an index buffer that avoids
too many vertex shader invocations.
The drawcall is wrapped in a DRW function to reduce the complexity of
future changes.
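Conceptual sketch only; the helper names below are hypothetical, not the
actual draw cache code:

```
/* Point attributes are uploaded once and sampled as a buffer texture by
 * point index, while the geometry itself goes through an index buffer,
 * avoiding redundant vertex shader invocations. */
GPUVertBuf *point_attrs = build_point_attribute_buffer(object); /* hypothetical */
GPUIndexBuf *ibo = build_geometry_index_buffer(object);         /* hypothetical */
/* In the shader, each invocation fetches its attributes with its point
 * index instead of receiving them as per-vertex inputs. */
```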
This led to the color actually looking different on the node body itself
vs. in the panel; also, using the colorpicker gave unexpected results.
UI colors should use PROP_COLOR_GAMMA to avoid being affected by scene
color management (clarification by @brecht).
Maniphest Tasks: T99603
Differential Revision: https://developer.blender.org/D16334
As a consequence of this, subsequent box-selection of bones would not
show correctly in Animation Editors (not showing the channels there
because of the lack of an active object).
The bug was caused by rBba6d59a85a38.
Prior to said commit, code logic was relying on the check for `basact`
being NULL to determine if object selection changes need to happen.
After that commit, this was handled by a `handled` variable, but it
was not set correctly if `basact` was actually NULL after the initial
pick (i.e. deselection by picking).
Maniphest Tasks: T101933
Differential Revision: https://developer.blender.org/D16326
The number of node groups was including the fake user count.
I was ignoring the Fake User and how it affects the id->us count.
This problem has been present since the initial commit: 84825e4ed2.
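A minimal sketch of the corrected counting, using the standard ID fields
(`id.us`, `LIB_FAKEUSER`); `ntree` stands in for the node group being counted:

```
int real_users = ntree->id.us;
if (ntree->id.flag & LIB_FAKEUSER) {
  /* The fake user inflates id->us by one; ignore it when reporting how
   * many real users the node group has. */
  real_users -= 1;
}
```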
The UI panel may suggest that disabling "Proxy & timecode" would cause
timecodes not to be used, but this was not the case. Now timecodes will
be used only if the checkbox is checked.
The bug has existed since crazy space was implemented.
rBbf8a26b7453d made the error even worse as the
`modifiers_disable_subsurf_temporary` function, which works like a
toggle, did not temporarily re-enable subsurf.
The main problem is that the derived mesh is modified but not marked as
dirty at the end.
Internal links are run-time/derived data. Therefore it is not necessary
to load them from .blend files where invalid internal links may be stored.
They will be regenerated after a node tree is loaded anyway.
Mistake in own rBb6a35a8153c3, which caused code to always recurse into
bone hierarchies (no matter at which collapsed level an armature was
found).
This led to bone counts always being displayed even outside a collapsed
armature (e.g. if an armature was somewhere down an object or collection,
the collapsed object or collection would show this bone count).
This is inconsistent with other data counting in the Outliner, e.g.
vertex groups or bone groups do have their indicator at object level;
however, the counter only shows if the `Vertex Groups` or `Bone Groups`
line shows (so only if the object is not collapsed).
This also led to the bug reported in T101946, which was that the bone
counts would be treated as collections when further down a collapsed
hierarchy.
Background: The whole concept of `MergedIconRow` is based on counting
**object types or collections/groups**. If other things
like overrides, vertex groups or bone groups are displayed in a counted/
merged manner, then these will always be counted in the array spots that
are usually reserved for groups/collections. But for these, this is not
a problem (since they are only displayed below their respective
outliner entry -- and will never be reached otherwise).
So to correct all this, we now only recurse into a bone hierarchy if a
bone is at the "root-level" of a collapsed subtree (direct child of the
collapsed element to merge into).
NOTE: there are certainly other candidates for counted/merged display
further up the hierarchy (not just bones -- constraints come to mind
here, but that is for another commit).
Maniphest Tasks: T101946
Differential Revision: https://developer.blender.org/D16319
Currently, if an image exceeds the texture limit set by the user or the
GPU backend, it will be scaled down to satisfy the limit. However,
scaling happens independently per axis, which means the aspect ratio of
the image will not be maintained.
This patch corrects the smaller size to maintain the aspect ratio.
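The correction amounts to scaling both axes by the same factor; a
self-contained sketch (function name illustrative):

```
/* Clamp an image to the texture size limit while preserving aspect ratio. */
static void clamp_image_size(int width, int height, int limit,
                             int *r_width, int *r_height)
{
  *r_width = width;
  *r_height = height;
  if (width <= limit && height <= limit) {
    return;
  }
  if (width >= height) {
    *r_width = limit;
    *r_height = (int)((float)height * ((float)limit / (float)width));
  }
  else {
    *r_height = limit;
    *r_width = (int)((float)width * ((float)limit / (float)height));
  }
}
/* e.g. 8192 x 2048 with a 4096 limit becomes 4096 x 1024 instead of
 * 4096 x 2048. */
```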
Differential Revision: https://developer.blender.org/D16327
Reviewed By: Clement Foucault
rBb70bbfadfece allowed scaling of zero-radius points, but as it behaves
differently from other radii, it may not be suitable for multi-point
transformation.
This could result in wrong behavior depending on the order in which the
Image.filepath and Image.source fields are set from within Python, for
example.
Caused by rB72ab6faf5d80
Kind of intentional regression on rB2d1fe736fabd.
But the solution now is (theoretically) better than adding a hard-coded
threshold.
For cases with zero radius, the new radius is now the offset of the
ratio projected onto the plane of the origin point.
For the JPG preview, the only thing that was changed in the image
format was the format itself. However, the colorspace code now also
checks the bitdepth through BKE_image_format_is_byte, so the depth
needs to be explicitly set to 8-bit for the JPG preview output.
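A sketch of the resulting change, using the existing ImageFormatData fields
and constants (where the format lives is simplified here):

```
ImageFormatData *imf = &scene->r.im_format; /* illustrative source of the format */
imf->imtype = R_IMF_IMTYPE_JPEG90;
/* BKE_image_format_is_byte() now also checks the depth, so force 8-bit for
 * the JPG preview output. */
imf->depth = R_IMF_CHAN_DEPTH_8;
```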