This uses a StorageBuf as the source of indirect dispatch arguments.
The user needs to make sure the parameters are in the right order.
There is no support for an argument offset at the moment, as there is no
need for it, but this might be added in the future.
Note that the indirect buffer is synchronized at the backend level. This is
done for practical reasons and because this feature is almost always used
for GPU-driven pipelines.
This is a faster way to clear a buffer than re-uploading new data.
It is equivalent to `memset` and runs directly on the GPU.
This is better for clearing huge buffers and avoids the sync cost of a data upload.
This was getting in the way in multiple instances. Compute shader dispatches
are still made with the last bound framebuffer present even if they
do not interact with it.
This is supposed to hold the latest improvements from the EEVEE rewrite branch.
Note that a restart is necessary in order for the engine to appear.
The registration code is a bit convoluted as it needs to run after WM_init.
Add a new operator to the Graph Editor that blends selected keyframes
to their default value.
The operator can be accessed from
Key>Slider Operators>Blend To Default Value
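For scripting, a minimal usage sketch (the operator idname `graph.blend_to_default` and the `factor` property are assumptions here, and the call needs a Graph Editor context with keyframes selected):
```
import bpy

# Blend the selected keyframes halfway towards their default value.
bpy.ops.graph.blend_to_default(factor=0.5)
```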
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D9376
Ref: D9367
As grease pencil simplify is a sub-option of the general simplify setting, the grease pencil simplify must be disabled too when the general switch is disabled.
This patch also disables the UI panel.
Create a function on CurvesGeometry that can also be used for an edit
mode operator in the future. Dealing with CustomData directly means the
code is a bit more verbose than would be ideal, but this would be a
simple thing to clean up in the future if we get an attribute API here.
Also change the reverse node to first work on a read-only geometry
component, and only get write access if there is a curve selected.
Differential Revision: https://developer.blender.org/D14375
This will mostly just remove the overhead of converting
to and from the old curves type, though it also does open
some opportunities for multi-threading in the future.
A mistake in 8538c69921. The offsets include the segment at the
corresponding index, but the evaluated offset calculation was adjusting
the offset for the second to last segment.
Make the new curves' translate and transform functions also affect
the handle position attributes.
Differential Revision: https://developer.blender.org/D14372
Resizing nodes used the cursor location at the time the event was triggered
instead of the drag-start location; harmless, but it means the drag location
isn't under the cursor, especially with a high drag threshold.
Noticed when investigating other drag issues,
unrelated to recent changes to drag behavior.
`RNA_def_struct_ui_text(srna, ...)` was reused for `is_valid` and `is_muted`,
which set the struct's documentation to that of those properties (actually to
that of the last call).
`RNA_def_property_ui_text(prop, ...)` should be used for the properties instead.
Remove the conversion to and from `CurveEval` by supporting the
new Curves data-block in the node. This allows for some simplifications
to the code, as well as a fix for transferring curve domain attributes
when duplicating the curve domain.
The performance improvements (observed through the timings overlay)
can be relatively massive with many curves. When duplicating 10000
4-point curves to become 2 million curves, I observed an approximate
150x improvement, from about 3 seconds to about 20ms.
- Pass less redundant information in function arguments.
- Use `IndexRange` more instead of direct offset calculations.
- Use specific geometry component types for specialized functions.
- Use const arguments.
- Declare variables closer to where they are created or used.
- Remove some redundant logic.
- Simplify the description for the output geometry.
The menu for Timeline > Keying > Active Keying Set wouldn't show up.
Caused by d8e3bcf770. The function to attach search menu data to the button
would now be called twice with different arguments for the same button.
This shouldn't be an issue in general, but the first call had the unexpected
side effect of disabling the button. Make sure it's re-enabled when
the second call sets the proper search data.
Caused by rB43bc494892c3: moving this 'new ID' relink to generic
remapping code added the overhead of proper, generic post-processing,
compared to the special cases the previous code was only designed to handle.
Fortunately, with the recent 'multi-remapping' work we can easily rewrite
that new-ID relink code to use the multi-remapping approach too.
No behavioral change is expected from this commit, besides the improved
performance (essentially restored to what it was before
rB43bc494892c3).
This reduces the complexity and avoids framebuffer setup costs.
This also removes the prefiltering of the glossy cubemaps in favor
of a simple bilinear filtering of the mipchain.
Change the sample mode to not duplicate the last vertex of the
stroke and instead use the cyclic flag to close previously cyclic
strokes. This is necessary for the following modifiers.
Reviewed By: NicksBest
Differential Revision: http://developer.blender.org/D14359
From hair particle mode:
* Add
* Comb
* Cut
* Grow
New:
* Delete
Only comb and delete are used at the moment (by the new tools which are
under experimental).
Selecting an object that was already active & selected would de-select
it when the cursor was over the objects center.
This was caused by [0] which added a check that assumed more than one
hit from GPU_select meant there were multiple objects to select from.
This is not necessarily the case since bones, camera tracks or the
object's own center can add additional hits.
Resolve by keeping track of the best hit with & without the
active-selected object, only using the non-active-selected if it's found.
[0] 1550573360
Support for differentiating the tweak tool from the 3D cursor when
select is set to RMB.
This is currently an experimental preference:
Tweak Tool: Left Mouse Select & Drag
When enabled, the tweak tool can now tweak the existing selection
without de-selecting first; a single click can be used to replace
the selection.
This matches selection in the graph & node editors.
This preference is only available with "Developer Extras" enabled.
Ref T96544.
Needed so mapping selection to click doesn't pass the click event
through to setting the 3D cursor, for example.
While this doesn't happen with the default key-map, setting selection
to LMB-click would set the 3D cursor as well (when the selection
fell through to nothing).
De-selecting objects meant that selecting a bone would de-select
all the other pose objects - making exiting & entering pose-mode
lose the current set of pose objects.
Match edit-mode behavior: avoid de-selecting objects in the current mode
(unless the action is explicitly performed in the outliner for e.g.).
Previously 'basact' was set to NULL for this, but that wasn't
so simple to use with deselect_all, which needs to check if there was
anything found at the cursor.
Add a 'handled' variable to differentiate this case, when set
don't attempt object selection.
While basic single track selection worked,
toggling and de-selection has been broken since at least 2.83.
Support SelectPick_Params with the exception of deselect_all
which doesn't make sense for tracks as de-selecting all objects
is expected in that case.
This was an involved operation to include inline,
making ed_object_select_pick more difficult to follow.
Prepare for track selection to properly support SelectPick_Params.
Currently this isn't used in the key-map; it will eventually
allow the 3D viewport's tweak tool to match the behavior of other
editors that support tweaking a selection without first de-selecting
all other elements.
This is only part of the experimental "Full Frame" mode (disabled
by default). See T88150.
Currently the viewer node uses buffer paddings to display image offset
in the backdrop, a temporary solution implemented for {D12466}.
This solution is inefficient memory- and performance-wise. Another
issue is that the paddings are part of the image when saved.
This patch instead sets the offset in the Viewer node image
as variables and makes the backdrop take it into account
when drawing the image or any related gizmo.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D12750
The previous fix, including `<algorithm>`, was an improvement
but did not address the actual error, which appears to be that `int64_t` is
long long int on one platform but just long int on another.
The fix specifies the template argument directly.
This patch adds evaluation for NURBS, Bezier, and Catmull Rom
curves for the new `Curves` data-block. The main difference from
the code in `BKE_spline.hh` is that the functionality is not
encapsulated in classes. Instead, each function has arguments
for all of the information it needs. This makes the code more
reusable and removes a bunch of unnecessary complications
for keeping track of state.
NURBS and Bezier evaluation works the same way as existing code.
The Catmull Rom implementation is new, with the basis function
based on Cycles code. All three types have some basic tests.
For NURBS and Catmull Rom curves, evaluating positions is the
same as any generic attribute, so it's implemented by the generic
interpolation to evaluated points. Bezier curves are a bit special,
because the "handle" control points are stored in a separate attribute.
This patch doesn't include generic interpolation to evaluated points
for Bezier curves.
Ref T95942
Differential Revision: https://developer.blender.org/D14284
With the deferred pipeline, the materials need different shading groups
depending on their matflags.
Note that this is potentially slower because execution order of shaders
may now be random. This might be fixed in a later commit.
Similar to other changes to ID remapping, gives huge speedups in some
cases, like certain types of liboverride creation.
Case from {T96092} goes from 1725 seconds (almost 30 minutes) to 45
seconds to generate the liboverride, on my machine.
Reviewed By: jbakker
Maniphest Tasks: T96092
Differential Revision: https://developer.blender.org/D14240
Ever since d5b72fb06c, shader nodes have been in the
`blender::nodes` namespace, so they don't need to use that to access
Blender's C++ types and functions.
Somehow exposed after 943b919fe8, linking could fail because
bf_nodes was not properly configured as a dependency of bf_nodes_shader.
Also add the dependency to the geometry nodes module.
To make porting to other architectures easier, clarifying that this does not
need to be supported. The unused parallel_reduce implementation assumed warp
size 32, but is easy to update if we ever need it in the future.
When using inverted filling and clicking inside a closed area, and not outside as expected, the algorithm that detects the contour to fill is unable to find the filling shape and tries to fill outside of the valid index.
The infinite loop allocated more memory on each iteration, and the process continued while there were system resources, finally crashing the system.
As the tool in negative mode is designed to fill all areas when you click outside of any shape, the algorithm now checks whether the outline is not working as expected and cancels the filling process.
This commit removes the implementations of legacy nodes,
their type definitions, and related code that becomes unused.
Now that we have two releases that included the legacy nodes,
there is not much reason to include them still. Removing the
code means refactoring will be easier, and old code doesn't
have to be tested and maintained.
After this commit, the legacy nodes will be undefined in the UI,
so 3.0 or 3.1 should be used to convert files to the fields system.
The net change is 12184 lines removed!
The tooltip for legacy nodes mentioned that we would remove
them before 4.0, which was purposefully a bit vague to allow
us this flexibility. A poll in a devtalk post showed that the
majority of people were okay with removing the nodes.
https://devtalk.blender.org/t/geometry-nodes-backward-compatibility-poll/20199
Differential Revision: https://developer.blender.org/D14353
Solved by introducing a variant of MEM_cnew which behaves
as a copy-constructor for trivial types.
Alternative approach would be to surround DNA structs with clang/gcc
diagnostics push/modify/pop so that implicitly defined constructors
and copy operators are allowed to access deprecated fields.
The downside of the DNA approach is that it will require some way to
easily apply diagnostics modifications to many structs, which is not
possible currently.
The newly added MEM_cnew variant has other good use cases, so it is easiest
to use this route, at least for now.
Differential Revision: https://developer.blender.org/D14356
Resolves a fair amount of noisy warnings with default build on macOS.
Tested using render_layer render test which includes Freestyle layer.
Differential Revision: https://developer.blender.org/D14355
Meta-element selection now follows conventions for other picking
functions (e.g. EDBM_select_pick).
- Split meta-element find-nearest into a separate function.
- Cycle the meta-element starting from the active & selected
instead of comparing & setting a static variable.
- Order elements using depth (from front-to-back)
when cycling multiple elements.
Volatile fields were introduced to the RenderResult struct years ago[1].
However, volatile is most likely not doing what it was intended to do
in this instance, and is problematic when moving files to c++ (see
discussion from D13962). There are complex rules around what happens to
these fields but none of them guarantee what the above commit alluded to.
This patch drops the volatile and cleans up the APIs surrounding it.
[1] rB7930c40051ef1b1a26140629cf1299aa89eed859
Passing on all platforms:
https://builder.blender.org/admin/#/builders/18/builds/338
Differential Revision: https://developer.blender.org/D14298
- Rename 'location' to 'mval', typically used for region cursor coords.
- Rename 'retval' to 'changed', typically used for operators
when their return value depends on a change being made.
- Add SelectPick_Params struct to make picking logic more
straightforward and easier to extend.
- Use `eSelectOp` instead of booleans (extend, deselect, toggle)
which were used to represent 4 states (which wasn't obvious).
- Handle deselect_all when picking instead of view3d_select_exec,
de-duplicate de-selection, which was already needed when replacing
the selection in picking functions.
- Handle outliner update & notifiers in the picking functions
instead of view3d_select_exec.
- Fix particle select deselect_all option which did nothing.
As proposed in T95802, this adds buttons to a new column on the right to modify
the override in the Library Override display mode. Some further usability
improvements are planned. E.g. this does not yet expand collections (modifiers,
constraints, etc) nicely or group modified properties of a modifier together.
Vector properties with more than 3 items or matrices aren't displayed nicely
yet, they are just squeezed into the column. If this actually becomes a problem
there are some ideas to address this.
Differential Revision: https://developer.blender.org/D14268
While the correlation may not work well with adaptive sampling, in practice
this appears to work OK in most cases.
Automatic scrambling distance uses the minimum samples from adaptive sampling,
which provides a good default estimate to avoid artifacts.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13325
This allows users to type in values larger than 1, for use in conjunction
with automatic scrambling distance.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13580
When the light direction is not pointing away from the geometric normal and
there is a shadow terminator offset, self intersection is supposed to occur.
Some old platforms and drivers have a limited number of SSBO bindings per
compute shader. This disables GPU subdivision if we cannot possibly
bind all required buffers within this limit.
For now the maximum number of buffers used by the GPU code is hardcoded,
but will be programmatically detected when shader creation is automated.
Ref D14337
This adds detection of the maximum number of shader storage buffer
bindings that is supported on the current platform. This can be
useful to turn off features that require compute shaders but use
more buffer bindings than available.
Differential Revision: https://developer.blender.org/D14337
The performance issue was noticeable when tracking a lot of tracks
which are using keyframe pattern matching. What was happening is that
at some point the cache gets filled and the furthest-away frame gets removed
from the cache: the frame at the marker's keyframe gets removed and needs
to be re-read from disk on the next tracking step.
This change makes it so frames at markers' keyframes are not removed
from cache during tracking.
Steps to easily reproduce:
- Set cache size to 512 Mb.
- Open image sequence in clip editor
- Detect features
- Track all markers
Originally was reported by Rik, thanks!
The modified source Armature ID in the join operation was not properly
tagged as such for the depsgraph (and therefore memfile undo).
Issue caused/revealed by rBe648e388874a.
Should be backported to 3.1 should we make a corrective release.
After rB9b298cf3dbec, the `StructRNA` declarations can now be accessed via
`RNA_prototypes.h`.
Also, since all redundant declarations are now removed,
`_WM_MESSAGE_EXTERN_BEGIN` and `_WM_MESSAGE_EXTERN_END` are also no
longer needed.
Differential Revision: https://developer.blender.org/D14342
Caused by 0cb5eae9d0 which restored
support for 3D depth when selecting gizmos - making it difficult
to select single lines drawn in front of other gizmos.
Previously the first hit was always used.
Resolve by using a margin around arrow stems when selecting
which was already done for 2D arrows.
This commit adds three nodes:
- `Remove Attribute`: Removes an attribute with the given name
- `Named Attribute`: A field input node
- `Store Named Attribute`: Puts results of a field in a named attribute
They are added behind a new experimental feature flag, because further
development of attribute search and name dependency visualization will
happen as separate steps.
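A minimal sketch of adding the three nodes from Python (the node idnames are assumptions based on later releases and may differ while the nodes are behind the experimental flag):
```
import bpy

# Create a geometry node group and add the three new nodes to it.
group = bpy.data.node_groups.new("Named Attribute Demo", "GeometryNodeTree")
store = group.nodes.new("GeometryNodeStoreNamedAttribute")
read = group.nodes.new("GeometryNodeInputNamedAttribute")
remove = group.nodes.new("GeometryNodeRemoveAttribute")
for node in (store, read, remove):
    print(node.bl_idname, node.name)
```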
Ref T91742
Differential Revision: https://developer.blender.org/D12685
So far it was necessary to declare a new RNA struct in `RNA_access.h` manually.
Since 9b298cf3db we generate a `RNA_prototypes.h` for RNA property
declarations. Now this also includes the RNA struct declarations, so they don't
have to be added manually anymore.
Differential Revision: https://developer.blender.org/D13862
Reviewed by: brecht, campbellbarton
Lets `makesrna` generate a `RNA_prototypes.h` header with declarations for all
RNA properties. This can be included in regular source files when needing to
reference RNA properties statically.
This solves an issue on MSVC with adding such declarations in functions, like
we used to do. See 800fc17367. Removes any such declarations and the related
FIXME comments.
Reviewed By: campbellbarton, LazyDodo, brecht
Differential Revision: https://developer.blender.org/D13837
This patch removes all duplicate code for the same Bake modifier logic.
Some modifiers still need custom bake functions and cannot use this generic bake.
Steps to reproduce:
- Add image sequence to movie clip editor.
- Set cache limit to a low value in the user preferences.
- Playback until old frames starts to be removed from cache.
- Jump to the beginning of the image sequence.
The dead-lock comes from two factors:
- Due to the global nature of the cache limiter, calls need to be
guarded with locks.
- Image buffers stored in the cache can have their own cache
(which is used for color management).
Didn't find a better solution than to use a recursive lock.
It kind of makes sense since the thread-guarded resource is
recursive (a moviecache can have nested moviecaches).
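A plain-Python sketch of why a recursive lock helps here, using `threading.RLock` (not the actual moviecache code):
```
import threading

# The outer cache operation holds the lock while an item's nested cache
# needs to take the same lock again from the same thread.
lock = threading.RLock()

def limit_nested_cache():
    with lock:  # re-acquiring from the same thread is allowed with RLock
        pass

def limit_cache():
    with lock:
        limit_nested_cache()

limit_cache()  # would dead-lock if `lock` were a plain threading.Lock
```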
Differential Revision: https://developer.blender.org/D14331
This reverts commit 1558b270e9.
An earlier commit (rB101fadcf6b93c) introduced some new functionality,
which was overlooked in reviewing this commit & got broken.
Will re-commit after the issue has been fixed.
Ref: D13687
If the scale in the offset modifier was set to a value lower than -1,
the object would get mirrored. The problem was that the thickness
was then set to 0. This fix makes the thickness calculation only
use the absolute values.
Differential Revision: http://developer.blender.org/D14324
For now just assume that a node group without output sockets is
an output node. Ideally, we would use run-time information stored
on the node group itself to determine if the group contains a
top-level output node (e.g. Material Output). That can be
implemented separately.
In the larger scheme of things, top-level outputs within node
groups seem to break the node group abstraction and reusability
a bit.
Correction to the calculation of font size used for the tabs on the
Sidebar so that they are always the same size as other content on the
panel.
See D14322 for more details.
Differential Revision: https://developer.blender.org/D14322
Reviewed by Brecht Van Lommel
Instead of allocating a vector of the basis weights cache for
each evaluated point, allocate a single vector for all of the
weights. This should reduce memory usage by avoiding the
overhead of storing many vectors. I noticed a small performance
improvement to evaluated position calculation with an order of 5,
which is larger than `Vector`'s default inline buffer capacity.
This change is possible because of previous commits that
made the basis cache for each evaluated point always have
the same "order" size.
Currently a single buffer is used as working space for all evaluated
points. In order to make evaluations more independent, opening
options like multi-threading in the future, instead use a separate
array for each call. Using an inline buffer capacity higher than
the default allows a few percent performance improvement, and removes
allocations for every evaluated point.
The step after calculating the NURBS basis for a single evaluated
point trimmed extra zeroes from the weights. However, in practice
this rarely did anything, only for the first and last evaluated point
of certain knot configurations. Remove it in order to simplify code.
Also use a separate span for the result, to clarify its length.
Previously, the popover menu in sculpt/texture paint mode did not
take into account the `UnifiedBrushSettings` for the unit.
To fix this, the behavior of `class _draw_tool_settings_context_mode` is matched
by checking the same conditions when setting up the UI of the right-click popover menu.
Fixes T81616
Reviewed By: #sculpt_paint_texture, pablodp606
Maniphest Tasks: T81616
Differential Revision: https://developer.blender.org/D9168
Label alignment in top bar by using `ui_text_icon_width_ex` instead of `w_hint`
Old:
{F12733743}
New:
{F12733742}
Fixes T61558
Reviewed By: Severin
Maniphest Tasks: T61558
Differential Revision: https://developer.blender.org/D13552
This commit fixes T96229.
The maximum possible radius was being used for 3 point
splines, regardless of the current radius.
Reviewed By: HooglyBoogly
Maniphest Tasks: T96229
Differential Revision: https://developer.blender.org/D14311
When OneDrive files are offline, show preexisting thumbnails.
See D13930 for details.
Differential Revision: https://developer.blender.org/D13930
Reviewed by Julian Eisel
Add the ability to move `CurvesGeometry` without copying its attributes
and data. The benefit is more intuitive management of the data-block
copying, and less overhead for copying in some cases. The "moved-from"
source is left in an empty but valid state. A test file is added to test
the move constructor.
Before this patch, users had to switch render engines just to change how the
hair should be displayed in solid and material preview viewport shading modes.
Differential Revision: https://developer.blender.org/D14290
By not resetting the line width, other scopes were using the
wrong line thickness.
Contributed by RedMser.
Differential Revision: https://developer.blender.org/D12030
My own error when committing 0602852860. It appears that
the "Endpoint" knots modes should be handled together, otherwise
out of bounds array access is possible.
Win32: Replace SHGetFileInfoW as means to get friendly display names
for volumes because it causes long pauses for disconnected remote
drives.
See D14305 for more details.
Differential Revision: https://developer.blender.org/D14305
Reviewed by Brecht Van Lommel
There was an error with the attribute API implementation for vertex
groups. If the vertex group layer referenced an original mesh, it wasn't
properly duplicated for writing.
When rendering using the command line, the curvature wasn't rendered. The reason
was that the ui_scale wasn't initialized and therefore the same pixels were
sampled to detect the curvature. This is fixed by setting the ui_scale to 1 for any
image render.
Change early drag evaluation added in
1f1dcf41d5 to only apply to drag events
from mouse buttons. Otherwise pressing two keyboard keys at once would
create a drag event for the first pressed key. While this didn't cause
any bugs as far as I know, this behavior makes most sense for drags
that come from cursor input.
Only set press events in the window's eventstate, not the current event,
since it's not useful for these to be set as the current event's press values.
This makes it possible for a press event to access values for the
previous press.
Activating a gizmo used the window's eventstate, which may have values
newer than the event used to activate the gizmo.
This meant transform's check for the key that activated it
could be incorrect.
Support passing an event when calling operators to avoid this problem.
It was possible that a render thread would be freeing the cache while the
interface was iterating over cache items to build the cache line.
Found while looking into T94738. It might be a fix, but I am unable
to reproduce the original issue, so I cannot know for sure whether
there is something else going on or not.
This function was copied from txt_sel_to_buf, including unnecessary
complexity to support selection as well as checks for the cursor
which don't make sense when copying the whole buffer.
Use a simple loop to copy all text into the destination buffer.
This patch enables all 8 combinations of Nurbs modes: Cyclic,
Bezier and Endpoint. Also removes restriction on Bezier Nurbs order.
The most significant changes are mode combinations bringing new
meaning. D13891 includes a scheme showing NURBS with the same control
points in all modes, and also a further description of each possible case.
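A minimal sketch of combining the modes from Python on a legacy NURBS spline (property names as exposed in bpy):
```
import bpy

# Create a NURBS spline and enable all three mode flags at once,
# which this patch now allows.
curve = bpy.data.curves.new("nurbs_demo", type='CURVE')
spline = curve.splines.new('NURBS')
spline.points.add(5)  # a new spline starts with a single point
spline.order_u = 4
spline.use_cyclic_u = True
spline.use_endpoint_u = True
spline.use_bezier_u = True
```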
Differential Revision: https://developer.blender.org/D13891
Regression in 265d97556a.
Iterating directly on a property group failed, e.g.
`iter(group)`; tests missed this since only `group.keys()`
was checked.
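A minimal sketch of the two forms, assuming the regression concerned ID property groups as exposed to Python:
```
import bpy

# Assigning a dict creates a nested ID property group on the object.
obj = bpy.context.object
obj["settings"] = {"radius": 1.0, "count": 3}
group = obj["settings"]

# Both forms should behave the same; the regression broke direct iteration.
print(list(iter(group)))
print(list(group.keys()))
```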
`3DView`'s `use_snap` option has little or nothing to do with using
snapping in `UV`, `Nodes` or `Sequencer`.
So there are no real advantages to keeping these options in sync.
Therefore, individualize the option to use snap for each "spacetype".
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D13310
The result handle attributes for non-bezier types are zeroed.
By mistake though, the entire array was zeroed, not just the
area corresponding to that curves source.
When viewing a meta strip, it had an orange color. This was caused by
an overflow because of a hard-coded offset. The theme got darker, and the background
was also set again further in the code, but that redundant drawing was removed in
f4492629ea.
Realizing and copying attributes of meshes, curves, and points are very
similar processes, but currently the logic is duplicated three times in
the realize instances code. This commit combines the implementation
for copying generic attributes and creating the result id attribute.
The functions for threaded copying and filling should ideally be in
some file elsewhere, since they're not just useful here. But it's not
clear where they would go yet.
Differential Revision: https://developer.blender.org/D14294
Caused by an integer overflow in the tiling utilities of OptiX SDK.
Seems for now it's easier to copy and modify code to our sources so
that we don't need to bump SDK version requirement (which might lead
to an increased driver requirement as well).
There are still some fixes needed from a newer driver to have such
denoising to work properly: Windows requires 511.79, Linux 510.54.
Thanks Patrick for investigation!
Differential Revision: https://developer.blender.org/D14300
Mark the chain length of regular and spline IK constraints
non-animatable. Changing the IK chain length requires a rebuild of
depsgraph relations. This makes it unsuitable for animation. It's better
to simply avoid having this property animatable than to allow animation
but produce unstable results.
Ref: T96203
When dragging with a large threshold (using a tablet for example),
it's possible to press another key before the drag threshold is reached.
So tweaking then pressing X would show the delete popup instead of
transforming along the X-axis.
Now key presses while dragging cause the drag event to be evaluated
before the key press.
Note that to properly base the mouse-move event on the previous
state the last handled event is now stored in the window.
Without this the inserted mouse-move event may contain invalid values
from the next event (its modifier state or other `prev_*` values).
Requested by @JulienKaspar.
Regression in 08d8eee006 caused
emulate-middle mouse to work once, clearing the modifier key.
Now the modifier key from emulated mouse events is never stored
in the windows event-state.
The realize instances code used "assign", but the attribute buffers on
the result aren't necessarily initialized. This doesn't make a difference
for trivial types like `int`, but it would with more complex types.
This commit replaces the temporary conversion to `CurveEval` with
use of the new curves data-block. The end result is that the
process looks more like the other components-- somewhere in between
meshes and point clouds in terms of complexity.
The final result is that the logic between meshes and curves is
very similar. There are a few different strategies to reduce
duplication here, so I'll investigate that separately.
There is some special behavior for the radius and handle position
attributes. I used the attribute API to store spans of these
attributes temporarily. Using access methods on `CurvesGeometry`
would be reasonable too, but storing spans separately feels a bit more
predictable for now.
There should be significant performance improvements in some cases,
I haven't tested that specifically though.
Differential Revision: https://developer.blender.org/D14247
Passing a `TreeElement *` instead of its `TreeStoreElement *` to
`TSELEM_OPEN()` would seem to work but cause a bug. Add a type check
that will cause a compiler error if it fails.
I don't see a reason to use 2x the element height for the "in-view"
checks. That seems incorrect (although shouldn't cause issues). So
remove that, I don't expect behavior changes.
For whatever reason the "in-view" check was using 2x the element height.
From what I can see this isn't needed, so I'll remove it in a follow-up
commit.
Restrict a lot of deletion/moving around of liboverride objects and
collections in the Outliner.
While some of those operations may be valid in some specific cases, in
the vast majority of cases they would just end up breaking override
hierarchies/relationships.
Part of T95708/T95707.
Blender crashes when a multi-user grease pencil object has vertex
groups and is modified by modifiers, layer transform or parenting.
The fix makes sure that we copy the vertex group names list.
Reviewed By: antoniov
Maniphest Tasks: T96233
Differential Revision: https://developer.blender.org/D14275
Fix T95462: Partly transparent objects appear to glow in the dark
The issue was caused by incorrect check for exceeded number
of transparent bounces: the same maximum distance was used
for picking up N closest intersections and counting overall
intersections count.
Now made it so intersection count is using ray distance which
matches the way how Embree and OptiX implementation works.
Benchmark result:
{F12907888}
There is no big time difference in the pabellon scene. The
Victor scene timing doesn't seem to be very reliable as the
variance in time across different benchmark runs is quite
high.
Differential Revision: https://developer.blender.org/D14280
At the time of naming these members only some event types generated
click events so it made some sense to differentiate a click.
Now all buttons support click & drag it's more logical to use the
prefix "prev_press_" as any press event will set these values.
Also update doc-strings.
Operator area_dupli_invoke should not create modal windows.
See D14253 for details.
Differential Revision: https://developer.blender.org/D14253
Reviewed by Brecht Van Lommel
This commit fixes an issue, where for instance, when merging vertices
with the "Merge by Distance" geometry node, the resulting vertices had
their boolean attributes set unpredictably.
Boolean attributes are implemented as custom data, and when welding
vertices, the custom data for the resulting vertices comes from
interpolating the custom data of the source vertices.
This commit implements the missing interpolation function for the
boolean custom data type. This interpolation function is implemented in
terms of the logical or operation, that is to say, if any of the source
vertices (with a weight greater than zero) have the boolean set, the
boolean will also be set on the resulting vertex.
This logic matches 95981c9876.
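A small sketch of that rule (plain Python, not the actual CustomData code):
```
def interp_bool(src_values, weights):
    """The result is set if any source with a weight greater than zero is set."""
    return any(value for value, weight in zip(src_values, weights) if weight > 0.0)

# Welding three vertices; only the first two contribute with a non-zero weight.
print(interp_bool([True, False, False], [0.5, 0.5, 0.0]))  # True
print(interp_bool([False, False, True], [0.5, 0.5, 0.0]))  # False
```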
In geometry nodes, attribute interpolation generally does not use the
CustomData API for performance reasons, but other areas of Blender
still do.
Differential Revision: https://developer.blender.org/D14172
This avoids transform jumping, which is a problem when tweaking values a
small amount. A fix for T40549 made box-select use the location
when the key was pressed.
While it's important for box-select or any operator where it's expected
the drag-start location is used, this is only needed in some cases.
Since the event stores the click location and the current location,
no longer overwrite the events real location. Operators that depend on
using the drag-start can use this location if they need.
In some cases the region relative cursor location (Event.mval) now needs
to be calculated based on the click location.
- Added `WM_event_drag_start_mval` for convenient access to the region
relative drag-start location (for drag events).
- Added `WM_event_drag_start_xy` for window relative coordinates.
- Added Python property Event.mouse_prev_click_x/y
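A minimal sketch of an operator reading the new Python properties alongside the current cursor location (the operator idname is hypothetical):
```
import bpy

class WM_OT_report_drag_start(bpy.types.Operator):
    """Report where the press that led to this invocation happened."""
    bl_idname = "wm.report_drag_start"  # hypothetical idname for this sketch
    bl_label = "Report Drag Start"

    def invoke(self, context, event):
        self.report(
            {'INFO'},
            "press at (%d, %d), cursor now at (%d, %d)" % (
                event.mouse_prev_click_x, event.mouse_prev_click_y,
                event.mouse_x, event.mouse_y,
            ),
        )
        return {'FINISHED'}

bpy.utils.register_class(WM_OT_report_drag_start)
```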
Resolves T93599.
Reviewed By: Severin
Ref D14213
Prevents a few unneeded calls to `std::sin`, with an observed
performance improvement of about 1 percent.
Differential Revision: https://developer.blender.org/D14279
When the geometry of the sculpt mesh was replaced when restoring from
a full undo step, the runtime data was not cleared (including any
normals, triangulation data, or any other cached derived data).
In the report, only the invalid normals were observed.
The fix is to simply clear these caches. Later they will be reallocated
and recalculated if necessary. Since the whole mesh is replaced here
anyway, this should be a safe fix.
Differential Revision: https://developer.blender.org/D14282
The new mode only builds the new strokes in each frame.
The code is assuming somebody uses "additive" drawing, so that each frame is different only in its NEW strokes. Already existing strokes are skipped.
I used a simple solution: Count the number of strokes in the previous frame and ignore this many strokes in the current frame.
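A small sketch of that counting approach (plain Python, not the actual modifier code):
```
def new_strokes(prev_frame_strokes, curr_frame_strokes):
    """With additive drawing, the first len(prev) strokes of the current
    frame already existed in the previous frame; only the remainder is new."""
    return curr_frame_strokes[len(prev_frame_strokes):]

# The previous frame had 2 strokes, the current frame has 4: 2 new strokes.
print(new_strokes(["a", "b"], ["a", "b", "c", "d"]))  # ['c', 'd']
```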
Differential Revision: https://developer.blender.org/D14252
Tangents are computed from UVs on the CPU side when using GPU subdivision
and require that the normals are available there as well (at least for smooth
shading, flat normals can be computed on the fly). This simply adds the missing
normals update call for the `MeshRenderData` setup for the subdivision case.
Differential Revision: https://developer.blender.org/D14278
Some drivers for legacy platforms seem to have issues with compute
shaders, as revealed by T94936. This disables compute shader for the
known drivers where this issue is present. It is not clear if the issue
is Windows-only or not, so this disables them for all operating systems.
See T94936 for a list of configurations where the issue is reproducible
or not.
Differential Revision: https://developer.blender.org/D14264
Sound properties like volume, pitch and muting are handled in
`BKE_sound_add_scene_sound()`. This is unnecessary, because the
properties are then set to real values in `SEQ_edit_update_muting()` and
`seq_update_seq_cb()`.
Alternatively, it may be better to remove all other updates and leave them
in `BKE_sound_add_scene_sound()`. But I want to add muting per channel,
which is easier and probably cleaner to do with the function
`SEQ_edit_update_muting()`.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14269
Avoid re-creating & freeing the depsgraph for every driver evaluation.
Now the depsgraph is kept in the name-space (matching self),
only re-created when the value changes.
In a contrived test-case with many drivers this gave ~15% overall
speedup for animation playback.
The code cleaning up no-longer-needed override data during the diffing process
would systematically remove override data from linked IDs.
While this is not a critical issue in theory, it has bad consequences at
the very least on user UI/UX, and potentially can cause bugs in some
corner-case scenarios.
The exact behavior of the brushes is still being iterated on, but it
helps having a base implementation that we can work upon.
All of that is still hidden behind an experimental feature flag anyway.
The brushes will get a name in the ui soon.
Differential Revision: https://developer.blender.org/D14241
The issue only happens in release builds on Windows. That said, it was an
actual error in the code. This class is compiled inline in release
builds. When updating multiple textures it would reuse the same memory
to collect the changes. When the previously loaded tile number was exactly the
same but from a different image, the tile buffer wasn't loaded.
Reviewed By: sergey
Maniphest Tasks: T96213
Differential Revision: https://developer.blender.org/D14274
This part has to be refactored soon anyway, because more types of
curves have to be drawn for the new Curves object.
For now, 3 is a better default than 2, because that matches the
actual resolution of the curve currently.
This makes the brushes more smooth, because the brush has an
effect after every mouse move, instead of only every x pixels.
For this to work well, the brushes have to look at the stroke
segments instead of at the mouse positions separately.
A more fine grained check might be added in the future.
The issue was introduced after the Python 3.10 switch.
Explicit conversion to int fixes the issue.
The same issue is likely to happen with `MovieTrackingSettings.default_search_size`,
so I did the same change over there.
Differential Revision: https://developer.blender.org/D14273
Support this for completeness, as it's simpler to support click-drag
for all event types that support press/release instead of having to
document which kinds of buttons support click-drag.
In practice this didn't cause a bug since assigning hot-keys was also
checking for "press" events (which NDOF_MOTION doesn't generate).
Add ISNDOF_BUTTON macro which is now used by ISHOTKEY to avoid
problems in the future.
The main improvement is a code simplification, because attributes don't
have to be transferred separately for each curve, and all attributes can
be handled generically. Performance improves significantly when the
output contains many curves. Basic testing with a 2 million curve output
shows an approximate 10x performance improvement.
This fixes the second part of T93573 that 8506f3d9fe didn't
properly address. Specifically, outlines of instances still had the
selected color in edit mode in wireframe view. This change is the
same as that commit, just in a different place.
Differential Revision: https://developer.blender.org/D14229
Previously the labels and values in number fields and value sliders
used different padding for the text. This looks weird when they are
placed underneath each other in a column and, as noted by a comment
in the code of `widget_numslider`, they are actually meant to be
aligned.
This patch fixes that by using the same padding that is used for the
number field for the value slider, as well. This also has the benefit,
that the labels of the value sliders don't shift anymore when adjusting
the corner roundness.
Differential Revision: https://developer.blender.org/D14091
This quick fix will populate the runtime orig pointers to avoid
crashes when a grease pencil object uses layer transforms, parenting
or modifiers.
This will have to be revisited and fixed with a better solution.
Enables image user nodes to display the file alpha mode, similar to the
colorspace setting.
Also removes image_has_alpha in favor of using BKE_image_has_alpha, because it
did not check if the image actually had an alpha channel, just if the file format
was capable of supporting an alpha channel.
Differential Revision: https://developer.blender.org/D14153
An alpha component can be specified for an object's color. This adds an alpha
socket to the object info shader node allowing for the alpha component of the
object's color to be accessed in the shader editor.
Differential Revision: https://developer.blender.org/D14141
Fix crash when creating a pose asset for which the file list entry in
the asset browser is scrolled off-screen. Because of the
off-screen-ness, it wasn't loaded into memory, which eventually caused
an unexpected NULL pointer.
The solution was to use a different function (`filelist_file_find_id`)
that can reliably find the file list entry, after which the cache entry
can be created.
Reviewed by: Severin
Differential Revision: https://developer.blender.org/D14265
When drawing the driver editor, only skip drawing the "scrubbing area"
and not the Y-axis values or the scroll bars.
The issue was introduced in rBb3431a88465db2433b46e1f6426c801125d0047d
to avoid drawing the playhead in the Driver Editor but also prevented
the text on the y axis from being drawn.
Reviewed by: Severin, sybren
Maniphest Tasks: T95531
Differential Revision: https://developer.blender.org/D14022
Fix an assert by commenting out the assert.
In normal situations all keyframes are sorted. However, while keys are
transformed, they may change order and then this assertion no longer
holds. The effect is that the drawing isn't perfect during the
transform; the "constant value" bars aren't updated until the
transformation is confirmed. Apart from that, the code runs fine, so it
seems like a workable workaround.
The annotation tool is used as a general marking tool by many add-ons. Being able to detect when an annotation is done is very handy for integrating the annotation tool in add-ons and other studio workflows.
The new callback names are: `annotation_pre` and `annotation_post`
Both callbacks are exposed via the Python module `bpy.app.handlers`
Example use:
```
import bpy
def annotation_starts(gpd):
    print("Annotation starts")

def annotation_done(gpd):
    print("Annotation done")
bpy.app.handlers.annotation_pre.clear()
bpy.app.handlers.annotation_pre.append(annotation_starts)
bpy.app.handlers.annotation_post.clear()
bpy.app.handlers.annotation_post.append(annotation_done)
```
Note: The handlers are called for any annotation tool, including eraser.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D14221
Using press to activate the Tweak tool doesn't work well when used as a
fallback tool, as the drag event is often used by the current tool -
making it impossible not to select when dragging (unless the fallback
tool is disabled entirely).
Resolve this by using CLICK events when the Tweak tool is used as a
fallback.
Even though this avoids the crash, check for null-pointer de-reference
since changes to the key-map shouldn't cause operators to crash.
Note that the ability for operators to access a gizmo before it's fully
initialized is a more general problem that should be addressed, but out
of scope for a bug-fix.
Reviewed By: zeddb, JulienKaspar, Severin
Maniphest Tasks: T95591
Ref D14231
When a grease pencil data-block has multiple users and is subject
to modifiers, layer transforms or parenting, performance
(especially playback) is greatly affected.
This was caused by the grease pencil eval process which does per
instance full-copies of the original datablock in case those
kinds of transformations need to be applied.
This commit changes the behavior of the eval process to do shallow
copies (layers with empty frames) of the datablock instead
and duplicates only the visible strokes.
When we need to have a unique eval data
per instance, only copy the strokes of visible
frames to this copy.
Performance:
On a test file with 1350 frames 33k strokes and 480k points
in a single grease pencil object that was instanced 13 times:
- master: 2.8 - 3.3 fps
- patch: 42 - 52 fps
Co-authored by: @filedescriptor
This patch was contributed by The SPA Studios.
Reviewed By: #grease_pencil, pepeland
Differential Revision: https://developer.blender.org/D14238
Regression in d961adb866;
it's important that the Mesh used for undo storage matches
the shape-key instead of using the coordinates of the Basis key.
Prior to bfdbc78466 a different method of
restoring the basis shape-key coordinates was used (restoring from the
input `Mesh.mvert` array). When undo wrote the edit-mesh into the mesh
this was always NULL so the basis shape keys coordinates were never
used.
Now a parameter has been added so undo can use the active shape for the
mesh's vertex coordinates.
Reviewed By: sergey
Maniphest Tasks: T96205
Ref D14258
Undo would only invalidate image-owned GPU textures. Textures
that are owned by the editor were not refreshed. This patch
invalidates all the GPU textures by marking the whole image dirty.
This can be improved later as we could add partial updates of GPU
textures.
Reviewed By: mont29
Maniphest Tasks: T96163
Differential Revision: https://developer.blender.org/D14259
Caused by oversight in 2bcf93bbbe. Operator returns `OPERATOR_CANCELLED`
when it should return `OPERATOR_FINISHED`.
Reviewed By: mano-wii, campbellbarton
Differential Revision: https://developer.blender.org/D14243
The "curve_type" was transferred to instances because it isn't a
built-in curve attribute. Then it was interpolated as a point
domain attribute from the instance domain in the realize
instances node.
The fix was just missing from 9ec12c26f1.
`curve_type` needs to be marked as a built-in attribute.
Support drag/drop of materials to Properties Material Slots.
See D13549 for more details.
Differential Revision: https://developer.blender.org/D13549
Reviewed by Julian Eisel
This drastically changes the implementation to leverage arbitrary writes
in order to reduce complexity and memory usage and increase speed.
Since we are no longer dependent on the framebuffer requirement, we can
allocate a bigger texture that fits all views and avoid the extra.
Transparency, holdout and emissions are no longer deferred and are now
composited using dual source blending.
The indirect lighting and raytracing are still not functional but will
also get a large refactor of their own.
The node should be faster than in 3.1, for a few reasons:
- It doesn't need to calculate and allocate the curve offsets.
- It doesn't need to de-reference a pointer for each curve.
- The inputs are accessed from the virtual arrays fewer times.
On top of that, I added two other performance improvements:
- The node is multi-threaded when there are many curves.
- There are generated special cases for single value and span inputs.
**Performance**
With a set position node affecting 1 million splines with a selection
based on this node, on an Intel i5 8250U (times are approximate):
| Before | After | Speedup |
| 760 ms | 60 ms | 13x |
Differential Revision: https://developer.blender.org/D14233
When removing a modifier, changing the layer transform or updating
the parent of a grease pencil object that has a multi-user datablock
and animation data, the eval data is not updated properly (after a
frame change). This can also cause memory leaks.
The fix makes sure that we free and reset any runtime copy
(`ob->runtime.gpd_eval`) in `BKE_gpencil_prepare_eval_data`.
Note: As far as we can tell, `ob->runtime.gpd_orig` is unused and could
be removed. The assignment in `BKE_gpencil_prepare_eval_data`
seemed to be unnecessary.
Co-authored-by: @yann-lty
Reviewed By: antoniov
Maniphest Tasks: T96145
Differential Revision: https://developer.blender.org/D14236
Previously you'd have to be careful to drag the image itself. Dragging
anywhere else on the tile (e.g. between the preview and the text, or the
text itself) would trigger border select. This often conflicts with user
expectations and causes frustration when trying to work quickly; I've seen
many people complain about this.
Note that the "hitbox" for dragging is a bit smaller than the tile, to
not make border select by dragging from in-between the tiles too hard.
Differential Revision: https://developer.blender.org/D14228
Just use the image-buffer size and the already provided scale to
determine the size, not the button size (which would always have to
match the scaled image-buffer size or it would give unexpected results).
This patch adds edge selection support for UV editing (refer T76545).
Developed as a part of GSoC 2021 project - UV Editor Improvements.
Previously, selections in the UV editor always flushed down to vertices
and this caused multiple issues such as T76343, T78757 and T26676.
This patch fixes that by adding edge selection support for all UV
operators and adding support for flushing selections between vertices
and edges. Updating UV select modes is now done using a separate
operator, which also handles select mode flushing and undo for UV
select modes. Drawing edges (in UV edge mode) is also updated to match
the edit-mesh display in the 3D viewport.
Notes on technical changes made with this patch:
* MLOOPUV_EDGESEL flag is restored (was removed in rB9fa29fe7652a).
* Support for flushing selection between vertices and edges.
* Restored the BMLoopUV.select_edge boolean in the Python API.
* New operator to update UV select modes and flushing.
* UV select mode is now part of editmesh undo.
TODOs added with this patch:
* Edge support for shortest path operator (currently uses vertex path logic).
* Change default theme color instead of reducing contrast with edge-select.
* Proper UV element selections for Reveal Hidden operator.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D12028
Just showing the library override icon for every item doesn't add much
information, it's just redundant. Displaying the data-block type icon on
the other hand can be useful.
Differential Revision: https://developer.blender.org/D14208
This patch adds a button in the scene to add a new one, but this does not change to the newly created scene because that would break the storyboarding workflow.
This is a common request from storyboarding artists.
Reviewed By: mendio, brecht, ISS
Differential Revision: https://developer.blender.org/D14148
When exiting edit-mode set the vertex coordinates to the basis-shape when editing non-basis keys.
Regression in bfdbc78466.
Reviewed By: sergey
Ref D14234
This is needed since 4d0f846b93
however change in the operator instead of the event handler is correct,
as accepting a press event should suppress drag events unless
the pass-through flag is set.
This is how select & tweak already works.
It's not useful to wrap vertical motion when dragging markers.
It was too easy to accidentally wrap the cursor to the top of a region,
as markers need to be dragged from the bottom edge of the region.
The logic to cycle selected markers wasn't cycling back to the beginning
of the list.
The marker after the selected marker at the cursor frame was also used
to check if a selection existed, causing dragging to transform all
selected markers to de-select all when dragging the last marker.
The node unnecessarily converted to the old data structure to check if
there were any poly splines. Instead, that warning is just removed,
because the node now still sets resolution values in that case, they
just aren't used (before the values weren't set at all). Either way, it
wasn't clear that looping through all of the curve types was worth
the performance cost here.
In `ffmpeg_read_video_frame` fix assignment used as truth value.
In `ffmpeg_seek_recover_stream_position` loop while return value is
greater or equal to 0.
Passing around coordinates for drawing can be quite confusing, it's
often not clear what they represent and where they are currently.
Instead pass around the tile rectangle for drawing and let all code draw
based on that, it's way more clear that way.
Changes shouldn't be user visible.
Correct misspellings in code comments of "vertex" and "vertices".
See D13932 for more details.
Differential Revision: https://developer.blender.org/D13932
Reviewed by Harley Acheson
This adds a prototype for the first brush that can add new curves by
painting on a surface. Note that this can only be used when the curves
object has a surface object set in the properties panel.
The brush can take minimum distance into account. This allows
distributing curves with a somewhat consistent density.
Differential Revision: https://developer.blender.org/D14207
This commit removes the outline from instances generated from an object
when in edit mode. This takes the change in aa13c4b386 a bit further,
with the idea that instance outlines are more like regular outlines.
Because evaluated object data that doesn't match the original object
type is treated as an instance internally, this fixes the way evaluated
meshes for curves objects have an outline, for example.
See the differential revision for a visual comparison.
Differential Revision: https://developer.blender.org/D14226
This patch enables the outliner to use the correct icon for each
of the curve subtypes (Curve/Surface/Font).
Differential Revision: https://developer.blender.org/D14093
Reviewed by: Julian Eisel
Since removal of tweak events 4986f71848,
box-select is activating while dragging files.
As far as I can tell this used to work because of differences
in the order tweak / click-drag events are handled.
Apply a workaround since dragging files doesn't prevent other parts
of the UI from being activated (it's possible to open menus, for example);
this is something we will likely want to limit which would resolve
this bug too.
There are two issues revealed in the bug report:
- the GPU subdivision does not support meshes with only loose geometry
- the loose geometry is not subdivided
For the first case, checks are added to ensure we still fill the
buffers with loose geometry even if no polygons are present.
For the second case, this adds
`BKE_subdiv_mesh_interpolate_position_on_edge` which encapsulates the
loose vertex interpolation mechanism previously found in
`subdiv_mesh_vertex_of_loose_edge`.
The subdivided loose geometry is stored in a new specific data structure
`DRWSubdivLooseGeom` so as to not pollute `MeshExtractLooseGeom`. These
structures store the corresponding coarse element data, which will be
used for filling GPU buffers appropriately.
Differential Revision: https://developer.blender.org/D14171
Partially revert aa71414dfc
This attempted to make the click-drag event compatible with old tweak
events, but needs to be re-thought since it caused events to be handled
in unexpected situations.
This utility is useful when using C types that own some resource in
a C++ file. It mainly helps in functions that have multiple return
statements, but also simplifies code by moving construction and
destruction closer together.
Differential Revision: https://developer.blender.org/D14215
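For illustration only, a minimal sketch of such a RAII wrapper around a C-style resource; the `CResource` type and its `c_resource_new`/`c_resource_free` functions are hypothetical stand-ins, not the actual Blender utility or API:

```
#include <cstdio>
#include <cstdlib>

/* Hypothetical C-style API that owns a heap resource. */
struct CResource { int value; };
static CResource *c_resource_new() { return static_cast<CResource *>(std::calloc(1, sizeof(CResource))); }
static void c_resource_free(CResource *res) { std::free(res); }

/* Minimal scope guard: frees the wrapped pointer when it goes out of scope,
 * so every early return releases the resource without an explicit free call. */
class ScopedCResource {
  CResource *ptr_ = nullptr;
 public:
  explicit ScopedCResource(CResource *ptr) : ptr_(ptr) {}
  ~ScopedCResource() { c_resource_free(ptr_); }
  ScopedCResource(const ScopedCResource &) = delete;
  ScopedCResource &operator=(const ScopedCResource &) = delete;
  CResource *operator->() const { return ptr_; }
};

int main() {
  ScopedCResource res{c_resource_new()};
  res->value = 42;
  if (res->value != 42) {
    return 1; /* Early return: the destructor still frees the resource. */
  }
  std::printf("%d\n", res->value);
  return 0;
}
```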
This fix contains two parts. There was one critical mistake where
order of two indices was wrong when removing constraint planes from
the array. The other changes are improvements to the used thresholds
to keep everything numerically stable.
Differential Revision: http://developer.blender.org/D14183
This branch was previously only run when the action had been handled;
since the action checks were removed it was always running. The assignment
does nothing, but it shouldn't be kept.
previously_visible_components_mask was not preserved for Image ID nodes, which
meant it was always detected as newly visible and tagged to be updated, which
in turn caused the geometry nodes using it to be always updated also.
Reviewed By: sergey, JacquesLucke
Maniphest Tasks: T94609
Differential Revision: https://developer.blender.org/D14217
Supporting two kinds of dragging is redundant, remove tweak events as
they only supported 3 mouse buttons and added complexity from using the
'value' to store directions.
Support only click-drag events (KM_CLICK_DRAG) which can be used with
any keyboard or mouse button.
Details:
- A "direction" member has been added to keymap items and events which
can be used when the event value is set to KM_CLICK_DRAG.
- Keymap items are version patched.
- Loading older key-maps are also updated.
- Currently the key-maps stored in ./release/scripts/presets/keyconfig/
still reference tweak events & need updating. For now they are updated
on load.
Note that in general this won't impact add-ons as modal operators don't
receive tweak events.
Reviewed By: brecht
Ref D14214
Clicking and dragging markers accumulates flags from multiple operators
in a way that can't be interpreted when combined.
Follow tweak behavior for cancelling click-drag events.
Replacing tweak key-map items with click-drag caused selection
in the graph/sequencer/node editors to ignore drag events
(all uses of WM_generic_select_modal).
Operators that return OPERATOR_PASS_THROUGH | OPERATOR_FINISHED
result in the event loop considering the event as being handled.
This stopped WM_CLICK_DRAG events from being generated which is not the
case for tweak events.
As click-drag is intended to be compatible with tweak events,
accept drag events when WM_HANDLER_BREAK isn't set or when
wm_action_not_handled returns true.
A simple case of missing the tangent VBO. The tangents are computed from
the coarse mesh, and interpolated on the GPU for the final mesh. Code for
initializing the tangents, and the vertex format for the VBO was
factored out of the coarse extraction routine, to be shared with the
subdivision routine.
The last step of proxy building caused a crash due to a thread race condition
when accessing ed->seqbase.
Looking at code, it seems that calling IMB_close_anim_proxies on
original strips is unnecessary so this code was removed to resolve this
issue.
Locking seqbase data may be possible, but it's not very practical as
many functions access this data on demand which can easily cause
program to freeze.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14210
Mousemove events are sent to windows.
On Windows, almost all mousemove events are sent to the window the
mouse cursor is over.
On macOS, mousemove events always go to the active window,
regardless of whether the cursor is inside or outside that window.
So, in order for non-active windows to also receive events,
`WM_window_find_under_cursor` is called to find those windows and send
the same events.
The problem is that to find the window, `WM_window_find_under_cursor`
only has the mouse coordinates available, it doesn't differentiate
which monitor these coordinates came from.
So the mouse on one monitor may incorrectly send events to a window on
another monitor.
The solution used is to use a native API on Mac to detect the window
under the cursor.
For Windows and Linux nothing has changed.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14197
The subdivision modifier for Grease Pencil handles closed strokes
correctly now and does converge to the same shape as the mesh
subdivision surface.
Differential Revision: http://developer.blender.org/D14218
Create `Curves` directly, instead of using the conversion from
`CurveEval`. This means that the `tilt` and `radius` attributes
don't need to be allocated. The old behavior is kept by using the
right defaults in the conversion to `CurveEval` later on.
The Bezier segment primitive isn't ported yet, because functions
to provide easy access to built-in attributes used for Bezier curves
haven't been added yet.
Differential Revision: https://developer.blender.org/D14212
Currently, any time a Curves data-block is created, the `curves_random`
function runs, filling it with 500 random curves, also adding a radius
attribute. This is just left over from the prototype in the initial
commit that added the type.
This commit moves the code that creates the random data to the curve
editors module, like the other primitives are organized.
Differential Revision: https://developer.blender.org/D14211
This patch hides the MetalRT checkbox for AMD GPUs, pending fixes for MetalRT argument encoding on AMD.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14175
The issue was uncovered by 0f89bcdbeb, but the root cause goes
back to a much earlier design violation in the code: the modifier
evaluation function is modifying the input mesh, which is not something
that is ever expected.
Bring code closer to the older state where such modification is only
done for the object in edit mode.
---
From my own tests it seems to work fine, but extra eyes and testing
are needed.
Differential Revision: https://developer.blender.org/D14191
db4313610c added support for modifier
keys to be released while dragging.
The key release events set wmEvent.prev_type which is used to select the
drag threshold, causing the wrong threshold to be used.
Add wmEvent.prev_click_type which is only set when the drag begins.
The image engine is depth aware, but when using tile drawing the depth was
only updated for the central image, which led to showing the background
on top of other areas.
Also make sure that switching tile drawing leads to an update
of the texture slots.
When the dimensions of an image aren't a multiple of 256, parts of the GPU
textures were not updated. This patch calculates the correct part of
the image that needs to be reuploaded.
Internally the update tiles are 256x256. Due to some miscalculations
tiles were not generated correctly if the dimensions of the image weren't
a multiple of 256.
Previously we used to cache a float image representation of the image in
rect_float. This adds some incorrect behavior as many areas only expect
one of these buffers to be used.
This patch stores float buffers inside the image engine. This is done per
instance. In the future we should consider making a global cache.
Use a flag for events to avoid adding struct members every time a new
kind of tag is needed - so events remain small.
This also simplifies copying settings as flags can be copied at once
with a mask.
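A generic sketch of the pattern described above (bit flags instead of separate boolean members); the names here are illustrative and not Blender's actual event struct:

```
#include <cstdint>

/* Each tag is one bit, so new tags don't grow the struct. */
enum EventFlag : uint8_t {
  EVENT_IS_REPEAT = 1 << 0,
  EVENT_IS_CONSECUTIVE = 1 << 1,
};

struct Event {
  uint8_t flag = 0; /* Combination of EventFlag bits. */
};

/* Copying settings between events becomes a single masked assignment. */
inline void copy_event_flags(Event &dst, const Event &src, uint8_t mask)
{
  dst.flag = (dst.flag & ~mask) | (src.flag & mask);
}
```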
- Rename the ED_view3d_win_to_delta `mval` argument to `xy_delta`,
as it was misleading since this is a screen-space offset, not a region
relative cursor position (the typical use of the name `mval`).
Also rename the variable passed to this function which also used the
term `mval` in many places.
- Re-order the output argument of ED_view3d_win_to_delta last and
use an r_ prefix for return arguments.
- Document how the `zfac` argument is intended to be used.
- Split ED_view3d_calc_zfac into two functions as the `r_flip` argument
was only used in some special cases.
Small fixes to the drawing of multi input sockets:
- Make the outline thickness consistent with normal node sockets,
independent from the screen DPI.
- Only highlight multi input sockets when they are actually selected.
- Skip selected multi inputs when drawing normal selected sockets.
Differential Revision: https://developer.blender.org/D14192
Currently the code expects the radius attribute to always exist on the
input Curves. This won't be true in the future though, so the correct
default value of one should be used when creating the data on CurveEval,
where the data is not optional.
This commit improves the drawing of selected node links:
- Highlight the entire link to make it easier to spot where the link
is going/coming from.
- Always draw selected links on top, so they are always clearly
visible.
- Don't fade selected node links when the sockets they are connected
to are out of view.
- Dragged node links still get a partial highlight when they are only
attached to one socket.
Differential Revision: https://developer.blender.org/D11930
Add a std::move in some places to prevent arrays from being copied.
These cases were potentially optimized by the compiler, but this makes
it more explicit.
Differential Revision: https://developer.blender.org/D14129
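For reference, a minimal stand-alone illustration of the pattern (unrelated to the actual arrays involved): moving instead of copying when the source is no longer needed.

```
#include <utility>
#include <vector>

std::vector<float> build_values() {
  std::vector<float> values(1000, 0.0f);
  /* ... fill values ... */
  return values; /* Return value is moved (or elided) automatically. */
}

int main() {
  std::vector<float> source = build_values();
  /* Without std::move this line would copy all 1000 floats;
   * with it, the buffer is transferred and `source` is left empty. */
  std::vector<float> destination = std::move(source);
  return destination.size() == 1000 ? 0 : 1;
}
```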
New code from the vertex normal refactor cfa53e0fbe combined with older code
from 592759e3d6 that disabled instancing for custom normals and autosmooth
meant that instancing was always disabled.
However we do not need to disable instancing for custom normals and autosmooth
at all, this can be shared between instances just fine.
The constraint operators for delete, apply, copy and copy to selected
were missing null checks and could crash Blender when called wrongly
from the Python API.
Differential Revision: http://developer.blender.org/D14195
This commit changes `CurveComponent` to store the new curve
type by adding conversions to and from `CurveEval` in most nodes.
This will temporarily make performance of curves in geometry nodes
much worse, but as functionality is implemented for the new type
and it is used in more places, performance will become better than
before.
We still use `CurveEval` for drawing curves, because the new `Curves`
data-block has no evaluated points yet. So the `Curve` ID is still
generated for rendering in the same way as before. It's also still
needed for drawing curve object edit mode overlays.
The old curve component isn't removed yet, because it is still used
to implement the conversions to and from `CurveEval`.
A few more attributes are added to make this possible:
- `nurbs_weight`: The weight for each control point on NURBS curves.
- `nurbs_order`: The order of the NURBS curve
- `knots_mode`: Necessary for conversion, not defined yet.
- `handle_type_{left/right}`: An 8 bit integer attribute.
Differential Revision: https://developer.blender.org/D14145
The UI context was only set for the operator polls, but not for the
drop-box polls. Initially I thought this wouldn't be needed since the
drop-boxes should leave up context polls to the operator, but in
practice that may not be what API users expect. Plus the tooltip for the
drop-boxes will likely have to access context anyway, so they should be
able to check it beforehand.
Motion paths can now be initialised to more sensible frame ranges,
rather than simply 1-250:
- Scene Frame Range
- Selected Keyframes
- All Keyframes
The Motion Paths operators are now also added to the Object context menu
and the Dopesheet context menu.
The scene range operator was removed, because the operators now
automatically find the range when baking the motion paths.
The clear operator is now split into "Selected Only" and "All",
because it was not clear to the user what the button was doing.
Reviewed By: sybren, looch
Maniphest Tasks: T93047
Differential Revision: https://developer.blender.org/D13687
Caused by 0f89bcdbeb and was not fully addressed by 6f9828289f:
tagging an ID with flag 0 is to be seen as an explicit tag for copy
on write.
Would be nice to either consolidate code paths of flag 0 and explicit
component tag, or get rid of tagging with the 0 flag, but that is beyond
what we can do for the upcoming release.
When using anchored strokes the diameter of the stroke can be 0, which
leads to a division by zero that on certain platforms wraps to a large
negative number that cannot be looked up. This fix clamps the size
of the brush to 1.
Now drag & tweak allow modifier keys to be released while dragging.
Without this, modifier keys need to be held, which is more noticeable
for tablet input or whenever the drag threshold is set to a large value.
Resolves T89989.
This fixes T93051, T76405 and maybe others.
Characters like '²' and '<' are not recognized in Blender's shortcut keys.
Sometimes even simple keys like {key .} and {key /} on the Windows
keyboard, although the symbol is "known", are not
detected by Blender for shortcuts.
For Windows, some of the symbols represented by `VK_OEM_[1-8]` values,
depending on the language, are not mapped by Blender.
On Mac there is a fallback reading the "actual character value of the
'remappable' keys". But sometimes the character is not mapped either.
On Windows, the solution now mimics the Mac and tries to read the button's
character as a fallback.
For unmapped characters ('²', '<', '\''), now another value is chosen as a
substitute.
More "substitutes" may be added over time.
Differential Revision: https://developer.blender.org/D14149
Some logic and comments in the vertex normal calculation were
left over from when normals were stored in MVert, before
cfa53e0fbe. Normals are never allocated and freed
locally anymore.
In some cases, the normal edit modifier calculated the normals on one
mesh with the "ensure" functions, then copied the mesh and retrieved
the layers "for write" on the copy. Since 59343ee162, normal
layers are never copied, and normals are allocated with malloc instead
of calloc, so the mutable memory was uninitialized.
Fix by calculating normals on the correct mesh, and also add a warning
to the "for write" functions in the header.
The check to see if newly requested attributes are not already in the
cache was not taking into account the possibility that we do not have
new requested attributes (`num_requests == 0`). In this case, if
`attr_used` already had attributes, but `attr_requested` is empty, we
would consider the cache as dirty, and needlessly rebuild the attribute
VBOs.
These features are complicated to support on GPU and hardly compatible
with subdivision in the first place. In the future, with T68891 and
T68893, subdivision and custom smooth shading will be separate workflows.
For now, and to better prepare for this future (although a long term
plan), we should discourage workflows mixing subdivision and custom
smooth normals, and as such, this disables GPU subdivision when
autosmoothing or custom split normals are used.
This also adds a message in the modifier's UI to indicate that GPU
subdivision will be disabled if autosmooth or custom split normals are
used on the mesh.
Differential Revision: https://developer.blender.org/D14194
Reuse the same vertex normals calculation as for the GPU code, by
weighing each vertex normals by the angle of the edges incident to the
vertex on the face.
Additionally, remove limit normals, as the CPU code does not use them
either, and would also cause different shading issues when limit surface
is used.
Fixes T95242: shade smooth artifacts with edge crease and limit surface
Fixes T94919: subdivision, different shading between CPU and GPU
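A simplified, stand-alone sketch of the angle-weighting idea described above (plain C++ for triangles, not the actual subdivision or GPU code): each face adds its normal to a vertex normal scaled by the corner angle at that vertex.

```
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalized(const Vec3 &v) {
  const float len = std::sqrt(dot(v, v));
  return (len > 0.0f) ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

/* Accumulate angle-weighted face normals into per-vertex normals. */
std::vector<Vec3> vertex_normals(const std::vector<Vec3> &positions,
                                 const std::vector<std::array<int, 3>> &triangles)
{
  std::vector<Vec3> normals(positions.size());
  for (const std::array<int, 3> &tri : triangles) {
    for (int corner = 0; corner < 3; corner++) {
      const Vec3 &v = positions[tri[corner]];
      const Vec3 e0 = normalized(sub(positions[tri[(corner + 1) % 3]], v));
      const Vec3 e1 = normalized(sub(positions[tri[(corner + 2) % 3]], v));
      /* Weight the face normal by the corner angle between the two incident edges. */
      const float angle = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dot(e0, e1))));
      const Vec3 face_normal = normalized(cross(e0, e1));
      Vec3 &n = normals[tri[corner]];
      n.x += face_normal.x * angle;
      n.y += face_normal.y * angle;
      n.z += face_normal.z * angle;
    }
  }
  for (Vec3 &n : normals) {
    n = normalized(n);
  }
  return normals;
}
```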
The custom data code checks for `LayerTypeInfo.defaultname` before
adding a second layer with a certain type. This was missed in
e7912dfa19. In practice, this default name
is not actually used.
Added call to ensure that the USD plugins are registered
when opening a USD cache archive. This is to avoid USD
load errors due to missing USD file format plugins when
opening Blender files that contain USD transform cache
constraints and mesh sequence cache modifiers.
Fixes T94396
This operation can only be applied on one ID at a time, so only apply it
to the active Outliner item, and not all the selected ones.
Also renamed `Make Library Override` menu entry to `Make Library Override
Single` to emphasize this is not the 'default expected' option for the
user.
This currently affects essentially the Outliner 'create hierarchy' tool.
Previously the code did not properly handle the hierarchy root in case overrides
were created from a non-root ID (e.g. an object inside of a linked
collection), and in case additional partial overrides were added to an
existing partially overridden hierarchy.
Also did some renaming on the go to avoid using 'reference' in override
context for anything else but the reference linked IDs.
Trust the user count to decide whether or not to delete the dragged ID when the
current drag is cancelled, since it may already be used by others.
NOTE: This is more a band-aid fix than anything else, cancelling drag
has a lot of other issues here (like never deleting any indirectly
linked/appended data, etc.). It needs a proper rethink in general.
To keep consistency with the new contract option, the dilate now expands the shape beyond the internal closed area.
Note: This was committed only in master (3.2) by error.
This is requested by artists for some animation styles where it is necessary to fill the area, but create a gap between fill and stroke.
Also some code cleanup and a fix for a bug in dilate for the top area.
Reviewed By: pepeland, mendio
Differential Revision: https://developer.blender.org/D14082
Note: This was committed only in master (3.2) by error.
Using flags makes checking multiple modifiers at once more convenient
and avoids macros/functions such as IS_EVENT_MOD & WM_event_modifier_flag
which have been removed. It also simplifies checking if modifier keys
have changed.
This means textures need to have the number of mipmap levels specified
upfront. It does not mean the data is immutable.
There is fallback code for OpenGL < 4.2.
Immutable storage will enable texture views in the future.
Without ray offsets, intersections at neighboring triangles are found, as
the ray start is exactly at the vertex. There was a small offset towards
the center of the triangle, but not enough.
Now this offset computation is moved into Cycles and modified for better
results. It's still not perfect though like any offset approach, especially
with long thin triangles.
Additionally, this uses the shadow terminate offset for AO rays now, which
helps remove some pre-existing artifacts.
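A rough sketch of the basic "offset towards the triangle interior" idea mentioned above; the actual epsilon handling and the modified computation in Cycles differ from this simplification:

```
struct Vec3 { float x, y, z; };

/* Nudge a ray origin that sits exactly on a vertex towards the triangle
 * centroid, so the intersection test doesn't immediately re-hit the
 * neighbouring triangles sharing that vertex. */
Vec3 offset_ray_origin(const Vec3 &origin, const Vec3 &v0, const Vec3 &v1, const Vec3 &v2, float epsilon)
{
  const Vec3 center = {(v0.x + v1.x + v2.x) / 3.0f,
                       (v0.y + v1.y + v2.y) / 3.0f,
                       (v0.z + v1.z + v2.z) / 3.0f};
  return {origin.x + (center.x - origin.x) * epsilon,
          origin.y + (center.y - origin.y) * epsilon,
          origin.z + (center.z - origin.z) * epsilon};
}
```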
This is similar to f8fe0e831e, which made the change to the
handle position attributes. This commit removes the way that setting the
`position` attribute also changes the handle position attributes. Now,
the "Set Position" node still has this behavior, but changing the
attribute directly (with the modifier's output attributes) does not.
The previous behavior was a relic of the geometry nodes design
from before fields and the set position node existed.
This makes the transition to the new curves data structure simpler.
There is more room for optimizing the Bezier case of the set position
node in the future.
Also fix a couple other places where normals layers weren't properly
tagged dirty or reallocated when the mesh changes.
Caused by cfa53e0fbe. When the size of a mesh changes,
the normal layers need to be reallocated. There were a couple of places
that cleared other runtime data with `BKE_mesh_runtime_clear_geometry`
but didn't deal with normals properly. Clearing the runtime "geometry"
is different from clearing the normals, because sometimes the size of
the normal layers doesn't have to change, in which case simply tagging
them dirty is fine.
Reverts 6d97fdc37e. A function like this should not return a different
tree-display object than of the requested type. This may hide errors,
and leaves the Outliner in an undefined state (where the stored display
mode doesn't match the tree-display object). I'd rather not hide the
fact that all display-modes should be handled here, and emit a clear
error if one isn't.
When X-ray mode is active the selection is done using the mesh data to
select what is closest to the cursor. When GPU subdivision is active with
the "show on cage" modifier option, this fails as the mesh used for selection
is the unsubdivided one.
This creates a subdivision wrapper before running the selection routines to
ensure that subdivision is available on the CPU side as well.
Differential Revision: https://developer.blender.org/D14188
This was a double free error which happened because `BM_mesh_bm_from_me`
was taking ownership of arrays that were still owned by the Mesh. Note that
this only happens when the mesh is empty but some custom data layers still
have a non-null data pointer. While usually the data pointer should be null in
this case for performance reasons, other functions should still be able to
handle this situation.
Differential Revision: https://developer.blender.org/D14181
When exporting generated coordinates, the subdivision export code was
using the schema for the non-subdivision case, which is invalid as
non-initialized. This typo existed since the initial commit for the
feature (rBf9567f6c63e75feaf701fa7b78669b9a436f13dd).
The scale-to-fit option did nothing for single words when
the text box had a height. This happened because it was expected that
text would be wrapped, however single words never wrap.
Now the same behavior for zero-height text boxes is used when text
can't be wrapped onto multiple lines.
The handle position attributes `handle_left` and `handle_right` had
rather complex behavior to get expected behavior when aligned or auto/
vector handles were used. In order to simplify the attribute API and
make the transition to the new curves data structure simpler, this
commit moves that behavior from the attribute to the "Set Handle
Positions" node. When that node is used, the behavior should be the
same as before. However, if the modifier's output attributes were used
to set handle positions, the behavior may be different. That situation
is expected to be very rare though.
This adds a node with a boolean field output which returns true if all of the
points of the evaluated face are on the same plane. A float field input allows
for the threshold of the face/point comparison to be adjusted on a per face basis.
One clear use case is to only triangulate faces that are not planar.
Differential Revision: https://developer.blender.org/D13906
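A minimal sketch of one way such a planarity test can work (a hypothetical helper, not the node's actual implementation): measure each corner's distance to a plane through the face centroid with the face normal, and compare against the threshold.

```
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* True if every corner lies within `threshold` of the plane defined by the
 * face centroid and a unit-length face normal. */
bool face_is_planar(const std::vector<Vec3> &corners, const Vec3 &unit_normal, float threshold)
{
  if (corners.empty()) {
    return true;
  }
  Vec3 centroid = {0.0f, 0.0f, 0.0f};
  for (const Vec3 &c : corners) {
    centroid.x += c.x;
    centroid.y += c.y;
    centroid.z += c.z;
  }
  const float n = float(corners.size());
  centroid = {centroid.x / n, centroid.y / n, centroid.z / n};

  for (const Vec3 &c : corners) {
    if (std::fabs(dot(sub(c, centroid), unit_normal)) > threshold) {
      return false;
    }
  }
  return true;
}
```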
Recently we changed the build pipeline to always create a version
number in the URL and point 'dev' to the latest version, rather than creating the version number URL once we release.
This makes the check against `bpy.app.version_cycle` unnecessary.
tbb/enumerable_thread_specific.h drags in windows.h
which will define min/max macros unless you politely
ask it not to.
It's a bit of an eyesore, but it is what it is.
0fd72a98ac called functions to set bezier handle positions
that used uninitialized memory. The fix is to define the handle positions
explicitly, like before.
The main goal here is to add the boilerplate code to make it possible
to add the actual sculpt tools more easily. Both brush implementations
added by this patch are meant to be prototypes which will be removed
or refined in the coming weeks.
Ref T95773.
Differential Revision: https://developer.blender.org/D14180
This adds a node which copies part of a geometry a dynamic number
of times.
Different parts of the geometry can be copied differing amounts
of times, controlled by the amount input field. Geometry can also
be ignored by use of the selection input.
The output geometry contains only the copies created by the node.
If the amount input is set to zero, the output geometry will be
empty. The duplicate index output is an integer index with the copy
number of each duplicate.
Differential Revision: https://developer.blender.org/D13701
Sometimes it is useful to get the index ranges that are in an index mask.
That is because some algorithms can process index ranges more efficiently
than generic index masks.
Extracting ranges from an index mask is relatively efficient, because it is
cheap to check if a span of indices contains a contiguous range.
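As a small stand-alone illustration (plain integer vectors rather than Blender's IndexMask type), splitting a sorted list of unique indices into contiguous ranges can look like this:

```
#include <cstdint>
#include <utility>
#include <vector>

/* Split a sorted list of unique indices into contiguous [start, end) ranges. */
std::vector<std::pair<int64_t, int64_t>> extract_ranges(const std::vector<int64_t> &indices)
{
  std::vector<std::pair<int64_t, int64_t>> ranges;
  size_t i = 0;
  while (i < indices.size()) {
    size_t j = i;
    /* Grow the run while the indices stay consecutive. */
    while (j + 1 < indices.size() && indices[j + 1] == indices[j] + 1) {
      j++;
    }
    ranges.emplace_back(indices[i], indices[j] + 1);
    i = j + 1;
  }
  return ranges;
}
```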
Drag Action was constantly resetting itself to "move".
Solve this by storing the tool settings per tool and no longer clearing
gizmo properties when activating a new tool.
- Reserve "test" for tests & testing frameworks.
- Add "check_mypy" to "make help" text.
- Output to the standard output instead of redirecting to log-files,
leave redirecting output log-files to the user running the command.
The build-bot directly references this file and doesn't
have publicly available configuration.
Add an empty file until this can be removed by the build scripts.
This will make the transition to the new curves data structure
a bit simpler, since the handle types can be copied directly between
the two. The change to CurveEval is simple because it is runtime-only.
Helps with building against different OpenXR SDK versions (i.e. for
downstream builds that require specific versions), as the extension was
only defined since OpenXR 1.0.22.
These were only set in two places. One was related to "tessellated loop
normal", and the other derived corner normals. The values were never
checked though, after 59343ee162. The handling of dirty face
corner normals is clearly problematic, but in the future it should be
handled like the normal layers on the other domains instead.
Ref D14154, T95839
Currently, when normals are calculated for a const mesh, a custom data
layer might be added if it doesn't already exist. Adding a custom data
layer to a mesh is not thread-safe, so this can be a problem in some
situations.
This commit moves derived mesh normals for polygons and
vertices out of `CustomData` to `Mesh_Runtime`. Most of the
hard work for this was already done by rBcfa53e0fbeed7178.
Some changes to logic elsewhere are necessary/helpful:
- No need to call both `BKE_mesh_runtime_clear_cache` and
`BKE_mesh_normals_tag_dirty`, since the former also does the latter.
- Cleanup/simplify mesh conversion and copying since normals are
handled with other runtime data.
Storing these normals like other runtime data clarifies their status
as derived data, meaning custom data moves more towards storing
original/editable data. This means normals won't automatically benefit
from the planned copy-on-write refactor (T95845), so it will have to be
added manually like for the other runtime data.
Differential Revision: https://developer.blender.org/D14154
Allow SCREEN_OT_area_swap to operate between different Blender
windows, and other minor feedback improvements.
See D14135 for more details and demonstrations.
Differential Revision: https://developer.blender.org/D14135
Reviewed by Campbell Barton
Currently the RNA functions to add mesh elements like vertices
don't clear the runtime cache of things like triangulation, BVH
trees, etc. This is important, since they might be accessed with
incorrect sizes. This is split from a fix for T95839.
This is still far from perfect but it is better than not working
correctly.
The view/casters intersection bounds are too big and rough to
compute a decent tilemap level that is near the desired shadow pixel
density.
The algorithm works relatively ok if the sun direction is almost
parallel to the ortho view direction.
Atomic operations performed by the C++ standard library might require
libatomic on platforms which do not have hardware support for those
operations.
This change makes it that such configurations are automatically detected
and -latomic is added when needed.
Differential Revision: https://developer.blender.org/D14106
NOTE: This function is currently unused. However, it does use a callback
defined by a few RNA properties through
`RNA_def_property_editable_array_func`, so I don't think it should be
removed without further consideration.
This function had two main issues:
* It was doing bitwise AND on potentially three sources of property
flags, when the actually used `RNA_property_editable` only ever uses
one source.
* It was completely ignoring liboverride cases.
TODO: Deduplicate code between `RNA_property_editable`,
`RNA_property_editable_info` and `RNA_property_editable_index`.
There was accidentally some displacement related code running even when not
using displacement.
Differential Revision: https://developer.blender.org/D14169
Previously, objects and geometries were mapped between frames
using different hash tables in a way that is incompatible with
geometry instances. That is because the geometry mapping happened
without looking at the `persistent_id` of instances, which is not possible
anymore. Now, there is just one mapping that identifies the same
object at multiple points in time.
There are also two new caches for duplicated vbos and textures used for
motion blur. This data has to be duplicated, otherwise it would be freed
when another time step is evaluated. This caching existed before, but is
now a bit more explicit and works for geometry instances as well.
Differential Revision: https://developer.blender.org/D13497
For an upcoming prototype we would introduce a new eTexPaintMode
option. That would add more cases and if statements. This change migrates
eTexPaintMode to 3 classes: AbstractPaintMode contains a shared interface,
ImagePaintMode handles 2D painting and ProjectionPaintMode handles 3D painting.
Reset Defaults left the undo stack in an invalid state,
with the active undo step left at the previous state rather than
the one it should have been.
Now the button's own undo logic is used to perform undo pushes.
1. Now handles cyclic strokes correctly.
2. Added a sharp threshold value to allow preservation of sharp corners.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14044
1. Now will remove lines if both adjacent faces are back faces.
2. Added a check to respect the material back face culling setting.
3. Changed the label in the modifier to "Force Backface Culling" (which reflects more accurately what the checkbox does).
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14041
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
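For reference, the point/plane relation being relied on (a generic sketch, not the Blender functions named above): for a plane with unit-length normal n and offset d, the signed distance of a point p is simply dot(n, p) + d.

```
#include <cmath>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 normal; float d; }; /* Plane equation: dot(normal, p) + d = 0. */

static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Scale the plane so its normal has unit length (assumes a non-degenerate plane);
 * the signed distance of any point then comes out of a single dot product. */
Plane normalize_plane(const Plane &plane)
{
  const float len = std::sqrt(dot(plane.normal, plane.normal));
  return {{plane.normal.x / len, plane.normal.y / len, plane.normal.z / len}, plane.d / len};
}

float signed_distance(const Plane &normalized_plane, const Vec3 &point)
{
  return dot(normalized_plane.normal, point) + normalized_plane.d;
}
```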
This is a regression partially introduced in rB0a6f428be7f0.
Bones being transformed in edit mode were snapping to themselves,
and bones in pose mode weren't snapping at all.
(Curious that this was not reported).
Since Python 3.10 is now supported on all platforms,
bump the minimum version to reduce the number of Python versions that
need to be supported simultaneously.
Reviewed By: LazyDodo, sybren, mont29, brecht
Ref D13943
This is an alternate fix for T35170 since it caused T44415.
Having the undo system manipulate the key-block coordinates is error
prone as (in the case of T44415) there are situations when it's
important to apply the difference with the original shape key.
This reverts dab0bd9de6, and instead
avoids the problem by not using the data in `Mesh.key` as a reference
for updating shape-keys when exiting edit-mode.
The assumption that the `Mesh.key` in edit-mode won't be modified
until leaving edit-mode isn't always true. Leading to synchronization
errors. (details noted in code-comments).
Resolve this by using shape-key data stored in the BMesh.
Resolving both T35170 & T44415.
Details:
- Remove use of the original vertices when exiting edit mode.
- Remove use of the original shape-key coordinates when exiting
edit-mode (except as a last resort).
- Move shape-key synchronization into a separate function:
`bm_to_mesh_key`.
- Split the synchronization loop into two branches,
depending on the existence of BMesh shape-key coordinates.
- Always write shape-key values back to the BMesh CD_SHAPEKEY layers.
This was only done in some cases but is now necessary for all
shape-keys as these are used to calculate offsets where the `Mesh.key`
was previously used.
- Report a warning when the shape-key layer isn't found as this uses an
imperfect method of restoring coordinates which should only be used as
a last resort.
Reviewed By: mont29
Ref D14127
The problem occurred when Object Offset was enabled: the Constant Offset flag was not checked, so the offset was always added to the transformation matrix.
Limit the min and max of the IDProperty for the node group input
from 0 to infinity, and the soft min and max between 0 and 1.
Thanks to @PratikPB2123 for investigation.
Instead of accessing the `CD_NORMAL` layer directly,
use the proper API for accessing mesh normals. Even if the
layer exists, the values might be incorrect due to a deformation.
Related to ef0e21f0ae, 969c4a45ce, and T95839.
The flags overlapped ever since normalize was added, so this
requires versioning to copy the flag value. This needs to be
done both in Blender 3.1 and 3.2.
Differential Revision: https://developer.blender.org/D14165
The flags overlapped ever since normalize was added, so this requires
versioning to copy the flag value.
Differential Revision: https://developer.blender.org/D14165
The code was using the same flag value for different modifiers,
resulting in matching the toggle to random overlapping flags.
Differential Revision: https://developer.blender.org/D14165
The constraints solver is now able to handle more cases correctly.
Also the behavior of the boundary fixes is slightly changed if
the constraints thickness mode is used.
Differential Revision: https://developer.blender.org/D14143
The modifier supports arithmetic operations, like Add or Multiply,
but for some reason omits Minimum and Maximum. They are similarly
simple and useful math functions and should be supported.
Differential Revision: https://developer.blender.org/D14164
When RMB-select uses "Select Tweak" as a fallback tool,
ignore all bindings mapped to the Control key as these are
used for path selection.
This was fixed in 2a2d873124
however that caused shift-select to fail (T93100).
The node animation versioning code passes `nullptr` to the `oldName` and
`newName` parameters, but those weren't `NULL`-safe. I added an extra
check for this.
No functional changes, just a crash fix.
Previously, all operators using `PaintStroke` would have to store
the stroke in `op->customdata`. That made it impossible to store
other operator specific data in `op->customdata` that was unrelated
to the stroke.
This patch changes it so that the `PaintStroke` is passed to api
functions as a separate argument, which allows storing the stroke
as a subfield of some other struct in `op->customdata`.
Although the float buffers are only used as a cache, current code paths
don't look at the flags to identify which kind of image it is. The actual
fix would be to check the flags, but that isn't something to add one
week before release.
This commit fixes it by removing the buffers after use in the image
engine.
The previous behavior called RNA_enum_item_add a second time,
filled its contents with invalid values, then subtracted totitem.
This caused an unusual reallocation pattern where MEM_recallocN
would run in order to create the dummy item, then again when adding
another item (reallocating an array of the same size).
Simply memset the array to 0xff instead.
Even though the size of the map was set back to DEFAULT_SIZE_EXP,
the underlying arrays were left unchanged. In some cases this caused
further expansions to result in an unusual reallocation pattern
where MEM_reallocN would run to expand the entries into an array
that was in fact the same size.
Bug was introduced in D12167.
Reading the input size from an input socket is not possible in the tiled compositor before execution is initialized. A similar change was done for the base class, see BlurBaseOperation::init_data().
Reviewed by: Jeroen Bakker
Differential Revision: https://developer.blender.org/D14067
This allows removing the indirection for LODs during shading since the
tile is not the owner of the page unless it uses it.
The cache system is quite a bit more complex but makes it easier to spot
errors since the pages are not scattered into the tile texture.
This also simplifies allocation since the free heap is separated from the
cache.
This adds maintenance overhead and it is not that useful when we have reset to default.
If this is something that we want, it should be added dynamically.
Reviewed By: HooglyBoogly, Severin, #user_interface
Differential Revision: https://developer.blender.org/D14151
After using MEM_dupallocN() on the original item/binding, the
user/component path ListBase for the new item/binding needs to be
cleared and each path copied separately.
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This removes manual handling of normals that was hard-coded
to false in the one place the function was called. This change
will help to make a fix to T95839 simpler.
It's better not to expose the details of where the dirty flags are
stored to every place that wants to know if the normals are dirty.
Some of these places are relics from before vertex normals were
computed lazily anyway, so this is more of an incremental cleanup.
This will make part of the fix for T95839 simpler.
Make the relation match material and world nodes. Does not address the reported
issue regarding muted nodes, but fixes another missing update found while investigating.
This was an old issue, but recent image partial update changes made this more
likely to happen in some cases. Now ensure that whenever the rendered scene
switches the image is updated.
This patch aims to fix the issues presented in T87829 and T95331,
namely precision issues while connecting two nodes that are too
close together in the node editor, in a few cases even
resulting in the complete inability to connect nodes.
Sockets are found by intersecting a padded rect around the cursor
with the nodes' sockets' location. That creates ambiguities, as it's
possible for the padded rect to intersect with the wrong node,
as the distance between two nodes is smaller than the rect is padded.
The fix in this patch is checking against an unpadded rectangle in
visible_node().
Differential Revision: https://developer.blender.org/D14122
The problem was that the code for sorting polygons around a vertex
assumed that it was a manifold or boundary vertex. However in some cases
the vertex could still be nonmanifold causing the crash. The cases where
the sorting fails are now detected and these vertices are then marked as
nonmanifold.
Differential Revision: https://developer.blender.org/D14065
The "Fill Caps" option on the Curve to Mesh node introduced in
rBbc2f4dd8b408ee makes it possible to fill the open ends of the sweep
to create a manifold mesh.
This patch fixes an edge case, where caps were created even when the
rail curve (the curve used in the "Curve" input socket) was cyclic
making the resulting mesh non-manifold.
Differential Revision: https://developer.blender.org/D14124
In ffmpeg 5.0, several variables were made const to try to prevent bad API usage.
Removed some dead code that wasn't used anymore as well.
Reviewed By: Richard Antalik
Differential Revision: http://developer.blender.org/D14063
Significantly improves loading speed of preview images from disk, e.g. custom
previews loaded using `bpy.utils.previews.ImagePreviewCollection.load()`.
See D14144 for details & comparison videos.
Differential Revision: https://developer.blender.org/D14144
Reviewed by: Bastien Montagne
Currently, whenever any BMesh is converted to a Mesh (except for edit
mode switching), original index (`CD_ORIGINDEX`) layers are added.
This is incorrect, because many operations just convert some Mesh into
a BMesh and then back, but they shouldn't make any assumption about
where their input mesh came from. It might even come from a primitive
in geometry nodes, where there are no original indices at all.
Conceptually, mesh original indices should be filled by the modifier
stack when first creating the evaluated mesh. So that's where they're
moved in this patch. A separate function now fills the indices with their
default (0,1,2,3...) values. The way the mesh wrapper system defers
the BMesh to Mesh conversion makes this a bit less obvious though.
The old behavior is incorrect, but it's also slower, because three
arrays the size of the mesh's vertices, edges, and faces had to be
allocated and filled during the BMesh to Mesh conversion, which just
ends up putting more pressure on the cache. In the many cases where
original indices aren't used, I measured an **8% speedup** for the
conversion (from 76.5ms to 70.7ms).
Generally there is an assumption that BMesh is "original" and Mesh is
"evaluated". After this patch, that assumption isn't quite as strong,
but it still exists for two reasons. First, original indices are added
whenever converting a BMesh "wrapper" to a Mesh. Second, original
indices are not added to the BMesh at the beginning of evaluation,
which assumes that every BMesh in the viewport is original and doesn't
need the mapping.
Differential Revision: https://developer.blender.org/D14018
This commit renames enums related the "Curve" object type and ID type
to add `_LEGACY` to the end. The idea is to make our aspirations clearer
in the code and to avoid ambiguities between `CURVE` and `CURVES`.
Ref T95355
To summarize for the record, the plans are:
- In the short/medium term, replace the `Curve` object data type with
`Curves`
- In the longer term (no immediate plans), use a proper data block for
3D text and surfaces.
Differential Revision: https://developer.blender.org/D14114
Fix boundary error in `BLI_str_unescape_ex`. The `dst_maxncpy` parameter
indicates the maximum buffer size, not the maximum number of characters.
As these are strings, the loop has to stop one byte early to allow space
for the trailing zero byte.
Thanks @mano-wii for the patch!
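The off-by-one in question is the classic buffer-size vs. character-count confusion; a generic sketch of the corrected pattern (not the actual `BLI_str_unescape_ex` code):

```
#include <cstddef>

/* Copy at most `dst_maxncpy - 1` characters so there is always room for the
 * terminating null byte; `dst_maxncpy` is the destination buffer size in bytes. */
size_t copy_bounded(char *dst, const char *src, size_t dst_maxncpy)
{
  if (dst_maxncpy == 0) {
    return 0;
  }
  size_t len = 0;
  while (len + 1 < dst_maxncpy && src[len] != '\0') {
    dst[len] = src[len];
    len++;
  }
  dst[len] = '\0';
  return len;
}
```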
When removing a node that has a dependence on an ID, like the object
info node, the dependency graph relations weren't updated. This can
cause unexpected performance issues if a complex node tree continues
to depend on an ID that it doesn't actually use anymore. To fix this case,
tag relations for an update if the node has a data-block socket.
Fixes part of T88332
Differential Revision: https://developer.blender.org/D14121
If there is no animation at all, or it's all hidden, the Euler Filter
operator's poll now fails with a message that explains this a bit more,
instead of just the generic "context is wrong" error.
Reviewed By: sybren
Maniphest Tasks: T95135
Differential Revision: https://developer.blender.org/D13967
While this should not happen in theory, very bad/broken/dirty files can
lead to such situations.
So we need to re-ensure valid root IDs after resync (for now, done after
each 'library indirect level' pass of resync; this may not be 100%
bulletproof though, time will tell).
Found while investigating Blender studio issues in Snow parkour short.
In some cases broken files could lead to selecting a shapekey as
hierarchy root ID, which is not allowed.
Found while investigating Blender studio issues in Snow parkour short.
When new display driver is given to the PathTrace ensure that there are
no GPU resources used from it by the work. This solves graphics interop
descriptors leak.
This also fixes the "Invalid graphics context in cuGraphicsUnregisterResource"
error when doing a final render on the display GPU.
Fixes T95837: Regression: GPU memory accumulation in Cycles render
Fixes T95733: Cycles Cuda/Optix error message with multi GPU devices. (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95651: GPU error (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95631: VRAM is not being freed when rendering (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T89747: Cycles Render - Textures Disappear then Crashes the Render
Maniphest Tasks: T95837, T95733, T95651, T95631, T89747
Differential Revision: https://developer.blender.org/D14146
Add check for `NULL` `from` pointer to `BLO_main_validate_shapekeys`,
and delete these shapekeys, as they are fully invalid and impossible to
recover.
Found in a studio production file (`animation
test/snow_parkour/shots/0040/0040.lighting.blend`, svn rev `1111`).
Would be nice to know how this was generated too...
This adds initial support for edit mode for the experimental new curves
object. For now we can only toggle in and out of the mode, no real
interaction is possible.
This patch also adds empty menus in edit mode. Those were added mainly
to quiet warnings as the menus are programmatically added to the edit
mode based on the object type and context.
Ref T95769
Reviewed By: JacquesLucke, HooglyBoogly
Maniphest Tasks: T95769
Differential Revision: https://developer.blender.org/D14136
This adds the boilerplate code that is necessary to use the tool/brush/paint
systems in the new sculpt curves mode.
Two temporary dummy tools are part of this patch. They do nothing and
only serve to test the boilerplate. When the first actual tool is added,
those dummy tools will be removed.
Differential Revision: https://developer.blender.org/D14117
Previous implementation had a copy of the image user, which doesn't
contain all the data to identify changes. This patch introduces a new
struct to store the data and can be extended with other data as well
(color spaces, alpha settings).
Check for a camera-view before checking if the view is locked
to the cursor/object since the camera-view takes priority,
it reads better to check that first.
Also reuse the event offset variable.
NDOF navigation in a camera view now behaves like orthographic pan/zoom.
Note that NDOF orbiting out of the camera view has been disabled,
see code comment for details.
Resolves T93666.
For the attribute search button, the tooltip was missing
if the input socket type has the attribute toggle activated.
Differential Revision: https://developer.blender.org/D14142
This affected loading of EXR files with the colorspace set to Linear ACES, as
well as the sky texture in some custom OpenColorIO configurations.
Use the builtin OpenColorIO transform from ACES AP0 to XYZ D65 to fix this.
The idea is to keep `is_any_zero` in the `blender::math` namespace,
so instead of trying to be clever, just move it there and expand the
function where it was used in the class.
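A hedged sketch of what such a helper can look like when kept alongside vector utilities in a math namespace; the names, namespace and vector type here are illustrative only, not the real Blender headers:

```
#include <array>

namespace math_sketch {

template<typename T, int Size> using vec_base = std::array<T, Size>;

/* True if any component of the vector is exactly zero. */
template<typename T, int Size> bool is_any_zero(const vec_base<T, Size> &vec)
{
  for (int i = 0; i < Size; i++) {
    if (vec[i] == T(0)) {
      return true;
    }
  }
  return false;
}

}  // namespace math_sketch
```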
I noticed that there were a few variables that should not be visible by default.
It seems to me to simply be an oversight, so I went ahead and cleaned them up.
Reviewed By: Sybren, Ray molenkamp
Differential Revision: http://developer.blender.org/D14132
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This is a bug on the Blender side, where the depsgraph does not have proper
relations for text object duplis and fails to include the required materials
in the dependency graph. But at least Cycles should not crash.
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
FindOpenImageIO was updated to link to separate OpenImageIO_Util for new
versions, where it is required. For older versions, we can not link to it
because there will be duplicated symbols.
Ref D14128
FindOpenEXR was updated to find new lib names and separate Imath. It's all
added to the list of OpenEXR include dirs and libs.
This keeps it compatible with both version 2 and 3 for now, and doesn't
require changes outside the find module.
Ref D14128
Coarse meshes with high polycount would show as corrupted when GPU
subdivision is used with AMD cards. This was caused by the OpenSubdiv
library not taking `GL_MAX_COMPUTE_WORK_GROUP_COUNT` into account when
dispatching computes. AMD drivers tend to set the limit lower than
NVidia ones (2^16 for the former, and 2^32 for the latter, at least
on my machine).
This moves the `GLComputeEvaluator` from the OpenSubdiv library into
`intern/opensubdiv` and modifies it to compute a dispatch size in a
similar way as for the draw code: we split the dispatch size into a 2
dimensional value based on `GL_MAX_COMPUTE_WORK_GROUP_COUNT` and
manually compute an index in the shader.
We could have patched the OpenSubdiv library and sent the fix upstream
(which can still be done), however, moving it to our side allows us to
better control the `GLComputeEvaluator` and in the future remove some
redundant work that it does compared to Blender (see T94644) and
probably prepare the ground for Vulkan support. As a matter of fact,
this patch also removes the OpenGL initialization that OpenSubdiv would
do here. This removal is not related to the bug fix, but necessary to not
have to copy more files/code over.
Differential Revision: https://developer.blender.org/D14131
Previously, the number of action map subactions was limited to two per
action (identified by user_path0, user_path1), however for devices with
more than two user paths (e.g. Vive Tracker) it will be useful to
support a variable amount instead.
For example, a single pose action could then be used to query the
positions of all connected trackers, with each tracker having its own
subaction tracking space.
NOTE: This introduces breaking changes for the XR Python API as follows:
- XrActionMapItem: The new `user_paths` collection property
replaces the `user_path0`/`user_path1` properties.
- XrActionMapBinding: The new `component_paths` collection property
replaces the `component_path0`/`component_path1` properties.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D13949
This fixes VR pink screen issues when using the DirectX backend, caused
by `wglDXRegisterObjectNV()` failing to register the shared
OpenGL-DirectX render buffer. The issue is mainly present on AMD
graphics, however, there have been reports on NVIDIA as well.
A limited workaround for the SteamVR runtime (AMD only) was provided
in rB82ab2c167844, however this patch provides a more complete solution
that should apply to all OpenXR runtimes. For example, with this patch,
the Windows Mixed Reality runtime that exclusively uses DirectX can now
be used with AMD graphics cards.
Implementation-wise, a `GL_TEXTURE_2D` render target is used as a
fallback for the shared OpenGL-DirectX resource in the case that
registering a render buffer (`GL_RENDERBUFFER`) fails. While using a
texture render target may be less optimal than a render buffer, it
enables proper display in VR using the OpenGL/DirectX interop (tested
on AMD Vega 64).
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14100
After running the breakdown operator in the graph editor,
the factor property in the redo panel didn't reflect the value you chose.
To mitigate that issue down the line, there is a
new helper function to get the factor value and
store it at the same time.
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D14105
Ref: D14105
Brew's Python framework's site-packages is a symlink so the assumption
that Resources and site-packages would be in the same directory
doesn't hold. So install scripts etc relative to bpy.so. Part of D14111
StringGrid has been deprecated in openvdb 9.0.0 and will be removed soon
Reviewed By: Brecht
Differential Revision: http://developer.blender.org/D14133
The general idea here is to wrap the `CurvesGeometry` DNA struct
with a C++ class that can do most of the heavy lifting for the curve
geometry. Using a C++ class allows easier ways to group methods, easier
const correctness, and code that's more readable and faster to write.
This way, it works much more like a version of `CurveEval` that uses
more efficient attribute storage.
This commit adds the structure of some yet-to-be-implemented code,
the largest thing being mutexes and vectors meant to hold lazily
calculated evaluated positions, tangents, and normals. That part might
change slightly, but it's helpful to be able to see the direction this
commit is aiming in. In particular, the inherently single-threaded
accumulated lengths and Bezier evaluated point offsets might be cached.
Ref T95355
Differential Revision: https://developer.blender.org/D14054
Finding the greatest and/or smallest element in an array is a common
need. This commit refactors the point cloud bounds code added in
6d7dbdbb44 to a more general header in blenlib.
This will allow reusing the algorithm for curves without duplicating it.
Differential Revision: https://developer.blender.org/D14053
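A generic version of such a bounds helper might look like the following sketch; the real blenlib function, its types and any parallelization details differ:

```
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Bounds {
  Vec3 min;
  Vec3 max;
};

/* Component-wise min/max over all positions. Assumes a non-empty input. */
Bounds min_max(const std::vector<Vec3> &positions)
{
  Bounds bounds{positions[0], positions[0]};
  for (const Vec3 &p : positions) {
    bounds.min.x = std::min(bounds.min.x, p.x);
    bounds.min.y = std::min(bounds.min.y, p.y);
    bounds.min.z = std::min(bounds.min.z, p.z);
    bounds.max.x = std::max(bounds.max.x, p.x);
    bounds.max.y = std::max(bounds.max.y, p.y);
    bounds.max.z = std::max(bounds.max.z, p.z);
  }
  return bounds;
}
```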
This is meant to complement the `blender::math` functions recently
added by D13791. It's sometimes desired to template an operation to work
on vector types, but also basic types like `float` and `int`. This patch
adds that ability with a new `BLI_math_base.hh` header.
The existing vector math header is changed to use the `vec_base` type
more explicitly, to allow the compiler's generic function overload resolution
to determine which implementation of each math function to use.
This is a relatively large change, but it also makes the file significantly
easier to understand by reducing the use of macros.
Differential Revision: https://developer.blender.org/D14113
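A reduced sketch of the overload-resolution idea (plain templates and std::array, not the actual BLI headers): the same call site works for scalars and vector types because each has its own overload.

```
#include <array>
#include <cmath>

namespace math_sketch {

/* Scalar overload. */
inline float abs(float value)
{
  return std::fabs(value);
}

/* Vector overload: applies the scalar version per component, so templated
 * code can call math_sketch::abs() on either a float or a vector type. */
template<int Size> std::array<float, Size> abs(const std::array<float, Size> &vec)
{
  std::array<float, Size> result;
  for (int i = 0; i < Size; i++) {
    result[i] = abs(vec[i]);
  }
  return result;
}

}  // namespace math_sketch
```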
Fix possible overflow of Modifier UUID
The code prior to this change was re-generating the modifier's session UUID
prior to copying this ID from the source. This approach has a higher
risk of the modifier's session UUID overflowing and starting to collide with
existing modifiers.
This change makes it so that modifier copy does not re-generate the
session UUID unless it is needed.
Differential Revision: https://developer.blender.org/D14125
GLUT does not support offscreen contexts, which is required for the new
display driver. So we use SDL instead. Note that this requires using a
system SDL package, the Blender precompiled SDL does not include the video
subsystem.
There is currently no text display support, instead info is printed to
the terminal. This would require adding an embedded font and GLSL shaders,
or using GUI library.
Another improvement to be made is supporting OpenColorIO display transforms,
right now we assume Rec.709 scene linear and display.
All OpenGL, GLEW and SDL code was move out of core cycles and into
app/opengl. This serves as a template for apps that want to integrate
Cycles interactive rendering, with a simple OpenGLDisplayDriver example.
In general this would be adapted to the graphics API and color management
used by the app.
Ref T91846
Due to recent changes there have been reports of incorrect loading of
GPU textures. This fix reverts a part of {D13238} that might be the
source of the issue.
This does not happen with **any** image, but with images that have ID
properties.
ID properties are used to store view projection matrices (e.g. for
reprojection with `Image from View` or `Quick Edit` -- these are the
ones we are interested in), but of course they can be used for anything
else, too. The images in the file from the report have ID properties from
an Addon for example.
So the crash can reliably be reproduced with **any** image doing the
following:
```
bpy.data.images['myImage']['myIDprop'] = "foo"
```
This would lead code in `texture_paint_camera_project_exec` to think the
needed `view_data` is on the image (but in reality it was just some
other IDprop).
Solution is simple: just check `view_data` is really valid after getting
it from the IDprops.
Maniphest Tasks: T95787
Differential Revision: https://developer.blender.org/D14116
Having this setting stored in the image space caused low level selection
logic to have to pass around the image space (which could be NULL
in some cases). Use the tool-settings instead since there doesn't seem
to be much/any advantage in having this setting per-space.
Although rB56407432a6a did fix missing subdivision in some cases, in
other cases it did not return the mesh wrapper (like when using
autosmooth, which requires a copy of the mesh), so the non-subdivided
mesh was still returned.
Not all coarse vertices were used to compute the center value (off by
one), and the interpolation for the current patch would always start at the
base corner of the base face instead of the base corner of the current
patch.
This patch reverses the dependency between `BLI_math_vec_types.hh` and
`BLI_math_vector.hh`. Now the higher level `blender::math` functions
depend on the header that defines the types they work with, rather than
the other way around.
The initial goal was to allow defining an `enable_if` in the types header
and using it in the math header. But I also think this operations to types
dependency is more natural anyway.
This required changing the includes some files used from the type
header to the math implementation header. I took that change a bit
further removing the C vector math header from the C++ header;
I think that helps to make the transition between the two systems
clearer.
Differential Revision: https://developer.blender.org/D14112
This commit renames `mesh_validate.cc` to `mesh_calc_edges.cc`.
I would like to move `mesh_validate.c` to C++, but it makes sense to
keep this specific algorithm in a smaller file.
When running `./blender/build_files/build_environment/install_deps.sh` on Ubuntu 20.04.3 LTS the following error can be seen:
```
./blender/build_files/build_environment/install_deps.sh: line 1266: [: too many arguments
```
This error results from the call:
```
check_package_version_ge_DEB $CLANG_FORMAT $CLANG_FORMAT_VERSION
```
with `CLANG_FORMAT_VERSION` being undefined.
Also, as `clang-format` 13 is already released and hopefully didn't break anything, `CLANG_FORMAT_VERSION_MEX` could use a version bump.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D13924
This adds a new sculpt mode to the experimental new curves object.
Currently, this mode can only be entered and exited, nothing else.
The main initial purpose of this mode will be to use it for hair grooming.
The patch also adds the `editors/curves/` directory for the new curves
object, which will be necessary for many other things as well.
I added a completely new mode (`OB_MODE_SCULPT_CURVES`), because
`OB_MODE_SCULPT` seems to be rather specific to meshes, and reusing
it doesn't seem worth the trouble. The tools/brushes used in mesh vs.
curves sculpt mode are quite distinct as well.
I had to add DNA_userdef_enums.h to make the patch compile with C++
(forward declaration of enums isn't allowed). This follows the same
pattern that we use for other enums in dna.
Differential Revision: https://developer.blender.org/D14107
The path calculation method for animation players: frame-cycler, rv &
mplayer would fail when the number of digits exceeded the range of a
32bit int causing RenderData.frame_path() to raise a ValueError.
Use a simpler method of extracting the range that uses the sign to
detect the beginning of the number.
Since the output file stays unmodified for most developer builds,
install step installed it redundantly.
Create readme.html using `configure_file`:
- Now it's modified only if final output changes (handled by CMake).
- If input file (from git) or blender version changes,
it //will// be modified.
Also don't re-implement what CMake can do.
Reviewed By: campbellbarton, LazyDodo
Differential Revision: https://developer.blender.org/D13863
- Increment the argument index at the end of the loop.
Otherwise using the index after incrementing required subtracting 1.
- Move error prefix creation into a function: `pyrna_func_error_prefix`
so it's possible to create an error prefix without duplicate code.
This simplifies further changes for argument parsing from D14047.
Set "view2d_edge_pan" to true for the NODE_OT_translate_attach operator,
which is used by the duplication operator. This is done in the keymap so
that it's not hard-coded.
Differential Revision: https://developer.blender.org/D13934
The root issue was caused by a mistake in modifier copy data which was
wrongly re-generating the source modifier's data identifier.
Commit c8cca88851 simply exposed a bug in code which was always there
since the modifier session UUID was introduced.
Shows the importance of the const qualifier :)
Since now we delegate the evaluation of the last subsurf modifier in the stack
to the draw code, Cycles does not get a subdivided mesh anymore. This is because
the subdivision wrapper for generating a CPU side subdivision is never created
as it is only ever created via `BKE_object_get_evaluated_mesh` which Cycles does
not call (rather, it accesses the Mesh either via `object.data()`, or via
`object.to_mesh()`).
This ensures that a subdivision wrapper is created when accessing the object data
or converting an Object to a Mesh via the RNA/Python API.
Reviewed by: brecht
Differential Revision: https://developer.blender.org/D14048
This is requested by artists for some animation styles where it is necessary to fill the area but create a gap between fill and stroke.
Also some code cleanup and a fix for a bug in dilate for the top area.
Reviewed By: pepeland, mendio
Differential Revision: https://developer.blender.org/D14082
The crash is caused because we did not check whether the RNA pointer is null
before trying to use it. This moves the existing checks from the
modifier panels into the template functions so the logic is a bit more
centralized.
The issue has two causes: on one hand origin indices were not handled
properly, on the other hand the extraction type (Mesh, BMesh, or mapped)
was not detected correctly.
For the second case reuse the MeshRenderData creation from the coarse
code path so that we make the same decisions. Loose geometry extraction
had to be updated to properly handle the BMesh cases.
For the origin indices, in some cases (for edges and faces), the arrays
used by the subdivision code already have the origin indices baked into
them, so mapping them a second time through the origin index layer is
wrong, and could cause out of bounds accesses.
For vertices especially, we would use two arrays: one for mapping
subdivision vertices to coarse vertices, and another one to map coarse
vertices to subdivision loops used for the selection index buffer. The
second one is now removed (which saves a bit of memory) as it did not
have the proper data setup for use with the origin indices, and we can
easily compute it using the first array anyway.
Fix segfault when calling `some_id.id_properties_ui("propname").update()`,
i.e. call the `update()` function without any keyword arguments. In such
a case, Python passes `kwargs = NULL`, but `PyDict_Contains()` is not
`NULL`-safe.
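The fix boils down to guarding the call; a simplified sketch (helper name is hypothetical):
```cpp
#include <Python.h>

/* kwargs is NULL when update() is called without any keyword arguments,
 * and PyDict_Contains() must not be called with a NULL dictionary. */
static bool has_keyword(PyObject *kwargs, PyObject *key)
{
  if (kwargs == NULL) {
    return false;
  }
  return PyDict_Contains(kwargs, key) == 1;
}
```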
The Viewer marked the GPU texture as out of date, but it should have used
mark_full_update, as the GPU textures
are only used by the render/draw engines.
The image/node editor uses the image engine, which has its own GPU textures.
Currently only a single texture slot is used to update the screen.
The current design is implemented to use multiple textures;
for now, limit the number of texture slots to 1.
This node is a bit of a weird case, because it uses the value stored in an
output socket as an input. So when we want to determine if the Dot
changed, we also have to check if the Normal output changed.
A cleaner solution would be to refactor this by either storing the normal
on the node directly (instead of in an output socket), or by exposing it
by a separate input. This refactor should be done separately though.
Currently whenever gl queries are performed for the viewport, a large
1024 byte array is allocated to store the query results (256 of them).
Unfortunately, if any gizmo using a `draw_select` callback is active
(e.g. the transform gizmos), these queries (and allocations) will occur
during every mouse move event.
Change the vector to allow for up to 16 query results before making an
allocation. This provides enough space for every built-in gizmo except
Scale Cage (which needs 27 queries). It also removes unnecessary
allocations from two other related vectors used during query processing.
Differential Revision: https://developer.blender.org/D13784
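For context, a generic sketch of the small-buffer idea (not Blender's actual BLI vector, just the concept): results are stored inline up to a fixed capacity and only spill to the heap beyond that.
```cpp
#include <array>
#include <cstddef>
#include <vector>

template<typename T, std::size_t InlineCapacity> class SmallVector {
 public:
  void push_back(const T &value)
  {
    if (size_ < InlineCapacity) {
      inline_buffer_[size_] = value; /* Common case: no heap allocation. */
    }
    else {
      overflow_.push_back(value); /* Rare case: spill to the heap. */
    }
    size_++;
  }
  std::size_t size() const
  {
    return size_;
  }

 private:
  std::array<T, InlineCapacity> inline_buffer_{};
  std::vector<T> overflow_;
  std::size_t size_ = 0;
};

/* 16 inline results: enough for every built-in gizmo except Scale Cage (27). */
using QueryResults = SmallVector<unsigned int, 16>;
```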
This is a regression partially introduced in rB0a6f428be7f0.
Bones being transformed in edit mode were snapping to themselves,
and the bones in pose mode weren't snapping at all.
(Curious that this was not reported.)
This fix avoids the drift by checking if the previous position equals the new one; in that case, the pen has not moved and the event can be canceled.
Differential Revision: https://developer.blender.org/D13870
The animation playback did not take into account individual stereoscopic views.
This patch fixes this by playing back the active view render.
Reviewed By: campbellbarton
Maniphest Tasks: T91423
Differential Revision: https://developer.blender.org/D14070
Workaround for a compilation issue preventing kernels compiling for AMD GPUs: Avoid problematic use of templates on Metal by making `gpu_parallel_active_index_array` a wrapper macro, and moving `blocksize` to be a macro parameter.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14081
A zero length vector was normalized and the resulting NaN used in further calculations.
This caused trouble on some compilers when using fast math.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D14058
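The guard amounts to something like the following sketch (illustrative only):
```cpp
#include <cmath>

struct float3 {
  float x, y, z;
};

/* Return a zero vector instead of NaN when the input has (near) zero length. */
inline float3 safe_normalize(const float3 &v)
{
  const float length_squared = v.x * v.x + v.y * v.y + v.z * v.z;
  if (length_squared <= 1e-12f) {
    return {0.0f, 0.0f, 0.0f};
  }
  const float inv_length = 1.0f / std::sqrt(length_squared);
  return {v.x * inv_length, v.y * inv_length, v.z * inv_length};
}
```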
* Apple Silicon support enabled on macOS 12.2+
* AMD support enabled on macOS 12.3+
This patch also fixes a device enumeration crash on certain AMD configs which
was caused by over-release of MTLDevice objects.
Differential Revision: https://developer.blender.org/D14090
Improve the nodes' drop shadow by making it scale with the view
and replace the loop for the alpha calculation with something more
explicit.
The amount of drop shadow softness was scaled with the zoom level
and therefore had a fixed screen space size. DPI and UI scale
weren't taken into account either. This patch fixes both issues by
basing the shadow softness on the `widget_unit` that scales correctly
in zoomable views and takes UI scale etc. into account.
Differential Revision: https://developer.blender.org/D13356
* Replace license text in headers with SPDX identifiers.
* Remove specific license info from outdated readme.txt, instead leave details
to the source files.
* Add list of SPDX license identifiers used, and corresponding license texts.
* Update copyright dates while we're at it.
Ref D14069, T95597
Drivers make it way too easy to create dependency loops between IDs, so we
need to use the same trick as in other dependency-following code in this
file to prevent those infinite loops.
It is hard to predict for sure how bad of a hierarchy root this can end up
producing, but in general cases this should be OK.
The rbit instruction is only available starting with ARMv6T2 and
the register prefix is different from what AARCH64 uses.
Separate the 32 and 64 bit ARM branches, add missing ISA checks.
Made sure the code works as intended on macMini with Apple silicon,
and on Raspberry Pi 4 B running 32bit Raspbian OS.
Differential Revision: https://developer.blender.org/D14056
Reduce compute effort of liboverrides resync process by only re-syncing
the parts of the override hierarchy that actually need it.
The main change compared to existing code (which was systematically resyncing
a whole override hierarchy), is that resyncing now operates over several
sub-hierarchies at once, each defined by their own 'resync root' ID.
This ensures that we do not get several new overrides for the same data inside
of the same hierarchy.
Implements T95682.
Differential Revision: https://developer.blender.org/D14079
This patch increases the performance when remapping data.
{D13615} introduced a mechanism to remap multiple items in a single go.
This patch uses the same mechanism when remapping data inside ID datablocks.
Benchmark results when loading the village scene of sprite fright on AMD Ryzen 7 3800X 8-Core Processor
Before this patch 115 seconds
When patch applied less than 43 seconds
There is still some room for improvement by porting relink code.
Reviewed By: mont29
Maniphest Tasks: T95279
Differential Revision: https://developer.blender.org/D14043
Adds helper functions to debug IDRemapper data structure.
`BKE_id_remapper_result_string` converts a given IDRemapperApplyResult
to a readable form for logging purposes.
`BKE_id_remapper_print` prints out the rules inside an IDRemapper struct.
Instead of creating and destroying threads when starting and stopping renders,
keep a single thread alive for the duration of the session. This makes it so all
display driver OpenGL resource allocation and destruction can happen in the same
thread.
This was implemented as part of trying to solve another bug, but it did not
help. Still I prefer this behavior, to eliminate potential future issues with
graphics drivers or with future Cycles display driver implementations.
Differential Revision: https://developer.blender.org/D14086
For reasons unclear, destroying and then recreating a vertex buffer in the
render OpenGL context is affecting the immediate mode vertex buffer in the
draw manager OpenGL context.
Instead just create a single vertex buffer and use it for the lifetime of
the render OpenGL context. There's not really any need to have a separate
one per tile as far as I can tell.
Differential Revision: https://developer.blender.org/D14084
This adds support for exporting attributes from a Blender Curves object to Cycles.
The implementation follows that of the Mesh object. This also creates motion blur
data if the "velocity" attribute is present on the Curves.
Ref T94193
Reviewed By: brecht
Maniphest Tasks: T94193
Differential Revision: https://developer.blender.org/D14088
Due to the freeing and re-creation of textures performed when binding
offscreen viewports, VR viewport textures would be needlessly
re-created every drawing iteration, leading to a negative impact on VR
frame rate.
This was brought to light by 6738ecb64e, which introduced an
additional texture clear operation on initialization and was
prohibitively costly on some systems when performed every frame.
Now, the textures for VR viewports will no longer always be re-created
during offscreen binding, but only when necessary, using a pre-drawing
step (`wm_xr_session_surface_offscreen_ensure()`).
Reviewed By: jbakker, fclem
Differential Revision: https://developer.blender.org/D14059
When using a RGBA16 (`GL_RGBA16`, `DXGI_FORMAT_R16G16B16A16_UNORM`)
swapchain format with Quest 2, no image is presented to the headset.
This can occur when using the SteamVR runtime with an AMD graphics card
(ex. T95374).
Workaround is to move this format after the Quest 2-compatible RGBA16F
formats in the candidates list so that the RGBA16F formats are chosen
instead.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14024
The crash was caused because the function pointers
`s_xrGetOpenGLGraphicsRequirementsKHR_fn`/
`s_xrGetD3D11GraphicsRequirementsKHR_fn` were static and were not
updated with the correct proc address after being set the first time.
As stated in the OpenXR spec: "function pointers returned by
xrGetInstanceProcAddr using one XrInstance may not be valid when used
with objects related to a different XrInstance".
Although it would seem reasonable that the proc address would not
change if the instance was the same (hence the `static XrInstance s_instance;`),
in testing, repeated calls to `xrGetInstanceProcAddr()`
with the same instance still can result in changes (at least for the
SteamVR runtime) so the workaround is to simply set the function pointers
every time, essentially trivializing their `static` designations.
Reviewed By: Severin
Maniphest Tasks: T94268
Differential Revision: https://developer.blender.org/D14023
For the majority of node groups created in Blender 3.0 the behavior does not change.
So far we only found a single file where this setting has an effect.
Differential Revision: https://developer.blender.org/D14078
For an upcoming project we would want to match multiple id types in a
single go. To not replicate the implementation using other types we
introduce `BKE_library_id_can_use_filter_id` that returns all supported
types as a filter.
Not all ID types have a filter_id (ID_LI, ID_KE, ID_SCR); these
exceptions are not available in the filter_id function.
Reviewed By: mont29
Maniphest Tasks: T95279
Differential Revision: https://developer.blender.org/D14061
Use a shorter/simpler license convention, stops the header taking so
much space.
Follow the SPDX license specification: https://spdx.org/licenses
- C/C++/objc/objc++
- Python
- Shell Scripts
- CMake, GNUmakefile
While most of the source tree has been included
- `./extern/` was left out.
- `./intern/cycles` & `./intern/atomic` are also excluded because they
use different header conventions.
doc/license/SPDX-license-identifiers.txt has been added to list all
SPDX license identifiers used.
See P2788 for the script that automated these edits.
Reviewed By: brecht, mont29, sergey
Ref D14069
Complex Solidify creates edge bevel weights on the rim if the
corresponding vertex has some vertex bevel weight. If there are no
edge bevel weights, they were left disabled even if vertex bevel
weights are used.
Some curve objects don't have an evaluated mesh at all, but line art
currently assumes that all curve objects have one before converting
it to a mesh internally. Fix this by checking if the curve object has an
evaluated mesh before skipping it.
The remaining problem is that evaluated meshes from non-mesh objects, or
evaluated curves from non-curve objects, etc. will be ignored if
"Allow Duplicates" is off. That's a different problem though.
Differential Revision: https://developer.blender.org/D14036
For curve-heavy scenes, memory consumption regressed when we switched from MetalRT to bvh2. Allow users to opt in to MetalRT to workaround this.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14071
Disable binary archives on Apple Silicon (issue stems from instancing multiple PSOs from the same binary archive). Pipeline creation still filters through the OS shader cache, mitigating any impact on setup times after the initial render.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14072
This is part of the project of converting `MVert` into `float3`
(more details in T93602). The PBVH update flag is removed and
replaced with a bitmap stored in the PBVH structure. This
patch is similar to D13878. This is mainly setup for an eventual
performance improvement by removing the extra data from
mesh vertices, but if it's consistent with testing in the other patch
doing the same thing for another "temp tag", then it may actually
increase the speed of sculpt code slightly, since less memory needs
to be loaded when checking/changing the flags.
Differential Revision: https://developer.blender.org/D14000
The main issue is that the image and image user are not updated correctly
in `rna_ImageUser_update`. `BKE_image_user_frame_calc` does not set the
correct frame, because the image is null. Also `IMA_GPU_REFRESH` is not
set for the same reason.
When gpu materials are first created, it is expected that the frame is set
correctly, and the flag is set if necessary. Therefore, somewhere during
depsgraph evaluation, those have to be updated. The depsgraph node
to do the update existed already. Now there is a new relation so that it is
executed when the node tree changed, not only when the frame changed.
This is partially caused by a stupid mistake in cfa53e0fbe
where I missed initializing the `vert_normals` pointer in
`MResolvePixelData`. It's also caused by questionable assumptions
from DerivedMesh code that vertex normals would be valid.
The fix used here is to create a temporary mesh with the data necessary
to compute vertex normals, and ensure them here. This is used because
normal calculation is only implemented for `Mesh` and edit mesh, not
`DerivedMesh`. While this might not be great for performance, it's
potentially aligned with future refactoring of this code to remove
`DerivedMesh` completely. Since this is one of the last places the data
structure is used, that would be a great improvement.
Differential Revision: https://developer.blender.org/D13960
While applying liboverrides on linked data, RNA code (in
`rna_property_override_check_resync`) would detect a lot of false positives
regarding IDs needing resync, because the temporary ID used to apply
overrides was always local, breaking the libraries comparison check.
NOTE: that whole 'apply liboverrides' code could use some refreshment,
it did not change much from the initial version several years ago, and we
now have better tools and control over non-main data.
But for now the 'trick' in this commit should do the job; ultimately
those temp IDs should never be put in Main at all.
The crash was happening when the mesh had loose edges.
Loose edges are not part of OpenSubdiv topology and hence should not be
communicated to the refiner. Pass a boolean flag indicating whether an
edge is loose or not in the mesh foreach routines, which seems to be
the easiest way.
Decouple the reference (linked) root ID and the hierarchy (override) root ID.
Previous code was assuming that the reference ID was always also tagged
for override creation, which is true in current master, but cannot be
assumed in general (and won't be true with partial resync anymore).
Also add asserts to validate conditions that the reference/hierarchy_root
variables must meet.
There are two things achieved by this change:
- No possible downcast of size_t to int when calculating motion steps.
- Disambiguate call to `min()` which was for some reason considered
ambiguous on 32bit platforms `min(int, unsigned int)`.
- Do the same for the `max()` call to keep them symmetrical.
On an implementation side the `min()` is defined for a fixed width
integer type to disambiguate uint from size_t on 32bit platforms,
and yet be able to use it for 32bit operands on 64bit platforms without
upcast.
This ended up being a bit bigger change, as the conditional compile-in of
functions is easiest if the functions are templated. Making the functions
templated required removing the other source of ambiguity, which is
`algorithm.h` pulling min/max from std.
Now `math.h` is the source of truth for min/max.
Only one place was relying on `algorithm.h` for these
functions, hence the choice of `math.h` as the safest and least
intrusive.
Fixes 32bit platforms (such as i386) in Debian package build system.
Differential Revision: https://developer.blender.org/D14062
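A hypothetical sketch of what "defined for a fixed width integer type" means in practice (not the actual Cycles header): the overload set is spelled in terms of fixed-width types, so it does not depend on how size_t maps to the builtin integer types on a given platform.
```cpp
#include <cstdint>

/* The same overloads exist whether size_t is 32 or 64 bits wide, so there is
 * no accidental redefinition on 32-bit platforms and no forced upcast of
 * 32-bit operands on 64-bit platforms. */
inline uint32_t min(uint32_t a, uint32_t b) { return a < b ? a : b; }
inline uint64_t min(uint64_t a, uint64_t b) { return a < b ? a : b; }
inline uint32_t max(uint32_t a, uint32_t b) { return a > b ? a : b; }
inline uint64_t max(uint64_t a, uint64_t b) { return a > b ? a : b; }
```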
This implements the update cache described in T95401.
The cache is currently only used for drawing strokes and
sculpting (using the push brush).
**Note: Making use of the cache throughout grease pencil will
have to be done incrementally in other patches. **
The update cache stores what elements have changed in the
original data-block since the last time the eval object
was updated. Additionally, the update cache can store multiple
updates to the data and minimizes the number of elements
that need to be copied.
Elements can be tagged using `BKE_gpencil_tag_full_update` and
`BKE_gpencil_tag_light_update`. A full update means that the element
itself will be copied but also all of the content inside. E.g. when a
layer is tagged for a full update, the layer, all the frames inside the
layer and all the strokes inside the frames will be copied.
A light update means that only the properties of the element are copied
without any of the content. E.g. if a layer is tagged with a light
update, it will copy the layer name, opacity, transform, etc.
When the update cache is in use (e.g. elements have been tagged) then
the depsgraph will not trigger a copy-on-write, but an update-on-write.
This means that the update cache will be used to determine what elements
have changed and then only those elements will be copied over to the
eval object.
If the update cache is empty or the data block was tagged with a full
update, we always fall back to a copy-on-write.
Currently, the update cache is only used by the active depsgraph. This
is because we need to free the update cache after an update-on-write so
it's reset and we need to make sure it is not freed or read by other
depsgraphs.
Co-authored-by: @yann-lty
This patch was contributed by The SPA Studios.
Reviewed By: sergey, antoniov, #dependency_graph, pepeland, mendio
Maniphest Tasks: T95401
Differential Revision: https://developer.blender.org/D13984
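To make the tagging idea concrete, here is a small self-contained sketch (names and structure are illustrative, not the actual BKE_gpencil API): each cache node mirrors one element (layer, frame or stroke) and records whether it needs a full or a light copy.
```cpp
#include <cstddef>
#include <vector>

enum class UpdateTag { None, Light, Full };

/* Illustrative update-cache node, not the real data structure. */
struct UpdateCacheNode {
  int index = 0; /* Index of the element in its parent. */
  UpdateTag tag = UpdateTag::None;
  std::vector<UpdateCacheNode> children;

  UpdateCacheNode &child(int child_index)
  {
    for (UpdateCacheNode &node : children) {
      if (node.index == child_index) {
        return node;
      }
    }
    UpdateCacheNode node;
    node.index = child_index;
    children.push_back(node);
    return children.back();
  }

  /* Tag the element at `path` (e.g. {layer, frame, stroke}). A full tag means
   * the element and everything inside it is copied on the next update-on-write;
   * a light tag copies only the element's own properties. */
  void tag_element(const std::vector<int> &path, UpdateTag leaf_tag, std::size_t depth = 0)
  {
    if (depth == path.size()) {
      if (leaf_tag == UpdateTag::Full || tag == UpdateTag::None) {
        tag = leaf_tag; /* A full update overrides a previously stored light tag. */
      }
      return;
    }
    child(path[depth]).tag_element(path, leaf_tag, depth + 1);
  }
};
```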
Simple upgrade of OpenXR to 1.0.22, following the steps from
https://wiki.blender.org/wiki/Source/OpenXR_SDK_Dependency and
rBb69ab42982a1. No changes to Blender code were necessary, only a version
bump.
The primary motivation for this upgrade is to utilize the
`XR_HTCX_vive_tracker_interaction` extension introduced in ver. 1.0.20.
However, the latest release (1.0.22) also adds a number of potentially
useful extensions such as:
- `XR_FB_render_model`
- `XR_HTC_facial_expression`
- `XR_HTC_vive_focus3_controller_interaction`
Ref T95206
Reviewed By: LazyDodo, sybren, mont29
Maniphest Tasks: T95206
Differential Revision: https://developer.blender.org/D13950
Turn off "Sync Length" when splitting an NLA strip.
The NLA Strip-Action Clip has a setting called "Sync Length" that is on
by default and helps to update the length of the clip to the current
action's keyframes so the strip shows the entire action's stored
animation.
When you split one of these strip-action clips in the NLA to trim it
shorter or to move it somewhere else in the NLA tracks to blend or work
with, the "Sync Length" setting stays checked on. You can have many
strips in the NLA that all look to the same action; if you split one
strip, you now have two strips showing or linked to the same action.
To see or edit keyframes on a strip, you enter tweak mode. When you exit
tweak mode, if "Sync Length" is active on the strip-action clip
settings, the strip length is changed/reset to match the action keys.
When you have many clips, this causes all of them to evaluate and update
no matter where they are in the NLA. This destroys/undoes all the user
work to trim down and place the clips in the NLA.
**Description of the proposed solution:**
This patch/change turns off the "Sync Length" setting when the
split tool is used on a strip/action clip, to help protect the user's
choice to trim and keep the clip at that length. It doesn't change the
ability to turn the setting back on or off, it just makes sure that the
user doesn't accidentally or unknowingly lose work.
The same process happens when an action is created that has a "manual
range" set that is different from the length of the action's start-end
keyframes. It makes sense to do the same thing for the split tool.
**Alternative solutions:**
While the user could know this and turn off this setting by hand, it is
easy to forget, and the user might think that it only happened to the
one clip they were editing and not realize it happened to all the
trimmed versions, changing the user's choice without the user knowing it
happened.
**Limitations of the proposed solution:**
It only fixes the split tool; if another tool were created that somehow
impacted the clip length, we would need to have that tool also take the
sync length into account.
Reviewed By: sybren
Maniphest Tasks: T84486
Differential Revision: https://developer.blender.org/D10168
From a strict language point of view the code required braces around
the `trgba` initialization. But it is easier to rely on the fact that fields
which are not specified are zero-initialized.
Under some circumstances, simply adding a curve object and going
to edit mode would cause a crash. This is because the evaluated
`CurveEval` was accessed but also freed by the dependency graph.
The fix reverts the part of b76918717d that uses the
`CurveEval` for the curve object bounds. While this isn't ideal,
it was the previous behavior, and some unexpected behavior
with object bounds is much better than a crash. Plus, given the plans
of using the new "Curves" data-block for evaluated curves, this
situation will change relatively soon anyway.
Strip aligning wasn't done when the lengths of the 2 strips were equal,
but since these strips are aligned according to their position in the
stream, this can produce an offset.
There are two things achieved by this change:
- No possible downcast of size_t to int when calculating motion steps.
- Disambiguate call to min() which was for some reason considered
ambiguous on 32bit platforms `min(int, unsigned int)`.
On an implementation side the `min()` is defined for a fixed width
integer type to disambiguate uint from size_t on 32bit platforms,
and yet be able to use it for 32bit operands on 64bit platforms without
upcast.
Fixes 32bit platforms (such as i386) in Debian package build system.
Differential Revision: https://developer.blender.org/D13992
This lib allows any shader to use `print()` like functions for
logging and debugging shaders.
Usage is described in the comment at the top of the file.
Added a new "Sharpen Less" kernel to the filter compositor node. The intent here is to provide a much less aggressive sharpening filter that can't simply be solved by toning down the factor on the existing sharpen filter.
The existing "Sharpen" filter uses a "box" kernel:
```
-1 -1 -1
-1 9 -1
-1 -1 -1
```
The new "Sharpen Less" filter uses a "diamond" kernel:
```
0 -1 0
-1 5 -1
0 -1 0
```
The difference between the two is clear to see in the following side-by-side:
{F12847431}
Below shows the difference between the filtering kernels as applied to a B&W render of Suzanne with the UV grid as a texture. The left side of the render using the existing "Sharpen" filter, and the right side showing the new "Sharpen Less" filter. Notice that the left side is more aggressive in accentuating localized contrasts across the image. This can lead to what appears to be aliasing or striations in the resulting image:
{F12847429}
https://developer.blender.org/T95275
https://blender.community/c/rightclickselect/57Kq/?sorting=hot
{F12847428}
Reviewed By: #compositing, jbakker
Differential Revision: https://developer.blender.org/D14019
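For reference, applying such a 3x3 kernel is a plain convolution; a minimal sketch (not the compositor implementation; borders are simply clamped here):
```cpp
#include <algorithm>
#include <vector>

std::vector<float> apply_kernel_3x3(const std::vector<float> &src,
                                    int width, int height,
                                    const float kernel[3][3])
{
  std::vector<float> dst(src.size());
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float sum = 0.0f;
      for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
          /* Clamp sample coordinates so edge pixels reuse the border values. */
          const int sx = std::clamp(x + kx, 0, width - 1);
          const int sy = std::clamp(y + ky, 0, height - 1);
          sum += kernel[ky + 1][kx + 1] * src[sy * width + sx];
        }
      }
      dst[y * width + x] = sum;
    }
  }
  return dst;
}

/* The "Sharpen Less" diamond kernel from above. */
const float sharpen_less[3][3] = {{0, -1, 0}, {-1, 5, -1}, {0, -1, 0}};
```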
This is almost always meaningless as most code has changed
since the comment was added. Besides this, version control can be used
to check if/when a file was modified.
Some cases of this were kept when they contain details
about the original copyright holder.
Order copyright immediately after the license block,
this was done almost everywhere with a few exceptions.
Remove authors from a few files (we had already removed "Contributors"
section however with old patches being applied this gets added back in).
Also move descriptive text into the doxygen comment block under \file.
In some cases remove the text as it was accidentally copied.
GHOST_ISystem::toggleConsole had a somewhat misleading name:
it could be fed 4 different values, so it was not so much a
toggle as a 'set console window state'.
This change renames `toggleConsole` to the more appropriately
named `setConsoleWindowState` and replaces the integer it took
with an enum, so it's easy to tell what is being asked of it at
the call site.
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D14020
Symmetrize Armature now also symmetrizes the transform of custom bone
shapes.
Adds a new property to `bpy.ops.armature.symmetrize`:
//Parameters//:
- **direction **(enum in ['NEGATIVE_X', 'POSITIVE_X'], (optional)) –
**Direction**, Which sides to copy from and to (when both are selected)
- **custom_shape **(enum in ['SYMMETRIZE_SAME', 'SYMMETRIZE_ALL',
'SYMMETRIZE_NONE'], (optional)) – **Custom Shapes**, Whether to
symmetrize non-symmetric custom bone shapes, all custom shapes, or
none at all
//Rationale//:
Reviewed By: #animation_rigging, Mets, sybren
Maniphest Tasks: T91871
Differential Revision: https://developer.blender.org/D13416
Add files with extension ".woff" and ".woff2" to FILE_TYPE_FTFONT
file type. Allows selecting and using these types of font files.
See D13822 for more details.
Differential Revision: https://developer.blender.org/D13822
Reviewed by Campbell Barton
Avoid using the uint64_t as an intermediate cast since it complicates
behavior for signed types (which first need to be cast to an int64_t).
Assign both old_value_i & old_value_f from the original value to avoid
the need for different handling of signed/unsigned types.
Reviewed By: JacquesLucke
Ref D14039
A Bump node without a Height input is meaningless and does nothing.
As such, it is available as an old workaround that allows making Node
Group inputs that default to normal when not connected, by routing
via a no-op Bump node before doing math.
Cycles specifically recognizes this use case and either bypasses
the node, or converts it into a Geometry Normal node, but Eevee
was still evaluating it as usual. That incurred performance cost,
and also normalized the vector unlike Cycles.
This implements the same bypass logic for Eevee. Since I'm not
sure if it's possible to totally remove the node at this stage,
it emits a no-op function call to copy the input vector.
Differential Revision: https://developer.blender.org/D14045
Viewport cull bones during selection to avoid depth-picking
reading the depth buffer for bones that aren't in the viewport.
Files with thousands of bones could hang blender for seconds while
selecting. The issue could still happen with overlapping bones or when
zoomed out so all bones are under the cursor, however in practice this
rarely happens.
Now files with many bones select quickly.
Related changes include:
- Split `BKE_pchan_minmax` out of `BKE_pose_minmax`.
- Add `mat3_to_size_max_axis` to return the length of the largest
axis (used for scaling the radius).
Reviewed By: sybren
Maniphest Tasks: T91253
Ref D13990
This new info simplifies greatly the handling of resync and deleting
liboverride hierarchies, and makes those operations more robust to
complex dependencies cases between different hierarchies.
Related to T95283.
This change will make handling of liboverrides hierarchies (especially
resyncing) much easier and efficient. It should also make it more
resilient to 'degenerate' cases, and allow proper support of things like
parenting an override to another override of the same linked data (e.g.
a override character parented to another override of the same
character).
NOTE: this commit only implements minimal changes to add that data and
generate it for existing files on load. Actual refactor of resync code
to take advantage of this new info will happen separately.
Previously, node selection made no distinction between a frame node and
other nodes. So a frame node would be selected by its whole rect or
center (depending on box/lasso/circle select). As a consequence of this,
box and lasso select could not practically be started inside a frame node
(with the intention to select a subset of contained child nodes) because
the frame would be selected immediately and tweak-transforming started.
Circle selecting would always contain the frame node as well (making
transforming a subset of nodes without also transforming the whole frame
impossible).
Now change selection behavior so that for all selection modes only the
border [the margin area that is automatically added around all nodes,
see note below] of a frame node is considered in selection. This makes
for a much more intuitive experience when arranging nodes inside frames.
note: to make the area of interest for selection/moving more obvious,
the cursor changes when hovering over (as is done for resizing).
note: this also makes the resize margin consistent with other nodes.
note: this also fixes the right resize border (it was exclusive instead of
inclusive like every other border)
Also fixes T46540.
This adds an option to the "Select Similar" operator in edit mode to
select vertices based on vertex crease similarity. The implementation
follows that of the edge crease, with a 1-dimensional KD-tree used to
store and retrieve vertex indices based on crease values.
To maintain compatibility with old files (scripts), the `SIMEDGE_CREASE`
enumeration identifier remains `CREASE`, while the one for the new
`SIMVERT_CREASE` is `VCREASE` to follow the naming convention of other
enum values.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D14037
This Diff changes the computation of the input and output socket
positions so that they are always aligned with their corresponding labels.
Reviewed By: campbellbarton
Ref D14025
Regression in 51befa4108
caused negative values to overflow into a uint64_t.
Resolve by casting to a signed int from originally signed types.
This caused D14033 not to work properly.
Assign the actual value before casting to large uint64_t/double types.
This improves readability, especially in cases where both pointer
and integer casts were used in one expression, to make matters worse
clang-format treated these casts as a multiplication.
This also made debugging/printing the values more of a hassle.
No functional changes (GCC produced identical output).
Account for `CurveEval`, which stores the proper deformed and
procedurally created data, unlike the `nurb` list, which has always
just meant a copy of the original curve.
Also account for the case when the curve is empty by using a -1 to 1
fallback bounding box in that case, just like mesh objects.
The problem was that nullptr was returned which is a valid value for
Mesh * and hence the returned optional was treated as having some value.
There was no check for point clouds so that was fixed as well.
Differential Revision: https://developer.blender.org/D14026
Based on discussions from T95355 and T94193, the plan is to use
the name "Curves" to describe the data-block container for multiple
curves. Eventually this will replace the existing "Curve" data-block.
However, it will be a while before the curve data-block can be replaced
so in order to distinguish the two curve types in the UI, "Hair Curves"
will be used, but eventually changed back to "Curves".
This patch renames "hair-related" files, functions, types, and variable
names to this convention. A deep rename is preferred to keep code
consistent and to avoid any "hair" terminology from leaking, since the
new data-block is meant for all curve types, not just hair use cases.
The downside of this naming is that the difference between "Curve"
and "Curves" has become important. That was considered during
design discussions and deemed acceptable, especially given the
non-permanent nature of the somewhat common conflict.
Some points of interest:
- All DNA compatibility is lost, just like rBf59767ff9729.
- I renamed `ID_HA` to `ID_CV` so there is no complete mismatch.
- `hair_curves` is used where necessary to distinguish from the
existing "curves" plural.
- I didn't rename any of the cycles/rendering code function names,
since that is also used by the old hair particle system.
Differential Revision: https://developer.blender.org/D14007
This patch makes it possible to set the UI color of a node's
header bar and override the default from the node's typeinfo.
Currently the color is taken from the `.nclass`
member of the node's bNodeType->TypeInfo struct.
This is created once when registering the node.
The TypeInfo is used for both UI and non-UI functionality.
Since the TypeInfo is shared, the header bar for the node
can't be changed without changing all nodes of that type.
The Map Range node is shown as a `Converter` or blue color by default.
This patch allows this to be changed dynamically to `Vector` or purple.
This is done by adding a `ui_class` callback to node typeinfo struct.
Reviewed By: HooglyBoogly
Differential Revision: https://developer.blender.org/D13936
If the console was hidden on Windows, it would
be made visible during shutdown in an effort
not to hide a potentially active interactive console
session. This however did not take into account
whether blender was actually launched from an interactive
console, causing the console window to briefly flash
during shutdown even when launched from the new
launcher; the brief flash concerned some users.
This change adjusts the behaviour to restore the
console only when blender was started from the
console.
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D14016
If the "Asset Indexing" option was disabled in Preferences > Experimental
> Debugging, the asset-view template (used by pose libraries for
example) would still use the indexing.
Previously, the nearest interpolation filter was used for previews because
it offered good performance, and bilinear was used for rendering. This
is not always desirable behavior, so the filter method can now be chosen
by the user. The chosen method will be used for both preview and rendering.
Reviewed By: ISS
Differential Revision: https://developer.blender.org/D12807
Dashed line display was changed since 2.7x, using white on black
instead of grey on black.
This made selection difficult to see as white is brighter
than the default selected color.
Match the grey used in 2.7x.
This patch from Henrik Dick improves the continuity between the
grid forming corners and the edge polygons on multisegment bevels.
For details, see patch D13867.
Fixes T95384. New exporter was missing a fix for T94516 that recently got applied to the python exporter.
Also changed the obj export tests code so that when save_failing_test_output is requested and MTL result is different from the golden expectation, it is saved as well, similar to how it's done for the OBJ file result.
This change from Aras further parallelizes within large meshes (the previous one
just parallelized over objects).
Some stats: on a Windows machine, AMD Ryzen (32 threads):
(one mesh) Monkey subdivided to level 6: 4.9s -> 1.2s (blender 3.1 was 6.3s; 3.0 was 49.4s).
(one mesh) "Rungholt" minecraft level: 8.5s -> 2.9s (3.1 was 10.5s; 3.0 was 73.7s).
(lots of meshes) Blender 3 splash: 6.2s -> 5.2s (3.1 was 48.9s; 3.0 was 392.3s).
On a Linux machine (Threadripper, 48 threads, writing to SSD):
Monkey - 5.08s -> 1.18s (4.2x speedup)
Rungholt - 9.52s -> 3.22s (2.95x speedup)
Blender 3 splash - 5.91s -> 4.61s (1.28x speedup)
For details see patch D14028.
Caused by rBf75449b5f2b04b79, which was missing a null check when
attempting to extract a `CustomData` pointer from a mesh that might
be null if the object isn't a mesh object. The commit added null checks
elsewhere, so simply adding them here is a straightforward fix.
Fixes T95526, T95539
Also fixed conflicts due to the change in file writing in the new obj exporter
in master, and fixed one of the tests that was added in master but not 3.1.
The new wavefront .obj exporter in 3.1 was producing slightly invalid parm line syntax (missing u), and was not setting first/last N params to zeroes and ones for curves with "endpoint" flag properly.
Removes unnecessary calls to blf_glyph_cache_find, simplifies
blf_font_size, and reduces calls to it. blf_glyph_cache_new
and blf_glyph_cache_find made static.
See D13374 for more details.
Differential Revision: https://developer.blender.org/D13374
Reviewed by Campbell Barton
Allowing setting and storing of the default font size as float.
See D13230 for more details.
Differential Revision: https://developer.blender.org/D13230
Reviewed by Campbell Barton
Removal of declaration of unused blf_font_draw_ascii
See D13624 for more details.
Differential Revision: https://developer.blender.org/D13624
Reviewed by Campbell Barton
Cache the font size's ideal fixed width column size in the glyph cache
rather than the font itself to improve performance.
See D13226 for more details.
Differential Revision: https://developer.blender.org/D13226
Reviewed by Campbell Barton
blf_glyph_cache_free does not need to be public.
See D13395 for more details.
Differential Revision: https://developer.blender.org/D13395
Reviewed by Campbell Barton
`viewops_data_alloc` allocates and stores some pointers in
`ViewOpsData` while `viewops_data_create` reuses already stored
pointers and also stores others in `ViewOpsData`.
The similar names and usages can be confusing, and in this case it also
creates a dependency on the order in which these functions are called.
Merging these functions simplifies usage and deduplicates code.
Since we have a node that sets a mesh's auto smooth angle
(unfortunately, in retrospect), we generally can't assume at all
that value is the same as whatever input mesh. Similar asserts
were removed previously in 8216b759e9. While the attempt
at assertions to clarify assumptions is noble, this one doesn't
make sense anymore.
I found this while investigating T95479.
Differential Revision: https://developer.blender.org/D14009
- Fix image.format conversion to string
- Fix warnings about ARB_conservative_depth not found even if GL > 4.2
- Add `array(type)` define for portable array definition
Thanks to the new `ShaderCreateInfo` we now include source files without
any modification. This lets us query which source files were passed to the
`print_log` function. The log will now include a file name with row and column
numbers, which is interpreted as a link in most IDEs.
DEBUG_CONTEXT_LINES will add more lines around the error lines for more
context. This is also useful if the error line is imprecise (because of
driver bugs) and the reported line is not sufficient to know the location
of the error.
The DEBUG_DEPENDENCIES option will display the list of included files in
the shader sources. Note that it will not print generated source.
This commit also fixes some issues with unhelpful logs, bogus row & column
numbers, other error formats, and a bug when the row was 0.
Source files are now only referenced and listed for the driver to ingest.
Shader sources now include generated data, if any.
Also cleans up gpu_shader_dependency_get_builtins casts.
This includes multiple commits:
- Fix crash when using std::cerr for error output
- Add auto_resource_location which overrides all resources location (not vert input)
- Improve codestyle of error reporting.
- Add type conversion to string and to `eGPUType`
- Add comparison operator (will be used for hash collision resolution).
- Add members related to generated code (codegen)
Detected on `amdgpu-pro` libGL implementation. The workaround is to not
use explicit location for vertex attributes. This is not a real problem
as we don't rely on them for now.
This was an oversight in the patch that added this node;
the default merge distance is meant to be the same as the weld
modifier's, 0.001m, meaning in most situations it removes
vertices that are generally at the same location.
- Compute ref lets the size of the dispatch be modified just before drawing.
- A barrier call makes it possible to chain multiple compute passes in one pass.
- DRW_shgroup_vertex_buffer_ref is the analog of DRW_shgroup_uniform_block_ref.
This commit adds infrastructure for 8 bit signed integer attributes.
This can be useful given the discussion in T94193, where we want to
store spline type, Bezier handle type, and other small enums as
attributes.
This is only exposed in the interface in the attribute lists, so it
shouldn't be an option in geometry nodes, at least for now.
I expect that this type won't be used directly very often, it
should mostly be cast to an enum type. However, with support
for 8 bit integers, it also makes sense to add things like mixing
implementations for consistency.
Differential Revision: https://developer.blender.org/D13721
Required to solve a crash on windows (T95367)
Mostly an uneventful update, except for FreeType
giving its cmake options a rename.
Reviewed By: brecht, sybren
Differential Revision: https://developer.blender.org/D13968
Every translation unit that included the modified headers generated
some extra code, even though it was not used. This adds unnecessary
compile time overhead and is annoying when investigating the
generated assembly.
Initialization with `MEM_new()` depends a lot on the initialization rules
of C++, which are not obvious. Calling it with no arguments to be passed
to the constructor may cause the resulting object to be implicitly 0
initialized (or parts of it). This can have an impact on performance
sensitive code, so it's something to document.
Alternatively we could enforce default initialization (as opposed to the
value initialization that happens now), but this could cause
uninitialized memory in a way that isn't obvious either. This is a
possible source of bugs, so Jacques and I decided against it.
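The underlying standard C++ distinction being referred to (plain `new` shown here for brevity; the commit describes the analogous behavior for `MEM_new()` with no constructor arguments):
```cpp
struct Vertex {
  float position[3];
  int flag;
};

void example()
{
  Vertex *a = new Vertex(); /* Value-initialized: position and flag are zeroed. */
  Vertex *b = new Vertex;   /* Default-initialized: members are indeterminate. */
  delete a;
  delete b;
}
```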
Use the global F2 rename panel for the NLA editor to rename NLA strips.
Reviewed By: sybren, RiggingDojo
Differential Revision: https://developer.blender.org/D12300
It was missing framework flags added in `setup_platform_linker_flags`.
Keep it off until QuickLook Thumbnailing is implemented.
Differential Revision: https://developer.blender.org/D13997
The check for existence of custom data layers did not take the wrapper nature
of the mesh into account.
The quickest and safest solution for 3.1 is to handle the branching of the
checks in the draw manager.
Ideally both wrapper and mesh access will happen via the same public API
without branching in the "user" code. That is something outside of the fix
for the coming release though.
Differential Revision: https://developer.blender.org/D14013
Instead of using a manual list of dependencies, the new implementation
scans all shader files beginning with `gpu_shader_material_` and extracts
all function declarations.
This way we can deduce the internal dependencies between these files.
This new implementation is merged with the manual pragma dependency system
used by other shader files. This way it is compatible with the shader
logging system and does not require any string duplication during shader
building.
This callback was only needed to allow specific handling of proxies; now
that these have been removed, the generic
`BKE_lib_id_make_local_generic` code works for objects as well.
Byte images are converted to float. Due to an issue with how the VSE cache
frees its images, we cannot store these float buffers, which leads
to recalculating them for each change in the image editor.
This fix will reduce the slowdown to areas that have the root cause of
the memory leak, so the buffers can be reused between refreshes.
NOTE: The root cause should still be fixed.
Thanks for reporting Sybren!
Didn't cause visible issues, because the layout uses spacers to
right-align text, which happens to use the region size with pixel-size
applied for calculations.
Some navigation operators check flags like `RV3D_LOCK_ROTATION` in the
invoke function to see if the operation can be performed.
As the comment indicates, these checks should be in the poll function.
This avoids redundant initialization.
Note that this brings functional changes as now operators with context
`EXEC_DEFAULT` will also be affected by the flag.
(There doesn't seem to be a problem with the current code).
Differential Revision: https://developer.blender.org/D14005
This introduce `DEBUG_DEPENDENCIES` (not a cmake flag but a local define)
which when set to 1, will list all the original files included in this
shader while omitting the generated / non original code.
This is the layout used by the amdgpu-pro GL implementation.
This also adds some sanitizing of the parse output because the bespoke
implementation has bogus errors when it comes to compute shaders.
The strings in the `get_description` functions for operators need
translation, they are not found by the translation system automatically,
and there is no translation applied afterwards either (as far as I could
tell). Some used `N_` before, but most did nothing.
Differential Revision: https://developer.blender.org/D14011
The case where Y rotation is mapped to Y rotation was not handled.
This is now fixed.
Also added an automated test to make sure that the symmetrize operator
functions as intended.
Reviewed By: Sybren
Differential Revision: http://developer.blender.org/D9214
This was caused by macros being interpreted as recursive. Workaround by
not using macros at all and just defining local variables which
hopefully will be optimized.
Crash introduced by {rB0cb5eae}.
When switching between drawing modes, the region.draw_buffer could be
uninitialized when the gizmo depth test is performed. Placing the mouse
on top of a gizmo part that could be highlighted would then crash.
This fix adds an early exit when depth testing is requested but there
isn't a draw_buffer. Not sure this is a root cause fix.
Reported by multiple animators in Blender Studio.
Technically, this can't be relied upon in the long term. It worked more or
less accidentally before. It was broken by a previous fix accidentally. I mainly
bring it back because rBa985f558a6eb16cd6f0 was not expected to have
this side effect.
Note, this change can result in slower performance. Writing to a vertex
group is less efficient than using a generic attribute.
Previously, these methods used the more generic substring-finding
algorithm, which is more complex and slower.
Using the more specialized methods results in a noticeable speedup
in the obj importer (D13958).
Differential Revision: https://developer.blender.org/D14012
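For illustration, the specialized check is essentially a fixed-position comparison rather than a scan (generic sketch, not Blender's StringRef code):
```cpp
#include <cstring>
#include <string>

/* A prefix test compares exactly prefix.size() bytes at a fixed position,
 * instead of searching the whole string like a generic find() would. */
inline bool starts_with(const std::string &str, const std::string &prefix)
{
  if (prefix.size() > str.size()) {
    return false;
  }
  return std::memcmp(str.data(), prefix.data(), prefix.size()) == 0;
}
```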
When viewing the backdrop on top of the node grid, the grid would be
rendered black when the mode wasn't set to RGBA. This is fixed by
reverting the previous fix for drawing the backdrop and implementing a
different one that recomputes the UV coordinates at the screen edges.
DNAstr was assumed to be 4-byte aligned which is not necessarily
the case for byte-arrays.
Use a compiler attribute to ensure this is the case.
Thanks to @mtasaka for investigating and providing a patch.
Part of T91671.
Not much else to say, this is mainly a massive deletion of code.
Note that a few cleanups possible after this proxy removal were kept out
of this commit to try to reduce its size a bit.
Reviewed By: sergey, brecht
Maniphest Tasks: T91671
Differential Revision: https://developer.blender.org/D13995
This doesn't make a user visible difference since it's only used for
brackets at the moment, this is more for general correctness as the
width calculation for mono-spaced text drawing is different
(as it uses BLI_wcwidth).
ED_transverts_create_from_obedit expected an evaluated object.
Add a flag to request TX_VERT_USE_MAPLOC to be set, which avoids having to
calculate this data when it's not used, as well as the requirement
that the input object be evaluated from the depsgraph.
This fixes the crash by removing the `do_view3d_header_buttons` handler.
The code can work at a higher level here, using the operator for setting
the select mode, which makes this patch a cleanup as well.
The operator now has a description callback to add the custom
description used for the behavior in its invoke method.
Differential Revision: https://developer.blender.org/D13660
Since f59767ff97, these hair layer types are unused. Since DNA
compatibility was broken with any files that would contain them, the
indices can be reused to avoid growing custom data's typemap.
In the future when we have a docs staging area it will be
important to change where this JSON is pulled from.
For now, always pull from the "Production" versions.
This commit adds a version switch similar to the one on the user manual;
in the future it would be nice to refactor both of these into more generic
code that works for both, maybe developed into a Sphinx extension.
As part of this change I had to change how the Blender hash is displayed:
instead of showing the version hash in the top left, it has been moved to the page footer.
This change will also be backported to 2.93 LTS and 3.0.
This patch refactors the "Hair" data-block, which will soon be renamed
to "Curves". The larger change is switching from an array of `HairCurve`
to find indices in the points array to simply storing an array of offsets.
Using a single integer instead of two halves the amount of memory for that
particular array.
Besides that, there are some other changes in this patch:
- Split the data-structure to a separate `CurveGeometry`
DNA struct so it is usable for grease pencil too.
- Update naming to be more aligned with newer code and the style guide.
- Add direct access to some arrays in RNA
-- Radius is now retrieved as a regular attribute in Cycles.
-- `HairPoint` has been renamed to `CurvePoint`
-- `HairCurve` has been renamed to `CurveSlice`
- Add comments to the struct in DNA.
The next steps are renaming `Hair` -> `Curves`, and adding support
for other curve types: Bezier, Poly, and NURBS.
Ref T95355
Differential Revision: https://developer.blender.org/D13987
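A minimal sketch of the offset encoding described above (names are illustrative, not the actual DNA structs): curve `i` owns the half-open point range `[offsets[i], offsets[i + 1])`, so a single integer per curve replaces a separate start/size pair.

```cpp
#include <vector>

struct CurvesSketch {
  int point_count = 0;       // Total number of points across all curves.
  std::vector<int> offsets;  // Size = curve count + 1, last entry = point_count.

  // Number of points in curve `curve_i`, derived from neighbouring offsets.
  int points_num(int curve_i) const {
    return offsets[curve_i + 1] - offsets[curve_i];
  }
};
```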
This is partially caused by a stupid mistake in cfa53e0fbe
where I missed initializing the `vert_normals` pointer in
`MResolvePixelData`. It's also caused by questionable assumptions
from DerivedMesh code that vertex normals would be valid.
The fix used here is to create a temporary mesh with the data necessary
to compute vertex normals, and ensure them here. This is used because
normal calculation is only implemented for `Mesh` and edit mesh, not
`DerivedMesh`. While this might not be great for performance, it's
potentially aligned with future refactoring of this code to remove
`DerivedMesh` completely. Since this is one of the last places the data
structure is used, that would be a great improvement.
Differential Revision: https://developer.blender.org/D13960
Iterating over the scene's objects while we modify them (through the proxy to
override conversion code) is asking for problems (use-after-free, etc.).
Instead, all proxy objects need to be gathered first in a temporary
list, and processed all at once in a second loop.
`BKE_collection_object_add` ensures given object is added to an editable
collection, and not e.g. a linked or override one.
However, some processes like do_version also manipulate collections from
libraries, i.e. linked collections. In those cases we need a version of
the code that unconditionally adds the given object to the given
collection.
This is from patch D13988. It removes the "- New" from the menu of the
new obj exporter, changes the default addon to just io_import_obj,
and does the right versioning thing.
Also disables the python tests for the old python exporter.
This patch reverts the normal behavior of the spotlights. In the last fix,
the returned normal of a spot light was equal to its direction. This broke
some texturing methods used by artists.
Differential Revision: https://developer.blender.org/D13991
The view3d_edit.c file is already getting big (5436 lines) and mixes
operators of different uses.
Splitting the code makes it easier to read and simplifies the
implementation of new features.
Differential Revision: https://developer.blender.org/D13976
Those cases are fairly hard to track down... Added some more checks,
also at lower, more generic levels of object editing, and fixed a
core check in liboverride (previously the code was assuming that an override
of a collection could only contain overrides of objects or linked objects,
but this is not necessarily true).
This is from patch D13988. It removes the "- New" from the menu of the
new obj exporter, changes the default addon to just io_import_obj,
and does the right versioning thing.
Also disables the python tests for the old python exporter.
This displays the error source such that IDEs can identify it
as a path and let the programmer follow a direct link instead of
doing a manual string search.
This only works for shaders compiling from unaltered sources as
it uses the source `char*` as key for filename search.
Use the ID.recalc flag to detect when updates after a frame change are
needed, since comparing the last calculated frame doesn't take undo into
account (see code comment for details).
`ID_RECALC_AUDIO_SEEK` has been renamed to `ID_RECALC_FRAME_CHANGE`
since it is not only related to audio; internally this flag is
still categorized in `NodeType::AUDIO`.
Reviewed By: sergey
Ref D13942
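A hedged sketch of the pattern described above, assuming Blender's `DNA_ID.h` / `DEG_depsgraph.h` declarations are available; the call sites and helper names are illustrative only.

```cpp
#include "DEG_depsgraph.h"  /* DEG_id_tag_update() */
#include "DNA_ID.h"         /* ID, ID_RECALC_FRAME_CHANGE */

/* Tag the ID when the frame changes, instead of comparing the last
 * calculated frame (which undo can invalidate). Illustrative call site. */
static void tag_for_frame_change(ID *id)
{
  DEG_id_tag_update(id, ID_RECALC_FRAME_CHANGE);
}

/* Later, the update code can simply test the recalc flag. */
static bool needs_frame_change_update(const ID *id)
{
  return (id->recalc & ID_RECALC_FRAME_CHANGE) != 0;
}
```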
The issue was happening with a specific file where the ID management
code was not fully copying all modifiers because of the extra check
in the `BKE_object_support_modifier_type_check()`.
While it is arguable that copy-on-write should be a 1:1 copy there is
no real need to maintain the per-modifier pointer to its original.
Use its SessionUUID to perform lookup in the original datablock.
Downside of this approach is that it is a linear lookup instead of
direct pointer access, but the upside is that there are fewer pointers
to manage and that the file with unsupported modifiers behaves
correctly without any asserts.
Differential Revision: https://developer.blender.org/D13993
The modifiers are mapped between original and evaluated objects based on
their session IDs. The pointer to original modifier is no longer needed
for the backup: it remained from the initial implementation which was
rewritten at some point.
This is a preparation for removal of the pointer to original modifier.
c9d9bfa84a caused a regression when
the right-mouse select action was set to "Select & Tweak" (default).
Now the fallback tool works with RMB select as it did before.
For some reason, on some GL implementations (amdgpu) this particular
syntax shifts the error lines.
Remove the context lines by default as they are not useful anymore.
We now use ShaderCreateInfo as a way to setup the custom material
implementation.
This is more versatile and flexible while not requiring parsing of
code snippets.
This will allow keeping the access to deprecated DNA proxy data in that
specific file, instead of allowing deprecated accesses in the whole
override kernel code.
Part of T91671.
Part of the resynching code would access collections' objects base
cache, which can be invalid at that point (due to previous ID remapping
and/or deletion). Use a custom recursive iterator over collections'
objects instead, since those 'raw' data like collection's objects list,
and collection's children lists, should always be valid.
Found while investigating a studio production file.
This is a temp fix for a memory leak where the VSE isn't aware that a
float representation of the image could exist. The VSE somehow doesn't
clear it (the refcounter is still 1).
The workaround is just to let the image engine clean up all the data it
created. Potentially this adds more overhead when buffers are needed
more than once.
This refactors how output attributes are computed in the geometry
nodes modifier. Previously, all output attributes were computed one
after the other. Every attribute was stored on the geometry directly
after computing it. The issue was that other output attributes might
depend on the already overwritten attributes, leading to unexpected
behavior.
The solution is to compute all output attributes first before changing the
geometry. Under specific circumstances, this refactor can result in a speedup,
because output attributes on the same domain are evaluated together now.
Overwriting existing attributes might have become a bit slower, because we write
the attribute into a new buffer instead of using the existing one.
Differential Revision: https://developer.blender.org/D13983
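A minimal sketch of the two-phase evaluation described above (container types and names are illustrative, not the modifier's actual code): all outputs are computed from the unmodified geometry first, and only then written back, so later outputs never read values that earlier outputs already overwrote.

```cpp
#include <map>
#include <string>
#include <vector>

using AttributeBuffer = std::vector<float>;
using AttributeMap = std::map<std::string, AttributeBuffer>;

// Phase 1 happens elsewhere: every output attribute is evaluated from the
// unmodified geometry into `computed_outputs`.
// Phase 2 (below): only after all outputs exist are they written back.
void store_output_attributes(AttributeMap &geometry,
                             const AttributeMap &computed_outputs)
{
  for (const auto &[name, buffer] : computed_outputs) {
    geometry[name] = buffer;  // Overwrite only once evaluation is finished.
  }
}
```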
Mistake in 974981a637: if the edit data is not present then the
origindex codepath is to be used. Added a brief note about it at the
top of the file.
Ideally the edit mesh would be removed from non-bmesh wrappers,
but this would require changes in the draw manager to make a proper
decision about drawing edit mode overlays.
Now all proxies will always be converted to library overrides. If
conversion fails, they are simply 'disabled'.
This should be the last 'user-visible' step of proxies removal.
Remaining upcoming commits will remove internal ID management, depsgraph
and evaluation code related to proxies.
Also bump the blendfile subversion.
Part of T91671.
So far linked proxies were just kept as-is; this is no longer an option.
Attempt to convert them into liboverrides as much as possible, though
some cases won't be supported:
- Appending proxies has been broken for a long time, so conversion will fail
here as well.
- When linking data, some cases will fail to convert properly. In
particular, if the linked proxy object is not instanced in a scene
(e.g. when linking a collection containing a proxy as an
empty-instanced collection instead of a view-layer-instanced collection).
NOTE: conversion when linking/appending is done unconditionally; the option
to not convert on file load will be removed in the next commit anyway.
Part of T91671.
This will help when dealing with liboverrides from other library files,
e.g for resync or proxies conversion.
This commit only affects proxy conversion.
Part of T91671.
We have this node for shader and geometry nodes. Compositor can also
work with vectors, and this can help with that.
Reviewed By: manzanilla
Maniphest Tasks: T95385
Differential Revision: https://developer.blender.org/D12919
The geometry nodes modifier currently always adds a dependency
relation from the evaluated geometry to the object transform. However,
that can be avoided unless there is a collection or object info node in
"Relative" mode.
In order to avoid requiring dependency graph relations updates often
when editing a node tree, this patch doesn't check if the node is muted
or if the data-block sockets are empty before adding the dependency.
Fixes T95265
Differential Revision: https://developer.blender.org/D13973
The function `IMB_indexer_get_seek_pos()` can return a non-zero seek position for
frame index 0. This causes seeking to an incorrect GOP and scanning ends
with failure.
Hard-code the first frame index's seek position to 0.
Differential Revision: https://developer.blender.org/D13974
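A hedged standalone sketch of the guard described above; the container and helper below are illustrative, not the actual indexer API.

```cpp
#include <cstdint>
#include <vector>

uint64_t seek_pos_for_frame(const std::vector<uint64_t> &index_seek_positions,
                            int frame_index)
{
  // The index file may store a non-zero position for frame 0, which would
  // seek into the wrong GOP; the first frame must always start at offset 0.
  if (frame_index <= 0) {
    return 0;
  }
  return index_seek_positions[frame_index];
}
```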
Some items on the Quick Setup screen can be truncated with some
languages and/or with High DPI monitors. This patch adjusts column
sizes and turns off the expand on Spacebar options, making everything
fit a bit better.
See D9853 for more details.
Differential Revision: https://developer.blender.org/D9853
Reviewed by Julian Eisel
Using opt-in instead of opt-out to make the code easier to read.
Add a combined flag enum.
Make restrict an inverse flag option because it is so rarely used.
This adds the possibility to use the `gpu_BaryCoord[NoPersp]`
builtin to support barycentric coordinates without a geometry shader.
The `BuiltinBits::LAYER` builtin needs to be manually added
to the `GPUShaderCreateInfo` in order to use this feature.
Note: This is only available for shaders using `GPUShaderCreateInfo`.
A geometry shader fallback is generated if the extension
`AMD_shader_explicit_vertex_parameter` is not available.
`NV_fragment_shader_barycentric` was not considered because it is not
present in the `glew.h` we use and seems to only be available
with Vulkan.
This adds the possibility to use the `gpu_Layer` builtin to
support layered rendering without a geometry shader.
The `BuiltinBits::LAYER` builtin needs to be manually added
to the `GPUShaderCreateInfo` in order to use this feature.
Note: This is only available for shaders using `GPUShaderCreateInfo`.
A geometry shader fallback is generated if the extension
`AMD_shader_explicit_vertex_parameter` is not available.
- Scan all static shaders for builtins on startup.
- Add possibility to manually add builtins.
- `ShaderCreateInfo.builtins_` contain builtins from all stages.
If an entire cyclic stroke was selected, calling "Separate by Points"
would leave a gap in the new object (making the new stroke non-cyclic).
The patch makes sure that if we separate by points and all points are
selected, we fall back to separate by stroke.
Reviewed By: antoniov
Maniphest Tasks: T91463
Differential Revision: https://developer.blender.org/D12527
This patch fixes the error that pops up
(`Error: Unable to execute '... Mode Toggle', error changing modes`)
when trying to switch to e.g. draw mode from a grease pencil object
that was saved in draw mode in an inactive scene when the file was loaded.
Note that this does not fix the bigger issue described in T91243.
The fix makes sure that we reset all the mode flags on the grease pencil
data when we set the mode to object mode.
Reviewed By: antoniov
Maniphest Tasks: T89514
Differential Revision: https://developer.blender.org/D12419
When adding an asset library in the Preferences, set the name of the new
library to the chosen directory's name by default. That avoids having to
set it manually which can be annoying. Previously I thought it would be
nice to show the name button in red then, making the user aware that
they have to give it a name, but that appears to be more annoying than
useful/practical after all.
The issue was that the code only looked at `dob->ob`
instead of `dob->ob_data` which is necessary since
rB5a9a16334c573c4566dc9b2a314cf0d0ccdcb54f.
This now uses the same pattern that is used in other places
where `BKE_object_replace_data_on_shallow_copy` is used.
Blender would crash when renaming a bone in Edit Mode, saving, and
then selecting/deselecting.
Caused by a mistake in 0f89bcdbeb: we cannot "short-circuit" the
CoW update if it was explicitly requested.
The safest solution for now seems to be to store whether the CoW component
has been explicitly tagged, so that the following configuration can be
supported:
DEG_id_tag_update(id, ID_RECALC_GEOMETRY);
DEG_id_tag_update(id, ID_RECALC_COPY_ON_WRITE);
Differential Revision: https://developer.blender.org/D13966
Since splitting the depth and the color shader in the image engine the
backdrop wasn't visible anymore. The reason is that the min/max UV
coordinates never worked for the node editor backdrop, which uses
its own coordinate space.
This partial fix ignores the depth test when drawing the color part
of the backdrop. There will still be visible artifacts when
showing options other than RGBA.
The proper fix would be to calculate the UV VBO in UV space and not in
image space.
Can also happen in other places when the overlay engine is active. Some
parts of the overlay engine use builtin shaders, but disable the color
space conversion to the target texture.
Currently the overlay engine has its own set of libraries it can
include and defines a macro to pass through the color space conversion.
The library include mechanism currently fails when it cannot find the
builtin library in the libraries of the overlay engine. This only
happened in debug mode.
With this change it will not fail, but warn the developer if a library could
not be included. In the future this should be replaced by a different
mechanism that can disable the builtin library. See {T95382}.
Since d9c6ceb3b8 partial updates to
normals in sculpt-mode were accumulating into the current normal
instead of a zeroed value.
Zero the vertex normal values tagged for calculation before accumulation.
Reviewed By: HooglyBoogly
Ref D13975
From investigating T95185, it's important that the normal returned by
SCULPT_vertex_normal_get always matches the PBVH normal array.
Since this is always initialized in the PBVH, there is no advantage
in storing the normal array in two places; it only adds the possibility
that future changes cause different mesh normals to be used.
Split out from D13975.
Since 0ea0ccc4ff, the `AV_PIX_FMT_YUV444P` pixel format was used for
lossless renders, which overrode the `AV_PIX_FMT_YUVA420P` format when
"RGBA" output is chosen. The VP9 encoder doesn't seem to support the
`AV_PIX_FMT_YUVA444P` pixel format, so use `AV_PIX_FMT_YUVA420P` for
lossless RGBA output instead.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D13947
Currently, audio and video strips are synchronized based on data from
the media stream, which is nice, but this causes gaps between strips.
This synchronization was implemented by moving the movie strip position
relative to the sound, which doesn't make much sense for a user who is
mostly interested in editing video.
The code was a bit hard to read, so it has been simplified. Ideally video
stream time would be easily accessible so synchronization could be done
at any time, but this is not necessary at this point.
Reviewed By: zeddb
Differential Revision: https://developer.blender.org/D13948
Correct corner radius of the node outline to prevent a noticeable gap in
some cases.
---
Currently we make a small mistake in the creation of the node outline:
We offset the rectangle describing the outline by the outline thickness,
but we don't adjust the corner radius accordingly.
Therefore the rounded corner of the outline and the node body are not
concentric which can sometimes lead to a visible gap at the corner.
How noticeable it is depends on the theme, the screen's dpi and the
line thickness set in the preferences.
Simply adjusting the corner radius for the outline to also be increased
by the outline thickness fixes this small issue.
| display, line thickness | **patch** | **master** |
| --- | --- | --- |
| 1080p, default/thin | {F12835304} | {F12835305} |
| retina, thin | {F12835306} | {F12835307} |
The issue was mentioned by @hitrpr
Reviewed By: Blendify
Differential Revision: https://developer.blender.org/D13955
Use the evaluated mesh to generate the Adjacent Faces margin.
Baking used the evaluated mesh, but generating the margin used the base
mesh. This would lead to generating the margin from a stale UV map when the
UV editor was open and the UV map was changed. Fix it by passing the same
mesh as used for baking through to the margin generation.
Differential Revision: https://developer.blender.org/D13938
The new adjacent faces method's border lookup fails in some directions around
45 degrees.
* Use 8 Dijkstra directions (also diagonally) to determine which polygon is the
closest to each pixel. Using only Manhattan distance led to large parts of
the texture being matched with the wrong polygon.
* Use neighbouring polygons for the edge search. The Adjacent Faces algorithm needs
to determine the closest edge, in UV space, to each pixel. To speed this up,
first a map is built which finds the closest polygon for each pixel along
horizontal, vertical and diagonal steps. Because this can sometimes be one
edge off, we first look in the polygon from the map; if that fails, we also
check the edges of its neighbouring UV polygons.
Differential Revision: https://developer.blender.org/D13935
For those EEVEE passes a bit of trickery with pointer offsets allows getting
the owning ViewLayer, so path generation is not too bad.
Also moved ViewLayer path generation itself into a public util, to
avoid duplicating code.
NOTE: Doing the same for AOVs would be needed, but since pointer offsets
won't help us find the owning ViewLayer there, I am not sure how to do it
nicely yet (the only solution I can think of is to loop over all AOVs of all
ViewLayers of the scene to find it :( ).
Reported by Beau Gerbrands (@Beaug), thanks.
The image buffer was visible but the buffer wasn't available. In the case
where only the color overlay of the render result was displayed, the image
buffer was not checked to be valid.
This patch adds a null pointer check in `IMB_alpha_affects_rgb`
to solve this crash.
This commit specifies the exact Python version which is included in the
package name, thereby allowing `install_deps.sh` to suggest
"`-D PYTHON_VERSION=3.10`" correctly.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D13925
Empty (UDIM) tiles were drawn with a transparency checkerboard. They
should be rendered with a border background. The cause is that the image
engine would select a single area that contained all tiles and draw them
as being part of an image.
The fix is to separate the color and depth part of the image engine
shader and only draw the depths of tiles that are enabled.
While this didn't cause any user visible bugs, this wouldn't
have behaved as intended since the timer would never run again once
wmTimer.ntime was set to NAN.
GPU_select originally used GL_SELECT which defined the format for
storing the selection result.
Now this is no longer the case, define our own struct - making the code
easier to follow:
- Avoid having to deal with arrays in both `uint*` and `uint(*)[4]`
multiplying offsets by 4 in some cases & not others.
- No magic numbers for the offsets of depth & selection-ID.
- No need to allocate unused members to match GL_SELECT
(halving the buffer size).
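A hedged sketch of the kind of result record described above (the real definition lives in the GPU_select code; the name and field order here are illustrative). Compared to the GL_SELECT layout, there are no unused members, which is where the halved buffer size comes from.

```cpp
struct SelectResultSketch {
  unsigned int id;     /* Selection ID of the hit. */
  unsigned int depth;  /* Nearest depth of the hit, used to pick the closest one. */
};
```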
Remove functions and function pointers that were never set or never
used at all. The "tessface" original index handling in `subsurf_ccg.c`
can be removed because the data was never used.
This function and flags weren't used outside of DerivedMesh
code, and since the plan is to remove the data structure, it makes
sense to remove complexity where possible.
This is a patch from Aras Pranckevicius, D13927. See that patch for full
details. On Windows, the many small fprintfs were taking up a large amount
of time and significant speedup comes from using snprintf into chained buffers,
and writing them all out later.
On both Windows and Linux, parallelizing the processing by Object can also lead
to a significant increase in speed.
The 3.0 splash screen scene exports 8 times faster than the current C++ exporter
on a Windows machine with 32 threads, and 5.8 times faster on a Linux machine
with 48 threads.
There is admittedly more memory usage for this, but it is still using 25 times
less memory than the old Python exporter on the 3.0 splash screen scene, so
this seems an acceptable tradeoff. If use cases come up for exporting obj files
that exceed users' available memory, a flag could be added to not parallelize
and write the buffers out every so often.
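A minimal standalone sketch of the buffering idea described above (not the exporter's actual code): format into an in-memory buffer with snprintf and flush once, instead of issuing one small fprintf per value.

```cpp
#include <cstdio>
#include <string>
#include <vector>

void write_vertices(std::FILE *file, const std::vector<float> &coords)
{
  std::string buffer;
  char line[128];  // "v %f %f %f\n" always fits comfortably in this buffer.
  for (size_t i = 0; i + 2 < coords.size(); i += 3) {
    const int len = std::snprintf(line, sizeof(line), "v %f %f %f\n",
                                  coords[i], coords[i + 1], coords[i + 2]);
    if (len > 0) {
      buffer.append(line, static_cast<size_t>(len));
    }
  }
  // One large write instead of many small ones.
  std::fwrite(buffer.data(), 1, buffer.size(), file);
}
```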
Previously, the new obj exporter was only exporting per-vertex normals for faces
marked as "smooth". But a face can have custom normals, as soon as the normals
data layer exists. This change makes it follow the behavior of USD & Collada
exporters and the old Python one, which also export per-vertex normals as soon
as the layer is there. (From Patch D13957.)
This is similar to e032ca2e25 which removed the
callback for volumes. Now that we have geometry sets, there is
no need to define a callback for every data type, and this wasn't
used. Procedural curves/hair editing will use nodes rather than new
modifier types anyway.
Since `event->xy` is now an array these functions can be
simplified to accept the sample point as an array.
Clarifications were also made to variable names:
- `eye->last_x/y` --> `eye->cursor_last`
- `mx/my` --> `cursor`
Differential Revision: https://developer.blender.org/D13671
Previously, macros were ifdef'd using the CMake option `WITH_INTERNATIONAL`.
However, this is unnecessary as the functions themselves have checks for building without internationalization.
This also means that many `add_definitions(-DWITH_INTERNATIONAL)` calls are unnecessary.
Reviewed By: mont29, LazyDodo
Differential Revision: https://developer.blender.org/D13929
This fixes a regression from rBa0edee712a79239133ff840f911f6416d4c41855.
Issue being the curve map not being initialized in the GPU shader function.
Fixes T95221
As part of the project of converting `MVert` into `float3`
(more details in T93602), this is an easy step, since it
is only locally used runtime data. In the six places it was
used, the flag was replaced by a local bitmap.
By itself this change has no benefits other than making some
code slightly simpler. It only really matters when the other
flags are removed and it can be removed from `MVert`
along with the bevel weight.
Differential Revision: https://developer.blender.org/D13878
For every spline, *all* of the normals and tangents in the output
were normalized. The node is multithreaded, so sometimes a thread
overwrote the normalized result from another thread.
Fixing this problem also made the node orders of magnitude
faster when there are many splines.
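A hedged standalone sketch of the race described above (names are illustrative): each spline task must normalize only its own slice of the shared output array, not the whole buffer, otherwise threads overwrite each other's results.

```cpp
#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

static Float3 normalized(const Float3 &v)
{
  const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return len > 0.0f ? Float3{v.x / len, v.y / len, v.z / len} : v;
}

// Correct per-spline task: touch only [start, start + size), this spline's points.
void normalize_spline_range(std::vector<Float3> &normals, int start, int size)
{
  for (int i = start; i < start + size; i++) {
    normals[i] = normalized(normals[i]);
  }
}
```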
Show a more instructive error message for users who have plugged
multiple monitors into multiple display adapters. And do not exit
if unable to open a child window when in this state.
See D13885 for more details
Differential Revision: https://developer.blender.org/D13885
Reviewed by Ray Molenkamp
The GLSL defines used to make the uniform names unusable for local variables
are interpreted as recursive on some implementations.
This avoids that by creating a second macro that avoids the recursion.
Allow Windows IME Pinyin, when in Chinese mode, to use numpad decimal
key to enter decimal point.
See D13902 for more details.
Differential Revision: https://developer.blender.org/D13902
Reviewed by Brecht Van Lommel
Convert Ideographic Full Stop, used in Simplified Chinese and Japanese,
to Decimal Point when entering numbers into numerical inputs.
See D13903 for more details
Differential Revision: https://developer.blender.org/D13903
Reviewed by Brecht Van Lommel
This patch adds a "OneDrive" icon to the File Manager System list for
Windows (only!).
See D11133 for more details.
Differential Revision: https://developer.blender.org/D11133
Reviewed by Julian Eisel
Initialize m_Bar, m_dropTarget, & m_hWnd members of GHOST_WindowWin32
in the constructor's member initializer list. This ensures they are
set or NULL in the destructor if the constructor does not complete.
See D13886 for more details.
Differential Revision: https://developer.blender.org/D13886
Reviewed by Jesse Yurkovich
Buttons can hold context and it's very useful to use this as a way to
let buttons provide context for drop operators.
For example, with this D13549 can make the material slot list set the
material-slot pointer for each row, and the drop operator can just query
that.
Sometime during development some checks got lost in a refactor.
This change makes it so that if OIDN is not supported on the current
CPU, Cycles will report an error and stop rendering. This behavior is
similar to when an OptiX denoiser is requested and there is no OptiX
compatible device available.
The easiest way to verify this change is to force return false from
the `openimagedenoise_supported()`.
Fixes the Cycles part of T94127.
Differential Revision: https://developer.blender.org/D13944
The parameter was used to stay compatible with the previous drawing mode.
The previous mode isn't available anymore so the parameter can be
removed.
After showing the alpha in the image editor the setting was not reset
so all images in the editor showed as being transparent.
This commit fixes this by resetting the flag before updating.
In the Outliner, 'Make Override Hierarchy' on indirectly linked data would
fail in case some items higher up in the hierarchy that also needed to be
overridden were also indirectly linked.
Adding better support for drawing huge images in the image/uv editor. Also solved tearing artifacts.
The approach is that for each image/uv editor a screen space gpu texture is created that only contains
the visible pixels. When zooming or panning the gpu texture is rebuilt.
Although the solution itself isn't memory intensive, other parts of Blender's memory usage scale together with
the image size.
* Due to complexity we didn't implement partial updates when drawing images tiled (wrap repeat).
This could be added, but is complicated as a change in the source could mean many different
changes on the GPU texture. The workaround for now is to tag all gpu textures as dirty when
changes are detected.
The original plan was to have 4 screen space images to support panning without gpu texture creation.
For now we don't see the need to implement it as the solution is already fast, especially when
GPU memory is shared with CPU RAM.
Reviewed By: fclem
Maniphest Tasks: T92525, T92903
Differential Revision: https://developer.blender.org/D13424
This patch reimplements the image partial updates. The biggest design motivation for the redesign
is that currently GPUTextures must be owned by the image. This reduces flexibility and adds
complexity to a single component, especially when we want to have different structures.
The new design is not limited to GPUTextures and can also be used to reduce overhead in image
operations like scaling, or for partial image updating in Cycles.
The use case at hand is that we want to support virtual images in the image editor so we can
work with images that don't fit in a single GPUTexture.
Using `BKE_image_partial_update_mark_region` or `BKE_image_partial_update_mark_full_update`,
a part of an image can be marked as dirty. These regions are stored per ImageTile (UDIM).
When a part of the code wants to receive partial changes it needs to construct a `PartialUpdateUser`
by calling `BKE_image_partial_update_create`. As long as this instance is kept alive, the changes can
be received.
When a user wants to update its own data it calls `BKE_image_partial_update_collect_changes`.
This collects the changes since the last time the user called this function. When partial changes
are available, each change can be read by calling `BKE_image_partial_update_get_next_change`.
It can happen that the introduced mechanism no longer has the data to construct the
changes since the last time a PartialUpdateUser requested them. In this case the user will get
a request to perform a full update.
Maniphest Tasks: T92613
Differential Revision: https://developer.blender.org/D13238
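A hedged usage sketch of the API described above. The `BKE_image_partial_update_*` function names come from the description; the exact signatures, enum values, the `PartialUpdateRegion` type and the two helpers are assumptions for illustration only.

```cpp
/* Illustrative only; assumes Blender's image partial-update declarations. */
void refresh_gpu_texture(Image *image, PartialUpdateUser *user)
{
  if (BKE_image_partial_update_collect_changes(image, user) ==
      ePartialUpdateCollectResult::FullUpdateNeeded)
  {
    rebuild_full_texture(image);  /* Hypothetical helper: too much history lost. */
    return;
  }
  PartialUpdateRegion region;
  while (BKE_image_partial_update_get_next_change(user, &region) ==
         ePartialUpdateIterResult::ChangeAvailable)
  {
    upload_region_to_gpu(image, region);  /* Hypothetical helper. */
  }
}
```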
Assert when "//" prefixed relative paths are passed to
BLI_path_cmp_normalized as this can't be expanded
and it's possible the paths come from different blend files.
Changes to recent addition: c85c52f2ce.
Having both BLI_paths_equal and BLI_path_cmp made it ambiguous
which should be used, as `BLI_paths_equal` wasn't equivalent to
`BLI_path_cmp(..) == 0` in the way the string-equality macro `STREQ(..)` is.
It's also a more specialized function which is not used for path
comparison throughout Blender's internal path handling logic.
Instead rename this `BLI_path_cmp_normalized` and return the result of
`BLI_path_cmp` to make it clear paths are modified before comparison.
Also add comments about the conventions for Blender's path comparison
as well as a possible equivalent to Python's `os.path.samefile`
for checking if two paths point to the same location on the file-system.
The defrag shader makes sure the free heap is free of holes, making
the allocation more straightforward.
Since we now only reference the pages using the tiles, we introduce
a debug shader that produces an image showing the page data in a visual way.
This replaces the debug 8 option.
This also fixes some bugs that were still present in the pipeline.
This separates the handling of directional lights (sun) into their
own loop. This will help reduce register pressure and remove some
pollution of the local light culling.
All sun lights are packed at the start of the light array.
We now scan the depth buffer after the prepass to tag the needed
shadow tiles.
This is much more precise than the bounding box tagging, which is now
reserved for transparent objects.
This also:
- fixes the pixel radius size.
- adds a dedicated info buffer to avoid having one unused tile.
Until now the LOD selection was based on distance from camera.
Now it is based on receiver distance ratio. We compute the world
size of one view pixel along with the world size of one shadow texel.
By knowing a point's distance to the light or to the view, we can
compute the pixel density ratio and deduce the corresponding LOD.
We use this to compute the min LOD during the visibility selection phase
and the "mean" LOD for usage tagging by BBoxes.
The tagging LOD is a crude approximation as it only uses the BBox
center.
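A hedged simplification of the receiver-distance-ratio heuristic described above (illustrative names, not the actual shader code): the ratio of the two world-space footprints maps to a log2 LOD step.

```cpp
#include <algorithm>
#include <cmath>

int shadow_lod_from_footprints(float view_pixel_world_size,   /* at the receiver */
                               float shadow_texel_world_size, /* at LOD 0 */
                               int lod_max)
{
  // Each coarser LOD doubles the texel footprint, hence the log2 of the ratio.
  const float ratio = view_pixel_world_size / shadow_texel_world_size;
  const int lod = int(std::floor(std::log2(std::max(ratio, 1.0f))));
  return std::clamp(lod, 0, lod_max);
}
```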
This makes every shadow setup pass aware of the LOD chain of the tilemap
for each cubemap face.
In the free phase, we mask any LOD page that is completely covered by
a higher LOD. This avoids committing memory twice or more per area.
In the allocation phase, we check for the last valid LOD and set it
in the LOD 0 metadata. We also store the actual page location in LOD 0 but
do not mark it as allocated, as the LOD tile has ownership of the page.
This removes the light count limit for forward shaded objects. This
also provides a more efficient way of computing the culling directly on
the GPU. Moreover, this avoids doing multiple lighting passes for high
light counts in the deferred pipeline, improving performance.
This continues the effort to implement virtual shadow mapping.
This includes:
- Spot cone culling of tiles.
- Tile vs. view frustum tagging.
- Shadowmap page allocation / freeing.
- Rendering to the 4K buffer only the tiles that need it.
- Copying to shadow atlas.
This debug buffer is automatically bound if a shader is including
`common_debug_lib.glsl`. One buffer is created for each shading group
using such a shader.
The shader can then use the functions from that file to draw debug
lines. There is a hardcoded limit to the number of lines one buffer can contain. Make
sure to only output lines for a few threads at most.
Under the hood this uses a vertex buffer bound as SSBO that contains
the number of verts and all the positions and colors packed into 1 vec4.
We render by just rendering the whole buffer.
All unused vertices are initialized with NaN positions and will not be
drawn.
This is a total refactor of how shadows are handled.
We use virtual shadow maps with different levels of detail to
ensure a somewhat evenly distributed precision.
The shadow test is a really crude test that will be
improved in further commits.
There is a pool of 4096 tilemaps that are distributed between
shadowed lights. These tilemaps are 16x16 each and reference
shadow map pages that are allocated in an atlas. Pages are only
allocated if needed (i.e. visible for rendering an object).
Page management is done on the GPU using compute shaders to reduce
CPU work.
On the CPU, only one draw pass per updated tilemap is issued.
This reduces the memory requirement of shadowmapping large scenes
with many lights.
Denoising makes use of more memory to store and reproject the result of
the previous frame to reduce noise. This only works for the viewport.
There is a final bilateral filter for cleaning up noise even more.
Screen space raytracing is supported for alpha blended surfaces.
However, only opaque surfaces will be visible to the rays. This means
alpha blended surfaces cannot reflect or refract themselves.
Denoising is not possible on alpha blended surfaces. Many samples
are needed for noise-free results.
Since the cost of tracing can be very high, raytracing will only be
enabled on demand, on a per-material basis.
This simply reuses the reflection raytracing pipeline but with another
ray distribution. Only direct lighting, distant lighting and emissive
light are visible to diffuse rays.
Subsurface effect is not visible but transmittance effect is visible
to diffuse rays.
Indirect diffuse light is processed by the SSS filter.
The new pipeline is now cleaner and allows for deferred refraction.
The refractions are more accurate but are not denoised for now. More
research needs to be done in this area.
There is no feedback buffer for now, so reflections of metallic surfaces
will appear black.
The same restriction on refractive materials still holds true. They will
not appear in screen space tracing of other non refractive surfaces.
However, refractive surfaces (non-blended) can now reflect themselves
and the other surfaces with screen space reflections.
Half-resolution tracing has not been brought back yet.
This is to automate the generation of reuse sample tables and maybe more
in the future. This is not designed to make compilation way longer than
expected.
Same as SSS, this has been rewritten to support a varying SSS radius.
Instead of relying on a shadowmap hack to improve the transmittance
artifact (previously called translucency), we exposed a min thickness
output that reduces the maximum amount of light bleeding that can happen
at the shading point. This is far from perfect but at least it is
tweakable.
The effect is now cheaper and the option to enable it is now gone.
It can always be artificially disabled by making the thickness bigger
than the sss radius.
The effect is always enabled for all SSS surfaces and will even be
applied to forward shaded objects (alpha blend mode).
This only adds the output but the output is not yet used.
This thickness output is meant to control the aspect of subsurface,
refraction, absorption and volume shaders.
The value expected is the mean thickness inside the object at the
shading point. The source can be a vertex color or a texture map baked
from a raytracer.
This new implementation follows the technique described in
"Efficient screen space subsurface scattering Siggraph 2018".
Compared to the old implementation it fixes a lot of issues at
the cost of it being slower. This fixes:
- Light leaking between different objects.
- Light leaking between different surfaces with different depths.
- SSS radii are now "texturable" per pixel. No SSS surfaces limits.
- Noise should be lower.
- Precomputation is only done once for all SSS surfaces which lowers the
per material storage and precomputation time.
Implementation is also simpler as it is only a one pass processing.
We differ from the reference presentation by not precomputing the
RGB weights per sample. We actually compute them on the fly in order
to support varying SSS radii.
Notes:
- SSS IOR and SSS anisotropy are not supported.
- Object level light leak prevention might not work for a high number of
objects in the scene (> 1024). In this case light leaks might occur.
Adding or deleting (hiding) objects in the scene might change which
objects can leak.
This was caused by the StructArrayBuffer wrapper not being tagged as NonMovable.
The UBO was in fact being freed at creation time in debug builds, but the
pointer was kept as valid in the copied wrapper.
The higher level structure was changed to not use the copy constructor to avoid this.
This is a needed change for the viewport compositor. The compositor
needs to draw to `dtxl->color` to have correct overlay / background
composition.
The solution here is to have a separate buffer that keeps the first
sample we blend from. This increases VRAM usage but it is the most
elegant option.
This integer divide by zero was evaluated to 0 on all platforms but Apple,
where it yields 1. The world lighting would then sample the single sample of the
first grid instead of its own sample.
This was caused by the blend mode being used even with full opacity.
This caused issues when the viewport was resized and the color of the
framebuffer became undefined, leading to undefined values in the blend
equation.
Another fix would be to clear the viewport color on resize inside the
GPUViewport.
This is a necessary step for EEVEE's new arch. This moves more data
to the draw manager. This makes it easier to have the render or draw
engines manage their own data.
This makes more sense and cleans up what the GPUViewport holds.
Also rewrites the Texture pool manager to be in C++.
This also move the DefaultFramebuffer/TextureList and the engine related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is setup.
We now use a temporary DRWData instead of creating a dummy viewport.
Differential Revision: https://developer.blender.org/D11966
This changes the gbuffer layout to use more of the hardware for converting
data back and forth. Normals are encoded as two 16 bit components and
colors as R11G11B10F format.
This was motivated by the need for better quality normals. The issue is
that this increases the GBuffer size considerably. In order to balance
this we chose to merge the refraction and Diffuse/SSS data to use the
same buffer. This means we need to stochastically choose one of these
layers (so noise appears). Given that Glass BSDFs are rarely mixed
with Diffuse BSDFs, we think this is a good tradeoff.
The functions need to be declared before main as prototypes.
The appended libs will use the resources (textures, UBOs) defined at
global scope.
This removes a bit of code duplication and some long macros.
Instead of appending using `BLENDER_REQUIRE`, shaders can now ask for
libs to be added after the shader's `main()` by using the
`BLENDER_REQUIRE_POST` pragma.
Use view space instead of world space to compute the pixel projection.
This fixes issues when the camera is far from the origin and float precision
would produce artifacts.
This ports the facing "flat" normal trick used by the gpencil engine
to EEVEE, as well as the thickness mode.
The object parameters are passed via the objectInfos UBO to avoid
a lot of boilerplate code. However if this UBO grows too much we might have
to split it.
The normal trick for planar surfaces is quite simple to port to the
vertex shader even if it is less efficient.
However, to compute it we need the object's bounds. This is passed as a
scale only through the orco factors. This will need a bit of cleaning
at some point, with the bounding box computed at object level.
Nothing much different compared to the previous implementation.
The transparent BSDF and principled BSDF now detect when the material
is potentially transparent to select the best way to render it.
This makes it possible to have AA and correct blending of the
forward rendered spheres.
However, to avoid distorted spheres we need to not support Lookdev
in panoramic projection mode.
Also remove support for LookDev when using render border for now.
This differs a bit from the old implementation.
- Instead of manually adjusting the viewport we correctly place the
sphere in the vertex shader.
- Rendering happens after TAA accumulation: this is because we now
support panoramic cameras and TAA would distort the spheres.
This exposes the capability of having no light and no probe (except the
world one) for specific views / code paths.
The caller just needs to pass 0 as extent to the `set_view()` function.
This is useful for lookdev.
This does not include reference spheres rendering.
The approach is a bit different than before.
Now we use a `bNodeTree` to control the rendering of lookdev. This
generates a `GPUMaterial` that is stored per `Instance`. This way,
rendering lookdev is just updating the temp light cache using this
material as the world material, removing the use of a custom shader.
This introduces a small hack in order to bind the studiolight HDRI after
the nodetree GLSL parsing.
The background display however is still using a custom shader in order
to sample the world cubemap with different roughness.
The view space option of the studiolight is now faster by using a
transform before shading instead of rebaking the lightprobe constantly.
This should not have any particular impact on render time.
When evaluating surfaces, the deferred passes need to sample the
depth buffer, but they also test against the stencil buffer.
Moreover, the sampler needs to be a 2D sampler, which is not the case
for cubemaps and texture 2D arrays.
To overcome this we simply copy the gbuffer depth to another
temp texture using framebuffer blitting.
Some things differ from the old implementation.
- Object visibility is filtered correctly without using a visibility
callback (which is to be removed).
The implementation is also more high level, using fewer low-level tricks.
A dedicated LightProbeView is created for each lightprobe cubeface to
render using all pipelines (deferred and forward).
There are still a few things not working.
Only the world probe is supported for now.
The new implementation diverges from the original by randomly
selecting one lightprobe instead of sampling them all.
This speeds up rendering a bit.
This is a small convenience. This lets the render engine use this
default world if the scene has no world.
The world is black to keep the same behavior as before.
Shading groups are now created by the material_array_get functions
instead of passing a reference to be filled later. This avoids having
to wait to maybe create a sub shading group later.
This also simplifies handling of the different geometry types.
This adds a new closure selection method.
- In a first pass, weights are accumulated per output type (diffuse,
reflection, refraction).
- A random threshold is then generated before evaluating the BSDF nodes
again.
- During the evaluation pass the random threshold is decremented until
it reaches 0. At this moment the current BSDF is sampled.
For this to work, I split the evaluation and the weighting into two
functions for all BSDFs. The `*_eval` nodes are generated as dangling
nodes from the graph and only serialized after the rest of the graph.
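A hedged standalone sketch of the selection scheme described above (not the generated shader code): accumulate the closure weights, draw a random threshold, then walk the closures again subtracting weights until the threshold is spent; that closure is the one sampled.

```cpp
#include <vector>

int pick_closure(const std::vector<float> &weights, float random01)
{
  float total = 0.0f;
  for (const float w : weights) {
    total += w;  /* First pass: accumulate weights per output type. */
  }
  float threshold = random01 * total;
  for (int i = 0; i < int(weights.size()); i++) {
    threshold -= weights[i];  /* Second pass: decrement until it reaches 0. */
    if (threshold <= 0.0f) {
      return i;  /* This closure gets evaluated/sampled. */
    }
  }
  return int(weights.size()) - 1;  /* Numerical safety net. */
}
```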
Since the recalc flag on the Material ID is unavailable to the render engine, this
adds a simple way to detect material updates by detecting shader creation
or update.
This constructs a "mirror" nodetree that feeds the closure "shader"
nodes with their respective final weight.
The tree is mirrored using simple math nodes. This is quite messy but
this is the only way to proceed without introducing special nodes.
The other issue with this method is that inputs are all uniforms even
for unplugged socket on temporary math nodes with add bloat to the
shader uniform buffer structure.
Only the part relevant to the weighting is duplicated. Other connexions
with the shading tree are reuse.
All shader nodes are updated to receive a `Weight` hidden parameter.
The original shader mixing tree is preserve to let the choice of using
either way to weight the output.
For now this is only done for the output nodes. This will need to be
extended to Closure to RGBA sub-tree.
This is the first step towards the new evaluation scheme of EEVEE
closures.
This commit contains:
- Removal of the GPU_SOURCE_BUILTIN type, preferring globals instead. This
avoids a lot of boilerplate code since most of the old builtins are now
data that is always present (i.e. view matrices, normals).
- Rewrite of codegen in C++ to use `std::stringstream`.
- Added a callback to let the engine decide what to do with the generated code.
This removes a lot of need for defines caused by code order
dependency. The engine can insert the nodetree code in custom ways
to create advanced effects (i.e. add displacement or vertex lighting).
The engine now returns the final shader strings.
- Closure node evaluation replacement is a placeholder for now.
This is a port of the old material grouping. This is a bit cleaner
as we use containers for each pass and other structures.
The nodetree is generated without major errors for simple materials but
it is not yet used as closures are not output.
This adds the transparency and volume handling in the deferred
render pipeline.
Implementation is still unfinished.
To have a better naming convention, I renamed the object shader to surface.
This introduces a fat Gbuffer layout that groups closure data into groups
of similar BSDFs. The goal is to have at least one sample for each
group to avoid too much code complexity and expected worse performance.
There is a lot of room for buffer reuse to reduce memory usage but it is
not considered a priority for now.
Add a smooth transition to avoid flickering of stochastic effects such
as soft shadows.
This uses a simple blend method to progressively reveal the render
after some low sample count to avoid most of the flickering.
Parameters are hardcoded for now.
We use a new RNG to avoid correlation artifacts between Anti-Aliasing
and Shadow samples (see T68594).
The new sequence is a leap Halton sequence. This makes it good with a
low number of samples and yields fewer correlation issues.
Another change is that we directly jitter the projection matrix instead
of rotating the view matrix. This improves convergence time and
avoids passing a second matrix to the shader.
However this can lead to discontinuity artifacts at face borders.
We might want to revert to the old rotation method for this
reason even if convergence is slower.
Now the shadows are linked to a `Light` object. The `Light` object is
linked to an `ObjectKey` to ensure persistence and deletion tracking.
The uniform data is packed so that there is one `ShadowPunctualData`
per light in a `LightBatch`. This means the number of `Shadow`s in a
scene is only limited by the shadowmap.
Differences with the previous implementation:
- Better texture space usage of cone and area light shadows.
- Shadows are packed in an atlas, reducing requirements for future
features.
- Sampling is simpler because the shadow matrix does everything.
This follows closely the implementation of 2.5D tiled light
culling described in the presentation:
"Improved Culling for Tiled and Clustered Rendering"
from Michal Drobot
http://advances.realtimerendering.com/s2017/2017_Sig_Improved_Culling_final.pdf
I chose the tile + Z binning approach for its high depth range support
and low CPU overhead & low memory consumption compared to the cluster
based culling. The con is that the culling is a bit less precise in
some aspects, but it is quite balanced.
The culling is done by the `Culling` object, which is templated to easily
be reused for light probe culling.
The Z-binning process is described starting from slide 20 in the
reference pdf.
I also implemented a debug pass to visualize false negatives (lights
culled when they shouldn't be) and light evaluation density.
This is useful to detect failure cases and hotspots. This could be exposed
as a developer-only render pass in the future.
Some optimizations of the reference implementation require extensions
not yet added to the GPU module and will be added later.
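A hedged sketch of the Z-bin lookup from the referenced technique (illustrative; the actual shader may bin depths non-linearly): a pixel maps its view-space depth to a bin index and then only tests the lights recorded for that bin and its 2D tile.

```cpp
#include <algorithm>

int zbin_from_depth(float view_z, float z_near, float z_far, int bin_count)
{
  // Linear split of the [near, far] range into `bin_count` slices.
  const float t = (view_z - z_near) / (z_far - z_near);
  return std::clamp(static_cast<int>(t * bin_count), 0, bin_count - 1);
}
```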
This has the basis of clustered light culling but does not yet do
it. The lights are only culled by frustum.
It's the same as if there was only one cell for the entire viewport.
This also wraps GPUFrameBuffer & GPUTexture inside eevee::Framebuffer
and eevee::Texture to improve management.
Another cleanup was to make all members of `Instance` public to
avoid much complexity in accessing the data across module
dependencies.
Also split the velocity view related data into `class Velocity` and
renamed the previous `Velocity` to `VelocityModule`.
Support infinite light counts by dividing rendering into chunks of
LIGHT_MAX. Forward passes are just rendered again and deferred passes
(not implemented yet) will just have to have multiple light evaluation
passes.
This is almost the same thing as the old implementation.
Differences:
- We clamp the motion vectors to their maximum when sampling the velocity buffer.
- Velocity rendering (and the data manager) is separated from motion blur. This allows
outputting the motion vector render pass and, in the future, using motion vectors to
reproject older frames.
- Vector render pass support (only if motion blur is disabled, just like Cycles).
- Velocity tiles are computed in one pass (simpler code, less CPU overhead, less
VRAM usage, maybe a bit slower but imperceptible (< 0.3ms)).
- Two velocity passes are output, one for the motion blur fx (applied per shading view)
and one for the vector pass. This could be optimized further in the future.
- No current support for deformation & hair (to come).
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Also modifies camera setup to be compatible with future camera motion
blur.
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Pretty much identical to the previous implementation, with the exception
of a temporary noise function and some simplification of the CoC
computation. This also fixes issues with the ortho depth of field.
Most of the files were modified to comply with the new shader code style.
This also adds partial support for panoramic cameras (bokeh and
anamorphic are still buggy).
This cleans up a lot of confusion / complexity in the setup code.
Setup is closer to what Cycles does now.
Also duplicates some buggy behavior of Cycles for now until this
is fixed.
This moves view resolution handling to the `Camera` class, which will
in the future clip and trim each view in panoramic projection.
There is a new `CameraView` that contains the `DRWView` and subview.
This way each `ShadingView` is associated with a unique `CameraView`.
`ShadingView` & `CameraView` are all allocated & defined at creation time,
but only the ones activated by `Camera` will be rendered.
This option makes accumulation happen in a pre-exposed logarithmic
color space. This reduces the importance of bright pixels in the pixel
filter, which results in less aliasing in these areas.
There are a few cases where one might want to disable this option to
match Cycles better.
Render mode is really close to what the viewport render does.
Film output is done by resolving the data to the next (double buffered)
framebuffer and reading it back.
This also includes a bit of cleanup of the naming of init() and sync()
functions.
This commit adds the Film class that handles accumulation of color and
non-color data using arbitrary projection and filter size.
A weighted accumulation (sum) is done into a data buffer with an
additional weight buffer. The sum being per pixel, it allows input
textures that are not aligned with the output pixel grid.
Panoramic projection works by rendering a cubemap (6 views) of the scene
at the camera position. The Film filter pass then gathers the pixels
using the correct panoramic projection, ensuring correct anti-aliasing.
For Non-color data (depth, normals) we only keep the closest value to
the target pixel center (simulating a filter size of 0).
Color data is accumulated in a log space to improve AntiAliasing output.
This is hardcoded for now.
Larger filters have poor performance but are very fast to converge.
Code-wise: this commit renames some modules to avoid possible confusion
and have better meaning. Namespaces are used instead of prefixes.
Added a new eevee_shared.hh file to share structure and enum definitions
between GLSL and C++.
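A minimal standalone sketch of the weighted accumulation described above (buffer layout is illustrative, not the actual Film implementation): each sample adds `weight * value` to the data buffer and `weight` to the weight buffer, and the displayed pixel is the ratio of the two.

```cpp
#include <vector>

struct FilmBuffers {
  std::vector<float> data;    /* Sum of weight * value, one float per pixel. */
  std::vector<float> weight;  /* Sum of filter weights, one float per pixel. */
};

void accumulate_sample(FilmBuffers &film, int pixel, float value, float filter_weight)
{
  film.data[pixel] += filter_weight * value;
  film.weight[pixel] += filter_weight;
}

float resolve_pixel(const FilmBuffers &film, int pixel)
{
  const float w = film.weight[pixel];
  return w > 0.0f ? film.data[pixel] / w : 0.0f;
}
```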
Same idea as the previous commit. This cleans up the interface and puts all
viewport related data inside the `DRWData` struct.
The draw manager is responsible for freeing it. That is the main point
of all this. In the future, we can have a custom freeing method for each
engine.
This also move the DefaultFramebuffer/TextureList and the engine related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is setup.
We now use a temporary DRWData instead of creating a dummy viewport.