This uses a StorageBuf as the source of indirect dispatch arguments.
The user needs to make sure the parameters are in the right order.
There is no support for an argument offset for the moment as there is no
need for it, but this might be added in the future.
Note that the indirect buffer is synchronized at the backend level. This is
done for practical reasons and because this feature is almost always used
for GPU-driven pipelines.
This is a faster way to clear a buffer than re-uploading new data.
It is equivalent to `memset` and runs directly on the GPU.
This is better for clearing huge buffers and avoids the sync cost of a data upload.
This was getting in the way in multiple instances. Compute shader dispatches
are still made in the presence of the last bound framebuffer even if they
do not interact with it.
This is supposed to hold the latest improvement from the EEVEE rewrite branch.
Note that a restart is necessary in order for the engine to appear.
The registration code is a bit convoluted as it needs to be after the WM_init.
Add a new operator to the Graph Editor that blends selected keyframes
to their default value.
The operator can be accessed from
Key>Slider Operators>Blend To Default Value
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D9376
Ref: D9367
As the grease pencil simplify is a sub-option of the general simplify, if the general switch is disabled, the grease pencil simplify must be disabled too.
This patch also disables the UI panel.
Create a function on CurvesGeometry that can also be used for an edit
mode operator in the future. Dealing with CustomData directly means the
code is a bit more verbose than would be ideal, but this would be a
simple thing to clean up in the future if we get an attribute API here.
Also change the reverse node to first work on a read-only geometry
component, and only get write access if there is a curve selected.
Differential Revision: https://developer.blender.org/D14375
This will mostly just remove the overhead of converting
to and from the old curves type, though it also does open
some opportunities for multi-threading in the future.
A mistake in 8538c69921. The offsets include the segment at the
corresponding index, but the evaluated offset calculation was adjusting
the offset for the second to last segment.
Make the new curves' translate and transform functions also affect
the handle position attributes.
Differential Revision: https://developer.blender.org/D14372
Resizing nodes used the cursor location when the event was triggered
instead of the drag-start. This is harmless but means the drag location isn't
under the cursor, especially with a high drag threshold.
Noticed when investigating other drag issues,
unrelated to recent changes to drag behavior.
`RNA_def_struct_ui_text(srna, ...)` was reused for `is_valid` and `is_muted`,
which would set the struct's documentation to theirs (actually to that of the last
call).
`RNA_def_property_ui_text(prop, ...)` should be used for the properties.
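For illustration, the intended split between the two calls might look like this (a sketch with placeholder identifiers, not the actual property definitions from the fix):
```cpp
#include "RNA_define.h"

static void rna_def_example(BlenderRNA *brna)
{
  StructRNA *srna = RNA_def_struct(brna, "ExampleStruct", nullptr);
  /* Struct-level description: set once, for the struct itself. */
  RNA_def_struct_ui_text(srna, "Example", "Description of the struct");

  PropertyRNA *prop = RNA_def_property(srna, "is_valid", PROP_BOOLEAN, PROP_NONE);
  /* Per-property description: use the property variant. */
  RNA_def_property_ui_text(prop, "Is Valid", "Description of the is_valid property");
}
```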
Remove the conversion to and from `CurveEval` by supporting the
new Curves data-block in the node. This allows for some simplifications
to the code, as well as a fix for transferring curve domain attributes
when duplicating the curve domain.
The performance improvements (observed through the timings overlay)
can be relatively massive with many curves. When duplicating 10000
4-point curves to become 2 million curves, I observed an approximate
150x improvement, from about 3 seconds to about 20ms.
- Pass less redundant information in function arguments.
- Use `IndexRange` more instead of direct offset calculations.
- Use specific geometry component types for specialized functions.
- Use const arguments.
- Declare variables closer to where they are created or used.
- Remove some redundant logic.
- Simplify the description for the output geometry.
The menu for Timeline > Keying > Active Keying Set wouldn't show up.
Caused by d8e3bcf770. The function to attach search menu data to the button
would be called twice with different arguments for the same button now.
Shouldn't be an issue in general, but the first call now had the unexpected
side effect that the button would get disabled. Make sure it's re-enabled when
the second call sets the proper search data now.
Caused by rB43bc494892c3, moving this 'new id' relink to generic
remapping code added the overhead of proper, generic post-processing,
compared to the special cases the previous code was only designed to handle.
Fortunately with recent 'multi-remapping' work we can easily rewrite
that new id relink code to use the multi-remapping approach too.
No behavioral change is expected from this commit, besides the improved
performance (essentially restored to what it was before
rB43bc494892c3).
This reduces the complexity and avoids framebuffer setup costs.
This also "removes" the prefiltering of the glossy cubemaps in favor
of a simple bilinear filtering of the mipchain.
Change the sample mode to not duplicate the last vertex of the
stroke and instead use the cyclic flag to close previously cyclic
strokes. This is necessary for the following modifiers.
Reviewed By: NicksBest
Differential Revision: http://developer.blender.org/D14359
From hair particle mode:
* Add
* Comb
* Cut
* Grow
New:
* Delete
Only comb and delete are used at the moment (by the new tools which are
under experimental).
Selecting an object that was already active & selected would de-select
it when the cursor was over the object's center.
This was caused by [0], which added a check that assumed more than one
hit from GPU_select meant there were multiple objects to select from.
This is not necessarily the case since bones, camera tracks or the
object's own center can add additional hits.
Resolve by keeping track of the best hit with & without the
active-selected object, only using the non-active-selected one if it's found.
[0] 1550573360
Support for differentiating the tweak tool from the 3D cursor when
select is set to RMB.
This is currently an experimental preference:
Tweak Tool: Left Mouse Select & Drag
When enabled the tweak tool can now tweak the existing selection
without de-selecting first, a single click can be used to replace
the selection.
This matches selection in the graph & node editors.
This preference is only available with "Developer Extras" enabled.
Ref T96544.
Needed so mapping selection to click doesn't pass the click event
through to setting the 3D cursor, for example.
While this doesn't happen with the default key-map, setting selection
to LMB-click would set the 3D cursor as well (when the selection
fell through to nothing).
De-selecting objects meant that selecting a bone would de-select
all the other pose objects - making exiting & entering pose-mode
lose the current set of pose objects.
Match edit-mode behavior: avoid de-selecting objects in the current mode
(unless the action is explicitly performed in the outliner, for example).
Previously 'basact' was simply set to NULL, but this wasn't
so simple to use with deselect_all, which needs to check if there was
anything found at the cursor.
Add a 'handled' variable to differentiate this case, when set
don't attempt object selection.
While basic single track selection worked,
toggling and de-selection have been broken since at least 2.83.
Support SelectPick_Params with the exception of deselect_all
which doesn't make sense for tracks as de-selecting all objects
is expected in that case.
This was an involved operation to include inline,
making ed_object_select_pick more difficult to follow.
Prepare for track selection to properly support SelectPick_Params.
Currently this isn't used in the key-map, it will eventually
allow the 3D viewport's tweak tool to match the behavior of other
editors that support tweaking a selection without first de-selecting
all other elements.
This is only part of the experimental "Full Frame" mode (disabled
by default). See T88150.
Currently the viewer node uses buffer padding to display the image offset
in the backdrop, as a temporary solution implemented for {D12466}.
This solution is inefficient memory- and performance-wise. Another
issue is that the padding becomes part of the image when saved.
This patch instead sets the offset in the Viewer node image
as variables and makes the backdrop take it into account
when drawing the image or any related gizmo.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D12750
The previous fix including `<algorithm>` was an improvement
but did not address the actual error, which appears to be that `int64_t` is
long long int on one platform but just long int on another.
The fix includes the template argument directly.
This patch adds evaluation for NURBS, Bezier, and Catmull Rom
curves for the new `Curves` data-block. The main difference from
the code in `BKE_spline.hh` is that the functionality is not
encapsulated in classes. Instead, each function has arguments
for all of the information it needs. This makes the code more
reusable and removes a bunch of unnecessary complications
for keeping track of state.
NURBS and Bezier evaluation works the same way as existing code.
The Catmull Rom implementation is new, with the basis function
based on Cycles code. All three types have some basic tests.
For NURBS and Catmull Rom curves, evaluating positions is the
same as any generic attribute, so it's implemented by the generic
interpolation to evaluated points. Bezier curves are a bit special,
because the "handle" control points are stored in a separate attribute.
This patch doesn't include generic interpolation to evaluated points
for Bezier curves.
Ref T95942
Differential Revision: https://developer.blender.org/D14284
With the deferred pipeline, the materials need different shading groups
depending on their matflags.
Note that this is potentially slower because execution order of shaders
may now be random. This might be fixed in a later commit.
Similar to other changes to ID remapping, gives huge speedups in some
cases, like certain types of liboverride creation.
Case from {T96092} goes from 1725 seconds (almost 30 minutes) to 45
seconds to generate the liboverride, on my machine.
Reviewed By: jbakker
Maniphest Tasks: T96092
Differential Revision: https://developer.blender.org/D14240
Ever since d5b72fb06c, shader nodes have been in the
`blender::nodes` namespace, so they don't need to use that to access
Blender's C++ types and functions.
Somehow exposed after 943b919fe8, linking could fail because
bf_nodes was not properly configured as a dependency of bf_nodes_shader.
Also add the dependency to the geometry nodes module.
To make porting to other architectures easier, clarify that this does not
need to be supported. The unused parallel_reduce implementation assumed warp
size 32, but is easy to update if we ever need it in the future.
When using inverted filling and clicking inside a closed area (and not outside, as is expected), the algorithm that detects the contour to fill is unable to find the filling shape and tries to fill outside of the valid index.
The infinite loop was allocating more memory on each iteration, and the process continued while there were system resources left, finally crashing the system.
As the tool in negative mode is designed to fill all areas when you click outside of any shape, the algorithm now checks if the outline is not working as expected and cancels the filling process.
This commit removes the implementations of legacy nodes,
their type definitions, and related code that becomes unused.
Now that we have two releases that included the legacy nodes,
there is not much reason to include them still. Removing the
code means refactoring will be easier, and old code doesn't
have to be tested and maintained.
After this commit, the legacy nodes will be undefined in the UI,
so 3.0 or 3.1 should be used to convert files to the fields system.
The net change is 12184 lines removed!
The tooltip for legacy nodes mentioned that we would remove
them before 4.0, which was purposefully a bit vague to allow
us this flexibility. A poll in a devtalk post showed that the
majority of people were okay with removing the nodes.
https://devtalk.blender.org/t/geometry-nodes-backward-compatibility-poll/20199
Differential Revision: https://developer.blender.org/D14353
Solved by introducing a variant of MEM_cnew which behaves
as a copy-constructor for trivial types.
An alternative approach would be to surround DNA structs with clang/gcc
diagnostics push/modify/pop so that implicitly defined constructors
and copy operators are allowed to access deprecated fields.
The downside of the DNA approach is that it would require some way to
easily apply diagnostics modifications to many structs, which is not
possible currently.
The newly added MEM_cnew has other good use cases, so it is easiest to
use this route, at least for now.
Differential Revision: https://developer.blender.org/D14356
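A minimal sketch of what such a copy-constructing helper can look like for trivial types (the name `MEM_cnew_copy_sketch` is hypothetical; the actual helper may differ):
```cpp
#include <cstring>
#include <type_traits>

#include "MEM_guardedalloc.h"

/* A bytewise copy acts as a copy-constructor for trivial DNA structs, avoiding
 * member-wise copies that would touch deprecated fields. */
template<typename T> inline T *MEM_cnew_copy_sketch(const char *allocation_name, const T &other)
{
  static_assert(std::is_trivially_copyable_v<T>, "Only valid for trivially copyable types");
  T *ptr = static_cast<T *>(MEM_mallocN(sizeof(T), allocation_name));
  memcpy(ptr, &other, sizeof(T));
  return ptr;
}
```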
Resolves a fair amount of noisy warnings with default build on macOS.
Tested using render_layer render test which includes Freestyle layer.
Differential Revision: https://developer.blender.org/D14355
Meta-element selection now follows conventions for other picking
functions (e.g. EDBM_select_pick).
- Split meta-element find-nearest into a separate function.
- Cycle the meta-element starting from the active & selected
instead of comparing & setting a static variable.
- Order elements using depth (from front-to-back)
when cycling multiple elements.
Volatile fields were introduced to the RenderResult struct years ago[1].
However, volatile is most likely not doing what it was intended to do
in this instance, and is problematic when moving files to C++ (see
discussion from D13962). There are complex rules around what happens to
these fields but none of them guarantee what the above commit alluded to.
This patch drops the volatile and cleans up the APIs surrounding it.
[1] rB7930c40051ef1b1a26140629cf1299aa89eed859
Passing on all platforms:
https://builder.blender.org/admin/#/builders/18/builds/338
Differential Revision: https://developer.blender.org/D14298
- Rename 'location' to 'mval', typically used for region cursor coords.
- Rename 'retval' to 'changed', typically used for operators
when their return value depends on a change being made.
- Add SelectPick_Params struct to make picking logic more
straightforward and easier to extend.
- Use `eSelectOp` instead of booleans (extend, deselect, toggle)
which were used to represent 4 states (which wasn't obvious); see the sketch after this list.
- Handle deselect_all when picking instead of view3d_select_exec,
de-duplicating de-selection which was already needed when replacing
the selection in picking functions.
- Handle outliner update & notifiers in the picking functions
instead of view3d_select_exec.
- Fix particle select deselect_all option which did nothing.
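A rough sketch of the shape of this change (the enumerator and field names below are illustrative placeholders, not the actual definitions in the editor headers):
```cpp
/* One explicit operation instead of three booleans encoding 4 states. */
enum class SelectOpSketch {
  Set, /* Replace the selection. */
  Add, /* Extend. */
  Sub, /* Deselect. */
  Xor, /* Toggle. */
};

struct SelectPickParamsSketch {
  SelectOpSketch sel_op = SelectOpSketch::Set;
  /* Deselect everything when nothing is found under the cursor. */
  bool deselect_all = false;
};
```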
As proposed in T95802, this adds buttons to a new column on the right to modify
the override in the Library Override display mode. Some further usability
improvements are planned. E.g. this does not yet expand collections (modifiers,
constraints, etc) nicely or group modified properties of a modifier together.
Vector properties with more than 3 items or matrices aren't displayed nicely
yet, they are just squeezed into the column. If this actually becomes a problem
there are some ideas to address this.
Differential Revision: https://developer.blender.org/D14268
While the correlation may not work well with adaptive sampling, in practice
this appears to work ok in most cases.
Automatic scrambling distance uses the minimum samples from adaptive sampling,
which provides a good default estimate to avoid artifacts.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13325
This allows users to type in values larger than 1, for use in conjunction
with automatic scrambling distance.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13580
When the light direction is not pointing away from the geometric normal and
there is a shadow terminator offset, self intersection is supposed to occur.
Some old platforms and drivers have a limited number of SSBO bindings per
compute shader. This disables GPU subdivision if we cannot possibly
bind all required buffers within this limit.
For now the maximum number of buffers used by the GPU code is hardcoded,
but will be programmatically detected when shader creation is automated.
Ref D14337
This adds detection of the maximum number of shader storage buffer
bindings that is supported on the current platform. This can be
useful to turn off features that require compute shaders but use
more buffer bindings than available.
Differential Revision: https://developer.blender.org/D14337
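For reference, on the OpenGL side such a detection boils down to a query like the following (a sketch assuming an initialized GL 4.3+ context and a loader such as GLEW; not the exact backend code):
```cpp
#include <GL/glew.h>
#include <cstdio>

/* Must be called with a current OpenGL context. */
static void print_max_ssbo_bindings()
{
  GLint max_ssbo_bindings = 0;
  glGetIntegerv(GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS, &max_ssbo_bindings);
  printf("Max SSBO bindings: %d\n", max_ssbo_bindings);
  /* Features needing more bindings than this can then be turned off. */
}
```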
The performance issue was noticeable when tracking a lot of tracks
which are using keyframe pattern matching. What was happening is that
at some point the cache gets filled and the furthest-away frame gets removed
from the cache: the frame at the marker's keyframe gets removed and needs
to be re-read from disk on the next tracking step.
This change makes it so frames at markers' keyframes are not removed
from the cache during tracking.
Steps to easily reproduce:
- Set cache size to 512 Mb.
- Open image sequence in clip editor
- Detect features
- Track all markers
Originally was reported by Rik, thanks!
Modified source Armature ID in the join operation was not properly
tagged as such for the depsgraph (and therefore memfile undo).
Issue caused/revealed by rBe648e388874a.
Should be backported to 3.1 should we make a corrective release.
After rB9b298cf3dbec, the `StructRNA` declarations can now be accessed via
`RNA_prototypes.h`.
Also, since all redundant declarations are now removed,
`_WM_MESSAGE_EXTERN_BEGIN` and `_WM_MESSAGE_EXTERN_END` are also no
longer needed.
Differential Revision: https://developer.blender.org/D14342
Caused by 0cb5eae9d0 which restored
support for 3D depth when selecting gizmos - making it difficult
to select single lines drawn in front of other gizmos.
Previously the first hit was always used.
Resolve by using a margin around arrow stems when selecting
which was already done for 2D arrows.
This commit adds three nodes:
- `Remove Attribute`: Removes an attribute with the given name
- `Named Attribute`: A field input node
- `Store Named Attribute`: Puts results of a field in a named attribute
They are added behind a new experimental feature flag, because further
development of attribute search and name dependency visualization will
happen as separate steps.
Ref T91742
Differential Revision: https://developer.blender.org/D12685
So far it was necessary to declare a new RNA struct in `RNA_access.h` manually.
Since 9b298cf3db we generate a `RNA_prototypes.h` for RNA property
declarations. Now this also includes the RNA struct declarations, so they don't
have to be added manually anymore.
Differential Revision: https://developer.blender.org/D13862
Reviewed by: brecht, campbellbarton
Lets `makesrna` generate a `RNA_prototypes.h` header with declarations for all
RNA properties. This can be included in regular source files when needing to
reference RNA properties statically.
This solves an issue on MSVC with adding such declarations in functions, like
we used to do. See 800fc17367. Removes any such declarations and the related
FIXME comments.
Reviewed By: campbellbarton, LazyDodo, brecht
Differential Revision: https://developer.blender.org/D13837
This patch removes all duplicate code for the same Bake modifier logic.
Some modifiers still need custom bake functions and cannot use this generic bake.
Steps to reproduce:
- Add image sequence to movie clip editor.
- Set cache limit to a low value in the user preferences.
- Playback until old frames start to be removed from the cache.
- Jump to the beginning of the image sequence.
The reason for the dead-lock comes from two factors:
- Due to the global nature of the cache limiter, calls need to be
guarded with locks.
- Image buffers stored in the cache can have their own cache
(which is used for color management).
Didn't find a better solution than to use recursive lock.
Kind of makes sense since the thread-guardable resource is
recursive (moviecache can have nested moviecaches).
Differential Revision: https://developer.blender.org/D14331
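A small self-contained sketch of why a recursive lock resolves this (unrelated to the actual moviecache code, just the locking pattern):
```cpp
#include <cstdio>
#include <mutex>

/* Global limiter lock, mirroring the global nature of the cache limiter. */
static std::recursive_mutex limiter_mutex;

static void enforce_limit(int depth)
{
  std::lock_guard<std::recursive_mutex> lock(limiter_mutex);
  if (depth > 0) {
    /* Freeing an image buffer may enforce the limit of its own nested cache,
     * re-entering this function while the lock is still held. */
    enforce_limit(depth - 1);
  }
  printf("limit enforced at depth %d\n", depth);
}

int main()
{
  enforce_limit(2); /* Would dead-lock with a plain std::mutex. */
  return 0;
}
```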
This reverts commit 1558b270e9.
An earlier commit (rB101fadcf6b93c) introduced some new functionality,
which was overlooked in reviewing this commit & got broken.
Will re-commit after the issue has been fixed.
Ref: D13687
If the scale in the offset modifier was set to a value lower than -1,
the object would get mirrored. The problem was that the thickness
was set to 0 by that. This fix makes the thickness calculation only
use the absolute values.
Differential Revision: http://developer.blender.org/D14324
For now just assume that a node group without output sockets is
an output node. Ideally, we would use run-time information stored
on the node group itself to determine if the group contains a
top-level output node (e.g. Material Output). That can be
implemented separately.
In the larger scheme of things, top-level outputs within node
groups seem to break the node group abstraction and reusability
a bit.
Correction to the calculation of font size used for the tabs on the
Sidebar so that they are always the same size as other content on the
panel.
See D14322 for more details.
Differential Revision: https://developer.blender.org/D14322
Reviewed by Brecht Van Lommel
Instead of allocating a vector of the basis weights cache for
each evaluated point, allocate a single vector for all of the
weights. This should reduce memory usage by avoiding the
overhead of storing many vectors. I noticed a small performance
improvement to evaluated position calculation with an order of 5,
which is larger than `Vector`'s default inline buffer capacity.
This change is possible because of previous commits that
made the basis cache for each evaluated point always have
the same "order" size.
Currently a single buffer is used as working space for all evaluated
points. In order to make evaluations more independent, opening
options like multi-threading in the future, instead use a separate
array for each call. Using an inline buffer capacity higher than
the default allows a few percent performance improvement, and removes
allocations for every evaluated point.
The step after calculating the NURBS basis for a single evaluated
point trimmed extra zeroes from the weights. However, in practice
this rarely did anything, only for the first and last evaluated point
of certain knot configurations. Remove it in order to simplify code.
Also use a separate span for the result, to clarify its length.
Previously, the popover menu in sculpt/texture paint mode did not
take into account the `UnifiedBrushSettings` for the unit.
To fix this, the behavior of `class _draw_tool_settings_context_mode` is matched
by checking the same conditions when setting up the UI of the right-click popover menu.
Fixes T81616
Reviewed By: #sculpt_paint_texture, pablodp606
Maniphest Tasks: T81616
Differential Revision: https://developer.blender.org/D9168
Label alignment in top bar by using `ui_text_icon_width_ex` instead of `w_hint`
Old:
{F12733743}
New:
{F12733742}
Fixes T61558
Reviewed By: Severin
Maniphest Tasks: T61558
Differential Revision: https://developer.blender.org/D13552
This commit fixes T96229.
The maximum possible radius was being used for 3 point
splines, regardless of the current radius.
Reviewed By: HooglyBoogly
Maniphest Tasks: T96229
Differential Revision: https://developer.blender.org/D14311
When OneDrive files are offline, show preexisting thumbnails.
See D13930 for details.
Differential Revision: https://developer.blender.org/D13930
Reviewed by Julian Eisel
Add the ability to move `CurvesGeometry` without copying its attributes
and data. The benefit is more intuitive management of the data-block
copying, and less overhead for copying in some cases. The "moved-from"
source is left in an empty but valid state. A test file is added to test
the move constructor.
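Illustrative usage under the assumptions above (the `build_curves()` helper is hypothetical):
```cpp
#include <utility>

#include "BKE_curves.hh"

/* Hypothetical helper producing a populated geometry. */
blender::bke::CurvesGeometry build_curves();

void example()
{
  blender::bke::CurvesGeometry curves = build_curves();
  /* Ownership of attributes and data is transferred, not copied. */
  blender::bke::CurvesGeometry moved = std::move(curves);
  /* `curves` is now empty but still valid and can be reused or destructed. */
}
```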
Before this patch, users had to switch render engines just to change how the
hair should be displayed in solid and material preview viewport shading modes.
Differential Revision: https://developer.blender.org/D14290
By not resetting the line width, other scopes were using the
wrong line thickness.
Contributed by RedMser.
Differential Revision: https://developer.blender.org/D12030
My own error when committing 0602852860. It appears that
the "Endpoint" knots modes should be handled together, otherwise
out of bounds array access is possible.
Win32: Replace SHGetFileInfoW as means to get friendly display names
for volumes because it causes long pauses for disconnected remote
drives.
See D14305 for more details.
Differential Revision: https://developer.blender.org/D14305
Reviewed by Brecht Van Lommel
There was an error with the attribute API implementation for vertex
groups. If the vertex group layer referenced an original mesh, it wasn't
properly duplicated for writing.
When rendering using the command line, the curvature wasn't rendered. The reason
was that the ui_scale wasn't initialized and therefore the same pixels were
sampled to detect the curvature. This is fixed by setting the ui_scale to 1 for any
image render.
Change early drag evaluation added in
1f1dcf41d5 to only apply to drag events
from mouse buttons. Otherwise pressing two keyboard keys at once would
create a drag event for the first pressed key. While this didn't cause
any bugs as far as I know, this behavior makes most sense for drags
that come from cursor input.
Only set press events in the window's eventstate, not the current event,
since it's not useful for these press values to be set on the current event.
This makes it possible for a press event to access values for the
previous press.
Activating a gizmo used the window's eventstate, which may have values
newer than the event used to activate the gizmo.
This meant transform's check for the key that activated it
could be incorrect.
Support passing an event when calling operators to avoid this problem.
It was possible that a render thread would be freeing the cache while the
interface was iterating over cache items to build a cache line.
Found while looking into T94738. It might be a fix, but I am unable
to reproduce the original issue, so I can not know for sure whether
there is something else going on or not.
This function was copied from txt_sel_to_buf, including unnecessary
complexity to support selection as well as checks for the cursor
which don't make sense when copying the whole buffer.
Use a simple loop to copy all text into the destination buffer.
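A minimal sketch of the simple-loop approach, assuming the usual `Text`/`TextLine` DNA fields (not the exact function from this change):
```cpp
#include <cstring>

#include "BLI_listbase.h"
#include "DNA_text_types.h"
#include "MEM_guardedalloc.h"

static char *text_to_buf_sketch(Text *text, size_t *r_len)
{
  size_t len = 0;
  LISTBASE_FOREACH (TextLine *, line, &text->lines) {
    len += line->len + 1; /* +1 for the trailing newline. */
  }
  char *buf = static_cast<char *>(MEM_mallocN(len + 1, __func__));
  size_t offset = 0;
  LISTBASE_FOREACH (TextLine *, line, &text->lines) {
    memcpy(buf + offset, line->line, line->len);
    offset += line->len;
    buf[offset++] = '\n';
  }
  buf[offset] = '\0';
  *r_len = len;
  return buf;
}
```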
This patch enables all 8 combinations of NURBS modes: Cyclic,
Bezier and Endpoint. It also removes the restriction on Bezier NURBS order.
The most significant changes are mode combinations bringing new
meaning. D13891 contains a scheme showing NURBS with the same control
points in all modes, as well as a further description of each possible case.
Differential Revision: https://developer.blender.org/D13891
Regression in 265d97556a.
Where iterating directly on a property group failed, e.g.:
`iter(group)`, tests missed this since only `group.keys()`
was checked.
`3DView`'s `use_snap` option has little or nothing to do with using
snapping in `UV`, `Nodes` or `Sequencer`.
So there are no real advantages to keeping these options in sync.
Therefore, individualize the option to use snap for each "spacetype".
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D13310
The result handle attributes for non-bezier types are zeroed.
By mistake though, the entire array was zeroed, not just the
area corresponding to that curves source.
When viewing a meta strip, it had an orange color. This was caused by
an overflow due to a hard-coded offset. The theme got darker, and the background
was also set again further in the code, but that redundant drawing was removed in
f4492629ea.
Realizing and copying attributes of meshes, curves, and points are very
similar processes, but currently the logic is duplicated three times in
the realize instances code. This commit combines the implementation
for copying generic attributes and creating the result id attribute.
The functions for threaded copying and filling should ideally be in
some file elsewhere, since they're not just useful here. But it's not
clear where they would go yet.
Differential Revision: https://developer.blender.org/D14294
Caused by an integer overflow in the tiling utilities of OptiX SDK.
Seems for now it's easier to copy and modify code to our sources so
that we don't need to bump SDK version requirement (which might lead
to an increased driver requirement as well).
There are still some fixes needed from a newer driver to have such
denoising to work properly: Windows requires 511.79, Linux 510.54.
Thanks Patrick for investigation!
Differential Revision: https://developer.blender.org/D14300
Mark the chain length of regular and spline IK constraints
non-animatable. Changing the IK chain length requires a rebuild of
depsgraph relations. This makes it unsuitable for animation. It's better
to simply avoid having this property animatable than to allow animation
but produce unstable results.
Ref: T96203
When dragging with a large threshold (using a tablet for example),
it's possible to press another key before the drag threshold is reached.
So tweaking then pressing X would show the delete popup instead of
transforming along the X-axis.
Now key presses while dragging cause the drag event to be evaluated
before the key press.
Note that to properly base the mouse-move event on the previous
state the last handled event is now stored in the window.
Without this the inserted mouse-move event may contain invalid values
from the next event (its modifier state or other `prev_*` values).
Requested by @JulienKaspar.
Regression in 08d8eee006 caused
emulate-middle mouse to work once, clearing the modifier key.
Now the modifier key from emulated mouse events is never stored
in the windows event-state.
The realize instances code used "assign", but the attribute buffers on
the result aren't necessarily initialized. This doesn't make a difference
for trivial types like `int`, but it would with more complex types.
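A small self-contained illustration of the difference, using `std::string` as a stand-in for a non-trivial attribute type (unrelated to the actual realize-instances code):
```cpp
#include <memory>
#include <new>
#include <string>

int main()
{
  /* Raw, uninitialized storage, like a freshly allocated attribute buffer. */
  alignas(std::string) unsigned char storage[sizeof(std::string)];
  std::string *value = reinterpret_cast<std::string *>(storage);

  /* Wrong for non-trivial types: assignment would read the (garbage) old object. */
  /* *value = "hello"; */

  /* Right: construct the object in place first. */
  new (value) std::string("hello");
  std::destroy_at(value);
  return 0;
}
```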
This commit replaces the temporary conversion to `CurveEval` with
use of the new curves data-block. The end result is that the
process looks more like the other components-- somewhere in between
meshes and point clouds in terms of complexity.
The final result is that the logic between meshes and curves is
very similar. There are a few different strategies to reduce
duplication here, so I'll investigate that separately.
There is some special behavior for the radius and handle position
attributes. I used the attribute API to store spans of these
attributes temporarily. Using access methods on `CurvesGeometry`
would be reasonable too, but storing spans separately feels a bit more
predictable for now.
There should be significant performance improvements in some cases,
I haven't tested that specifically though.
Differential Revision: https://developer.blender.org/D14247
Passing a `TreeElement *` instead of its `TreeStoreElement *` to
`TSELEM_OPEN()` would seem to work but cause a bug. Add a type check
that will cause a compiler error if it fails.
I don't see a reason to use 2x the element height for the "in-view"
checks. That seems incorrect (although shouldn't cause issues). So
remove that, I don't expect behavior changes.
For whatever reason the "in-view" check was using 2x the element height.
From what I can see this isn't needed, so I'll remove it in a follow-up
commit.
Restrict a lot of deletion/moving around of liboverride objects and
collections in the Outliner.
While some of those operations may be valid in some specific cases, in
the vast majority of cases they would just end up breaking override
hierarchies/relationships.
Part of T95708/T95707.
Blender crashes when a multi-user grease pencil object has vertex
groups and is modified by modifiers, layer transform or parenting.
The fix makes sure that we copy the vertex group names list.
Reviewed By: antoniov
Maniphest Tasks: T96233
Differential Revision: https://developer.blender.org/D14275
Fix T95462: Partly transparent objects appear to glow in the dark
The issue was caused by incorrect check for exceeded number
of transparent bounces: the same maximum distance was used
for picking up N closest intersections and counting overall
intersections count.
Now made it so intersection count is using ray distance which
matches the way how Embree and OptiX implementation works.
Benchmark result:
{F12907888}
There is no big time difference in the pabellon scene. The
Victor scene timing doesn't seem to be very reliable as the
variance in time across different benchmark runs is quite
high.
Differential Revision: https://developer.blender.org/D14280
At the time of naming these members only some event types generated
click events, so it made some sense to differentiate a click.
Now that all buttons support click & drag, it's more logical to use the
prefix "prev_press_" as any press event will set these values.
Also update doc-strings.
Operator area_dupli_invoke should not create modal windows.
See D14253 for details.
Differential Revision: https://developer.blender.org/D14253
Reviewed by Brecht Van Lommel
This commit fixes an issue, where for instance, when merging vertices
with the "Merge by Distance" geometry node, the resulting vertices had
their boolean attributes set unpredictably.
Boolean attributes are implemented as custom data, and when welding
vertices, the custom data for the resulting vertices comes from
interpolating the custom data of the source vertices.
This commit implements the missing interpolation function for the
boolean custom data type. This interpolation function is implemented in
terms of the logical or operation, that is to say, if any of the source
vertices (with a weight greater than zero) have the boolean set, the
boolean will also be set on the resulting vertex.
This logic matches 95981c9876.
In geometry nodes, attribute interpolation generally does not use the
CustomData API for performance reasons, but other areas of Blender
still do.
Differential Revision: https://developer.blender.org/D14172
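A sketch of the described logical-or behavior (the real CustomData interpolation callback has a different signature; this just shows the rule):
```cpp
/* The result is set if any source with a weight greater than zero is set. */
static void interp_bool_sketch(const bool *src, const float *weights, const int count, bool *r_dst)
{
  bool value = false;
  for (int i = 0; i < count; i++) {
    if (weights[i] > 0.0f && src[i]) {
      value = true;
      break;
    }
  }
  *r_dst = value;
}
```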
This avoids transform jumping, which is a problem when tweaking values a
small amount. A fix for T40549 made box-select use the location
when the key was pressed.
While it's important for box-select or any operator where it's expected
the drag-start location is used, this is only needed in some cases.
Since the event stores the click location and the current location,
no longer overwrite the event's real location. Operators that depend on
using the drag-start can use this location if they need.
In some cases the region relative cursor location (Event.mval) now needs
to be calculated based on the click location.
- Added `WM_event_drag_start_mval` for convenient access to the region
relative drag-start location (for drag events).
- Added `WM_event_drag_start_xy` for window relative coordinates.
- Added Python property Event.mouse_prev_click_x/y
Resolves T93599.
Reviewed By: Severin
Ref D14213
Prevents a few unneeded calls to `std::sin`, with an observed
performance improvement of about 1 percent.
Differential Revision: https://developer.blender.org/D14279
When the geometry of the sculpt mesh was replaced when restoring from
a full undo step, the runtime data was not cleared (including any
normals, triangulation data, or any other cached derived data).
In the report, only the invalid normals were observed.
The fix is to simply clear these caches. Later they will be reallocated
and recalculated if necessary. Since the whole mesh is replaced here
anyway, this should be a safe fix.
Differential Revision: https://developer.blender.org/D14282
The new mode only builds the new strokes in each frame.
The code is assuming somebody uses "additive" drawing, so that each frame is different only in its NEW strokes. Already existing strokes are skipped.
I used a simple solution: Count the number of strokes in the previous frame and ignore this many strokes in the current frame.
Differential Revision: https://developer.blender.org/D14252
Tangents are computed from UVs on the CPU side when using GPU subdivision
and require that the normals are available there as well (at least for smooth
shading, flat normals can be computed on the fly). This simply adds the missing
normals update call for the `MeshRenderData` setup for the subdivision case.
Differential Revision: https://developer.blender.org/D14278
Some drivers for legacy platforms seem to have issues with compute
shaders, as revealed by T94936. This disables compute shader for the
known drivers where this issue is present. It is not clear if the issue
is Windows only or not, so this disables them for all operating systems.
See T94936 for a list of configurations where the issue is reproducible
or not.
Differential Revision: https://developer.blender.org/D14264
Sound properties like volume, pitch and muting are handled in
`BKE_sound_add_scene_sound()`. This is unnecessary, because the
properties are then set to real values in `SEQ_edit_update_muting()` and
`seq_update_seq_cb()`.
Alternatively, it may be better to remove all other updates and leave them
in `BKE_sound_add_scene_sound()`. But I want to add muting per channel,
which is easier and probably cleaner to do with the function
`SEQ_edit_update_muting()`.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14269
Avoid re-creating & freeing the depsgraph for every driver evaluation.
Now the depsgraph is kept in the name-space (matching self),
only re-created when the value changes.
In a contrived test-case with many drivers this gave ~15% overall
speedup for animation playback.
Code cleaning up no-longer-needed override data during the diffing process
would systematically remove override data from linked IDs.
While this is not a critical issue in theory, it has bad consequences at
the very least on user UI/UX, and can potentially cause bugs in some
corner-case scenarios.
The exact behavior of the brushes is still being iterated on, but it
helps having a base implementation that we can work upon.
All of that is still hidden behind an experimental feature flag anyway.
The brushes will get a name in the ui soon.
Differential Revision: https://developer.blender.org/D14241
The issue only happens in release builds on Windows. That said, it was an
actual error in the code. This class is compiled inline in release
builds. When updating multiple textures it would reuse the same memory
to collect the changes. When the previously loaded tile number was exactly the
same but from a different image, the tile buffer wasn't loaded.
Reviewed By: sergey
Maniphest Tasks: T96213
Differential Revision: https://developer.blender.org/D14274
This part has to be refactored soon anyway, because more types of
curves have to be drawn for the new Curves object.
For now, 3 is a better default than 2, because that matches the
actual resolution of the curve currently.
This makes the brushes more smooth, because the brush has an
effect after every mouse move, instead of only every x pixels.
For this to work well, the brushes have to look at the stroke
segments instead of at the mouse positions separately.
A more fine grained check might be added in the future.
The issue was introduced after the Python 3.10 switch.
Explicit conversion to int fixes the issue.
The same issue is likely to happen with `MovieTrackingSettings.default_search_size`,
so I made the same change over there.
Differential Revision: https://developer.blender.org/D14273
Support this for completeness, as it's simpler to support click-drag
for all event types that support press/release instead of having to
document which kinds of buttons support click-drag.
In practice this didn't cause a bug since assigning hot-keys was also
checking for "press" events (which NDOF_MOTION doesn't generate).
Add ISNDOF_BUTTON macro which is now used by ISHOTKEY to avoid
problems in the future.
The main improvement is a code simplification, because attributes don't
have to be transferred separately for each curve, and all attributes can
be handled generically. Performance improves significantly when the
output contains many curves. Basic testing with a 2 million curve output
shows an approximate 10x performance improvement.
This fixes the second part of T93573 that 8506f3d9fe didn't
properly address. Specifically, outlines of instances still had the
selected color in edit mode in wireframe view. This change is the
same as that commit, just in a different place.
Differential Revision: https://developer.blender.org/D14229
Previously the labels and values in number fields and value sliders
used different padding for the text. This looks weird when they are
placed underneath each other in a column and, as noted by a comment
in the code of `widget_numslider`, they are actually meant to be
aligned.
This patch fixes that by using the same padding that is used for the
number field for the value slider as well. This also has the benefit
that the labels of the value sliders don't shift anymore when adjusting
the corner roundness.
Differential Revision: https://developer.blender.org/D14091
This quick fix will populate the runtime orig pointers to avoid
crashes when a grease pencil object uses layer transforms, parenting
or modifiers.
This will have to be revisited and fixed with a better solution.
Enables image user nodes to display the file alpha mode, similar to the
colorspace setting.
Also removes image_has_alpha in favor of using BKE_image_has_alpha, because it
did not check if the image actually had an alpha channel, just if the file format
was capable of supporting an alpha channel.
Differential Revision: https://developer.blender.org/D14153
An alpha component can be specified for an object's color. This adds an alpha
socket to the object info shader node allowing for the alpha component of the
object's color to be accessed in the shader editor.
Differential Revision: https://developer.blender.org/D14141
Fix crash when creating a pose asset for which the file list entry in
the asset browser is scrolled off-screen. Because of the
off-screen-ness, it wasn't loaded into memory, which eventually caused
an unexpected NULL pointer.
The solution was to use a different function (`filelist_file_find_id`)
that can reliably find the file list entry, after which the cache entry
can be created.
Reviewed by: Severin
Differential Revision: https://developer.blender.org/D14265
When drawing the driver editor, only skip drawing the "scrubbing area"
and not the Y-axis values or the scroll bars.
The issue was introduced in rBb3431a88465db2433b46e1f6426c801125d0047d
to avoid drawing the playhead in the Driver Editor but also prevented
the text on the y axis from being drawn.
Reviewed by: Severin, sybren
Maniphest Tasks: T95531
Differential Revision: https://developer.blender.org/D14022
Fix an assert by commenting out the assert.
In normal situations all keyframes are sorted. However, while keys are
transformed, they may change order and then this assertion no longer
holds. The effect is that the drawing isn't perfect during the
transform; the "constant value" bars aren't updated until the
transformation is confirmed. Apart from that, the code runs fine, so it
seems like a workable workaround.
The Annotation tool is used as a general marking tool by many add-ons. Being able to detect when an annotation is done is very handy for integrating the annotation tool into add-ons and other studio workflows.
The new callback names are: `annotation_pre` and `annotation_post`
Both callbacks are exposed via the Python module `bpy.app.handlers`
Example use:
```
import bpy

def annotation_starts(gpd):
    print("Annotation starts")

def annotation_done(gpd):
    print("Annotation done")

bpy.app.handlers.annotation_pre.clear()
bpy.app.handlers.annotation_pre.append(annotation_starts)
bpy.app.handlers.annotation_post.clear()
bpy.app.handlers.annotation_post.append(annotation_done)
```
Note: The handlers are called for any annotation tool, including eraser.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D14221
Using press to activate the Tweak tool doesn't work well when used as a
fallback tool, as the drag event is often used by the current tool -
making it impossible not to select when dragging (unless the fallback
tool is disabled entirely).
Resolve this by using CLICK events when the Tweak tool is used as a
fallback.
Even though this avoids the crash, check for null-pointer de-reference
since changes to the key-map shouldn't cause operators to crash.
Note that the ability for operators to access a gizmo before it's fully
initialized is a more general problem that should be addressed, but out
of scope for a bug-fix.
Reviewed By: zeddb, JulienKaspar, Severin
Maniphest Tasks: T95591
Ref D14231
When a grease pencil data-block has multiple users and is subject
to modifiers, layer transforms or parenting, performance
(especially playback) is greatly affected.
This was caused by the grease pencil eval process, which makes full
per-instance copies of the original datablock in case those
kinds of transformations need to be applied.
This commit changes the behavior of the eval process to do shallow
copies (layers with empty frames) of the datablock instead
and duplicates only the visible strokes.
When we need to have a unique eval data
per instance, only copy the strokes of visible
frames to this copy.
Performance:
On a test file with 1350 frames 33k strokes and 480k points
in a single grease pencil object that was instanced 13 times:
- master: 2.8 - 3.3 fps
- patch: 42 - 52 fps
Co-authored by: @filedescriptor
This patch was contributed by The SPA Studios.
Reviewed By: #grease_pencil, pepeland
Differential Revision: https://developer.blender.org/D14238
Regression in d961adb866,
it's important that the Mesh used for undo storage matches
the shape-key instead of using the coordinates of the Basis key.
Prior to bfdbc78466 a different method of
restoring the basis shape-key coordinates was used (restoring from the
input `Mesh.mvert` array). When undo wrote the edit-mesh into the mesh
this was always NULL so the basis shape keys coordinates were never
used.
Now a parameter has been added so undo can use the active shape for the
meshes vertex coordinates.
Reviewed By: sergey
Maniphest Tasks: T96205
Ref D14258
Undo would only invalidate image-owned GPU textures. Textures
that are owned by the editor were not refreshed. This patch
invalidates all the GPU textures by marking the whole image dirty.
This can be improved later as we could add partial updates of GPU
textures.
Reviewed By: mont29
Maniphest Tasks: T96163
Differential Revision: https://developer.blender.org/D14259
Caused by oversight in 2bcf93bbbe. Operator returns `OPERATOR_CANCELLED`
when it should return `OPERATOR_FINISHED`.
Reviewed By: mano-wii, campbellbarton
Differential Revision: https://developer.blender.org/D14243
The "curve_type" was transferred to instances because it isn't a
built-in curve attribute. Then it was interpolated as a point
domain attribute from the instance domain in the realize
instances node.
The fix was just missing from 9ec12c26f1.
`curve_type` needs to be marked as a built-in attribute.
Support drag/drop of materials to Properties Material Slots.
See D13549 for more details.
Differential Revision: https://developer.blender.org/D13549
Reviewed by Julian Eisel
This drastically changes the implementation to leverage arbitrary writes
in order to reduce complexity and memory usage and to increase speed.
Since we are no longer dependent on the framebuffer requirement, we can
allocate a bigger texture that fits all views and avoid the extra.
Transparency, holdout and emissions are no longer deferred and are now
composited using dual source blending.
The indirect lighting and raytracing are still not functional but will
also get a large refactor of their own.
The node should be faster than in 3.1, for a few reasons:
- It doesn't need to calculate and allocate the curve offsets.
- It doesn't need to de-reference a pointer for each curve.
- The inputs are accessed from the virtual arrays fewer times.
On top of that, I added two other performance improvements:
- The node is multi-threaded when there are many curves.
- Special cases are generated for single value and span inputs
(see the sketch at the end of this message).
**Performance**
With a set position node affecting 1 million splines with a selection
based on this node, on an Intel i5 8250U (times are approximate):
| Before | After | Speedup |
| 760 ms | 60 ms | 13x |
Differential Revision: https://developer.blender.org/D14233
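A rough sketch of the single-value/span special-casing mentioned above, using the virtual array utilities (treat the `compute()` helper and the overall function as illustrative, not the node's actual code):
```cpp
#include "BLI_virtual_array.hh"

/* Hypothetical per-element computation. */
static float compute(const float factor)
{
  return factor * 2.0f;
}

static void evaluate(const blender::VArray<float> &factors, blender::MutableSpan<float> r_values)
{
  if (factors.is_single()) {
    /* All elements share one value: compute it once. */
    r_values.fill(compute(factors.get_internal_single()));
  }
  else if (factors.is_span()) {
    /* Contiguous memory: avoid a virtual call per element. */
    const blender::Span<float> span = factors.get_internal_span();
    for (const int i : span.index_range()) {
      r_values[i] = compute(span[i]);
    }
  }
  else {
    for (const int i : r_values.index_range()) {
      r_values[i] = compute(factors[i]);
    }
  }
}
```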
When removing a modifier, changing the layer transform or updating
the parent of a grease pencil object that has a multi-user datablock
and animation data, the eval data is not updated properly (after a
frame change). This can also cause memory leaks.
The fix makes sure that we free and reset any runtime copy
(`ob->runtime.gpd_eval`) in `BKE_gpencil_prepare_eval_data`.
Note: As far as we can tell, `ob->runtime.gpd_orig` is unused and could
be removed. The assignment in `BKE_gpencil_prepare_eval_data`
seemed to be unnecessary.
Co-authored-by: @yann-lty
Reviewed By: antoniov
Maniphest Tasks: T96145
Differential Revision: https://developer.blender.org/D14236
Previously you'd have to be careful to drag the image itself. Dragging
anywhere else on the tile (e.g. between the preview and the text, or the
text itself) would trigger border select. This often conflicts with user
expectations and causes frustration when trying to work quickly; I've seen
many people complain about this.
Note that the "hitbox" for dragging is a bit smaller than the tile, to
not make border select by dragging from in-between the tiles too hard.
Differential Revision: https://developer.blender.org/D14228
Just use the image-buffer size and the already provided scale to
determine the size, not the button size (which would always have to
match the scaled image-buffer size or it would give unexpected results).
This patch adds edge selection support for UV editing (refer T76545).
Developed as a part of GSoC 2021 project - UV Editor Improvements.
Previously, selections in the UV editor always flushed down to vertices
and this caused multiple issues such as T76343, T78757 and T26676.
This patch fixes that by adding edge selection support for all UV
operators and adding support for flushing selections between vertices
and edges. Updating UV select modes is now done using a separate
operator, which also handles select mode flushing and undo for UV
select modes. Drawing edges (in UV edge mode) is also updated to match
the edit-mesh display in the 3D viewport.
Notes on technical changes made with this patch:
* MLOOPUV_EDGESEL flag is restored (was removed in rB9fa29fe7652a).
* Support for flushing selection between vertices and edges.
* Restored the BMLoopUV.select_edge boolean in the Python API.
* New operator to update UV select modes and flushing.
* UV select mode is now part of editmesh undo.
TODOs added with this patch:
* Edge support for shortest path operator (currently uses vertex path logic).
* Change default theme color instead of reducing contrast with edge-select.
* Proper UV element selections for Reveal Hidden operator.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D12028
Just showing the library override icon for every item doesn't add much
information, it's just redundant. Displaying the data-block type icon on
the other hand can be useful.
Differential Revision: https://developer.blender.org/D14208
This patch adds a button in the scene to add a new one, but this does not switch to the newly created scene because that would break the storyboarding workflow.
This is a common request from storyboarding artists.
Reviewed By: mendio, brecht, ISS
Differential Revision: https://developer.blender.org/D14148
When exiting edit-mode set the vertex coordinates to the basis-shape when editing non-basis keys.
Regression in bfdbc78466.
Reviewed By: sergey
Ref D14234
This is needed since 4d0f846b93,
however changing the operator instead of the event handler is correct,
as accepting a press event should suppress drag events unless
the pass-through flag is set.
This is how select & tweak already works.
It's not useful to wrap vertical motion when dragging markers.
It was too easy to accidentally wrap the cursor to the top of a region,
as markers need to be dragged from the bottom edge of the region.
The logic to cycle selected markers wasn't cycling back to the beginning
of the list.
The marker after the selected marker at the cursor frame was also used
to check if a selection existed, causing dragging to transform all
selected markers to de-select all when dragging the last marker.
The node unnecessarily converted to the old data structure to check if
there were any poly splines. Instead, that warning is just removed,
because the node now still sets resolution values in that case, they
just aren't used (before the values weren't set at all). Either way, it
wasn't clear that looping through all of the curve types was worth
the performance cost here.
In `ffmpeg_read_video_frame`, fix an assignment used as a truth value.
In `ffmpeg_seek_recover_stream_position`, loop while the return value is
greater than or equal to 0.
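For illustration, the class of warning being fixed (a self-contained sketch with a placeholder reader function, not the actual ffmpeg code):
```cpp
#include <cstdio>

/* Placeholder standing in for a libav* style call that returns >= 0 on success. */
static int read_next(int *remaining)
{
  return (*remaining)-- > 0 ? 0 : -1;
}

int main()
{
  int remaining = 3;
  int ret;
  /* Before: `while (ret = read_next(&remaining))` - assignment used as truth value,
   * which also mistakes a 0 (success) return for "stop". */
  while ((ret = read_next(&remaining)) >= 0) {
    printf("read ok, ret = %d\n", ret);
  }
  return 0;
}
```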
Passing around coordinates for drawing can be quite confusing, it's
often not clear what they represent and where they are currently.
Instead pass around the tile rectangle for drawing and let all code draw
based on that, it's way more clear that way.
Changes shouldn't be user visible.
Correct misspellings in code comments of "vertex" and "vertices".
See D13932 for more details.
Differential Revision: https://developer.blender.org/D13932
Reviewed by Harley Acheson
This adds a prototype for the first brush that can add new curves by
painting on a surface. Note that this can only be used when the curves
object has a surface object set in the properties panel.
The brush can take minimum distance into account. This allows
distributing curves with a somewhat consistent density.
Differential Revision: https://developer.blender.org/D14207
This commit removes the outline from instances generated from an object
when in edit mode. This takes the change in aa13c4b386 a bit further,
with the idea that instance outlines are more like regular outlines.
Because evaluated object data that doesn't match the original object
type is treated as an instance internally, this fixes the way evaluated
meshes for curves objects have an outline, for example.
See the differential revision for a visual comparison.
Differential Revision: https://developer.blender.org/D14226
This patch enables the outliner to use the correct icon for each
of the curve subtypes (Curve/Surface/Font).
Differential Revision: https://developer.blender.org/D14093
Reviewed by: Julian Eisel
Since removal of tweak events 4986f71848,
box-select is activating while dragging files.
As far as I can tell this used to work because of differences
in the order tweak / click-drag events are handled.
Apply a workaround since dragging files doesn't prevent other parts
of the UI from being activated (it's possible to open menus, for example);
this is something we will likely want to limit which would resolve
this bug too.
There are two issues revealed in the bug report:
- the GPU subdivision does not support meshes with only loose geometry
- the loose geometry is not subdivided
For the first case, checks are added to ensure we still fill the
buffers with loose geometry even if no polygons are present.
For the second case, this adds
`BKE_subdiv_mesh_interpolate_position_on_edge` which encapsulates the
loose vertex interpolation mechanism previously found in
`subdiv_mesh_vertex_of_loose_edge`.
The subdivided loose geometry is stored in a new specific data structure
`DRWSubdivLooseGeom` so as to not pollute `MeshExtractLooseGeom`. These
structures store the corresponding coarse element data, which will be
used for filling GPU buffers appropriately.
Differential Revision: https://developer.blender.org/D14171
Partially revert aa71414dfc
This attempted to make the click-drag event compatible with old tweak
events, but needs to be re-thought since it caused events to be handled
in unexpected situations.
This utility is useful when using C types that own some resource in
a C++ file. It mainly helps in functions that have multiple return
statements, but also simplifies code by moving construction and
destruction closer together.
Differential Revision: https://developer.blender.org/D14215
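For illustration only, a minimal sketch of such an ownership utility, using a hypothetical C-style resource and wrapper name (not the actual names introduced by the commit):

```cpp
#include <utility>

/* Hypothetical C-style resource with explicit create/free functions,
 * standing in for the real C type such a utility would wrap. */
struct FooResource { int value; };
static FooResource *foo_create() { return new FooResource{0}; }
static void foo_free(FooResource *foo) { delete foo; }

/* Minimal RAII wrapper: the resource is freed on every return path. */
class ScopedFoo {
  FooResource *foo_ = nullptr;

 public:
  ScopedFoo() : foo_(foo_create()) {}
  ~ScopedFoo() { if (foo_) { foo_free(foo_); } }

  /* Movable but not copyable, so ownership stays unique. */
  ScopedFoo(ScopedFoo &&other) noexcept : foo_(std::exchange(other.foo_, nullptr)) {}
  ScopedFoo &operator=(ScopedFoo &&other) noexcept
  {
    if (this != &other) {
      if (foo_) { foo_free(foo_); }
      foo_ = std::exchange(other.foo_, nullptr);
    }
    return *this;
  }
  ScopedFoo(const ScopedFoo &) = delete;
  ScopedFoo &operator=(const ScopedFoo &) = delete;

  FooResource *get() const { return foo_; }
};
```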
This fix contains two parts. There was one critical mistake where the
order of two indices was wrong when removing constraint planes from
the array. The other changes are improvements to the used thresholds
to keep everything numerically stable.
Differential Revision: http://developer.blender.org/D14183
This branch was previously only run when the action had been handled;
since the action checks were removed it was always running. The assignment
does nothing, but shouldn't be kept.
previously_visible_components_mask was not preserved for Image ID nodes, which
meant it was always detected as newly visible and tagged to be updated, which
in turn caused the geometry nodes using it to be always updated also.
Reviewed By: sergey, JacquesLucke
Maniphest Tasks: T94609
Differential Revision: https://developer.blender.org/D14217
Supporting two kinds of dragging is redundant, remove tweak events as
they only supported 3 mouse buttons and added complexity from using the
'value' to store directions.
Support only click-drag events (KM_CLICK_DRAG) which can be used with
any keyboard or mouse button.
Details:
- A "direction" member has been added to keymap items and events which
can be used when the event value is set to KM_CLICK_DRAG.
- Keymap items are version patched.
- Loading older key-maps are also updated.
- Currently the key-maps stored in ./release/scripts/presets/keyconfig/
still reference tweak events & need updating. For now they are updated
on load.
Note that in general this won't impact add-ons as modal operators don't
receive tweak events.
Reviewed By: brecht
Ref D14214
Clicking and dragging markers accumulates flags from multiple operators
in a way that can't be interpreted when combined.
Follow tweak behavior for cancelling click-drag events.
Replacing tweak key-map items with click-drag caused selection
in the graph/sequencer/node editors to ignore drag events
(all uses of WM_generic_select_modal).
Operators that return OPERATOR_PASS_THROUGH | OPERATOR_FINISHED
result in the event loop considering the event as being handled.
This stopped WM_CLICK_DRAG events from being generated which is not the
case for tweak events.
As click-drag is intended to be compatible with tweak events,
accept drag events when WM_HANDLER_BREAK isn't set or when
wm_action_not_handled returns true.
A simple case of missing the tangent VBO. The tangents are computed from
the coarse mesh, and interpolated on the GPU for the final mesh. Code for
initializing the tangents, and the vertex format for the VBO was
factored out of the coarse extraction routine, to be shared with the
subdivision routine.
The last step of proxy building caused a crash due to a thread race condition
when accessing ed->seqbase.
Looking at the code, it seems that calling IMB_close_anim_proxies on
original strips is unnecessary, so this code was removed to resolve the
issue.
Locking seqbase data may be possible, but it's not very practical as
many functions access this data on demand, which can easily cause the
program to freeze.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14210
Mousemove events are sent to windows.
On Windows, almost all mousemove events are sent to the window the
mouse cursor is over.
On macOS, mousemove events are always sent to the active window,
regardless of whether the mouse cursor is inside or outside that window.
So, in order for non-active windows to also have events,
`WM_window_find_under_cursor` is called to find those windows and send
the same events.
The problem is that to find the window, `WM_window_find_under_cursor`
only has the mouse coordinates available, it doesn't differentiate
which monitor these coordinates came from.
So the mouse on one monitor may incorrectly send events to a window on
another monitor.
The solution is to use a native macOS API to detect the window
under the cursor.
For Windows and Linux nothing has changed.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14197
The subdivision modifier for Grease Pencil handles closed strokes
correctly now and does converge to the same shape as the mesh
subdivision surface.
Differential Revision: http://developer.blender.org/D14218
Create `Curves` directly, instead of using the conversion from
`CurveEval`. This means that the `tilt` and `radius` attributes
don't need to be allocated. The old behavior is kept by using the
right defaults in the conversion to `CurveEval` later on.
The Bezier segment primitive isn't ported yet, because functions
to provide easy access to built-in attributes used for Bezier curves
haven't been added yet.
Differential Revision: https://developer.blender.org/D14212
Currently, any time a Curves data-block is created, the `curves_random`
function runs, filling it with 500 random curves, also adding a radius
attribute. This is just left over from the prototype in the initial
commit that added the type.
This commit moves the code that creates the random data to the curve
editors module, like the other primitives are organized.
Differential Revision: https://developer.blender.org/D14211
This patch hides the MetalRT checkbox for AMD GPUs, pending fixes for MetalRT argument encoding on AMD.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14175
The issue was uncovered by 0f89bcdbeb, but the root cause goes back
to a much earlier design violation in the code: the modifier
evaluation function is modifying the input mesh, which is not something
that is ever expected.
Bring the code closer to the older state where such modification is only
done for the object in edit mode.
---
From my own tests it seems to work fine, but extra eyes and testing
are needed.
Differential Revision: https://developer.blender.org/D14191
db4313610c added support for modifier
keys to be released while dragging.
The key release events set wmEvent.prev_type which is used to select the
drag threshold, causing the wrong threshold to be used.
Add wmEvent.prev_click_type which is only set when the drag begins.
The image engine is depth aware. When using tile drawing, the depth was
only updated for the central image, which led to showing the background
on top of other areas.
Also makes sure that switching tile drawing leads to an update
of the texture slots.
When the dimensions of an image aren't a multiple of 256, parts of the GPU
textures were not updated. This patch calculates the correct part of
the image that needs to be re-uploaded.
Internally the update tiles are 256x256. Due to some miscalculations,
tiles were not generated correctly if the dimensions of the image weren't
a multiple of 256.
Previously we cached a float representation of the image in
rect_float. This added some incorrect behavior, as many areas only expect
one of these buffers to be used.
This patch stores float buffers inside the image engine. This is done per
instance. In the future we should consider making a global cache.
Use a flag for events to avoid adding struct members every time a new
kind of tag is needed - so events remain small.
This also simplifies copying settings as flags can be copied at once
with a mask.
- Rename ED_view3d_win_to_delta `mval` argument to `xy_delta` as it
was misleading since this is a screen-space offset, not a region
relative cursor position (the typical use of the name `mval`).
Also rename the variable passed to this function which also used the
term `mval` in many places.
- Re-order the output argument of ED_view3d_win_to_delta last and
use an `r_` prefix for return arguments.
- Document how the `zfac` argument is intended to be used.
- Split ED_view3d_calc_zfac into two functions as the `r_flip` argument
was only used in some special cases.
Small fixes to the drawing of multi input sockets:
- Make the outline thickness consistent with normal node sockets,
independent from the screen DPI.
- Only highlight multi input sockets when they are actually selected.
- Skip selected multi inputs when drawing normal selected sockets.
Differential Revision: https://developer.blender.org/D14192
Currently the code expects the radius attribute to always exist on the
input Curves. This won't be true in the future though, so the correct
default value of one should be used when creating the data on CurveEval,
where the data is not optional.
This commit improves the drawing of selected node links:
- Highlight the entire link to make it easier to spot where the link
is going/coming from.
- Always draw selected links on top, so they are always clearly
visible.
- Don't fade selected node links when the sockets they are connected
to are out of view.
- Dragged node links still get a partial highlight when they are only
attached to one socket.
Differential Revision: https://developer.blender.org/D11930
Add a std::move in some places to prevent arrays from being copied.
These cases were potentially optimized by the compiler, but this makes
it more explicit.
Differential Revision: https://developer.blender.org/D14129
New code from the vertex normal refactor cfa53e0fbe combined with older code
from 592759e3d6 that disabled instancing for custom normals and autosmooth
meant that instancing was always disabled.
However we do not need to disable instancing for custom normals and autosmooth
at all, this can be shared between instances just fine.
The constraint operators for delete, apply, copy and copy to selected
were missing null checks and could crash Blender when called incorrectly
from the Python API.
Differential Revision: http://developer.blender.org/D14195
This commit changes `CurveComponent` to store the new curve
type by adding conversions to and from `CurveEval` in most nodes.
This will temporarily make performance of curves in geometry nodes
much worse, but as functionality is implemented for the new type
and it is used in more places, performance will become better than
before.
We still use `CurveEval` for drawing curves, because the new `Curves`
data-block has no evaluated points yet. So the `Curve` ID is still
generated for rendering in the same way as before. It's also still
needed for drawing curve object edit mode overlays.
The old curve component isn't removed yet, because it is still used
to implement the conversions to and from `CurveEval`.
A few more attributes are added to make this possible:
- `nurbs_weight`: The weight for each control point on NURBS curves.
- `nurbs_order`: The order of the NURBS curve
- `knots_mode`: Necessary for conversion, not defined yet.
- `handle_type_{left/right}`: An 8 bit integer attribute.
Differential Revision: https://developer.blender.org/D14145
The UI context was only set for the operator polls, but not for the
drop-box polls. Initially I thought this wouldn't be needed since the
drop-boxes should leave up context polls to the operator, but in
practice that may not be what API users expect. Plus the tooltip for the
drop-boxes will likely have to access context anyway, so they should be
able to check it beforehand.
Motion paths can now be initialised to more sensible frame ranges,
rather than simply 1-250:
- Scene Frame Range
- Selected Keyframes
- All Keyframes
The Motion Paths operators are now also added to the Object context menu
and the Dopesheet context menu.
The scene range operator was removed, because the operators now
automatically find the range when baking the motion paths.
The clear operator now appears separated in "Selected Only" and "All",
because it was not clear for the user what the button was doing.
Reviewed By: sybren, looch
Maniphest Tasks: T93047
Differential Revision: https://developer.blender.org/D13687
Caused by 0f89bcdbeb and was not fully addressed by 6f9828289f:
tagging an ID with flag 0 is to be seen as an explicit tag for copy
on write.
Would be nice to either consolidate code paths of flag 0 and explicit
component tag, or get rid of tagging with the 0 flag, but that is beyond
what we can do for the upcoming release.
When using anchored strokes the diameter of the stroke can be 0, which
leads to a division by zero that on certain platforms wraps to a large
negative number that cannot be looked up. This fix clamps the size
of the brush to 1.
Now drag & tweak allow modifier keys to be released while dragging.
Without this, modifier keys need to be held, which is more noticeable
for tablet input or whenever the drag threshold is set to a large value.
Resolves T89989.
This fixes T93051, T76405 and maybe others.
Characters like '²' and '<' are not recognized in Blender's shortcut keys.
And sometimes even simple keys like {key .} and {key /} on the Windows
keyboard, although the symbol is "known", are not detected by Blender
as shortcuts.
For Windows, some of the symbols represented by `VK_OEM_[1-8]` values,
depending on the language, are not mapped by Blender.
On Mac there is a fallback reading the "actual character value of the
'remappable' keys". But sometimes the character is not mapped either.
On Windows, the solution now mimics the Mac and tries to read the button's
character as a fallback.
For unmapped characters ('²', '<', '\''), now another value is chosen as a
substitute.
More "substitutes" may be added over time.
Differential Revision: https://developer.blender.org/D14149
Some logic and comments in the vertex normal calculation were
left over from when normals were stored in MVert, before
cfa53e0fbe. Normals are never allocated and freed
locally anymore.
In some cases, the normal edit modifier calculated the normals on one
mesh with the "ensure" functions, then copied the mesh and retrieved
the layers "for write" on the copy. Since 59343ee162, normal
layers are never copied, and normals are allocated with malloc instead
of calloc, so the mutable memory was uninitialized.
Fix by calculating normals on the correct mesh, and also add a warning
to the "for write" functions in the header.
The check to see if newly requested attributes are not already in the
cache was not taking into account the possibility that we do not have
new requested attributes (`num_requests == 0`). In this case, if
`attr_used` already had attributes, but `attr_requested` is empty, we
would consider the cache as dirty, and needlessly rebuild the attribute
VBOs.
These features are complicated to support on GPU and hardly compatible
with subdivision in the first place. In the future, with T68891 and
T68893, subdivision and custom smooth shading will be separate workflows.
For now, and to better prepare for this future (although long term
plan), we should discourage workflows mixing subdivision and custom
smooth normals, and as such, this disables GPU subdivision when
autosmoothing or custom split normals are used.
This also adds a message in the modifier's UI to indicate that GPU
subdivision will be disabled if autosmooth or custom split normals are
used on the mesh.
Differential Revision: https://developer.blender.org/D14194
Reuse the same vertex normals calculation as for the GPU code, by
weighting each vertex normal by the angle between the edges incident to
the vertex on the face.
Additionally, remove limit normals, as the CPU code does not use them
either, and would also cause different shading issues when limit surface
is used.
Fixes T95242: shade smooth artifacts with edge crease and limit surface
Fixes T94919: subdivision, different shading between CPU and GPU
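For reference, a small self-contained sketch of angle-weighted normal accumulation as described above (placeholder types and names, not the actual subdivision code):

```cpp
#include <algorithm>
#include <cmath>

struct float3 { float x, y, z; };

static float3 sub(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float3 normalized(float3 v)
{
  const float len = std::sqrt(dot(v, v));
  return {v.x / len, v.y / len, v.z / len};
}

/* Accumulate a face's normal into the vertex normal of one face corner,
 * weighted by the corner angle formed by the two incident edges. */
static void accumulate_corner_normal(const float3 &vert_prev,
                                     const float3 &vert_curr,
                                     const float3 &vert_next,
                                     const float3 &face_normal,
                                     float3 &r_vert_normal)
{
  const float3 edge_a = normalized(sub(vert_prev, vert_curr));
  const float3 edge_b = normalized(sub(vert_next, vert_curr));
  /* The corner angle is the weight; clamp to avoid NaN from rounding. */
  const float weight = std::acos(std::clamp(dot(edge_a, edge_b), -1.0f, 1.0f));
  r_vert_normal.x += face_normal.x * weight;
  r_vert_normal.y += face_normal.y * weight;
  r_vert_normal.z += face_normal.z * weight;
}
```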
The custom data code checks for `LayerTypeInfo.defaultname` before
adding a second layer with a certain type. This was missed in
e7912dfa19. In practice, this default name
is not actually used.
Added call to ensure that the USD plugins are registered
when opening a USD cache archive. This is to avoid USD
load errors due to missing USD file format plugins when
opening blender files that contain USD transform cache
constraints and mesh sequence cache modifiers.
Fixes T94396
This operation can only be applied on one ID at a time, so only apply it
to the active Outliner item, and not all the selected ones.
Also renamed the `Make Library Override` menu entry to `Make Library Override
Single` to emphasize that this is not the 'default expected' option for the
user.
This essentially affects the Outliner 'create hierarchy' tool currently.
Previously the code did not properly handle the hierarchy root in case overrides
were created from a non-root ID (e.g. an object inside of a linked
collection), and in case additional partial overrides were added to an
existing partially overridden hierarchy.
Also did some renaming on the go to avoid using 'reference' in override
context for anything else but the reference linked IDs.
Trust the user count to decide whether or not to actually delete the dragged
ID when the current dragging is cancelled, since it may already be used by others.
NOTE: This is more a band-aid fix than anything else, cancelling drag
has a lot of other issues here (like never deleting any indirectly
linked/appended data, etc.). It needs a proper rethink in general.
To keep consistency with the new contract option, the dilate now expands the shape beyond the internal closed area.
Note: This was committed only in master (3.2) by error.
This is requested by artists for some animation styles where it is necessary to fill the area, but create a gap between fill and stroke.
Also some code cleanup and a fix for a bug in dilate for the top area.
Reviewed By: pepeland, mendio
Differential Revision: https://developer.blender.org/D14082
Note: This was committed only in master (3.2) by error.
Using flags makes checking multiple modifiers at once more convenient
and avoids macros/functions such as IS_EVENT_MOD & WM_event_modifier_flag
which have been removed. It also simplifies checking if modifier keys
have changed.
This means textures need to have the number of mipmap levels specified
upfront. It does not mean the data is immutable.
There is fallback code for OpenGL < 4.2.
Immutable storage will enable texture views in the future.
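For context, a rough sketch of what immutable storage implies at the raw OpenGL level (not Blender's GPU module API; assumes a GL 4.2+ context and a loader header such as libepoxy that exposes glTexStorage2D):

```cpp
#include <epoxy/gl.h>  /* Assumed loader; any loader exposing GL 4.2 works. */

void create_immutable_texture(GLuint tex, int width, int height, int mip_levels)
{
  glBindTexture(GL_TEXTURE_2D, tex);
  /* Immutable storage: the whole mip chain is allocated once; the level
   * count and format can no longer change afterwards. */
  glTexStorage2D(GL_TEXTURE_2D, mip_levels, GL_RGBA8, width, height);
  /* The texel data itself stays mutable and can still be uploaded later,
   * e.g. with glTexSubImage2D(GL_TEXTURE_2D, 0, ...). */
}
```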
Without ray offsets, intersections at neighboring triangles are found, as
the ray start is exactly at the vertex. There was a small offset towards
the center of the triangle, but not enough.
Now this offset computation is moved into Cycles and modified for better
results. It's still not perfect though like any offset approach, especially
with long thin triangles.
Additionally, this uses the shadow terminator offset for AO rays now, which
helps remove some pre-existing artifacts.
This is similar to f8fe0e831e, which made the change to the
handle position attributes. This commit removes the way that setting the
`position` attribute also changes the handle position attributes. Now,
the "Set Position" node still has this behavior, but changing the
attribute directly (with the modifier's output attributes) does not.
The previous behavior was a relic of the geometry nodes design
from before fields and the set position node existed.
This makes the transition to the new curves data structure simpler.
There is more room for optimizing the Bezier case of the set position
node in the future.
Also fix a couple other places where normals layers weren't properly
tagged dirty or reallocated when the mesh changes.
Caused by cfa53e0fbe. When the size of a mesh changes,
the normal layers need to be reallocated. There were a couple of places
that cleared other runtime data with `BKE_mesh_runtime_clear_geometry`
but didn't deal with normals properly. Clearing the runtime "geometry"
is different from clearing the normals, because sometimes the size of
the normal layers doesn't have to change, in which case simply tagging
them dirty is fine.
Reverts 6d97fdc37e. A function like this should not return a different
tree-display object than of the requested type. This may hide errors,
and leaves the Outliner in an undefined state (where the stored display
mode doesn't match the tree-display object). I would rather not hide the
fact that all display-modes should be handled here, and emit a clear
error if one isn't.
When X-ray mode is active the selection is done using the mesh data to
select what is closest to the cursor. When GPU subdivision is active with
the "show on cage" modifier option, this fails as the mesh used for selection
is the unsubdivided one.
This creates a subdivision wrapper before running the selection routines to
ensure that subdivision is available on the CPU side as well.
Differential Revision: https://developer.blender.org/D14188
This was a double free error which happened because `BM_mesh_bm_from_me`
was taking ownership of arrays that were still owned by the Mesh. Note that
this only happens when the mesh is empty but some custom data layers still
have a non-null data pointer. While usually the data pointer should be null in
this case for performance reasons, other functions should still be able to
handle this situation.
Differential Revision: https://developer.blender.org/D14181
When exporting generated coordinates, the subdivision export code was
using the schema for the non-subdivision case, which is invalid as
non-initialized. This typo existed since the initial commit for the
feature (rBf9567f6c63e75feaf701fa7b78669b9a436f13dd).
The scale-to-fit option did nothing for single words when
the text box had a height. This happened because it was expected that
text would be wrapped, however single words never wrap.
Now the same behavior for zero-height text boxes is used when text
can't be wrapped onto multiple lines.
The handle position attributes `handle_left` and `handle_right` had
rather complex behavior to get expected behavior when aligned or auto/
vector handles were used. In order to simplify the attribute API and
make the transition to the new curves data structure simpler, this
commit moves that behavior from the attribute to the "Set Handle
Positions" node. When that node is used, the behavior should be the
same as before. However, if the modifier's output attributes were used
to set handle positions, the behavior may be different. That situation
is expected to be very rare though.
This adds a node with a boolean field output which returns true if all of the
points of the evaluated face are on the same plane. A float field input allows
for the threshold of the face/point comparison to be adjusted on a per face basis.
One clear use case is to only triangulate faces that are not planar.
Differential Revision: https://developer.blender.org/D13906
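As an illustration of the idea (not the node's actual implementation), one plausible planarity test compares each corner's distance to the face's best-fit plane against the per-face threshold:

```cpp
#include <cmath>
#include <vector>

struct float3 { float x, y, z; };

static float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* True when every point of the face lies within `threshold` of the plane
 * defined by the face center and its (unit length) face normal. */
static bool face_is_planar(const std::vector<float3> &face_verts,
                           const float3 &face_normal,
                           const float3 &face_center,
                           const float threshold)
{
  for (const float3 &v : face_verts) {
    const float3 offset = {v.x - face_center.x, v.y - face_center.y, v.z - face_center.z};
    if (std::abs(dot(offset, face_normal)) > threshold) {
      return false;
    }
  }
  return true;
}
```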
Recently we changed the build pipeline to always create a version
number in the URL and point 'dev' to the latest version rather than creating the version number URL once we release.
This makes the check to `bpy.app.version_cycle` unnecessary.
tbb/enumerable_thread_specific.h drags in windows.h,
which will define min/max macros unless you politely
ask it not to.
It's a bit of an eyesore, but it is what it is.
0fd72a98ac called functions to set bezier handle positions
that used uninitialized memory. The fix is to define the handle positions
explicitly, like before.
The main goal here is to add the boilerplate code to make it possible
to add the actual sculpt tools more easily. Both brush implementations
added by this patch are meant to be prototypes which will be removed
or refined in the coming weeks.
Ref T95773.
Differential Revision: https://developer.blender.org/D14180
This adds a node which copies part of a geometry a dynamic number
of times.
Different parts of the geometry can be copied differing amounts
of times, controlled by the amount input field. Geometry can also
be ignored by use of the selection input.
The output geometry contains only the copies created by the node.
If the amount input is set to zero, the output geometry will be
empty. The duplicate index output is an integer index with the copy
number of each duplicate.
Differential Revision: https://developer.blender.org/D13701
Sometimes it is useful to get the index ranges that are in an index mask.
That is because some algorithms can process index ranges more efficiently
than generic index masks.
Extracting ranges from an index mask is relatively efficient, because it is
cheap to check if a span of indices contains a contiguous range.
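A minimal illustration of that contiguity check, assuming the mask holds sorted, unique indices (illustrative only; the real IndexMask API differs):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

/* Extract contiguous [start, one-past-end) ranges from sorted unique indices. */
static std::vector<std::pair<int64_t, int64_t>> extract_ranges(
    const std::vector<int64_t> &indices)
{
  std::vector<std::pair<int64_t, int64_t>> ranges;
  size_t start = 0;
  for (size_t i = 1; i <= indices.size(); i++) {
    /* Cheap check: within a contiguous run, index distance equals position distance. */
    if (i == indices.size() || indices[i] != indices[start] + int64_t(i - start)) {
      ranges.push_back({indices[start], indices[i - 1] + 1});
      start = i;
    }
  }
  return ranges;
}
```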
Drag Action was constantly resetting itself to "move".
Solve this by storing the tool settings per tool and no longer clear
gizmo properties when activating a new tool.
- Reserve "test" for tests & testing frameworks.
- Add "check_mypy" to "make help" text.
- Output to the standard output instead of redirecting to log-files,
leave redirecting output log-files to the user running the command.
The build-bot directly referenced this file and doesn't
have publicly available configuration.
Add an empty file until this can be removed by the build scripts.
This will make the transition to the new curves data structure
a bit simpler, since the handle types can be copied directly between
the two. The change to CurveEval is simple because it is runtime-only.
Helps with building against different OpenXR SDK versions (i.e. for
downstream builds that require specific versions), as the extension was
only defined since OpenXR 1.0.22.
These were only set in two places. One was related to "tessellated loop
normal", and the other derived corner normals. The values were never
checked though, after 59343ee162. The handling of dirty face
corner normals is clearly problematic, but in the future it should be
handled like the normal layers on the other domains instead.
Ref D14154, T95839
Currently, when normals are calculated for a const mesh, a custom data
layer might be added if it doesn't already exist. Adding a custom data
layer to a mesh is not thread-safe, so this can be a problem in some
situations.
This commit moves derived mesh normals for polygons and
vertices out of `CustomData` to `Mesh_Runtime`. Most of the
hard work for this was already done by rBcfa53e0fbeed7178.
Some changes to logic elsewhere are necessary/helpful:
- No need to call both `BKE_mesh_runtime_clear_cache` and
`BKE_mesh_normals_tag_dirty`, since the former also does the latter.
- Cleanup/simplify mesh conversion and copying since normals are
handled with other runtime data.
Storing these normals like other runtime data clarifies their status
as derived data, meaning custom data moves more towards storing
original/editable data. This means normals won't automatically benefit
from the planned copy-on-write refactor (T95845), so it will have to be
added manually like for the other runtime data.
Differential Revision: https://developer.blender.org/D14154
Allow SCREEN_OT_area_swap to operate between different Blender
windows, and other minor feedback improvements.
See D14135 for more details and demonstrations.
Differential Revision: https://developer.blender.org/D14135
Reviewed by Campbell Barton
Currently the RNA functions to add mesh elements like vertices
don't clear the runtime cache of things like triangulation, BVH
trees, etc. This is important, since they might be accessed with
incorrect sizes. This is split from a fix for T95839.
This is still far from perfect but it is better than not working
correctly.
The view/casters intersection bounds are too big and rough to
compute a decent tilemap level that is near the desired shadow pixel
density.
The algorithm works relatively ok if the sun direction is almost
parallel to the ortho view direction.
Atomic operations performed by the C++ standard library might require
libatomic on platforms which do not have hardware support for those
operations.
This change makes it so that such configurations are automatically detected
and -latomic is added when needed.
Differential Revision: https://developer.blender.org/D14106
NOTE: This function is currently unused. However, it does use a callback
defined by a few RNA properties through
`RNA_def_property_editable_array_func`, so I don't think it should be
removed without further thinking.
This function had two main issues:
* It was doing a bitwise AND on potentially three sources of property
flags, when the actually used `RNA_property_editable` only ever uses one
source.
* It was completely ignoring liboverride cases.
TODO: Deduplicate code between `RNA_property_editable`,
`RNA_property_editable_info` and `RNA_property_editable_index`.
There was accidentally some displacement related code running even when not
using displacement.
Differential Revision: https://developer.blender.org/D14169
Previously, objects and geometries were mapped between frames
using different hash tables in a way that is incompatible with
geometry instances. That is because the geometry mapping happened
without looking at the `persistent_id` of instances, which is not possible
anymore. Now, there is just one mapping that identifies the same
object at multiple points in time.
There are also two new caches for duplicated vbos and textures used for
motion blur. This data has to be duplicated, otherwise it would be freed
when another time step is evaluated. This caching existed before, but is
now a bit more explicit and works for geometry instances as well.
Differential Revision: https://developer.blender.org/D13497
For an upcoming prototype we would introduce a new eTexPaintMode
option. That would add more cases and if statements. This change migrates
eTexPaintMode to 3 classes: AbstractPaintMode contains the shared interface,
ImagePaintMode is for 2D painting and ProjectionPaintMode for 3D painting.
Reset Defaults left the undo stack in an invalid state,
with the active undo step left at the previous state rather than the one
it should have been.
Now the button's own undo logic is used to perform undo pushes.
1. Now handles cyclic strokes correctly.
2. Added a sharp threshold value to allow preservation of sharp corners.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14044
1. Now will remove lines if both adjacent faces are back face.
2. Added a check to respect material back face culling setting.
3. Changed the label in the modifier to "Force Backface Culling" (which reflects more accurately what the checkbox does).
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14041
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
This is a regression partially introduced in rB0a6f428be7f0.
Bones being transformed in edit mode were snapping to themselves.
And the bones in pose mode weren't snapping at all.
(Curious that this was not reported).
Since Python 3.10 is now supported on all platforms,
bump the minimum version to reduce the number of Python versions that
need to be supported simultaneously.
Reviewed By: LazyDodo, sybren, mont29, brecht
Ref D13943
This is an alternate fix for T35170 since it caused T44415.
Having the undo system manipulate the key-block coordinates is error
prone as (in the case of T44415) there are situations when it's
important to apply the difference with the original shape key.
This reverts dab0bd9de6, and instead
avoids the problem by not using the data in `Mesh.key` as a reference
for updating shape-keys when exiting edit-mode.
The assumption that the `Mesh.key` in edit-mode won't be modified
until leaving edit-mode isn't always true. Leading to synchronization
errors. (details noted in code-comments).
Resolve this by using shape-key data stored in the BMesh.
Resolving both T35170 & T44415.
Details:
- Remove use of the original vertices when exiting edit mode.
- Remove use of the original shape-key coordinates when exiting
edit-mode (except as a last resort).
- Move shape-key synchronization into a separate function:
`bm_to_mesh_key`.
- Split the synchronization loop into two branches,
depending on the existence of BMesh shape-key coordinates.
- Always write shape-key values back to the BMesh CD_SHAPEKEY layers.
This was only done in some cases but is now necessary for all
shape-keys as these are used to calculate offsets where the `Mesh.key`
was previously used.
- Report a warning when the shape-key layer isn't found as this uses an
imperfect method of restoring coordinates which should only be used as
a last resort.
Reviewed By: mont29
Ref D14127
The problem happened when Object Offset was enabled: the Constant Offset flag was not checked, so the offset was always added to the transformation matrix.
Limit the min and max of the IDProperty for the node group input
from 0 to infinity, and the soft min and max between 0 and 1.
Thanks to @PratikPB2123 for investigation.
Instead of accessing the `CD_NORMAL` layer directly,
use the proper API for accessing mesh normals. Even if the
layer exists, the values might be incorrect due to a deformation.
Related to ef0e21f0ae, 969c4a45ce, and T95839.
The flags overlapped ever since normalize was added, so this
requires versioning to copy the flag value. This needs to be
done both in Blender 3.1 and 3.2.
Differential Revision: https://developer.blender.org/D14165
The flags overlapped ever since normalize was added, so this requires
versioning to copy the flag value.
Differential Revision: https://developer.blender.org/D14165
The code was using the same flag value for different modifiers,
resulting in matching the toggle to random overlapping flags.
Differential Revision: https://developer.blender.org/D14165
The constraints solver is now able to handle more cases correctly.
Also the behavior of the boundary fixes is slightly changed if
the constraints thickness mode is used.
Differential Revision: https://developer.blender.org/D14143
The modifier supports arithmetic operations, like Add or Multiply,
but for some reason omits Minimum and Maximum. They are similarly
simple and useful math functions and should be supported.
Differential Revision: https://developer.blender.org/D14164
When RMB-select uses "Select Tweak" as a fallback tool,
ignore all bindings mapped to the Control key as these are
used for path selection.
This was fixed in 2a2d873124
however that caused shift-select to fail (T93100).
The node animation versioning code passes `nullptr` to the `oldName` and
`newName` parameters, but those weren't `NULL`-safe. I added an extra
check for this.
No functional changes, just a crash fix.
Previously, all operators using `PaintStroke` would have to store
the stroke in `op->customdata`. That made it impossible to store
other operator specific data in `op->customdata` that was unrelated
to the stroke.
This patch changes it so that the `PaintStroke` is passed to api
functions as a separate argument, which allows storing the stroke
as a subfield of some other struct in `op->customdata`.
Although the float buffers are only used as a cache, current code paths
don't look at the flags to identify which kind of image it is. The actual
fix would be to check the flags, but that isn't something to add one
week before release.
This commit fixes it by removing the buffers after use in the image
engine.
The previous behavior called RNA_enum_item_add a second time,
filled its contents with invalid values, then subtracted totitem.
This caused an unusual reallocation pattern where MEM_recallocN
would run in order to create the dummy item, then again when adding
another item (reallocating an array of the same size).
Simply memset the array to 0xff instead.
Even though the size of the map was set back to DEFAULT_SIZE_EXP,
the underlying arrays were left unchanged. In some cases this caused
further expansions to result in an unusual reallocation pattern
where MEM_reallocN would run to expand the entries into an array
that was in fact the same size.
Bug was introduced in D12167.
Reading the input size from an input socket is not possible in the tiled compositor before execution is initialized. A similar change was done for the base class, see BlurBaseOperation::init_data().
Reviewed by: Jeroen Bakker
Differential Revision: https://developer.blender.org/D14067
This allows removing the indirection for LODs during shading since the
tile is not the owner of the page unless it uses it.
The cache system is quite a bit more complex but makes it easier to spot
errors since the pages are not scattered into the tile texture.
This also simplifies allocation since the free heap is separated from the
cache.
This adds maintenance overhead and it is not that useful when we have reset to default.
If this is something that we want it should be added dynamically.
Reviewed By: HooglyBoogly, Severin, #user_interface
Differential Revision: https://developer.blender.org/D14151
After using MEM_dupallocN() on the original item/binding, the
user/component path ListBase for the new item/binding needs to be
cleared and each path copied separately.
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This removes manual handling of normals that was hard-coded
to false in the one place the function was called. This change
will help to make a fix to T95839 simpler.
It's better not to expose the details of where the dirty flags are
stored to every place that wants to know if the normals are dirty.
Some of these places are relics from before vertex normals were
computed lazily anyway, so this is more of an incremental cleanup.
This will make part of the fix for T95839 simpler.
Make the relation match material and world nodes. Does not address the reported
issue regarding muted nodes, but fixes another missing update found while investigating.
This was an old issue, but recent image partial update changes made this more
likely to happen in some cases. Now ensure that whenever the rendered scene
switches the image is updated.
This patch aims to fix the issues presented in T87829 and T95331,
namely precision issues while connecting two nodes when being too
close together in the node editor, in a few cases even
resulting in the complete inability to connect nodes.
Sockets are found by intersecting a padded rect around the cursor
with the nodes' sockets' location. That creates ambiguities, as it's
possible for the padded rect to intersect with the wrong node,
as the distance between two nodes is smaller than the rect is padded.
The fix in this patch is checking against an unpadded rectangle in
visible_node().
Differential Revision: https://developer.blender.org/D14122
The problem was that the code for sorting polygons around a vertex
assumed that it was a manifold or boundary vertex. However in some cases
the vertex could still be nonmanifold causing the crash. The cases where
the sorting fails are now detected and these vertices are then marked as
nonmanifold.
Differential Revision: https://developer.blender.org/D14065
The "Fill Caps" option on the Curve to Mesh node introduced in
rBbc2f4dd8b408ee makes it possible to fill the open ends of the sweep
to create a manifold mesh.
This patch fixes an edge case, where caps were created even when the
rail curve (the curve used in the "Curve" input socket) was cyclic
making the resulting mesh non-manifold.
Differential Revision: https://developer.blender.org/D14124
In ffmpeg 5.0, several variables were made const to try to prevent bad API usage.
Removed some dead code that wasn't used anymore as well.
Reviewed By: Richard Antalik
Differential Revision: http://developer.blender.org/D14063
Significantly improves loading speed of preview images from disk, e.g. custom
previews loaded using `bpy.utils.previews.ImagePreviewCollection.load()`.
See D14144 for details & comparison videos.
Differential Revision: https://developer.blender.org/D14144
Reviewed by: Bastien Montagne
Currently, whenever any BMesh is converted to a Mesh (except for edit
mode switching), original index (`CD_ORIGINDEX`) layers are added.
This is incorrect, because many operations just convert some Mesh into
a BMesh and then back, but they shouldn't make any assumption about
where their input mesh came from. It might even come from a primitive
in geometry nodes, where there are no original indices at all.
Conceptually, mesh original indices should be filled by the modifier
stack when first creating the evaluated mesh. So that's where they're
moved in this patch. A separate function now fills the indices with their
default (0,1,2,3...) values. The way the mesh wrapper system defers
the BMesh to Mesh conversion makes this a bit less obvious though.
The old behavior is incorrect, but it's also slower, because three
arrays the size of the mesh's vertices, edges, and faces had to be
allocated and filled during the BMesh to Mesh conversion, which just
ends up putting more pressure on the cache. In the many cases where
original indices aren't used, I measured an **8% speedup** for the
conversion (from 76.5ms to 70.7ms).
Generally there is an assumption that BMesh is "original" and Mesh is
"evaluated". After this patch, that assumption isn't quite as strong,
but it still exists for two reasons. First, original indices are added
whenever converting a BMesh "wrapper" to a Mesh. Second, original
indices are not added to the BMesh at the beginning of evaluation,
which assumes that every BMesh in the viewport is original and doesn't
need the mapping.
Differential Revision: https://developer.blender.org/D14018
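The "default" fill mentioned above is just the identity mapping; conceptually it amounts to something like this hypothetical helper (not the actual function name added by the patch):

```cpp
#include <cstdint>

/* Fill an original-index array with the identity mapping 0, 1, 2, ... */
static void fill_default_origindex(int *indices, const int64_t size)
{
  for (int64_t i = 0; i < size; i++) {
    indices[i] = int(i);
  }
}
```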
This commit renames enums related the "Curve" object type and ID type
to add `_LEGACY` to the end. The idea is to make our aspirations clearer
in the code and to avoid ambiguities between `CURVE` and `CURVES`.
Ref T95355
To summarize for the record, the plans are:
- In the short/medium term, replace the `Curve` object data type with
`Curves`
- In the longer term (no immediate plans), use a proper data block for
3D text and surfaces.
Differential Revision: https://developer.blender.org/D14114
Fix boundary error in `BLI_str_unescape_ex`. The `dst_maxncpy` parameter
indicates the maximum buffer size, not the maximum number of characters.
As these are strings, the loop has to stop one byte early to allow space
for the trailing zero byte.
Thanks @mano-wii for the patch!
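In essence, a copy loop driven by a buffer-size parameter has to leave room for the terminator; a generic sketch of the corrected boundary (not the actual function body):

```cpp
#include <cstddef>

/* Copy at most `dst_maxncpy - 1` characters so the trailing '\0' always fits. */
static size_t copy_bounded(char *dst, const char *src, const size_t dst_maxncpy)
{
  size_t len = 0;
  while (len + 1 < dst_maxncpy && src[len] != '\0') {
    dst[len] = src[len];
    len++;
  }
  if (dst_maxncpy != 0) {
    dst[len] = '\0';
  }
  return len;
}
```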
When removing a node that has a dependence on an ID, like the object
info node, the dependency graph relations weren't updated. This can
cause unexpected performance issues if a complex node tree continues
to depend on an ID that it doesn't actually use anymore. To fix this case,
tag relations for an update if the node has a data-block socket.
Fixes part of T88332
Differential Revision: https://developer.blender.org/D14121
If there is no animation at all, or it's all hidden, the Euler Filter
operator's poll now fails with a message that explains this a bit more,
instead of just the generic "context is wrong" error.
Reviewed By: sybren
Maniphest Tasks: T95135
Differential Revision: https://developer.blender.org/D13967
While this should not happen in theory, very bad/broken/dirty files can
lead to such situations.
So we need to re-ensure valid root IDs after resync (for now, done after
each 'library indirect level' pass of resync, this may not be 100%
bulletproof though, time will tell).
Found while investigating Blender studio issues in Snow parkour short.
In some cases broken files could lead to selecting a shapekey as
hierarchy root ID, which is not allowed.
Found while investigating Blender studio issues in Snow parkour short.
When a new display driver is given to the PathTrace, ensure that there are
no GPU resources from it still used by the work. This solves a graphics
interop descriptor leak.
This also fixes the Invalid graphics context in cuGraphicsUnregisterResource
error when doing final render on the display GPU.
Fixes T95837: Regression: GPU memory accumulation in Cycles render
Fixes T95733: Cycles Cuda/Optix error message with multi GPU devices. (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95651: GPU error (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95631: VRAM is not being freed when rendering (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T89747: Cycles Render - Textures Disappear then Crashes the Render
Maniphest Tasks: T95837, T95733, T95651, T95631, T89747
Differential Revision: https://developer.blender.org/D14146
Add check for `NULL` `from` pointer to `BLO_main_validate_shapekeys`,
and delete these shapekeys, as they are fully invalid and impossible to
recover.
Found in a studio production file (`animation
test/snow_parkour/shots/0040/0040.lighting.blend`, svn rev `1111`).
Would be nice to know how this was generated too...
This adds initial support for edit mode for the experimental new curves
object. For now we can only toggle in and out of the mode, no real
interraction is possible.
This patch also adds empty menus in edit mode. Those were added mainly
to quiet warnings as the menus are programmatically added to the edit
mode based on the object type and context.
Ref T95769
Reviewed By: JacquesLucke, HooglyBoogly
Maniphest Tasks: T95769
Differential Revision: https://developer.blender.org/D14136
This adds the boilerplate code that is necessary to use the tool/brush/paint
systems in the new sculpt curves mode.
Two temporary dummy tools are part of this patch. They do nothing and
only serve to test the boilerplate. When the first actual tool is added,
those dummy tools will be removed.
Differential Revision: https://developer.blender.org/D14117
The previous implementation had a copy of the image user, which doesn't
contain all the data to identify changes. This patch introduces a new
struct to store the data and can be extended with other data as well
(color spaces, alpha settings).
Check for a camera-view before checking if the view is locked
to the cursor/object since the camera-view takes priority,
it reads better to check that first.
Also reuse the event offset variable.
NDOF navigation in a camera view now behaves like orthographic pan/zoom.
Note that NDOF orbiting out of the camera view has been disabled,
see code comment for details.
Resolves T93666.
For the attribute search button, the tooltip was missing
if the input socket type has attribute toggle activated.
Differential Revision: https://developer.blender.org/D14142
This affected loading of EXR files set to the Linear ACES colorspace, as
well as the sky texture in some custom OpenColorIO configurations.
Use the builtin OpenColorIO transform from ACES AP0 to XYZ D65 to fix this.
The idea is to keep `is_any_zero` in the `blender::math` namespace,
so instead of trying to be clever, just move it there and expand the
function where it was used in the class.
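For illustration, a generic version of such a check over a vector-like type (placeholder alias; the real function lives in blender::math):

```cpp
#include <array>

/* Placeholder stand-in for a blender::vec_base-like type. */
template<typename T, int Size> using vec_base = std::array<T, Size>;

/* Return true when any component of the vector is exactly zero. */
template<typename T, int Size> bool is_any_zero(const vec_base<T, Size> &vec)
{
  for (int i = 0; i < Size; i++) {
    if (vec[i] == T(0)) {
      return true;
    }
  }
  return false;
}
```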
I noticed that there were a few variables that should not be visible by default.
It seems to me to simply be an oversight, so I went ahead and cleaned them up.
Reviewed By: Sybren, Ray molenkamp
Differential Revision: http://developer.blender.org/D14132
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This is a bug on the Blender side, where the depsgraph does not have proper
relations for text object duplis and fails to include the required materials
in the dependency graph. But at least Cycles should not crash.
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
FindOpenImageIO was updated to link to separate OpenImageIO_Util for new
versions, where it is required. For older versions, we can not link to it
because there will be duplicated symbols.
Ref D14128
FindOpenEXR was updated to find new lib names and separate Imath. It's all
added to the list of OpenEXR include dirs and libs.
This keeps it compatible with both version 2 and 3 for now, and doesn't
require changes outside the find module.
Ref D14128
Coarse meshes with a high polycount would show as corrupted when GPU
subdivision is used with AMD cards. This was caused by the OpenSubdiv
library not taking `GL_MAX_COMPUTE_WORK_GROUP_COUNT` into account when
dispatching computes. AMD drivers tend to set the limit lower than
NVidia ones (2^16 for the former, and 2^32 for the latter, at least
on my machine).
This moves the `GLComputeEvaluator` from the OpenSubdiv library into
`intern/opensubdiv` and modifies it to compute a dispatch size in a
similar way as for the draw code: we split the dispatch size into a 2
dimensional value based on `GL_MAX_COMPUTE_WORK_GROUP_COUNT` and
manually compute an index in the shader.
We could have patched the OpenSubdiv library and sent the fix upstream
(which can still be done), however, moving it to our side allows us to
better control the `GLComputeEvaluator` and in the future remove some
redundant work that it does compared to Blender (see T94644) and
probably prepare the ground for Vulkan support. As a matter of fact,
this patch also removes the OpenGL initialization that OpenSubdiv would
do here. This removal is not related to the bug fix, but necessary to not
have to copy more files/code over.
Differential Revision: https://developer.blender.org/D14131
Previously, the number of action map subactions was limited to two per
action (identified by user_path0, user_path1), however for devices with
more than two user paths (e.g. Vive Tracker) it will be useful to
support a variable amount instead.
For example, a single pose action could then be used to query the
positions of all connected trackers, with each tracker having its own
subaction tracking space.
NOTE: This introduces breaking changes for the XR Python API as follows:
- XrActionMapItem: The new `user_paths` collection property
replaces the `user_path0`/`user_path1` properties.
- XrActionMapBinding: The new `component_paths` collection property
replaces the `component_path0`/`component_path1` properties.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D13949
This fixes VR pink screen issues when using the DirectX backend, caused
by `wglDXRegisterObjectNV()` failing to register the shared
OpenGL-DirectX render buffer. The issue is mainly present on AMD
graphics, however, there have been reports on NVIDIA as well.
A limited workaround for the SteamVR runtime (AMD only) was provided
in rB82ab2c167844, however this patch provides a more complete solution
that should apply to all OpenXR runtimes. For example, with this patch,
the Windows Mixed Reality runtime that exclusively uses DirectX can now
be used with AMD graphics cards.
Implementation-wise, a `GL_TEXTURE_2D` render target is used as a
fallback for the shared OpenGL-DirectX resource in the case that
registering a render buffer (`GL_RENDERBUFFER`) fails. While using a
texture render target may be less optimal than a render buffer, it
enables proper display in VR using the OpenGL/DirectX interop (tested
on AMD Vega 64).
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14100
After running the breakdown operator for the graph editor,
the factor property in the redo panel didn't reflect the value you chose.
To mitigate that issue down the line, there is a
new helper function to get the factor value and
store it at the same time.
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D14105
Ref: D14105
Brew's Python framework's site-packages is a symlink so the assumption
that Resources and site-packages would be in the same directory
doesn't hold. So install scripts etc relative to bpy.so. Part of D14111
StringGrid has been deprecated in openvdb 9.0.0 and will be removed soon
Reviewed By: Brecht
Differential Revision: http://developer.blender.org/D14133
The general idea here is to wrap the `CurvesGeometry` DNA struct
with a C++ class that can do most of the heavy lifting for the curve
geometry. Using a C++ class allows easier ways to group methods, easier
const correctness, and code that's more readable and faster to write.
This way, it works much more like a version of `CurveEval` that uses
more efficient attribute storage.
This commit adds the structure of some yet-to-be-implemented code,
the largest thing being mutexes and vectors meant to hold lazily
calculated evaluated positions, tangents, and normals. That part might
change slightly, but it's helpful to be able to see the direction this
commit is aiming in. In particular, the inherently single-threaded
accumulated lengths and Bezier evaluated point offsets might be cached.
Ref T95355
Differential Revision: https://developer.blender.org/D14054
Finding the greatest and/or smallest element in an array is a common
need. This commit refactors the point cloud bounds code added in
6d7dbdbb44 to a more general header in blenlib.
This will allow reusing the algorithm for curves without duplicating it.
Differential Revision: https://developer.blender.org/D14053
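A minimal sketch of the kind of generic min/max reduction this enables (placeholder types and signature; the actual blenlib header and any parallelism differ):

```cpp
#include <algorithm>
#include <optional>
#include <utility>
#include <vector>

struct float3 { float x, y, z; };

/* Compute the component-wise minimum and maximum of a set of positions.
 * Returns nothing for an empty input. */
static std::optional<std::pair<float3, float3>> min_max(const std::vector<float3> &positions)
{
  if (positions.empty()) {
    return std::nullopt;
  }
  float3 min = positions[0];
  float3 max = positions[0];
  for (const float3 &p : positions) {
    min.x = std::min(min.x, p.x); min.y = std::min(min.y, p.y); min.z = std::min(min.z, p.z);
    max.x = std::max(max.x, p.x); max.y = std::max(max.y, p.y); max.z = std::max(max.z, p.z);
  }
  return std::make_pair(min, max);
}
```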
This is meant to complement the `blender::math` functions recently
added by D13791. It's sometimes desired to template an operation to work
on vector types, but also basic types like `float` and `int`. This patch
adds that ability with a new `BLI_math_base.hh` header.
The existing vector math header is changed to use the `vec_base` type
more explicitly, to allow the compiler's generic function overload resolution
to determine which implementation of each math function to use.
This is a relatively large change, but it also makes the file significantly
easier to understand by reducing the use of macros.
Differential Revision: https://developer.blender.org/D14113
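The point is that the same call site compiles for both scalar and vector arguments; a rough sketch of how the overloads resolve (placeholder implementations, not the actual headers):

```cpp
#include <algorithm>

/* Scalar overload, as a BLI_math_base.hh-style header would provide. */
template<typename T> T clamp(const T &value, const T &min, const T &max)
{
  return std::min(std::max(value, min), max);
}

struct float3 { float x, y, z; };

/* Vector overload; generic overload resolution picks this one for vector types. */
inline float3 clamp(const float3 &value, const float3 &min, const float3 &max)
{
  return {clamp(value.x, min.x, max.x),
          clamp(value.y, min.y, max.y),
          clamp(value.z, min.z, max.z)};
}
```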
This lib allows any shader to use `print()` like functions for
logging and debugging shaders.
Usage is described in the comment at the top of the file.
Instead of using a manual list of dependencies, the new implementation
scans all shader files whose names begin with `gpu_shader_material_` and extracts
all function declarations.
This way we can deduce the internal dependencies between these files.
This new implementation is merged with the manual pragma dependency system
used by other shader files. This way it is compatible with the shader
logging system and does not require any string duplication during shader
building.
This introduces `DEBUG_DEPENDENCIES` (not a CMake flag but a local define)
which, when set to 1, will list all the original files included in this
shader while omitting the generated / non-original code.
This is the layout used by the amdgpu-pro GL implementation.
This also adds some sanitizing of the parse output because the bespoke
implementation produces bogus errors when it comes to compute shaders.
This displays the error source so that IDEs can identify it
as a path and let the programmer follow a direct link instead of
searching for the string manually.
This only works for shaders compiled from unaltered sources, as
it uses the source `char*` as the key for the filename search.
For some reason, on some GL implementations (amdgpu) this particular
syntax shifts the error lines.
Remove the context lines by default as they are not useful anymore.
We now use ShaderCreateInfo as a way to set up the custom material
implementation.
This is more versatile and flexible, and does not require parsing
snippets of code.
The defrag shader makes sure the free heap is free of holes, making
the allocation more straightforward.
Since we now only reference the pages through the tiles, we introduce
a debug shader that produces an image with page data in a visual way.
This replaces the debug 8 option.
This also fixes some bugs that were still present in the pipeline.
This separates the handling of directional lights (sun) into their
own loops. This helps reduce register pressure and removes some
pollution of the local light culling.
All sun lights are packed at the start of the light array.
We now scan the depth buffer after the prepass to tag the needed
shadow tiles.
This is much more precise than the bounding box tagging, which is now
reserved for transparent objects.
This also:
- fix pixel radius size.
- add a dedicated info buffer to avoid having one unused tile.
Until now the LOD selection was based on distance from camera.
Now it is based on receiver distance ratio. We compute the world
size of one view pixel along with the world size of one shadow texel.
By knowing one point distance to the light or to the view, we can
compute the pixel density ratio and deduce the corresponding LOD.
We use this to compute the min LOD during the visibility selection phase
and the "mean" LOD for usage tagging by BBoxes.
The tagging LOD is a crude approximation as it only uses the BBox
center.
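For intuition, a hypothetical sketch of the receiver distance ratio idea (constants and names are made up, not the actual shader code):

```cpp
#include <algorithm>
#include <cmath>

/* Sketch: world-space footprint of one pixel/texel at a given distance, for
 * a perspective projection with the given field of view and resolution. */
static float footprint_at_distance(float distance, float fov_radians, float resolution)
{
  return (2.0f * distance * std::tan(fov_radians * 0.5f)) / resolution;
}

/* Pick the shadow LOD so that one shadow texel roughly matches one view pixel.
 * All parameters are illustrative; the real selection also handles cubemap
 * faces and clamps against the available LOD chain. */
static int shadow_lod_select(float dist_to_view, float dist_to_light,
                             float view_fov, float view_resolution,
                             float shadow_fov, float shadow_resolution, int lod_max)
{
  float pixel_size = footprint_at_distance(dist_to_view, view_fov, view_resolution);
  float texel_size_lod0 = footprint_at_distance(dist_to_light, shadow_fov, shadow_resolution);
  /* Each LOD doubles the texel footprint, so the LOD is the log2 of the ratio. */
  float ratio = pixel_size / texel_size_lod0;
  int lod = int(std::floor(std::log2(std::max(ratio, 1.0f))));
  return std::clamp(lod, 0, lod_max);
}
```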
This makes every shadow setup pass aware of the LOD chain of the tilemap
for each cubemap face.
In the free phase, we mask any LOD page that is completely covered by
a higher LOD. This avoids committing memory twice or more per area.
In the allocation phase, we check for the last valid LOD and set it
in the LOD 0 metadata. We also store the actual page location in LOD 0 but
do not mark it as allocated, as the LOD tile has ownership of the page.
This removes the light count limit for the forward shaded object. This
also provides a more efficient way of computing the culling directly on
the GPU. Moreover, this avoids doing multiple lighting passes for high
light counts in the deferred pipeline, improving performance.
This continues the effort to implement virtual shadow mapping.
This includes:
- Spot cone culling of tile.
- Tile vs. view frustum tagging.
- Shadowmap Page allocation / freeing.
- Rendering to the 4K buffer only the tiles that need it.
- Copying to shadow atlas.
This debug buffer is automatically bound if a shader includes
`common_debug_lib.glsl`. One buffer is created for each shading group
using such a shader.
The shader can then use the functions from that file to draw debug
lines. There is a hardcoded limit on the number of lines one buffer can
contain. Make sure to only output lines for a few threads at most.
Under the hood this uses a vertex buffer bound as an SSBO that contains
the number of verts and all the positions and colors packed into vec4s.
We render by just drawing the whole buffer.
All unused vertices are initialized with NaN positions and will not be
drawn.
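A rough CPU-side picture of such a buffer (sizes and names are illustrative assumptions):

```cpp
#include <cmath>
#include <cstdint>

/* Hypothetical hardcoded limit on the number of debug vertices per buffer. */
constexpr int DEBUG_VERT_MAX = 4096;

struct DebugVert {
  float position[4]; /* xyz + padding, matching a vec4 slot. */
  float color[4];
};

/* Sketch of the storage backing the debug line drawing: a vertex count
 * followed by the vertex data, bound to the shader as an SSBO. */
struct DebugDrawBuf {
  uint32_t vert_count; /* Incremented atomically by the shader. */
  uint32_t _pad[3];    /* Keep the vertex array 16-byte aligned. */
  DebugVert verts[DEBUG_VERT_MAX];
};

/* Unused vertices keep NaN positions so they are never drawn. */
static void debug_draw_buf_init(DebugDrawBuf &buf)
{
  buf.vert_count = 0;
  for (DebugVert &v : buf.verts) {
    v.position[0] = v.position[1] = v.position[2] = v.position[3] = NAN;
    v.color[0] = v.color[1] = v.color[2] = v.color[3] = 0.0f;
  }
}
```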
This is a total refactor of how shadows are handled.
We use virtual shadow maps with different levels of detail to
ensure a somewhat evenly distributed precision.
The shadow test is a really crude one that will be
improved in further commits.
There is a pool of 4096 tilemaps that are distributed between
shadowed lights. These tilemaps are 16x16 each and reference
shadow map pages that are allocated in an atlas. Pages are only
allocated if needed (i.e. visible when rendering an object).
Page management is done on the GPU using compute shaders to reduce
CPU work.
On the CPU, only one draw pass per updated tilemap is issued.
This reduces the memory requirement of shadowmapping large scenes
with many lights.
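For intuition, the tilemap-to-page indirection can be pictured roughly like this (field names and layout are illustrative assumptions, not the actual implementation):

```cpp
#include <cstdint>

/* Illustrative constants matching the description above. */
constexpr int SHADOW_TILEMAP_RES = 16;   /* Tiles per tilemap side. */
constexpr int SHADOW_TILEMAP_MAX = 4096; /* Size of the tilemap pool. */

/* One tile either references a page inside the shadow atlas or is unused. */
struct ShadowTile {
  uint16_t page_x = 0; /* Page coordinates inside the atlas. */
  uint16_t page_y = 0;
  bool is_allocated = false;
  bool is_used = false; /* Tagged by depth-buffer scanning or bounding boxes. */
};

/* A tilemap covers one shadowed light view (e.g. one cubemap face). */
struct ShadowTileMap {
  ShadowTile tiles[SHADOW_TILEMAP_RES * SHADOW_TILEMAP_RES];
};
```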
Denoising makes use of more memory to store and reproject the result of
the previous frame to reduce noise. This only works in the viewport.
There is a final bilateral filter for cleaning up noise even more.
Screen space raytracing is supported for alpha blended surfaces.
However, only opaque surfaces will be visible to the rays. This means
alpha blended surfaces cannot reflect or refract themselves.
Denoising is not possible on alpha blended surfaces. Many samples
are needed for noise-free results.
Since the cost of tracing can be very high, raytracing will only be
enabled on demand, on a per-material basis.
This simply reuses the reflection raytracing pipeline but with another
ray distribution. Only direct lighting, distant lighting and emissive
light are visible to diffuse rays.
The subsurface effect is not visible to diffuse rays, but the
transmittance effect is.
Indirect diffuse light is processed by the SSS filter.
The new pipeline is now cleaner and allows for deferred refraction.
The refractions are more accurate but are not denoised for now. More
research needs to be done in this area.
There is no feedback buffer for now, so reflections of metallic surfaces
will appear black.
The same restriction on refractive materials still holds true: they will
not appear in the screen space tracing of other non-refractive surfaces.
However, refractive surfaces (non-blended) can now reflect themselves
and other surfaces with screen space reflections.
Half-resolution tracing is not reimplemented yet.
This is to automate the generation of reuse sample tables, and maybe more
in the future. It is designed not to make compilation noticeably longer.
Same as SSS, this has been rewritten to support a varying SSS radius.
Instead of relying on a shadowmap hack to improve the transmittance
artifact (previously called translucency), we expose a min thickness
output that reduces the maximum amount of light bleeding that can happen
at the shading point. This is far from perfect but at least it is
tweakable.
The effect is now cheaper and the option to enable it is gone.
It can always be artificially disabled by making the thickness bigger
than the SSS radius.
The effect is always enabled for all SSS surfaces and is even
applied to forward shaded objects (alpha blend mode).
This only adds the output but the output is not yet used.
This thickness output is meant to control the aspect of subsurface,
refraction, absorption and volume shaders.
The value expected is the mean thickness inside the object at the
shading point. The source can be a vertex color or a texture map baked
from a raytracer.
This new implementation follows the technique described in
"Efficient screen space subsurface scattering" (SIGGRAPH 2018).
Compared to the old implementation it fixes a lot of issues at
the cost of being slower. This fixes:
- Light leaking between different objects.
- Light leaking between different surfaces with different depths.
- SSS radii are now "texturable" per pixel. No limit on the number of SSS surfaces.
- Noise should be lower.
- Precomputation is only done once for all SSS surfaces, which lowers the
per-material storage and precomputation time.
The implementation is also simpler as it is only a one-pass process.
We differ from the reference presentation by not precomputing the
RGB weights per sample. We compute them on the fly in order
to support varying SSS radii.
Notes:
- SSS IOR and SSS anisotropy are not supported.
- Object-level light leak prevention might not work for a high number of
objects in the scene (> 1024). In this case light leaks might occur.
Adding or deleting (hiding) objects in the scene might change which
objects can leak.
This was caused by the StructArrayBuffer wrapper not being tagged as NonMovable.
The UBO was in fact being freed at creation time in debug build, but the
pointer was kept as valid in the copied wrapper.
Changing the higher level structure to not use the copy constructor to avoid this.
This is a needed change for the viewport compositor. The compositor
needs to draw to `dtxl->color` to have correct overlay / background
composition.
The solution here is to have a separate buffer that keeps the first
sample we blend from. This increases VRAM usage but it is the most
elegant option.
This integer division by zero evaluated to 0 on all platforms but Apple,
where it yields 1. The world lighting would then sample the one sample of the
first grid instead of its own sample.
This was caused by the blend mode being used even with full opacity.
This caused issues when the viewport was resized and the color of the
framebuffer became undefined, leading to undefined values in the blend
equation.
Another fix would be to clear the viewport color on resize inside the
GPUViewport.
This is a necessary step for EEVEE's new architecture. This moves more data
to the draw manager. This makes it easier to have the render or draw
engines manage their own data.
This makes more sense and cleans up what the GPUViewport holds.
Also rewrites the Texture pool manager to be in C++.
This also moves the DefaultFramebuffer/TextureList and the engine-related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is set up.
We now use a temporary DRWData instead of creating a dummy viewport.
Differential Revision: https://developer.blender.org/D11966
This changes the gbuffer layout to use more of the hardware for converting
data back and forth. Normals are encoded as two 16-bit components and
colors in R11G11B10F format.
This was motivated by the need for better quality normals. The issue is
that this increases the GBuffer size considerably. In order to balance
this, we chose to merge the refraction and Diffuse/SSS data to use the
same buffer. This means we need to stochastically choose one of these
layers (so noise appears). Given that Glass BSDFs are rarely mixed
with Diffuse BSDFs, we think this is a good tradeoff.
The functions need to be declared before main as prototypes.
The appended libs will use the resources (textures, UBOs) defined at
global scope.
This removes a bit of code duplication and some long macros.
Instead of appending using `BLENDER_REQUIRE`, shaders can now ask for
libs to be added after the shader's `main()` by using the
`BLENDER_REQUIRE_POST` pragma.
Use view space instead of world space to compute the pixel projection.
This fixes issues when the camera is far from the origin and float
precision would produce artifacts.
This ports the facing "flat" normal trick used by the gpencil engine
to EEVEE, as well as the thickness mode.
The object parameters are passed via the objectInfos UBO to avoid
much boilerplate code. However, if this UBO grows too much we might have
to split it.
The normal trick for planar surfaces is quite simple to port to the
vertex shader, even if it is less efficient.
However, to compute it we need the object's bounds. This is passed as a
scale only, through the orco factors. This will need a bit of cleaning
at some point, with the boundbox computed at the object level.
Nothing much different compared to the previous implementation.
The transparent BSDF and principled BSDF now detect when the material
is potentially transparent to select the best way to render it.
This makes it possible to have AA and correct blending of the
forward rendered spheres.
However, to avoid distorted spheres we need to not support Lookdev
in panoramic projection mode.
Also removes support for LookDev when using a render border, for now.
This differs a bit from the old implementation.
- Instead of manually adjusting the viewport we correctly place the
sphere in the vertex shader.
- Rendering happens after TAA accumulation: This is because we now
support panoramic cameras and TAA would distort the spheres.
This exposes the capability of having no light and no probe (except the
world one) for specific views / code paths.
The caller just needs to pass 0 as the extent to the `set_view()` function.
This is useful for lookdev.
This does not include reference sphere rendering.
The approach is a bit different than before.
Now we use a `bNodeTree` to control the rendering of lookdev. This
generates a `GPUMaterial` that is stored per `Instance`. This way
rendering lookdev is just updating the temp light cache using this
material as the world material, removing the use of a custom shader.
This introduces a small hack in order to bind the studiolight hdri after
the nodetree glsl parsing.
The background display however is still using a custom shader in order
to sample the world cubemap with different roughness.
The view space option of the studiolight is now faster by using a
transform before shading instead of rebaking the lightprobe constantly.
This should not have any particular impact on render time.
When evaluating surfaces, the deferred passes need to sample the
depth buffer, but they also test against the stencil buffer.
Moreover, the sampler needs to be a 2D sampler, which is not the case
for cubemaps and texture2DArrays.
To overcome this we simply copy the gbuffer depth to another
temp texture using framebuffer blitting.
Some things differ from the old implementation.
- Object visibility is filtered correctly without using a visibility
callback (which is to be removed).
The implementation is also more high level, using fewer low level tricks.
A dedicated LightProbeView is created for each lightprobe cubeface to
render using all pipelines (deferred and forward).
There are still a few things not working.
Only the world probe is supported for now.
The new implementation diverges from the original by randomly
selecting one lightprobe instead of sampling them all.
This speeds up rendering a bit.
This is a small convenience. This lets the render engine use this
default world if the scene has no world.
The world is black to keep the same behavior as before.
Shading groups are now created by the material_array_get functions
instead of passing a reference to be filled later. This avoids having
to wait later to maybe create a sub shading group.
This also simplifies the handling of different geometry types.
This adds a new closure selection method.
- In a first pass, weights are accumulated per output type (diffuse,
reflection, refraction).
- A random threshold is then generated before evaluating the BSDF nodes
again.
- During the evaluation pass the random threshold is decremented until
it reaches 0. At this moment the current BSDF is sampled.
For this to work, I split the evaluation and the weighting into two
functions for all BSDFs. The `*_eval` nodes are generated as dangling
nodes from the graph and only serialized after the rest of the graph.
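A small sketch of the selection logic under simplified assumptions (scalar weights and a uniform random number; this is not the generated shader code):

```cpp
#include <cstdio>
#include <vector>

/* Sketch: select one BSDF proportionally to its weight by decrementing a
 * random threshold during a second evaluation pass. */
static int closure_select(const std::vector<float> &weights, float random01)
{
  /* First pass: accumulate the total weight. */
  float total = 0.0f;
  for (const float w : weights) {
    total += w;
  }
  /* Random threshold in [0, total). */
  float threshold = random01 * total;
  /* Second pass: decrement the threshold; sample the BSDF that reaches 0. */
  for (int i = 0; i < int(weights.size()); i++) {
    threshold -= weights[i];
    if (threshold <= 0.0f) {
      return i;
    }
  }
  return int(weights.size()) - 1; /* Numerical safety. */
}

int main()
{
  const std::vector<float> weights = {0.2f, 0.5f, 0.3f}; /* diffuse, reflection, refraction */
  printf("selected closure: %d\n", closure_select(weights, 0.6f));
  return 0;
}
```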
Since the recalc flag on the Material ID is unavailable to the render engine,
this adds a simple way to detect material updates by detecting shader creation
or updates.
This constructs a "mirror" nodetree that feeds the closure "shader"
nodes with their respective final weight.
The tree is mirrored using simple math nodes. This is quite messy, but
it is the only way to proceed without introducing special nodes.
The other issue with this method is that the inputs are all uniforms, even
for unplugged sockets on temporary math nodes, which adds bloat to the
shader uniform buffer structure.
Only the part relevant to the weighting is duplicated. Other connections
with the shading tree are reused.
All shader nodes are updated to receive a `Weight` hidden parameter.
The original shader mixing tree is preserved to leave the choice of using
either way to weight the output.
For now this is only done for the output nodes. This will need to be
extended to the Closure to RGBA sub-tree.
This is the first step towards the new evaluation scheme of EEVEE
closures.
This commit contains:
- Removal of the GPU_SOURCE_BUILTIN type, preferring globals instead. This
avoids a lot of boilerplate code since most of the old builtins are now
data that is always present (i.e. view matrices, normals).
- Rewriting of codegen in C++ to use `std::stringstream`.
- Added a callback to let the engine decide what to do with the generated code.
This removes a lot of the need for defines caused by code order
dependency. The engine can insert the nodetree code in custom ways
to create advanced effects (i.e. add displacement or vertex lighting).
The engine now returns the final shader strings.
- Closure node evaluation replacement is a placeholder for now.
This is a port of the old material grouping. This is a bit cleaner
as we use containers for each pass and other structures.
The nodetree is generated without major errors for simple materials, but
it is not yet used as closures are not output.
This adds transparency and volume handling to the deferred
render pipeline.
The implementation is still unfinished.
To have a better naming convention, I renamed object shader to surface.
This introduces a fat Gbuffer layout that groups closure data into groups
of similar BSDFs. The goal is to have at least one sample for each
group to avoid too much code complexity and expected worse performance.
There is a lot of room for buffer reuse to reduce memory usage, but it is
not considered a priority for now.
Add a smooth transition to avoid flickering of stochastic effects such
as soft shadows.
This uses a simple blend method to progressively reveal the render
after some low sample count to avoid most of the flickering.
Parameters are hardcoded for now.
We use a new RNG to avoid correlation artifacts between Anti-Aliasing
and Shadow samples (see T68594).
The new sequence is a leaped Halton sequence. This makes it good with
a low number of samples and yields fewer correlation issues.
Another change is that we directly jitter the projection matrix instead
of rotating the view matrix. This improves convergence time and
avoids passing a second matrix to the shader.
However, this can lead to discontinuity artifacts at face borders.
We might want to revert to the old rotation method for this
reason, even if convergence is slower.
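For reference, a leaped Halton sequence can be generated as in the following sketch; the bases and leap value here are arbitrary examples, not the engine's actual parameters.

```cpp
#include <cstdio>

/* Radical inverse of `index` in the given base: the core of the Halton sequence. */
static double radical_inverse(unsigned int index, unsigned int base)
{
  double result = 0.0;
  const double inv_base = 1.0 / double(base);
  double factor = inv_base;
  while (index > 0) {
    result += double(index % base) * factor;
    index /= base;
    factor *= inv_base;
  }
  return result;
}

/* A leaped Halton sequence only uses every `leap`-th element of the base
 * sequence, which is one way to reduce correlation issues at low sample counts. */
static double leap_halton(unsigned int sample, unsigned int base, unsigned int leap)
{
  return radical_inverse(sample * leap, base);
}

int main()
{
  for (unsigned int i = 0; i < 8; i++) {
    printf("%f %f\n", leap_halton(i, 2, 409), leap_halton(i, 3, 409));
  }
  return 0;
}
```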
Now the shadows are linked to a `Light` object. The `Light` object is
linked to an `ObjectKey` to ensure persistence and deletion tracking.
The uniform data is packed so that there is one `ShadowPunctualData`
per light in a `LightBatch`. This means the number of `Shadow` in a scene
is only limited by the shadowmap storage.
Differences with the previous implementation:
- Better texture space usage for cone and area light shadows.
- Shadows are packed in an atlas, reducing requirements for future
features.
- Sampling is simpler because the shadow matrix does everything.
This follows closely the implementation of 2.5D tiled light
culling described in the presentation:
"Improved Culling for Tiled and Clustered Rendering"
from Michal Drobot
http://advances.realtimerendering.com/s2017/2017_Sig_Improved_Culling_final.pdf
I chose the tile + Z binning approach for its high depth range support
and low CPU overhead & low memory consumption compared to cluster-based
culling. The downside is that the culling is a bit less precise in
some aspects, but it is quite balanced.
The culling is done by the `Culling` object, which is templated to easily
be reused for light probe culling.
The Z-binning process is described starting from slide 20 in the
reference pdf.
I also implemented a debug pass to visualize false negatives (lights
culled when they shouldn't be) and light evaluation density.
This is useful to detect failure cases and hotspots. This could be exposed
as a developer-only render pass in the future.
Some optimizations of the reference implementation require extensions
not yet added to the GPU module and will be added later.
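A tiny sketch of the Z-binning idea (the bin count and the exponential depth distribution are assumptions, not the engine's actual values):

```cpp
#include <algorithm>
#include <cmath>

/* Illustrative bin count; each bin stores the first and last index of the
 * (depth-sorted) lights overlapping that depth slice, so a pixel only has
 * to iterate over a small contiguous range of the light list. */
constexpr int Z_BIN_COUNT = 4096;

struct ZBin {
  int light_index_min; /* First light intersecting this depth slice. */
  int light_index_max; /* Last light intersecting this depth slice. */
};

/* Map a positive view-space depth to a bin index using an exponential
 * distribution, which is what gives the high depth range support. */
static int z_bin_from_depth(float depth, float near_clip, float far_clip)
{
  const float t = std::log(depth / near_clip) / std::log(far_clip / near_clip);
  const int bin = int(t * float(Z_BIN_COUNT));
  return std::clamp(bin, 0, Z_BIN_COUNT - 1);
}
```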
This has the basis of clustered light culling but does not yet do
it. The lights are only culled by frustum.
It's the same as if there was only one Cell for the entire Viewport.
This also wraps GPUFrameBuffer & GPUTexture inside eevee::Framebuffer
and eevee::Texture to improve management.
Another cleanup was to make all members of `Instance` public to
avoid much complexity in accessing the data across module
dependencies.
Also split the velocity View-related data into `class Velocity` and
renamed the previous `Velocity` to `VelocityModule`.
Support an infinite light count by dividing rendering into chunks of
LIGHT_MAX lights. Forward passes are just rendered again, and deferred passes
(not implemented yet) will just have multiple light evaluation
passes.
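Conceptually, the chunking boils down to re-rendering the pass once per batch of lights, as in this hypothetical sketch (helper names and the LIGHT_MAX value are made up):

```cpp
#include <algorithm>
#include <cstdio>

constexpr int LIGHT_MAX = 128; /* Hypothetical per-pass light limit. */

/* Stubs standing in for the real draw-manager calls. */
static void bind_light_range(int offset, int count)
{
  printf("bind lights [%d, %d)\n", offset, offset + count);
}
static void draw_scene_additively()
{
  printf("draw forward pass\n");
}

/* Sketch: draw the forward pass once per chunk of at most LIGHT_MAX lights,
 * accumulating the lighting contribution of each chunk additively. */
static void forward_pass_draw(int total_light_count)
{
  for (int offset = 0; offset < total_light_count; offset += LIGHT_MAX) {
    const int count = std::min(LIGHT_MAX, total_light_count - offset);
    bind_light_range(offset, count);
    draw_scene_additively();
  }
}

int main()
{
  forward_pass_draw(300); /* Three chunks: 128 + 128 + 44. */
  return 0;
}
```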
This is almost the same thing as the old implementation.
Differences:
- We clamp the motion vectors to their maximum when sampling the velocity buffer.
- Velocity rendering (and its data manager) is separated from motion blur. This allows
outputting the motion vector render pass and, in the future, using motion vectors to
reproject older frames.
- Vector render pass support (only if motion blur is disabled, just like Cycles).
- Velocity tiles are computed in one pass (simpler code, less CPU overhead, less
VRAM usage, maybe a bit slower but imperceptible (< 0.3ms)).
- Two velocity passes are output, one for the motion blur fx (applied per shading view)
and one for the vector pass. This could be optimized further in the future.
- No current support for deformation & hair (to come).
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Also modifies camera setup to be compatible with future camera motion
blur.
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Pretty much identical to the previous implementation, with the exception
of a temporary noise function and some simplification of the CoC
computation. This also fixes issues with the orthographic depth of field.
Most of the files were modified to comply with the new shader code style.
This also adds partial support for panoramic cameras (bokeh and
anamorphic are still buggy).
This cleans up a lot of confusion / complexity in the setup code.
Setup is closer to what Cycles does now.
It also duplicates some buggy behavior of Cycles for now, until this
is fixed.
This moves view resolution handling to the `Camera` class, which will
in the future clip and trim each view in panoramic projection.
There is a new `CameraView` that contains the `DRWView` and subview.
This way each `ShadingView` is associated with a unique `CameraView`.
`ShadingView` & `CameraView` are all allocated & defined at creation time,
but only the ones activated by `Camera` will be rendered.
This option makes accumulation happen in a pre-exposed logarithmic
color space. This reduces the importance of bright pixels in the pixel
filter, which results in less aliasing in these areas.
There are a few cases where one might want to disable this option to
match Cycles better.
Render mode is really close to what the viewport render does.
Film output is done by resolving the data to the next (double buffered)
framebuffer and reading it back.
This also includes a bit of cleanup in the naming of the init() and sync()
functions.
This commit adds the Film class that handles accumulation of color and
non-color data using an arbitrary projection and filter size.
A weighted accumulation (sum) is done into a data buffer with an
additional weight buffer. The sum being per pixel, it allows input
textures that are not aligned with the output pixel grid.
Panoramic projection works by rendering a cubemap (6 views) of the scene
at the camera position. The Film filter pass then gathers the pixels
using the correct panoramic projection, ensuring correct Anti-Aliasing.
For non-color data (depth, normals) we only keep the closest value to
the target pixel center (simulating a filter size of 0).
Color data is accumulated in a log space to improve Anti-Aliasing output.
This is hardcoded for now.
Larger filters have poor performance but converge very quickly.
Code-wise: this commit renames some modules to avoid possible confusion
and have better meaning. It uses namespaces instead of prefixes.
Added a new eevee_shared.hh file to share structure and enum definitions
between GLSL and C++.
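A minimal sketch of the weighted accumulation and resolve for a single channel (the filter weight computation itself is omitted):

```cpp
#include <vector>

/* Sketch: per-pixel weighted accumulation into a data buffer plus a weight
 * buffer, followed by a resolve that divides by the accumulated weight. */
struct FilmBuffers {
  std::vector<float> data;   /* Accumulated weighted values (one channel here). */
  std::vector<float> weight; /* Accumulated filter weights. */
};

static void film_accumulate(FilmBuffers &film, int pixel, float sample, float filter_weight)
{
  film.data[pixel] += sample * filter_weight;
  film.weight[pixel] += filter_weight;
}

static float film_resolve(const FilmBuffers &film, int pixel)
{
  const float w = film.weight[pixel];
  return (w > 0.0f) ? film.data[pixel] / w : 0.0f;
}
```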
Same idea as the previous commit. This cleans up the interface and puts all
viewport-related data inside the `DRWData` struct.
The draw manager is responsible for freeing it. That is the main point
of this all. In the future, we can have a custom freeing method for each
engine.
This also moves the DefaultFramebuffer/TextureList and the engine-related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is set up.
We now use a temporary DRWData instead of creating a dummy viewport.
@@ -349,8 +348,8 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
     scrambling_distance:FloatProperty(
         name="Scrambling Distance",
         default=1.0,
-        min=0.0,max=1.0,
-        description="Reduce randomization between pixels to improve GPU rendering performance, at the cost of possible rendering artifacts if set too low. Only works when not using adaptive sampling",
+        min=0.0,soft_max=1.0,
+        description="Reduce randomization between pixels to improve GPU rendering performance, at the cost of possible rendering artifacts if set too low",
     )
     preview_scrambling_distance:BoolProperty(
         name="Scrambling Distance viewport",
@@ -361,7 +360,7 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
     auto_scrambling_distance:BoolProperty(
         name="Automatic Scrambling Distance",
         default=False,
-        description="Automatically reduce the randomization between pixels to improve GPU rendering performance, at the cost of possible rendering artifacts. Only works when not using adaptive sampling",
+        description="Automatically reduce the randomization between pixels to improve GPU rendering performance, at the cost of possible rendering artifacts",
     )
     use_layer_samples:EnumProperty(
@@ -1353,7 +1352,7 @@ class CyclesPreferences(bpy.types.AddonPreferences):