This uses a StorageBuf as the source of indirect dispatch arguments.
The user needs to make sure the parameters are in the right order.
There is no support for an argument offset for the moment as there is no
need for it, but this might be added in the future.
Note that the indirect buffer is synchronized at the backend level. This is
done for practical reasons and because this feature is almost always used
for GPU-driven pipelines.
This is a faster way to clear a buffer than re-uploading new data.
It is equivalent to `memset` and runs directly on the GPU.
This is better for clearing huge buffers and avoids the sync cost of a data upload.
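On an OpenGL backend such a clear can be expressed with a single driver call; a minimal sketch of the idea (assuming a bound GL 4.3+ context, not Blender's actual backend code):
```
/* Clear an entire SSBO to zero on the GPU, analogous to memset(). */
#include <epoxy/gl.h> /* Or any loader exposing OpenGL 4.3 entry points. */

void clear_ssbo_to_zero(GLuint ssbo)
{
  const GLuint zero = 0;
  glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
  glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, &zero);
}
```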
This was getting in the way in multiple instances. Compute shader dispatches
are still made in the presence of the last bound framebuffer even if they
do not interact with it.
This is supposed to hold the latest improvements from the EEVEE rewrite branch.
Note that a restart is necessary in order for the engine to appear.
The registration code is a bit convoluted as it needs to run after WM_init.
Add a new operator to the Graph Editor that blends selected keyframes
to their default value.
The operator can be accessed from
Key>Slider Operators>Blend To Default Value
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D9376
Ref: D9367
As the grease pencil simplify is a sub-option of the general simplify, if the general switch is disabled, the grease pencil simplify must be disabled too.
This patch also disables the UI panel.
Create a function on CurvesGeometry that can also be used for an edit
mode operator in the future. Dealing with CustomData directly means the
code is a bit more verbose than would be ideal, but this would be a
simple thing to clean up in the future if we get an attribute API here.
Also change the reverse node to first work on a read-only geometry
component, and only get write access if there is a curve selected.
Differential Revision: https://developer.blender.org/D14375
This will mostly just remove the overhead of converting
to and from the old curves type, though it also does open
some opportunities for multi-threading in the future.
A mistake in 8538c69921. The offsets include the segment at the
corresponding index, but the evaluated offset calculation was adjusting
the offset for the second to last segment.
Make the new curves' translate and transform functions also affect
the handle position attributes.
Differential Revision: https://developer.blender.org/D14372
Resizing nodes used the cursor location when the event was triggered
instead of the drag-start; harmless, but it means the drag location isn't
under the cursor, especially with a high drag threshold.
Noticed when investigating other drag issues,
unrelated to recent changes to drag behavior.
`RNA_def_struct_ui_text(srna, ...)` was reused for `is_valid` and `is_muted`,
which would set the struct's documentation to theirs (actually to that of the last
call).
`RNA_def_property_ui_text(prop, ...)` should be used for the properties.
Remove the conversion to and from `CurveEval` by supporting the
new Curves data-block in the node. This allows for some simplifications
to the code, as well as a fix for transferring curve domain attributes
when duplicating the curve domain.
The performance improvements (observed through the timings overlay)
can be relatively massive with many curves. When duplicating 10000
4-point curves to become 2 million curves, I observed an approximate
150x improvement, from about 3 seconds to about 20ms.
- Pass less redundant information in function arguments.
- Use `IndexRange` more instead of direct offset calculations.
- Use specific geometry component types for specialized functions.
- Use const arguments.
- Declare variables closer to where they are created or used.
- Remove some redundant logic.
- Simplify the description for the output geometry.
The menu for Timeline > Keying > Active Keying Set wouldn't show up.
Caused by d8e3bcf770. The function to attach search menu data to the button
would now be called twice with different arguments for the same button.
That shouldn't be an issue in general, but the first call had the unexpected
side effect of disabling the button. Make sure it's re-enabled when
the second call sets the proper search data.
Caused by rB43bc494892c3: moving this 'new id' relink to generic
remapping code added the overhead of proper, generic post-processing,
compared to the special cases the previous code was designed to handle.
Fortunately, with the recent 'multi-remapping' work we can easily rewrite
that new id relink code to use the multi-remapping approach too.
No behavioral change is expected from this commit, besides the improved
performance (essentially restored to what it was before
rB43bc494892c3).
This reduces the complexity and avoids framebuffer setup costs.
This also removes the prefiltering of the glossy cubemaps in favor
of a simple bilinear filtering of the mipchain.
Change the sample mode to not duplicate the last vertex of the
stroke and instead use the cyclic flag to close previously cyclic
strokes. This is necessary for the following modifiers.
Reviewed By: NicksBest
Differential Revision: http://developer.blender.org/D14359
From hair particle mode:
* Add
* Comb
* Cut
* Grow
New:
* Delete
Only comb and delete are used at the moment (by the new tools which are
under experimental).
Selecting an object that was already active & selected would de-select
it when the cursor was over the object's center.
This was caused by [0], which added a check that assumed more than one
hit from GPU_select meant there were multiple objects to select from.
This is not necessarily the case since bones, camera tracks or the
object's own center can add additional hits.
Resolve by keeping track of the best hit with & without the
active-selected object, only using the non-active-selected hit if it's found.
[0] 1550573360
Support for differentiating the tweak tool from the 3D cursor when
select is set to RMB.
This is currently an experimental preference:
Tweak Tool: Left Mouse Select & Drag
When enabled the tweak tool can now tweak the existing selection
without de-selecting first, a single click can be used to replace
the selection.
This matches selection in the graph & node editors.
This preference is only available with "Developer Extras" enabled.
Ref T96544.
Needed so mapping selection to click doesn't pass the click event
through to setting the 3D cursor, for example.
While this doesn't happen with the default key-map, setting selection
to LMB-click would set the 3D cursor as well (when the selection
fell through to nothing).
De-selecting objects meant that selecting a bone would de-select
all the other pose objects - making exiting & entering pose-mode
lose the current set of pose objects.
Match edit-mode behavior: avoid de-selecting objects in the current mode
(unless the action is explicitly performed in the outliner, for example).
Previously setting the 'basact' to NULL was done, but this wasn't
so simple to use with deselect_all which needs to check if there was
anything found at the cursor.
Add a 'handled' variable to differentiate this case, when set
don't attempt object selection.
While basic single track selection worked,
toggling and de-selection has been broken since at least 2.83.
Support SelectPick_Params with the exception of deselect_all
which doesn't make sense for tracks as de-selecting all objects
is expected in that case.
This was an involved operation to include inline,
making ed_object_select_pick more difficult to follow.
Prepare for track selection to properly support SelectPick_Params.
Currently this isn't used in the key-map; it will eventually
allow the 3D viewport's tweak tool to match the behavior of other
editors that support tweaking a selection without first de-selecting
all other elements.
This is only part of the experimental "Full Frame" mode (disabled
by default). See T88150.
Currently the viewer node uses buffer padding to display the image offset
in the backdrop, as a temporary solution implemented for {D12466}.
This solution is inefficient memory- and performance-wise. Another
issue is that the padding becomes part of the image when saved.
This patch instead sets the offset in the Viewer node image
as variables and makes the backdrop take it into account
when drawing the image or any related gizmo.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D12750
The previous fix including `<algorithm>` was an improvement
but did not address the actual error, which appears to be that `int64_t` is
long long int on one platform but just long int on another.
The fix includes the template argument directly.
This patch adds evaluation for NURBS, Bezier, and Catmull Rom
curves for the new `Curves` data-block. The main difference from
the code in `BKE_spline.hh` is that the functionality is not
encapsulated in classes. Instead, each function has arguments
for all of the information it needs. This makes the code more
reusable and removes a bunch of unnecessary complications
for keeping track of state.
NURBS and Bezier evaluation works the same way as existing code.
The Catmull Rom implementation is new, with the basis function
based on Cycles code. All three types have some basic tests.
For NURBS and Catmull Rom curves, evaluating positions is the
same as any generic attribute, so it's implemented by the generic
interpolation to evaluated points. Bezier curves are a bit special,
because the "handle" control points are stored in a separate attribute.
This patch doesn't include generic interpolation to evaluated points
for Bezier curves.
Ref T95942
Differential Revision: https://developer.blender.org/D14284
With the deferred pipeline, the materials need different shading groups
depending on their matflags.
Note that this is potentially slower because execution order of shaders
may now be random. This might be fixed in a later commit.
Similar to other changes to ID remapping, gives huge speedups in some
cases, like certain types of liboverride creation.
Case from {T96092} goes from 1725 seconds (almost 30 minutes) to 45
seconds to generate the liboverride, on my machine.
Reviewed By: jbakker
Maniphest Tasks: T96092
Differential Revision: https://developer.blender.org/D14240
Ever since d5b72fb06c, shader nodes have been in the
`blender::nodes` namespace, so they don't need to use that to access
Blender's C++ types and functions.
Somehow exposed after 943b919fe8, linking could fail because
bf_nodes was not properly configured as a dependency of bf_nodes_shader.
Also add the dependency to the geometry nodes module.
To make porting to other architectures easier, clarifying that this does not
need to be supported. The unused parallel_reduce implementation assumed warp
size 32, but is easy to update if we ever need it in the future.
When using inverted filling and clicking inside a closed area (instead of outside, as is expected), the algorithm that detects the contour to fill is unable to find the filling shape and tries to fill outside of the valid index range.
The infinite loop was allocating more memory with each iteration and the process continued while there were system resources left, finally crashing the system.
As the tool in negative mode is designed to fill all areas when you click outside of any shape, the algorithm now checks if the outline is not working as expected and cancels the filling process.
This commit removes the implementations of legacy nodes,
their type definitions, and related code that becomes unused.
Now that we have two releases that included the legacy nodes,
there is not much reason to include them still. Removing the
code means refactoring will be easier, and old code doesn't
have to be tested and maintained.
After this commit, the legacy nodes will be undefined in the UI,
so 3.0 or 3.1 should be used to convert files to the fields system.
The net change is 12184 lines removed!
The tooltip for legacy nodes mentioned that we would remove
them before 4.0, which was purposefully a bit vague to allow
us this flexibility. A poll in a devtalk post showed that the
majority of people were okay with removing the nodes.
https://devtalk.blender.org/t/geometry-nodes-backward-compatibility-poll/20199
Differential Revision: https://developer.blender.org/D14353
Solved by introducing a variant of MEM_cnew which behaves
as a copy-constructor for trivial types.
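A minimal sketch of the idea behind such a variant (the helper name and exact signature here are assumptions, not Blender's actual API):
```
/* Allocate and byte-copy a trivially-copyable struct, so C++ code never needs
 * an implicitly defined copy constructor that touches deprecated DNA fields. */
#include <cstdlib>
#include <cstring>
#include <type_traits>

template<typename T> T *MEM_cnew_copy_sketch(const T &src)
{
  static_assert(std::is_trivially_copyable_v<T>, "Only valid for trivial DNA-style types");
  T *ptr = static_cast<T *>(std::calloc(1, sizeof(T)));
  if (ptr != nullptr) {
    std::memcpy(ptr, &src, sizeof(T));
  }
  return ptr;
}
```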
An alternative approach would be to surround DNA structs with clang/gcc
diagnostics push/modify/pop so that implicitly defined constructors
and copy operators are allowed to access deprecated fields.
The downside of the DNA approach is that it would require some way to
easily apply diagnostics modifications to many structs, which is not
possible currently.
The newly added MEM_cnew has other good use cases, so it is easiest to
use this route, at least for now.
Differential Revision: https://developer.blender.org/D14356
Resolves a fair amount of noisy warnings with default build on macOS.
Tested using render_layer render test which includes Freestyle layer.
Differential Revision: https://developer.blender.org/D14355
Meta-element selection now follows conventions for other picking
functions (e.g. EDBM_select_pick).
- Split meta-element find-nearest into a separate function.
- Cycle the meta-element starting from the active & selected
instead of comparing & setting a static variable.
- Order elements using depth (from front-to-back)
when cycling multiple elements.
Volatile fields were introduced to the RenderResult struct years ago[1].
However, volatile is most likely not doing what it was intended to do
in this instance, and is problematic when moving files to c++ (see
discussion from D13962). There are complex rules around what happens to
these fields but none of them guarantee what the above commit alluded to.
This patch drops the volatile and cleans up the APIs surrounding it.
[1] rB7930c40051ef1b1a26140629cf1299aa89eed859
Passing on all platforms:
https://builder.blender.org/admin/#/builders/18/builds/338
Differential Revision: https://developer.blender.org/D14298
- Rename 'location' to 'mval', typically used for region cursor coords.
- Rename 'retval' to 'changed', typically used for operators
when their return value depends on a change being made.
- Add SelectPick_Params struct to make picking logic more
straightforward and easier to extend.
- Use `eSelectOp` instead of booleans (extend, deselect, toggle)
which were used to represent 4 states (which wasn't obvious); see the sketch after this list.
- Handle deselect_all when picking instead of view3d_select_exec,
de-duplicating de-selection which was already needed when replacing
the selection in picking functions.
- Handle outliner update & notifiers in the picking functions
instead of view3d_select_exec.
- Fix particle select deselect_all option which did nothing.
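A minimal sketch of the selection-operation enum mentioned in the list above (the names follow Blender's `eSelectOp`, but treat the exact set of values as an assumption):
```
/* One explicit operation instead of three booleans whose combinations were ambiguous. */
enum eSelectOp {
  SEL_OP_ADD, /* Extend the existing selection. */
  SEL_OP_SUB, /* Remove from the existing selection. */
  SEL_OP_SET, /* Replace the selection. */
  SEL_OP_AND, /* Intersect with the existing selection. */
  SEL_OP_XOR, /* Toggle. */
};
```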
As proposed in T95802, this adds buttons to a new column on the right to modify
the override in the Library Override display mode. Some further usability
improvements are planned. E.g. this does not yet expand collections (modifiers,
constraints, etc) nicely or group modified properties of a modifier together.
Vector properties with more than 3 items or matrices aren't displayed nicely
yet, they are just squeezed into the column. If this actually becomes a problem
there are some ideas to address this.
Differential Revision: https://developer.blender.org/D14268
While the correlation may not work well with adaptive sampling, in practice
this appears to work OK in most cases.
Automatic scrambling distance uses the minimum samples from adaptive sampling,
which provides a good default estimate to avoid artifacts.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13325
This allows users to type in values larger than 1, for use in conjunction
with automatic scrambling distance.
Contributed by Alaska.
Differential Revision: https://developer.blender.org/D13580
When the light direction is not pointing away from the geometric normal and
there is a shadow terminator offset, self intersection is supposed to occur.
Some old platforms and drivers have a limited number of SSBO bindings per
compute shader. This disables GPU subdivision if we cannot possibly
bind all required buffers within this limit.
For now the maximum number of buffers used by the GPU code is hardcoded,
but it will be programmatically detected when shader creation is automated.
Ref D14337
This adds detection of the maximum number of shader storage buffer
bindings that is supported on the current platform. This can be
useful to turn off features that require compute shaders but use
more buffer bindings than available.
Differential Revision: https://developer.blender.org/D14337
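A minimal sketch of how such a limit can be queried on an OpenGL backend (illustrative only, not Blender's capabilities code):
```
/* Ask the driver how many shader storage buffer bindings are supported. */
#include <epoxy/gl.h> /* Or any loader exposing OpenGL 4.3 entry points. */

int max_ssbo_bindings()
{
  GLint bindings = 0;
  glGetIntegerv(GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS, &bindings);
  return bindings;
}
```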
The performance issue was noticeable when tracking a lot of tracks
which are using keyframe pattern matching. What was happening is that
at some point the cache gets filled and the furthest-away frame gets removed
from the cache: the frame at the marker's keyframe gets removed and needs
to be re-read from disk on the next tracking step.
This change makes it so frames at markers' keyframes are not removed
from the cache during tracking.
Steps to easily reproduce:
- Set cache size to 512 Mb.
- Open image sequence in clip editor
- Detect features
- Track all markers
Originally was reported by Rik, thanks!
The modified source Armature ID in the join operation was not properly
tagged as such for the depsgraph (and therefore memfile undo).
Issue caused/revealed by rBe648e388874a.
Should be backported to 3.1 should we make a corrective release.
After rB9b298cf3dbec, the `StructRNA` declarations can now be accessed via
`RNA_prototypes.h`.
Also, since all redundant declarations are now removed,
`_WM_MESSAGE_EXTERN_BEGIN` and `_WM_MESSAGE_EXTERN_END` are also no
longer needed.
Differential Revision: https://developer.blender.org/D14342
Caused by 0cb5eae9d0 which restored
support for 3D depth when selecting gizmos - making it difficult
to select single lines drawn in front of other gizmos.
Previously the first hit was always used.
Resolve by using a margin around arrow stems when selecting
which was already done for 2D arrows.
This commit adds three nodes:
- `Remove Attribute`: Removes an attribute with the given name
- `Named Attribute`: A field input node
- `Store Named Attribute`: Puts results of a field in a named attribute
They are added behind a new experimental feature flag, because further
development of attribute search and name dependency visualization will
happen as separate steps.
Ref T91742
Differential Revision: https://developer.blender.org/D12685
So far it was needed to manually declare a new RNA struct in `RNA_access.h`.
Since 9b298cf3db we generate a `RNA_prototypes.h` for RNA property
declarations. Now this also includes the RNA struct declarations, so they don't
have to be added manually anymore.
Differential Revision: https://developer.blender.org/D13862
Reviewed by: brecht, campbellbarton
Lets `makesrna` generate a `RNA_prototypes.h` header with declarations for all
RNA properties. This can be included in regular source files when needing to
reference RNA properties statically.
This solves an issue on MSVC with adding such declarations in functions, like
we used to do. See 800fc17367. Removes any such declarations and the related
FIXME comments.
Reviewed By: campbellbarton, LazyDodo, brecht
Differential Revision: https://developer.blender.org/D13837
This patch removes all duplicate code for the same Bake modifier logic.
Some modifiers still need custom bake functions and cannot use this generic bake.
Steps to reproduce:
- Add image sequence to movie clip editor.
- Set cache limit to a low value in the user preferences.
- Playback until old frames starts to be removed from cache.
- Jump to the beginning of the image sequence.
The dead-lock comes from two factors:
- Due to the global nature of the cache limiter, calls need to be
guarded with locks.
- Image buffers stored in the cache can have their own cache
(which is used for color management).
Didn't find a better solution than to use a recursive lock.
Kind of makes sense since the thread-guardable resource is
recursive (a moviecache can have nested moviecaches).
Differential Revision: https://developer.blender.org/D14331
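A minimal sketch of the recursive-lock pattern described above (using `std::recursive_mutex` purely as an illustration; Blender's cache uses its own threading primitives):
```
#include <mutex>

/* A cached image buffer may own a nested cache of its own, so enforcing the
 * limit can re-enter this function; a recursive mutex makes that safe. */
static std::recursive_mutex limiter_mutex;

void enforce_cache_limit()
{
  std::lock_guard<std::recursive_mutex> lock(limiter_mutex);
  /* Freeing a cached buffer here may call back into enforce_cache_limit()
   * for its nested cache, which would dead-lock with a plain mutex. */
}
```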
This reverts commit 1558b270e9.
An earlier commit (rB101fadcf6b93c) introduced some new functionality,
which was overlooked in reviewing this commit & got broken.
Will re-commit after the issue has been fixed.
Ref: D13687
If the scale in the offset modifier was set to a value lower than -1,
the object would get mirrored. The problem was that the thickness
was set to 0 by that. This fix makes the thickness calculation only
use the absolute values.
Differential Revision: http://developer.blender.org/D14324
For now just assume that a node group without output sockets is
an output node. Ideally, we would use run-time information stored
on the node group itself to determine if the group contains a
top-level output node (e.g. Material Output). That can be
implemented separately.
In the larger scheme of things, top-level outputs within node
groups seem to break the node group abstraction and reusability
a bit.
Correction to the calculation of font size used for the tabs on the
Sidebar so that they are always the same size as other content on the
panel.
See D14322 for more details.
Differential Revision: https://developer.blender.org/D14322
Reviewed by Brecht Van Lommel
Instead of allocating a vector of the basis weights cache for
each evaluated point, allocate a single vector for all of the
weights. This should reduce memory usage by avoiding the
overhead of storing many vectors. I noticed a small performance
improvement to evaluated position calculation with an order of 5,
which is larger than `Vector`'s default inline buffer capacity.
This change is possible because of previous commits that
made the basis cache for each evaluated point always have
the same "order" size.
Currently a single buffer is used as working space for all evaluated
points. In order to make evaluations more independent, opening
options like multi-threading in the future, instead use a separate
array for each call. Using an inline buffer capacity higher than
the default allows a few percent performance improvement, and removes
allocations for every evaluated point.
The step after calculating the NURBS basis for a single evaluated
point trimmed extra zeroes from the weights. However, in practice
this rarely did anything, only for the first and last evaluated point
of certain knot configurations. Remove it in order to simplify code.
Also use a separate span for the result, to clarify its length.
Previously, the popover menu in sculpt/texture paint mode did not
take into account the `UnifiedBrushSettings` for the unit.
To fix this, the behavior of `class _draw_tool_settings_context_mode` is matched
by checking the same conditions when setting up the UI of the right-click popover menu.
Fixes T81616
Reviewed By: #sculpt_paint_texture, pablodp606
Maniphest Tasks: T81616
Differential Revision: https://developer.blender.org/D9168
Fix label alignment in the top bar by using `ui_text_icon_width_ex` instead of `w_hint`.
Old:
{F12733743}
New:
{F12733742}
Fixes T61558
Reviewed By: Severin
Maniphest Tasks: T61558
Differential Revision: https://developer.blender.org/D13552
This commit fixes T96229.
The maximum possible radius was being used for 3 point
splines, regardless of the current radius.
Reviewed By: HooglyBoogly
Maniphest Tasks: T96229
Differential Revision: https://developer.blender.org/D14311
When OneDrive files are offline, show preexisting thumbnails.
See D13930 for details.
Differential Revision: https://developer.blender.org/D13930
Reviewed by Julian Eisel
Add the ability to move `CurvesGeometry` without copying its attributes
and data. The benefit is more intuitive management of the data-block
copying, and less overhead for copying in some cases. The "moved-from"
source is left in an empty but valid state. A test file is added to test
the move constructor.
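A minimal sketch of the move behavior described above, using a simplified stand-in type (the real `CurvesGeometry` members differ):
```
#include <cstdlib>

/* Moving steals the allocated storage and leaves the source empty but valid. */
struct CurvesSketch {
  float (*positions)[3] = nullptr;
  int point_num = 0;

  CurvesSketch() = default;
  CurvesSketch(CurvesSketch &&other) noexcept : positions(other.positions), point_num(other.point_num)
  {
    other.positions = nullptr; /* Allocation of the source is omitted here. */
    other.point_num = 0;
  }
  ~CurvesSketch()
  {
    std::free(positions);
  }
};
```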
Before this patch, users had to switch render engines just to change how the
hair should be displayed in solid and material preview viewport shading modes.
Differential Revision: https://developer.blender.org/D14290
By not resetting the line width, other scopes were using the
wrong line thickness.
Contributed by RedMser.
Differential Revision: https://developer.blender.org/D12030
My own error when committing 0602852860. It appears that
the "Endpoint" knots modes should be handled together, otherwise
out of bounds array access is possible.
Win32: Replace SHGetFileInfoW as means to get friendly display names
for volumes because it causes long pauses for disconnected remote
drives.
See D14305 for more details.
Differential Revision: https://developer.blender.org/D14305
Reviewed by Brecht Van Lommel
There was an error with the attribute API implementation for vertex
groups. If the vertex group layer referenced an original mesh, it wasn't
properly duplicated for writing.
When rendering using the command line, the curvature wasn't rendered. The reason
was that the ui_scale wasn't initialized and therefore the same pixels were
sampled to detect the curvature. This is fixed by setting the ui_scale to 1 for any
image render.
Change early drag evaluation added in
1f1dcf41d5 to only apply to drag events
from mouse buttons. Otherwise pressing two keyboard keys at once would
create a drag event for the first pressed key. While this didn't cause
any bugs as far as I know, this behavior makes most sense for drags
that come from cursor input.
Only set press events in the window's eventstate, not the current event,
since it's not useful for these to be set as the current event's press values.
This makes it possible for a press event to access values for the
previous press.
Activating a gizmo used the window's eventstate, which may have values
newer than the event used to activate the gizmo.
This meant transform's check for the key that activated it
could be incorrect.
Support passing an event when calling operators to avoid this problem.
It was possible that a render thread will be freeing cache while the
interface is iterating over cache items to build cache line.
Found while looking into T94738. It might be a fix, but I am unable
to reproduce the original issue, so I can not know for sure whether
there is something else going on or not.
This function was copied from txt_sel_to_buf, including unnecessary
complexity to support selection as well as checks for the cursor
which don't make sense when copying the whole buffer.
Use a simple loop to copy all text into the destination buffer.
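A minimal sketch of the simplified copy (stand-in types, not the exact text API):
```
#include <cstddef>
#include <cstring>

/* Simplified stand-in for the text editor's linked list of lines. */
struct TextLine {
  TextLine *next;
  const char *line;
  int len;
};

/* Copy every line into `dst` with a trailing newline; no selection or cursor
 * handling is needed when the whole buffer is wanted. */
size_t copy_all_text(char *dst, const TextLine *first_line)
{
  size_t offset = 0;
  for (const TextLine *line = first_line; line != nullptr; line = line->next) {
    std::memcpy(dst + offset, line->line, size_t(line->len));
    offset += size_t(line->len);
    dst[offset++] = '\n';
  }
  dst[offset] = '\0';
  return offset;
}
```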
This patch enables all 8 combinations of NURBS modes: Cyclic,
Bezier and Endpoint. It also removes the restriction on Bezier NURBS order.
The most significant changes are mode combinations bringing new
meaning. D13891 contains a scheme showing NURBS with the same control
points in the different modes, and also a further description of each possible case.
Differential Revision: https://developer.blender.org/D13891
Regression in 265d97556a.
Where iterating directly on a property group failed, e.g.:
`iter(group)`, tests missed this since only `group.keys()`
was checked.
`3DView`'s `use_snap` option has little or nothing to do with using
snapping in `UV`, `Nodes` or `Sequencer`.
So there are no real advantages to keeping these options in sync.
Therefore, individualize the option to use snap for each "spacetype".
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D13310
The result handle attributes for non-bezier types are zeroed.
By mistake though, the entire array was zeroed, not just the
area corresponding to that curves source.
When viewing a meta strip, it had an orange color. This was caused by an
overflow because of a hard-coded offset. The theme got darker, and the background
was also set again further in the code, but that redundant drawing was removed in
f4492629ea.
Realizing and copying attributes of meshes, curves, and points are very
similar processes, but currently the logic is duplicated three times in
the realize instances code. This commit combines the implementation
for copying generic attributes and creating the result id attribute.
The functions for threaded copying and filling should ideally be in
some file elsewhere, since they're not just useful here. But it's not
clear where they would go yet.
Differential Revision: https://developer.blender.org/D14294
Caused by an integer overflow in the tiling utilities of OptiX SDK.
Seems for now it's easier to copy and modify code to our sources so
that we don't need to bump SDK version requirement (which might lead
to an increased driver requirement as well).
There are still some fixes needed from a newer driver to have such
denoising to work properly: Windows requires 511.79, Linux 510.54.
Thanks Patrick for investigation!
Differential Revision: https://developer.blender.org/D14300
Mark the chain length of regular and spline IK constraints
non-animatable. Changing the IK chain length requires a rebuild of
depsgraph relations. This makes it unsuitable for animation. It's better
to simply avoid having this property animatable than to allow animation
but produce unstable results.
Ref: T96203
When dragging with a large threshold (using a tablet for example),
it's possible to press another key before the drag threshold is reached.
So tweaking then pressing X would show the delete popup instead of
transforming along the X-axis.
Now key presses while dragging cause the drag event to be evaluated
before the key press.
Note that to properly base the mouse-move event on the previous
state the last handled event is now stored in the window.
Without this the inserted mouse-move event may contain invalid values
from the next event (its modifier state or other `prev_*` values).
Requested by @JulienKaspar.
Regression in 08d8eee006 caused
emulate-middle mouse to work once, clearing the modifier key.
Now the modifier key from emulated mouse events is never stored
in the windows event-state.
The realize instances code used "assign", but the attribute buffers on
the result aren't necessarily initialized. This doesn't make a difference
for trivial types like `int`, but it would with more complex types.
This commit replaces the temporary conversion to `CurveEval` with
use of the new curves data-block. The end result is that the
process looks more like the other components-- somewhere in between
meshes and point clouds in terms of complexity.
The final result is that the logic between meshes and curves is
very similar. There are a few different strategies to reduce
duplication here, so I'll investigate that separately.
There is some special behavior for the radius and handle position
attributes. I used the attribute API to store spans of these
attributes temporarily. Using access methods on `CurvesGeometry`
would be reasonable too, but storing spans separately feels a bit more
predictable for now.
There should be significant performance improvements in some cases,
I haven't tested that specifically though.
Differential Revision: https://developer.blender.org/D14247
Passing a `TreeElement *` instead of its `TreeStoreElement *` to
`TSELEM_OPEN()` would seem to work but cause a bug. Add a type check
that will cause a compiler error if it fails.
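A minimal sketch of such a compile-time check (illustrative, not the actual macro):
```
#include <type_traits>

struct TreeStoreElement {
  short flag; /* Simplified; the flag bit tested below is illustrative. */
};

/* Reject any pointer type other than TreeStoreElement *, so passing a
 * TreeElement * fails at compile time instead of silently misbehaving. */
template<typename T> bool tselem_is_open_sketch(const T *tselem)
{
  static_assert(std::is_same_v<T, TreeStoreElement>, "Expected a TreeStoreElement pointer");
  return (tselem->flag & 1) == 0;
}
```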
I don't see a reason to use 2x the element height for the "in-view"
checks. That seems incorrect (although shouldn't cause issues). So
remove that, I don't expect behavior changes.
For whatever reason the "in-view" check was using 2x the element height.
From what I can see this isn't needed, so I'll remove it in a follow-up
commit.
Restrict a lot of deletion/moving around of liboverride objects and
collections in the Outliner.
While some of those operations may be valid in some specific cases, in
the vast majority of cases they would just end up breaking override
hierarchies/relationships.
Part of T95708/T95707.
Blender crashes when a multi-user grease pencil object has vertex
groups and is modified by modifiers, layer transform or parenting.
The fix makes sure that we copy the vertex group names list.
Reviewed By: antoniov
Maniphest Tasks: T96233
Differential Revision: https://developer.blender.org/D14275
Fix T95462: Partly transparent objects appear to glow in the dark
The issue was caused by an incorrect check for an exceeded number
of transparent bounces: the same maximum distance was used
for picking up the N closest intersections and counting the overall
number of intersections.
Now the intersection count uses the ray distance, which
matches the way the Embree and OptiX implementations work.
Benchmark result:
{F12907888}
There is no big time difference in the pabellon scene. The
Victor scene timing doesn't seem to be very reliable as the
variance in time across different benchmark runs is quite
high.
Differential Revision: https://developer.blender.org/D14280
At the time of naming these members, only some event types generated
click events, so it made some sense to differentiate a click.
Now that all buttons support click & drag, it's more logical to use the
prefix "prev_press_" as any press event will set these values.
Also update doc-strings.
Operator area_dupli_invoke should not create modal windows.
See D14253 for details.
Differential Revision: https://developer.blender.org/D14253
Reviewed by Brecht Van Lommel
This commit fixes an issue, where for instance, when merging vertices
with the "Merge by Distance" geometry node, the resulting vertices had
their boolean attributes set unpredictably.
Boolean attributes are implemented as custom data, and when welding
vertices, the custom data for the resulting vertices comes from
interpolating the custom data of the source vertices.
This commit implements the missing interpolation function for the
boolean custom data type. This interpolation function is implemented in
terms of the logical or operation, that is to say, if any of the source
vertices (with a weight greater than zero) have the boolean set, the
boolean will also be set on the resulting vertex.
This logic matches 95981c9876.
In geometry nodes, attribute interpolation generally does not use the
CustomData API for performance reasons, but other areas of Blender
still do.
Differential Revision: https://developer.blender.org/D14172
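A minimal sketch of the OR-based interpolation rule described above (the signature only loosely mirrors the CustomData interpolation callbacks):
```
/* The result is set if any source with a non-zero weight is set. */
void interp_bool_sketch(const bool *sources, const float *weights, const int count, bool *r_result)
{
  bool value = false;
  for (int i = 0; i < count; i++) {
    value |= (weights[i] > 0.0f) && sources[i];
  }
  *r_result = value;
}
```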
This avoids transform jumping, which is a problem when tweaking values a
small amount. A fix for T40549 made box-select use the location
from when the key was pressed.
While it's important for box-select or any operator where it's expected
the drag-start location is used, this is only needed in some cases.
Since the event stores the click location and the current location,
no longer overwrite the event's real location. Operators that depend on
using the drag-start can use this location if they need it.
In some cases the region relative cursor location (Event.mval) now needs
to be calculated based on the click location.
- Added `WM_event_drag_start_mval` for convenient access to the region
relative drag-start location (for drag events).
- Added `WM_event_drag_start_xy` for window relative coordinates.
- Added Python property Event.mouse_prev_click_x/y
Resolves T93599.
Reviewed By: Severin
Ref D14213
Prevents a few unneeded calls to `std::sin`, with an observed
performance improvement of about 1 percent.
Differential Revision: https://developer.blender.org/D14279
When the geometry of the sculpt mesh was replaced when restoring from
a full undo step, the runtime data was not cleared (including any
normals, triangulation data, or any other cached derived data).
In the report, only the invalid normals were observed.
The fix is to simply clear these caches. Later they will be reallocated
and recalculated if necessary. Since the whole mesh is replaced here
anyway, this should be a safe fix.
Differential Revision: https://developer.blender.org/D14282
The new mode only builds the new strokes in each frame.
The code assumes somebody uses "additive" drawing, so that each frame differs only in its NEW strokes. Already existing strokes are skipped.
I used a simple solution: Count the number of strokes in the previous frame and ignore this many strokes in the current frame.
Differential Revision: https://developer.blender.org/D14252
Tangents are computed from UVs on the CPU side when using GPU subdivision
and require that the normals are available there as well (at least for smooth
shading, flat normals can be computed on the fly). This simply adds the missing
normals update call for the `MeshRenderData` setup for the subdivision case.
Differential Revision: https://developer.blender.org/D14278
Some drivers for legacy platforms seem to have issues with compute
shaders, as revealed by T94936. This disables compute shaders for the
known drivers where this issue is present. It is not clear if the issue
is Windows-only or not, so this disables them for all operating systems.
See T94936 for a list of configurations where the issue is reproducible
or not.
Differential Revision: https://developer.blender.org/D14264
Sound properties like volume, pitch and muting are handled in
`BKE_sound_add_scene_sound()`. This is unnecessary, because the
properties are then set to real values in `SEQ_edit_update_muting()` and
`seq_update_seq_cb()`.
Alternatively, it may be better to remove all other updates and leave them
in `BKE_sound_add_scene_sound()`. But I want to add muting per channel,
which is easier and probably cleaner to do with the function
`SEQ_edit_update_muting()`.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14269
Avoid re-creating & freeing the depsgraph for every driver evaluation.
Now the depsgraph is kept in the name-space (matching self),
only re-created when the value changes.
In a contrived test-case with many drivers this gave ~15% overall
speedup for animation playback.
Code cleaning up no-longer-needed override data during the diffing process
would systematically remove override data from linked IDs.
While this is not a critical issue in theory, it has bad consequences at
the very least on user UI/UX, and potentially can cause bugs in some
corner-case scenarios.
The exact behavior of the brushes is still being iterated on, but it
helps having a base implementation that we can work upon.
All of that is still hidden behind an experimental feature flag anyway.
The brushes will get a name in the ui soon.
Differential Revision: https://developer.blender.org/D14241
The issue only happens in release builds on Windows. That said, it was an
actual error in the code. This class is compiled inline in release
builds. When updating multiple textures it would reuse the same memory
to collect the changes. When the previously loaded tile number was exactly the
same but came from a different image, the tile buffer wasn't loaded.
Reviewed By: sergey
Maniphest Tasks: T96213
Differential Revision: https://developer.blender.org/D14274
This part has to be refactored soon anyway, because more types of
curves have to be drawn for the new Curves object.
For now, 3 is a better default than 2, because that matches the
actual resolution of the curve currently.
This makes the brushes more smooth, because the brush has an
effect after every mouse move, instead of only every x pixels.
For this to work well, the brushes have to look at the stroke
segments instead of at the mouse positions separately.
A more fine grained check might be added in the future.
The issue was introduced after the Python 3.10 switch.
An explicit conversion to int fixes the issue.
The same issue is likely to happen with `MovieTrackingSettings.default_search_size`,
so I did the same change over there.
Differential Revision: https://developer.blender.org/D14273
Support this for completeness, as it's simpler to support click-drag
for all event types that support press/release instead of having to
document which kinds of buttons support click-drag.
In practice this didn't cause a bug since assigning hot-keys was also
checking for "press" events (which NDOF_MOTION doesn't generate).
Add ISNDOF_BUTTON macro which is now used by ISHOTKEY to avoid
problems in the future.
The main improvement is a code simplification, because attributes don't
have to be transferred separately for each curve, and all attributes can
be handled generically. Performance improves significantly when the
output contains many curves. Basic testing with a 2 million curve output
shows an approximate 10x performance improvement.
This fixes the second part of T93573 that 8506f3d9fe didn't
properly address. Specifically, outlines of instances still had the
selected color in edit mode in wireframe view. This change is the
same as that commit, just in a different place.
Differential Revision: https://developer.blender.org/D14229
Previously the labels and values in number fields and value sliders
used different padding for the text. This looks weird when they are
placed underneath each other in a column and, as noted by a comment
in the code of `widget_numslider`, they are actually meant to be
aligned.
This patch fixes that by using the same padding that is used for the
number field for the value slider, as well. This also has the benefit
that the labels of the value sliders don't shift anymore when adjusting
the corner roundness.
Differential Revision: https://developer.blender.org/D14091
This quick fix will populate the runtime orig pointers to avoid
crashes when a grease pencil object uses layer transforms, parenting
or modifiers.
This will have to be revisited and fixed with a better solution.
Enables image user nodes to display the file alpha mode, similar to the
colorspace setting.
Also removes image_has_alpha in favor of using BKE_image_has_alpha, because it
did not check if the image actually had an alpha channel, just if the file format
was capable of supporting an alpha channel.
Differential Revision: https://developer.blender.org/D14153
An alpha component can be specified for an object's color. This adds an alpha
socket to the object info shader node allowing for the alpha component of the
object's color to be accessed in the shader editor.
Differential Revision: https://developer.blender.org/D14141
Fix crash when creating a pose asset for which the file list entry in
the asset browser is scrolled off-screen. Because of the
off-screen-ness, it wasn't loaded into memory, which eventually caused
an unexpected NULL pointer.
The solution was to use a different function (`filelist_file_find_id`)
that can reliably find the file list entry, after which the cache entry
can be created.
Reviewed by: Severin
Differential Revision: https://developer.blender.org/D14265
When drawing the driver editor, only skip drawing the "scrubbing area"
and not the Y-axis values or the scroll bars.
The issue was introduced in rBb3431a88465db2433b46e1f6426c801125d0047d
to avoid drawing the playhead in the Driver Editor but also prevented
the text on the y axis from being drawn.
Reviewed by: Severin, sybren
Maniphest Tasks: T95531
Differential Revision: https://developer.blender.org/D14022
Fix an assert by commenting out the assert.
In normal situations all keyframes are sorted. However, while keys are
transformed, they may change order and then this assertion no longer
holds. The effect is that the drawing isn't perfect during the
transform; the "constant value" bars aren't updated until the
transformation is confirmed. Apart from that, the code runs fine, so it
seems like a workable workaround.
The annotation tool is used as a general marking tool by many add-ons. Being able to detect when an annotation is done is very handy for integrating the annotation tool into add-ons and other studio workflows.
The new callback names are: `annotation_pre` and `annotation_post`
Both callbacks are exposed via the Python module `bpy.app.handlers`
Example use:
```
import bpy
def annotation_starts(gpd):
    print("Annotation starts")

def annotation_done(gpd):
    print("Annotation done")
bpy.app.handlers.annotation_pre.clear()
bpy.app.handlers.annotation_pre.append(annotation_starts)
bpy.app.handlers.annotation_post.clear()
bpy.app.handlers.annotation_post.append(annotation_done)
```
Note: The handlers are called for any annotation tool, including eraser.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D14221
Using press to activate the Tweak tool doesn't work well when used as a
fallback tool, as the drag event is often used by the current tool -
making it impossible not to select when dragging (unless the fallback
tool is disabled entirely).
Resolve this by using CLICK events when the Tweak tool is used as a
fallback.
Even though this avoids the crash, check for null-pointer de-reference
since changes to the key-map shouldn't cause operators to crash.
Note that the ability for operators to access a gizmo before it's fully
initialized is a more general problem that should be addressed, but out
of scope for a bug-fix.
Reviewed By: zeddb, JulienKaspar, Severin
Maniphest Tasks: T95591
Ref D14231
When a grease pencil data-block has multiple users and is subject
to modifiers, layer transforms or parenting, performance
(especially playback) is greatly affected.
This was caused by the grease pencil eval process, which does per-instance
full copies of the original datablock in case those
kinds of transformations need to be applied.
This commit changes the behavior of the eval process to do shallow
copies (layers with empty frames) of the datablock instead
and duplicates only the visible strokes.
When we need to have a unique eval data
per instance, only copy the strokes of visible
frames to this copy.
Performance:
On a test file with 1350 frames 33k strokes and 480k points
in a single grease pencil object that was instanced 13 times:
- master: 2.8 - 3.3 fps
- patch: 42 - 52 fps
Co-authored by: @filedescriptor
This patch was contributed by The SPA Studios.
Reviewed By: #grease_pencil, pepeland
Differential Revision: https://developer.blender.org/D14238
Regression in d961adb866;
it's important that the Mesh used for undo storage matches
the shape-key instead of using the coordinates of the Basis key.
Prior to bfdbc78466 a different method of
restoring the basis shape-key coordinates was used (restoring from the
input `Mesh.mvert` array). When undo wrote the edit-mesh into the mesh
this was always NULL so the basis shape keys coordinates were never
used.
Now a parameter has been added so undo can use the active shape for the
mesh's vertex coordinates.
Reviewed By: sergey
Maniphest Tasks: T96205
Ref D14258
Undo would only invalidate GPU textures owned by the image. Textures
that are owned by the editor were not refreshed. This patch
invalidates all the GPU textures by marking the whole image dirty.
This can be improved later as we could add partial updates of GPU
textures.
Reviewed By: mont29
Maniphest Tasks: T96163
Differential Revision: https://developer.blender.org/D14259
Caused by oversight in 2bcf93bbbe. Operator returns `OPERATOR_CANCELLED`
when it should return `OPERATOR_FINISHED`.
Reviewed By: mano-wii, campbellbarton
Differential Revision: https://developer.blender.org/D14243
The "curve_type" was transferred to instances because it isn't a
built-in curve attribute. Then it was interpolated as a point
domain attribute from the instance domain in the realize
instances node.
The fix was just missing from 9ec12c26f1.
`curve_type` needs to be marked as a built-in attribute.
Support drag/drop of materials to Properties Material Slots.
See D13549 for more details.
Differential Revision: https://developer.blender.org/D13549
Reviewed by Julian Eisel
This drastically changes the implementation to leverage arbitrary writes
in order to reduce complexity, reduce memory usage and increase speed.
Since we are no longer dependent on the framebuffer requirement, we can
allocate a bigger texture that fits all views and avoid the extra.
Transparency, holdout and emission are no longer deferred and are now
composited using dual source blending.
The indirect lighting and raytracing are still not functional but will
also get a large refactor of their own.
The node should be faster than in 3.1, for a few reasons:
- It doesn't need to calculate and allocate the curve offsets.
- It doesn't need to de-reference a pointer for each curve.
- The inputs are accessed from the virtual arrays fewer times.
On top of that, I added two other performance improvements:
- The node is multi-threaded when there are many curves.
- There are generated special cases for single value and span inputs.
**Performance**
With a set position node affecting 1 million splines with a selection
based on this node, on an Intel i5 8250U (times are approximate):
| Before | After | Speedup |
| 760 ms | 60 ms | 13x |
Differential Revision: https://developer.blender.org/D14233
When removing a modifier, changing the layer transform or updating
the parent of a grease pencil object that has a multi-user datablock
and animation data, the eval data is not updated properly (after a
frame change). This can also cause memory leaks.
The fix makes sure that we free and reset any runtime copy
(`ob->runtime.gpd_eval`) in `BKE_gpencil_prepare_eval_data`.
Note: As far as we can tell, `ob->runtime.gpd_orig` is unused and could
be removed. The assignment in `BKE_gpencil_prepare_eval_data`
seemed to be unnecessary.
Co-authored-by: @yann-lty
Reviewed By: antoniov
Maniphest Tasks: T96145
Differential Revision: https://developer.blender.org/D14236
Previously you'd have to be careful to drag the image itself. Dragging
anywhere else on the tile (e.g. between the preview and the text, or the
text itself) would trigger border select. This often conflicts with user
expectations and causes frustration when trying to work quickly; I've seen
many people complain about this.
Note that the "hitbox" for dragging is a bit smaller than the tile, to
not make border select by dragging from in-between the tiles too hard.
Differential Revision: https://developer.blender.org/D14228
Just use the image-buffer size and the already provided scale to
determine the size, not the button size (which would always have to
match the scaled image-buffer size or it would give unexpected results).
This patch adds edge selection support for UV editing (refer T76545).
Developed as a part of GSoC 2021 project - UV Editor Improvements.
Previously, selections in the UV editor always flushed down to vertices
and this caused multiple issues such as T76343, T78757 and T26676.
This patch fixes that by adding edge selection support for all UV
operators and adding support for flushing selections between vertices
and edges. Updating UV select modes is now done using a separate
operator, which also handles select mode flushing and undo for UV
select modes. Drawing edges (in UV edge mode) is also updated to match
the edit-mesh display in the 3D viewport.
Notes on technical changes made with this patch:
* MLOOPUV_EDGESEL flag is restored (was removed in rB9fa29fe7652a).
* Support for flushing selection between vertices and edges.
* Restored the BMLoopUV.select_edge boolean in the Python API.
* New operator to update UV select modes and flushing.
* UV select mode is now part of editmesh undo.
TODOs added with this patch:
* Edge support for shortest path operator (currently uses vertex path logic).
* Change default theme color instead of reducing contrast with edge-select.
* Proper UV element selections for Reveal Hidden operator.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D12028
Just showing the library override icon for every item doesn't add much
information, it's just redundant. Displaying the data-block type icon on
the other hand can be useful.
Differential Revision: https://developer.blender.org/D14208
This patch adds a button to add a new scene, but it does not switch to the newly created scene because that would break the storyboarding workflow.
This is a common request from Storyboarding artists.
Reviewed By: mendio, brecht, ISS
Differential Revision: https://developer.blender.org/D14148
When exiting edit-mode set the vertex coordinates to the basis-shape when editing non-basis keys.
Regression in bfdbc78466.
Reviewed By: sergey
Ref D14234
This is needed since 4d0f846b93;
however, changing the operator instead of the event handler is correct,
as accepting a press event should suppress drag events unless
the pass-through flag is set.
This is how select & tweak already works.
It's not useful to wrap vertical motion when dragging markers.
It was too easy to accidentally wrap the cursor to the top of a region,
as markers need to be dragged from the bottom edge of the region.
The logic to cycle selected markers wasn't cycling back to the beginning
of the list.
The marker after the selected marker at the cursor frame was also used
to check if a selection existed, causing dragging to transform all
selected markers to de-select all when dragging the last marker.
The node unnecessarily converted to the old data structure to check if
there were any poly splines. Instead, that warning is just removed,
because the node now still sets resolution values in that case, they
just aren't used (before the values weren't set at all). Either way, it
wasn't clear that looping through all of the curve types was worth
the performance cost here.
In `ffmpeg_read_video_frame` fix assignment used as truth value.
In `ffmpeg_seek_recover_stream_position` loop while return value is
greater or equal to 0.
Passing around coordinates for drawing can be quite confusing, it's
often not clear what they represent and where they are currently.
Instead pass around the tile rectangle for drawing and let all code draw
based on that, it's way more clear that way.
Changes shouldn't be user visible.
Correct misspellings in code comments of "vertex" and "vertices".
See D13932 for more details.
Differential Revision: https://developer.blender.org/D13932
Reviewed by Harley Acheson
This adds a prototype for the first brush that can add new curves by
painting on a surface. Note that this can only be used when the curves
object has a surface object set in the properties panel.
The brush can take minimum distance into account. This allows
distributing curves with a somewhat consistent density.
Differential Revision: https://developer.blender.org/D14207
This commit removes the outline from instances generated from an object
when in edit mode. This takes the change in aa13c4b386 a bit further,
with the idea that instance outlines are more like regular outlines.
Because evaluated object data that doesn't match the original object
type is treated as an instance internally, this fixes the way evaluated
meshes for curves objects have an outline, for example.
See the differential revision for a visual comparison.
Differential Revision: https://developer.blender.org/D14226
This patch enables the outliner to use the correct icon for each
of the curve subtypes (Curve/Surface/Font).
Differential Revision: https://developer.blender.org/D14093
Reviewed by: Julian Eisel
Since removal of tweak events 4986f71848,
box-select is activating while dragging files.
As far as I can tell this used to work because of differences
in the order tweak / click-drag events are handled.
Apply a workaround since dragging files doesn't prevent other parts
of the UI from being activated (it's possible to open menus, for example);
this is something we will likely want to limit which would resolve
this bug too.
There are two issues revealed in the bug report:
- the GPU subdivision does not support meshes with only loose geometry
- the loose geometry is not subdivided
For the first case, checks are added to ensure we still fill the
buffers with loose geometry even if no polygons are present.
For the second case, this adds
`BKE_subdiv_mesh_interpolate_position_on_edge` which encapsulates the
loose vertex interpolation mechanism previously found in
`subdiv_mesh_vertex_of_loose_edge`.
The subdivided loose geometry is stored in a new specific data structure
`DRWSubdivLooseGeom` so as to not pollute `MeshExtractLooseGeom`. These
structures store the corresponding coarse element data, which will be
used for filling GPU buffers appropriately.
Differential Revision: https://developer.blender.org/D14171
Partially revert aa71414dfc
This attempted to make the click-drag event compatible with old tweak
events, but needs to be re-thought since it caused events to be handled
in unexpected situations.
This utility is useful when using C types that own some resource in
a C++ file. It mainly helps in functions that have multiple return
statements, but also simplifies code by moving construction and
destruction closer together.
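A minimal sketch of the idea in generic C++ (not the actual blenlib utility): wrap the C resource and its free function in a small RAII type so that every return path releases it automatically.

```
#include <cstdio>
#include <cstdlib>

/* Generic RAII holder for a C resource that is released by a free function. */
template<typename T, void (*Free)(T *)> class CResource {
 public:
  explicit CResource(T *ptr) : ptr_(ptr) {}
  ~CResource()
  {
    if (ptr_) {
      Free(ptr_);
    }
  }
  CResource(const CResource &) = delete;
  CResource &operator=(const CResource &) = delete;
  T *get() const { return ptr_; }

 private:
  T *ptr_;
};

/* Example C-style API that owns a heap allocation. */
void buffer_free(float *buffer)
{
  std::free(buffer);
}

bool process()
{
  CResource<float, buffer_free> buffer(static_cast<float *>(std::malloc(64 * sizeof(float))));
  if (buffer.get() == nullptr) {
    return false; /* Early return: no manual free needed. */
  }
  buffer.get()[0] = 1.0f;
  return true; /* The destructor releases the buffer on every path. */
}

int main()
{
  return process() ? 0 : 1;
}
```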
Differential Revision: https://developer.blender.org/D14215
This fix contains two parts. There was one critical mistake where
order of two indices was wrong when removing constraint planes from
the array. The other changes are improvements to the used thresholds
to keep everything numerically stable.
Differential Revision: http://developer.blender.org/D14183
This branch was previously run when the action had been handled;
since the action checks were removed it was always running. The assignment
does nothing, but shouldn't be kept.
previously_visible_components_mask was not preserved for Image ID nodes, which
meant it was always detected as newly visible and tagged to be updated, which
in turn caused the geometry nodes using it to be always updated also.
Reviewed By: sergey, JacquesLucke
Maniphest Tasks: T94609
Differential Revision: https://developer.blender.org/D14217
Supporting two kinds of dragging is redundant, remove tweak events as
they only supported 3 mouse buttons and added complexity from using the
'value' to store directions.
Support only click-drag events (KM_CLICK_DRAG) which can be used with
any keyboard or mouse button.
Details:
- A "direction" member has been added to keymap items and events which
can be used when the event value is set to KM_CLICK_DRAG.
- Keymap items are version patched.
- Loading older key-maps are also updated.
- Currently the key-maps stored in ./release/scripts/presets/keyconfig/
still reference tweak events & need updating. For now they are updated
on load.
Note that in general this won't impact add-ons as modal operators don't
receive tweak events.
Reviewed By: brecht
Ref D14214
Clicking and dragging markers accumulates flags from multiple operators
in a way that can't be interpreted when combined.
Follow tweak behavior for cancelling click-drag events.
Replacing tweak key-map items with click-drag caused selection
in the graph/sequencer/node editors to ignore drag events
(all uses of WM_generic_select_modal).
Operators that return OPERATOR_PASS_THROUGH | OPERATOR_FINISHED
result in the event loop considering the event as being handled.
This stopped WM_CLICK_DRAG events from being generated which is not the
case for tweak events.
As click-drag is intended to be compatible with tweak events,
accept drag events when WM_HANDLER_BREAK isn't set or when
wm_action_not_handled returns true.
A simple case of missing the tangent VBO. The tangents are computed from
the coarse mesh, and interpolated on the GPU for the final mesh. Code for
initializing the tangents, and the vertex format for the VBO was
factored out of the coarse extraction routine, to be shared with the
subdivision routine.
Last step of proxy building caused a crash due to a thread race condition
when accessing ed->seqbase.
Looking at the code, it seems that calling IMB_close_anim_proxies on
original strips is unnecessary, so this code was removed to resolve the
issue.
Locking seqbase data may be possible, but it's not very practical as
many functions access this data on demand, which can easily cause the
program to freeze.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D14210
Mousemove events are sent to windows.
On Windows, almost all mousemove events are sent to the window the
mouse cursor is over.
On MacOS, the window with mousemove events is always the active window.
It doesn't matter if the mouse cursor is inside or outside the window.
So, in order for non-active windows to also have events,
`WM_window_find_under_cursor` is called to find those windows and send
the same events.
The problem is that to find the window, `WM_window_find_under_cursor`
only has the mouse coordinates available, it doesn't differentiate
which monitor these coordinates came from.
So the mouse on one monitor may incorrectly send events to a window on
another monitor.
The solution used is to use a native API on Mac to detect the window
under the cursor.
For Windows and Linux nothing has changed.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14197
The subdivision modifier for Grease Pencil handles closed strokes
correctly now and does converge to the same shape as the mesh
subdivision surface.
Differential Revision: http://developer.blender.org/D14218
Create `Curves` directly, instead of using the conversion from
`CurveEval`. This means that the `tilt` and `radius` attributes
don't need to be allocated. The old behavior is kept by using the
right defaults in the conversion to `CurveEval` later on.
The Bezier segment primitive isn't ported yet, because functions
to provide easy access to built-in attributes used for Bezier curves
haven't been added yet.
Differential Revision: https://developer.blender.org/D14212
Currently, any time a Curves data-block is created, the `curves_random`
function runs, filling it with 500 random curves, also adding a radius
attribute. This is just left over from the prototype in the initial
commit that added the type.
This commit moves the code that creates the random data to the curve
editors module, like the other primitives are organized.
Differential Revision: https://developer.blender.org/D14211
This patch hides the MetalRT checkbox for AMD GPUs, pending fixes for MetalRT argument encoding on AMD.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14175
The issue was uncovered by 0f89bcdbeb, but the root cause goes back
to a much earlier design violation in the code: the modifier
evaluation function was modifying the input mesh, which is not something
that is ever expected.
Bring code closer to the older state where such modification is only
done for the object in edit mode.
---
From my own tests it seems to work fine, but extra eyes and testing
are needed.
Differential Revision: https://developer.blender.org/D14191
db4313610c added support for modifier
keys to be released while dragging.
The key release events set wmEvent.prev_type which is used to select the
drag threshold, causing the wrong threshold to be used.
Add wmEvent.prev_click_type which is only set when the drag begins.
The image engine is depth aware. When using tile drawing, the depth was
only updated for the central image, which led to showing the background
on top of other areas.
Also make sure that switching tile drawing leads to an update
of the texture slots.
When the dimensions of an image aren't a multiple of 256, parts of the GPU
textures are not updated. This patch calculates the correct part of
the image that needs to be reuploaded.
Internally the update tiles are 256x256. Due to some miscalculations,
tiles were not generated correctly if the dimensions of the image weren't
a multiple of 256.
Previously we used to cache a float image representation of the image in
rect_float. This adds some incorrect behavior as many areas only expect
one of these buffers to be used.
This patch stores float buffers inside the image engine. This is done per
instance. In the future we should consider making a global cache.
Use a flag for events to avoid adding struct members every time a new
kind of tag is needed - so events remain small.
This also simplifies copying settings as flags can be copied at once
with a mask.
- Rename ED_view3d_win_to_delta `mval` argument to `xy_delta` as it
was misleading since this is a screen-space offset, not a region
relative cursor position (the typical use of the name `mval`).
Also rename the variable passed to this function, which also used the
term `mval` in many places.
- Re-order the output argument of ED_view3d_win_to_delta last and
use an r_ prefix for return arguments.
- Document how the `zfac` argument is intended to be used.
- Split ED_view3d_calc_zfac into two functions as the `r_flip` argument
was only used in some special cases.
Small fixes to the drawing of multi input sockets:
- Make the outline thickness consistent with normal node sockets,
independent from the screen DPI.
- Only highlight multi input sockets when they are actually selected.
- Skip selected multi inputs when drawing normal selected sockets.
Differential Revision: https://developer.blender.org/D14192
Currently the code expects the radius attribute to always exist on the
input Curves. This won't be true in the future though, so the correct
default value of one should be used when creating the data on CurveEval,
where the data is not optional.
This commit improves the drawing of selected node links:
- Highlight the entire link to make it easier to spot where the link
is going/coming from.
- Always draw selected links on top, so they are always clearly
visible.
- Don't fade selected node links when the sockets they are connected
  to are out of view.
- Dragged node links still get a partial highlight when they are only
attached to one socket.
Differential Revision: https://developer.blender.org/D11930
Add a std::move in some places to prevent arrays from being copied.
These cases were potentially optimized by the compiler, but this makes
it more explicit.
Differential Revision: https://developer.blender.org/D14129
New code from the vertex normal refactor cfa53e0fbe combined with older code
from 592759e3d6 that disabled instancing for custom normals and autosmooth
meant that instancing was always disabled.
However we do not need to disable instancing for custom normals and autosmooth
at all, this can be shared between instances just fine.
The constraint operators for delete, apply, copy and copy to selected
were missing null checks and could crash Blender when called wrongly
from the Python API.
Differential Revision: http://developer.blender.org/D14195
This commit changes `CurveComponent` to store the new curve
type by adding conversions to and from `CurveEval` in most nodes.
This will temporarily make performance of curves in geometry nodes
much worse, but as functionality is implemented for the new type
and it is used in more places, performance will become better than
before.
We still use `CurveEval` for drawing curves, because the new `Curves`
data-block has no evaluated points yet. So the `Curve` ID is still
generated for rendering in the same way as before. It's also still
needed for drawing curve object edit mode overlays.
The old curve component isn't removed yet, because it is still used
to implement the conversions to and from `CurveEval`.
A few more attributes are added to make this possible:
- `nurbs_weight`: The weight for each control point on NURBS curves.
- `nurbs_order`: The order of the NURBS curve
- `knots_mode`: Necessary for conversion, not defined yet.
- `handle_type_{left/right}`: An 8 bit integer attribute.
Differential Revision: https://developer.blender.org/D14145
The UI context was only set for the operator polls, but not for the
drop-box polls. Initially I thought this wouldn't be needed since the
drop-boxes should leave up context polls to the operator, but in
practice that may not be what API users expect. Plus the tooltip for the
drop-boxes will likely have to access context anyway, so they should be
able to check it beforehand.
Motion paths can now be initialised to more sensible frame ranges,
rather than simply 1-250:
- Scene Frame Range
- Selected Keyframes
- All Keyframes
The Motion Paths operators are now also added to the Object context menu
and the Dopesheet context menu.
The scene range operator was removed, because the operators now
automatically find the range when baking the motion paths.
The clear operator now appears separated in "Selected Only" and "All",
because it was not clear for the user what the button was doing.
Reviewed By: sybren, looch
Maniphest Tasks: T93047
Differential Revision: https://developer.blender.org/D13687
Caused by 0f89bcdbeb and was not fully addressed by 6f9828289f:
tagging an ID with flag 0 is to be seen as an explicit tag for copy
on write.
Would be nice to either consolidate code paths of flag 0 and explicit
component tag, or get rid of tagging with 0 flag, but that is beyond
what we can do for the upcoming release.
When using anchored strokes the diameter of the stroke can be 0, which
leads to a division by zero that on certain platforms wraps to a large
negative number that cannot be looked up. This fix clamps the size
of the brush to 1.
Now drag & tweak allow modifier keys to be released while dragging.
Without this, modifier keys need to be held, which is more noticeable
for tablet input or whenever the drag threshold is set to a large value.
Resolves T89989.
This fixes T93051, T76405 and maybe others.
Characters like '²', '<' are not recognized in Blender's shortcut keys.
And sometimes simple keys like {key .} and {key /} on the Windows
keyboard, although the symbol is "known", are also not detected by
Blender for shortcuts.
For Windows, some of the symbols represented by `VK_OEM_[1-8]` values,
depending on the language, are not mapped by Blender.
On Mac there is a fallback reading the "actual character value of the
'remappable' keys". But sometimes the character is not mapped either.
On Windows, the solution now mimics the Mac and tries to read the button's
character as a fallback.
For unmapped characters ('²', '<', '\''), now another value is chosen as a
substitute.
More "substitutes" may be added over time.
Differential Revision: https://developer.blender.org/D14149
Some logic and comments in the vertex normal calculation were
left over from when normals were stored in MVert, before
cfa53e0fbe. Normals are never allocated and freed
locally anymore.
In some cases, the normal edit modifier calculated the normals on one
mesh with the "ensure" functions, then copied the mesh and retrieved
the layers "for write" on the copy. Since 59343ee162, normal
layers are never copied, and normals are allocated with malloc instead
of calloc, so the mutable memory was uninitialized.
Fix by calculating normals on the correct mesh, and also add a warning
to the "for write" functions in the header.
The check to see if newly requested attributes are not already in the
cache was not taking into account the possibility that we do not have
new requested attributes (`num_requests == 0`). In this case, if
`attr_used` already had attributes, but `attr_requested` is empty, we
would consider the cache as dirty, and needlessly rebuild the attribute
VBOs.
These features are complicated to support on GPU and hardly compatible
with subdivision in the first place. In the future, with T68891 and
T68893, subdivision and custom smooth shading will be separate workflows.
For now, and to better prepare for this future (although a long term
plan), we should discourage workflows mixing subdivision and custom
smooth normals, and as such, this disables GPU subdivision when
autosmoothing or custom split normals are used.
This also adds a message in the modifier's UI to indicate that GPU
subdivision will be disabled if autosmooth or custom split normals are
used on the mesh.
Differential Revision: https://developer.blender.org/D14194
Reuse the same vertex normals calculation as for the GPU code, by
weighting each vertex normal by the angle of the edges incident to the
vertex on the face.
Additionally, remove limit normals, as the CPU code does not use them
either, and would also cause different shading issues when limit surface
is used.
Fixes T95242: shade smooth artifacts with edge crease and limit surface
Fixes T94919: subdivision, different shading between CPU and GPU
The custom data code checks for `LayerTypeInfo.defaultname` before
adding a second layer with a certain type. This was missed in
e7912dfa19. In practice, this default name
is not actually used.
Added call to ensure that the USD plugins are registered
when opening a USD cache archive. This is to avoid USD
load errors due to missing USD file format plugins when
opening blender files that contain USD transform cache
constraints and mesh sequence cache modifiers.
Fixes T94396
This operation can only be applied on one ID at a time, so only apply it
to the active Outliner item, and not all the selected ones.
Also renamed the `Make Library Override` menu entry to `Make Library Override
Single` to emphasize this is not the 'default expected' option for the
user.
This currently affects essentially the Outliner 'create hierarchy' tool.
Previously the code did not properly handle the hierarchy root in case overrides
were created from a non-root ID (e.g. an object inside of a linked
collection), or in case additional partial overrides were added to an
existing partially overridden hierarchy.
Also did some renaming on the go to avoid using 'reference' in override
context for anything else but the reference linked IDs.
Trust user count to actually delete or not the dragged ID when current
dragging is cancelled, since it may be already used by others.
NOTE: This is more a band-aid fix than anything else, cancelling drag
has a lot of other issues here (like never deleting any indirectly
linked/appended data, etc.). It needs a proper rethink in general.
To keep consistency with the new contract option, the dilate now expands the shape beyond the internal closed area.
Note: This was committed only in master (3.2) by error.
This was requested by artists for some animation styles where it is necessary to fill the area, but create a gap between fill and stroke.
Also some code cleanup and a fix for a bug in dilate for the top area.
Reviewed By: pepeland, mendio
Differential Revision: https://developer.blender.org/D14082
Note: This was committed only in master (3.2) by error.
Using flags makes checking multiple modifiers at once more convenient
and avoids macros/functions such as IS_EVENT_MOD & WM_event_modifier_flag
which have been removed. It also simplifies checking if modifier keys
have changed.
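As a rough illustration with hypothetical flag names (not the actual wmEvent definitions), checking several modifiers at once becomes a single mask test and detecting a change becomes a single XOR:

```
#include <cstdint>
#include <cstdio>

/* Hypothetical modifier flags, one bit each. */
enum : uint8_t {
  MOD_SHIFT = 1 << 0,
  MOD_CTRL = 1 << 1,
  MOD_ALT = 1 << 2,
  MOD_OSKEY = 1 << 3,
};

int main()
{
  uint8_t prev_modifier = MOD_SHIFT;
  uint8_t modifier = MOD_SHIFT | MOD_CTRL;

  /* Check multiple modifiers at once with a single mask. */
  const bool any_ctrl_alt = (modifier & (MOD_CTRL | MOD_ALT)) != 0;
  /* Checking whether the modifier state changed is a single comparison. */
  const bool changed = (modifier ^ prev_modifier) != 0;

  std::printf("ctrl/alt: %d, changed: %d\n", any_ctrl_alt, changed);
  return 0;
}
```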
This means textures need to have the number of mipmap levels specified
upfront. It does not mean the data is immutable.
There is fallback code for OpenGL < 4.2.
Immutable storage will enable texture views in the future.
Without ray offsets, intersections at neighboring triangles are found, as
the ray start is exactly at the vertex. There was a small offset towards
the center of the triangle, but not enough.
Now this offset computation is moved into Cycles and modified for better
results. It's still not perfect though like any offset approach, especially
with long thin triangles.
Additionally, this uses the shadow terminator offset for AO rays now, which
helps remove some pre-existing artifacts.
This is similar to f8fe0e831e, which made the change to the
handle position attributes. This commit removes the way that setting the
`position` attribute also changes the handle position attributes. Now,
the "Set Position" node still has this behavior, but changing the
attribute directly (with the modifier's output attributes) does not.
The previous behavior was a relic of the geometry nodes design
from before fields and the set position node existed.
This makes the transition to the new curves data structure simpler.
There is more room for optimizing the Bezier case of the set position
node in the future.
Also fix a couple other places where normals layers weren't properly
tagged dirty or reallocated when the mesh changes.
Caused by cfa53e0fbe. When the size of a mesh changes,
the normal layers need to be reallocated. There were a couple of places
that cleared other runtime data with `BKE_mesh_runtime_clear_geometry`
but didn't deal with normals properly. Clearing the runtime "geometry"
is different from clearing the normals, because sometimes the size of
the normal layers doesn't have to change, in which case simply tagging
them dirty is fine.
Reverts 6d97fdc37e. A function like this should not return a different
tree-display object than of the requested type. This may hide errors,
and leaves the Outliner in an undefined state (where the stored display
mode doesn't match the tree-display object). I'd rather not hide the
fact that all display-modes should be handled here, and emit a clear
error if one isn't.
When X-ray mode is active the selection is done using the mesh data to
select what is closest to the cursor. When GPU subdivision is active with
the "show on cage" modifier option, this fails as the mesh used for selection
is the unsubdivided one.
This creates a subdivision wrapper before running the selection routines to
ensure that subdivision is available on the CPU side as well.
Differential Revision: https://developer.blender.org/D14188
This was a double free error which happened because `BM_mesh_bm_from_me`
was taking ownership of arrays that were still owned by the Mesh. Note that
this only happens when the mesh is empty but some custom data layers still
have a non-null data pointer. While usually the data pointer should be null in
this case for performance reasons, other functions should still be able to
handle this situation.
Differential Revision: https://developer.blender.org/D14181
When exporting generated coordinates, the subdivision export code was
using the schema for the non-subdivision case, which is invalid as it is
not initialized. This typo existed since the initial commit for the
feature (rBf9567f6c63e75feaf701fa7b78669b9a436f13dd).
The scale-to-fit option did nothing for single words when
the text box had a height. This happened because it was expected that
text would be wrapped; however, single words never wrap.
Now the same behavior for zero-height text boxes is used when text
can't be wrapped onto multiple lines.
The handle position attributes `handle_left` and `handle_right` had
rather complex behavior to get expected behavior when aligned or auto/
vector handles were used. In order to simplify the attribute API and
make the transition to the new curves data structure simpler, this
commit moves that behavior from the attribute to the "Set Handle
Positions" node. When that node is used, the behavior should be the
same as before. However, if the modifier's output attributes were used
to set handle positions, the behavior may be different. That situation
is expected to be very rare though.
This adds a node with a boolean field output which returns true if all of the
points of the evaluated face are on the same plane. A float field input allows
for the threshold of the face/point comparison to be adjusted on a per face basis.
One clear use case is to only triangulate faces that are not planar.
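A rough sketch of the kind of planarity test involved (plain C++, not Blender's implementation): build a plane from the face and check every point's distance against the threshold.

```
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
  float x, y, z;
};

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* True if every point of the face lies within `threshold` of the plane
 * defined by the first three (non-collinear) points. */
static bool face_is_planar(const std::vector<Vec3> &points, float threshold)
{
  if (points.size() <= 3) {
    return true; /* Triangles are always planar. */
  }
  Vec3 normal = cross(sub(points[1], points[0]), sub(points[2], points[0]));
  const float length = std::sqrt(dot(normal, normal));
  if (length == 0.0f) {
    return true; /* Degenerate face; treat as planar. */
  }
  normal = {normal.x / length, normal.y / length, normal.z / length};
  for (const Vec3 &p : points) {
    if (std::fabs(dot(normal, sub(p, points[0]))) > threshold) {
      return false;
    }
  }
  return true;
}

int main()
{
  const std::vector<Vec3> quad = {{0, 0, 0}, {1, 0, 0}, {1, 1, 0.2f}, {0, 1, 0}};
  std::printf("planar: %d\n", face_is_planar(quad, 0.01f));
  return 0;
}
```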
Differential Revision: https://developer.blender.org/D13906
Recently we changed the build pipeline to always create a version
number in the URL and point 'dev' to the latest version rather than
creating the version number URL once we release.
This makes the check of `bpy.app.version_cycle` unnecessary.
tbb/enumerable_thread_specific.h drags in windows.h,
which will define min/max macros unless you politely
ask it not to.
It's a bit of an eyesore, but it is what it is.
0fd72a98ac called functions to set bezier handle positions
that used uninitialized memory. The fix is to define the handle positions
explicitly, like before.
The main goal here is to add the boilerplate code to make it possible
to add the actual sculpt tools more easily. Both brush implementations
added by this patch are meant to be prototypes which will be removed
or refined in the coming weeks.
Ref T95773.
Differential Revision: https://developer.blender.org/D14180
This adds a node which copies part of a geometry a dynamic number
of times.
Different parts of the geometry can be copied differing amounts
of times, controlled by the amount input field. Geometry can also
be ignored by use of the selection input.
The output geometry contains only the copies created by the node.
If the amount input is set to zero, the output geometry will be
empty. The duplicate index output is an integer index with the copy
number of each duplicate.
Differential Revision: https://developer.blender.org/D13701
Sometimes it is useful to get the index ranges that are in an index mask.
That is because some algorithms can process index ranges more efficiently
than generic index masks.
Extracting ranges from an index mask is relatively efficient, because it is
cheap to check if a span of indices contains a contiguous range.
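A rough sketch of the idea in plain C++ (not the actual IndexMask API): since the indices are sorted and unique, a span is one contiguous range exactly when its last value minus its first value equals its size minus one, which makes the check cheap.

```
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

/* A sorted span of unique indices is a single contiguous range exactly when
 * last - first == size - 1; this check is O(1). */
static bool is_contiguous(const std::vector<int64_t> &indices, size_t begin, size_t end)
{
  return indices[end - 1] - indices[begin] == int64_t(end - begin) - 1;
}

/* Split a sorted list of unique indices into contiguous (start, size) ranges. */
static std::vector<std::pair<int64_t, int64_t>> extract_ranges(const std::vector<int64_t> &indices)
{
  std::vector<std::pair<int64_t, int64_t>> ranges;
  size_t start = 0;
  while (start < indices.size()) {
    size_t end = start + 1;
    while (end < indices.size() && is_contiguous(indices, start, end + 1)) {
      end++;
    }
    ranges.push_back({indices[start], int64_t(end - start)});
    start = end;
  }
  return ranges;
}

int main()
{
  const std::vector<int64_t> mask = {0, 1, 2, 3, 7, 8, 20};
  for (const auto &range : extract_ranges(mask)) {
    std::printf("start=%lld size=%lld\n", (long long)range.first, (long long)range.second);
  }
  return 0;
}
```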
Drag Action was constantly resetting itself to "move".
Solve this by storing the tool settings per tool and no longer clearing
gizmo properties when activating a new tool.
- Reserve "test" for tests & testing frameworks.
- Add "check_mypy" to "make help" text.
- Output to the standard output instead of redirecting to log-files,
leave redirecting output log-files to the user running the command.
The build-bot directly referenced this file and doesn't
have publicly available configuration.
Add an empty file until this can be removed by the build scripts.
This will make the transition to the new curves data structure
a bit simpler, since the handle types can be copied directly between
the two. The change to CurveEval is simple because it is runtime-only.
Helps with building against different OpenXR SDK versions (i.e. for
downstream builds that require specific versions), as the extension was
only defined since OpenXR 1.0.22.
These were only set in two places. One was related to "tessellated loop
normal", and the other derived corner normals. The values were never
checked though, after 59343ee162. The handling of dirty face
corner normals is clearly problematic, but in the future it should be
handled like the normal layers on the other domains instead.
Ref D14154, T95839
Currently, when normals are calculated for a const mesh, a custom data
layer might be added if it doesn't already exist. Adding a custom data
layer to a mesh is not thread-safe, so this can be a problem in some
situations.
This commit moves derived mesh normals for polygons and
vertices out of `CustomData` to `Mesh_Runtime`. Most of the
hard work for this was already done by rBcfa53e0fbeed7178.
Some changes to logic elsewhere are necessary/helpful:
- No need to call both `BKE_mesh_runtime_clear_cache` and
`BKE_mesh_normals_tag_dirty`, since the former also does the latter.
- Cleanup/simplify mesh conversion and copying since normals are
handled with other runtime data.
Storing these normals like other runtime data clarifies their status
as derived data, meaning custom data moves more towards storing
original/editable data. This means normals won't automatically benefit
from the planned copy-on-write refactor (T95845), so it will have to be
added manually like for the other runtime data.
Differential Revision: https://developer.blender.org/D14154
Allow SCREEN_OT_area_swap to operate between different Blender
windows, and other minor feedback improvements.
See D14135 for more details and demonstrations.
Differential Revision: https://developer.blender.org/D14135
Reviewed by Campbell Barton
Currently the RNA functions to add mesh elements like vertices
don't clear the runtime cache of things like triangulation, BVH
trees, etc. This is important, since they might be accessed with
incorrect sizes. This is split from a fix for T95839.
This is still far from perfect but it is better than not working
correctly.
The view/casters intersection bounds are too big and rough to
compute a decent tilemap level that is near the desired shadow pixel
density.
The algorithm works relatively well if the sun direction is almost
parallel to the ortho view direction.
Atomic operations performed by the C++ standard library might require
libatomic on platforms which do not have hardware support for those
operations.
This change makes it that such configurations are automatically detected
and -latomic is added when needed.
Differential Revision: https://developer.blender.org/D14106
NOTE: This function is currently unused. However, it does use a callback
defined by a few RNA properties through
`RNA_def_property_editable_array_func`, so I don't think it should be
removed without further consideration.
This function had two main issues:
* It was doing bitwise AND on potentially three sources of property
flags, whereas the actually used `RNA_property_editable` only ever uses
one source.
* It was completely ignoring liboverride cases.
TODO: Deduplicate code between `RNA_property_editable`,
`RNA_property_editable_info` and `RNA_property_editable_index`.
There was accidentally some displacement related code running even when not
using displacement.
Differential Revision: https://developer.blender.org/D14169
Previously, objects and geometries were mapped between frames
using different hash tables in a way that is incompatible with
geometry instances. That is because the geometry mapping happened
without looking at the `persistent_id` of instances, which is not possible
anymore. Now, there is just one mapping that identifies the same
object at multiple points in time.
There are also two new caches for duplicated vbos and textures used for
motion blur. This data has to be duplicated, otherwise it would be freed
when another time step is evaluated. This caching existed before, but is
now a bit more explicit and works for geometry instances as well.
Differential Revision: https://developer.blender.org/D13497
For an upcoming prototype we would introduce a new eTexPaintMode
option. That would add more cases and if statements. This change migrates
eTexPaintMode to 3 classes: AbstractPaintMode contains the shared interface,
ImagePaintMode handles 2D painting and ProjectionPaintMode handles 3D painting.
Reset Defaults left the undo stack in an invalid state,
with the active undo step left at the previous state rather than the one
it should have been.
Now the button's own undo logic is used to perform undo pushes.
1. Now handles cyclic strokes correctly.
2. Added a sharp threshold value to allow preservation of sharp corners.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14044
1. Now will remove lines if both adjacent faces are back face.
2. Added a check to respect material back face culling setting.
3. Changed the label in the modifier to "Force Backface Culling" (which reflects more accurately what the checkbox does).
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Ref D14041
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
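For reference, a small generic sketch of the math (not the actual BLI functions): once the plane is normalized by the length of its normal, evaluating the plane equation at a point directly gives the signed distance.

```
#include <cmath>
#include <cstdio>

struct Plane {
  float nx, ny, nz, d; /* Plane equation: nx*x + ny*y + nz*z + d = 0. */
};

static void plane_normalize(Plane &plane)
{
  const float length = std::sqrt(plane.nx * plane.nx + plane.ny * plane.ny + plane.nz * plane.nz);
  plane.nx /= length;
  plane.ny /= length;
  plane.nz /= length;
  plane.d /= length;
}

/* Signed distance from a point to a normalized plane: positive on the side
 * the normal points to, negative on the other side. */
static float plane_point_side(const Plane &plane, float x, float y, float z)
{
  return plane.nx * x + plane.ny * y + plane.nz * z + plane.d;
}

int main()
{
  Plane plane = {0.0f, 0.0f, 2.0f, -4.0f}; /* The plane z = 2, with an unnormalized normal. */
  plane_normalize(plane);
  std::printf("distance: %f\n", plane_point_side(plane, 0.0f, 0.0f, 5.0f)); /* Prints 3.0. */
  return 0;
}
```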
This is a regression partially introduced in rB0a6f428be7f0.
Bones being transformed in edit mode were snapping to themselves.
And the bones in pose mode weren't snapping at all.
(Curious that this was not reported.)
Since Python 3.10 is now supported on all platforms,
bump the minimum version to reduce the number of Python versions that
need to be supported simultaneously.
Reviewed By: LazyDodo, sybren, mont29, brecht
Ref D13943
This is an alternate fix for T35170 since it caused T44415.
Having the undo system manipulate the key-block coordinates is error
prone as (in the case of T44415) there are situations when it's
important to apply the difference with the original shape key.
This reverts dab0bd9de6, and instead
avoids the problem by not using the data in `Mesh.key` as a reference
for updating shape-keys when exiting edit-mode.
The assumption that the `Mesh.key` in edit-mode won't be modified
until leaving edit-mode isn't always true. Leading to synchronization
errors. (details noted in code-comments).
Resolve this by using shape-key data stored in the BMesh.
Resolving both T35170 & T44415.
Details:
- Remove use of the original vertices when exiting edit mode.
- Remove use of the original shape-key coordinates when exiting
edit-mode (except as a last resort).
- Move shape-key synchronization into a separate function:
`bm_to_mesh_key`.
- Split the synchronization loop into two branches,
depending on the existence of BMesh shape-key coordinates.
- Always write shape-key values back to the BMesh CD_SHAPEKEY layers.
This was only done in some cases but is now necessary for all
shape-keys as these are used to calculate offsets where the `Mesh.key`
was previously used.
- Report a warning when the shape-key layer isn't found as this uses an
imperfect method of restoring coordinates which should only be used as
a last resort.
Reviewed By: mont29
Ref D14127
The problem occurred when Object Offset was enabled, because the Constant Offset flag was not checked and the offset was always added to the transformation matrix.
Limit the min and max of the IDProperty for the node group input
from 0 to infinity, and the soft min and max between 0 and 1.
Thanks to @PratikPB2123 for investigation.
Instead of accessing the `CD_NORMAL` layer directly,
use the proper API for accessing mesh normals. Even if the
layer exists, the values might be incorrect due to a deformation.
Related to ef0e21f0ae, 969c4a45ce, and T95839.
The flags overlapped ever since normalize was added, so this
requires versioning to copy the flag value. This needs to be
done both in Blender 3.1 and 3.2.
Differential Revision: https://developer.blender.org/D14165
The flags overlapped ever since normalize was added, so this requires
versioning to copy the flag value.
Differential Revision: https://developer.blender.org/D14165
The code was using the same flag value for different modifiers,
resulting in matching the toggle to random overlapping flags.
Differential Revision: https://developer.blender.org/D14165
The constraints solver is now able to handle more cases correctly.
Also the behavior of the boundary fixes is slightly changed if
the constraints thickness mode is used.
Differential Revision: https://developer.blender.org/D14143
The modifier supports arithmetic operations, like Add or Multiply,
but for some reason omits Minimum and Maximum. They are similarly
simple and useful math functions and should be supported.
Differential Revision: https://developer.blender.org/D14164
When RMB-select uses "Select Tweak" as a fallback tool,
ignore all bindings mapped to the Control key as these are
used for path selection.
This was fixed in 2a2d873124
however that caused shift-select to fail (T93100).
The node animation versioning code passes `nullptr` to the `oldName` and
`newName` parameters, but those weren't `NULL`-safe. I added an extra
check for this.
No functional changes, just a crash fix.
Previously, all operators using `PaintStroke` would have to store
the stroke in `op->customdata`. That made it impossible to store
other operator specific data in `op->customdata` that was unrelated
to the stroke.
This patch changes it so that the `PaintStroke` is passed to api
functions as a separate argument, which allows storing the stroke
as a subfield of some other struct in `op->customdata`.
Although the float buffers are only used as a cache, current code paths
don't look at the flags to identify which kind of image it is. The actual
fix would be to check the flags, but that isn't something to add one
week before release.
This commit fixes it by removing the buffers after use in the image
engine.
The previous behavior called RNA_enum_item_add a second time,
filled its contents with invalid values, then subtracted totitem.
This caused an unusual reallocation pattern where MEM_recallocN
would run in order to create the dummy item, then again when adding
another item (reallocating an array of the same size).
Simply memset the array to 0xff instead.
Even though the size of the map was set back to DEFAULT_SIZE_EXP,
the underlying arrays were left unchanged. In some cases this caused
further expansions to result in an unusual reallocation pattern
where MEM_reallocN would run to expand the entries into an array
that was in fact the same size.
Bug was introduced in D12167.
Reading the input size from an input socket is not possible in the tiled compositor before execution is initialized. A similar change was done for the base class, see BlurBaseOperation::init_data().
Reviewed by: Jeroen Bakker
Differential Revision: https://developer.blender.org/D14067
This allows removing the indirection for LODs during shading since the
tile is not the owner of the page unless it uses it.
The cache system is quite a bit more complex but makes it easier to spot
errors since the pages are not scattered across the tile texture.
This also simplifies allocation since the free heap is separated from the
cache.
This adds maintenance overhead and it is not that useful when we have reset to default.
If this is something that we want it should be added dynamically.
Reviewed By: HooglyBoogly, Severin, #user_interface
Differential Revision: https://developer.blender.org/D14151
After using MEM_dupallocN() on the original item/binding, the
user/component path ListBase for the new item/binding needs to be
cleared and each path copied separately.
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This removes manual handling of normals that was hard-coded
to false in the one place the function was called. This change
will help to make a fix to T95839 simpler.
It's better not to expose the details of where the dirty flags are
stored to every place that wants to know if the normals are dirty.
Some of these places are relics from before vertex normals were
computed lazily anyway, so this is more of an incremental cleanup.
This will make part of the fix for T95839 simpler.
Make relation match material and world nodes. Does not address the reported
issue regarding muted nodes, but another missing update found while investigating.
This was an old issue, but recent image partial update changes made this more
likely to happen in some cases. Now ensure that whenever the rendered scene
switches the image is updated.
This patch aims to fix the issues presented in T87829 and T95331,
namely precision issues while connecting two nodes that are too
close together in the node editor, in a few cases even
resulting in the complete inability to connect nodes.
Sockets are found by intersecting a padded rect around the cursor
with the nodes' sockets' locations. That creates ambiguities, as it's
possible for the padded rect to intersect with the wrong node
when the distance between two nodes is smaller than the padding.
The fix in this patch is checking against an unpadded rectangle in
visible_node().
Differential Revision: https://developer.blender.org/D14122
The problem was that the code for sorting polygons around a vertex
assumed that it was a manifold or boundary vertex. However in some cases
the vertex could still be nonmanifold causing the crash. The cases where
the sorting fails are now detected and these vertices are then marked as
nonmanifold.
Differential Revision: https://developer.blender.org/D14065
The "Fill Caps" option on the Curve to Mesh node introduced in
rBbc2f4dd8b408ee makes it possible to fill the open ends of the sweep
to create a manifold mesh.
This patch fixes an edge case, where caps were created even when the
rail curve (the curve used in the "Curve" input socket) was cyclic
making the resulting mesh non-manifold.
Differential Revision: https://developer.blender.org/D14124
In ffmpeg 5.0, several variables were made const to try to prevent bad API usage.
Removed some dead code that wasn't used anymore as well.
Reviewed By: Richard Antalik
Differential Revision: http://developer.blender.org/D14063
Significantly improves loading speed of preview images from disk, e.g. custom
previews loaded using `bpy.utils.previews.ImagePreviewCollection.load()`.
See D14144 for details & comparison videos.
Differential Revision: https://developer.blender.org/D14144
Reviewed by: Bastien Montagne
Currently, whenever any BMesh is converted to a Mesh (except for edit
mode switching), original index (`CD_ORIGINDEX`) layers are added.
This is incorrect, because many operations just convert some Mesh into
a BMesh and then back, but they shouldn't make any assumption about
where their input mesh came from. It might even come from a primitive
in geometry nodes, where there are no original indices at all.
Conceptually, mesh original indices should be filled by the modifier
stack when first creating the evaluated mesh. So that's where they're
moved in this patch. A separate function now fills the indices with their
default (0,1,2,3...) values. The way the mesh wrapper system defers
the BMesh to Mesh conversion makes this a bit less obvious though.
The old behavior is incorrect, but it's also slower, because three
arrays the size of the mesh's vertices, edges, and faces had to be
allocated and filled during the BMesh to Mesh conversion, which just
ends up putting more pressure on the cache. In the many cases where
original indices aren't used, I measured an **8% speedup** for the
conversion (from 76.5ms to 70.7ms).
Generally there is an assumption that BMesh is "original" and Mesh is
"evaluated". After this patch, that assumption isn't quite as strong,
but it still exists for two reasons. First, original indices are added
whenever converting a BMesh "wrapper" to a Mesh. Second, original
indices are not added to the BMesh at the beginning of evaluation,
which assumes that every BMesh in the viewport is original and doesn't
need the mapping.
Differential Revision: https://developer.blender.org/D14018
This commit renames enums related the "Curve" object type and ID type
to add `_LEGACY` to the end. The idea is to make our aspirations clearer
in the code and to avoid ambiguities between `CURVE` and `CURVES`.
Ref T95355
To summarize for the record, the plans are:
- In the short/medium term, replace the `Curve` object data type with
`Curves`
- In the longer term (no immediate plans), use a proper data block for
3D text and surfaces.
Differential Revision: https://developer.blender.org/D14114
Fix boundary error in `BLI_str_unescape_ex`. The `dst_maxncpy` parameter
indicates the maximum buffer size, not the maximum number of characters.
As these are strings, the loop has to stop one byte early to allow space
for the trailing zero byte.
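A generic illustration of the distinction (not the actual `BLI_str_unescape_ex` code): when the parameter is the destination buffer size in bytes, the copy loop must stop one byte early so the terminating zero byte always fits.

```
#include <cstdio>

/* Copy at most `dst_maxncpy - 1` characters and always NUL-terminate.
 * `dst_maxncpy` is the buffer size in bytes, not a character count. */
static size_t copy_bounded(char *dst, const char *src, size_t dst_maxncpy)
{
  if (dst_maxncpy == 0) {
    return 0;
  }
  size_t i = 0;
  for (; i + 1 < dst_maxncpy && src[i] != '\0'; i++) {
    dst[i] = src[i];
  }
  dst[i] = '\0';
  return i;
}

int main()
{
  char buf[4];
  copy_bounded(buf, "blender", sizeof(buf)); /* Writes "ble" plus the trailing zero byte. */
  std::printf("%s\n", buf);
  return 0;
}
```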
Thanks @mano-wii for the patch!
When removing a node that has a dependence on an ID, like the object
info node, the dependency graph relations weren't updated. This can
cause unexpected performance issues if a complex node tree continues
to depend on an ID that it doesn't actually use anymore. To fix this case,
tag relations for an update if the node has a data-block socket.
Fixes part of T88332
Differential Revision: https://developer.blender.org/D14121
If there is no animation at all, or it's all hidden, the Euler Filter
operators poll now fails with a message that explains this a bit more,
instead of just the generic "context is wrong" error.
Reviewed By: sybren
Maniphest Tasks: T95135
Differential Revision: https://developer.blender.org/D13967
While this should not happen in theory, very bad/broken/dirty files can
lead to such situations.
So we need to re-ensure valid root IDs after resync (for now, done after
each 'library indirect level' pass of resync, this may not be 100%
bulletproof though, time will tell).
Found while investigating Blender studio issues in Snow parkour short.
In some cases broken files could lead to selecting a shapekey as
hierarchy root ID, which is not allowed.
Found while investigating Blender studio issues in Snow parkour short.
When new display driver is given to the PathTrace ensure that there are
no GPU resources used from it by the work. This solves graphics interop
descriptors leak.
This also fixes the Invalid graphics context in cuGraphicsUnregisterResource
error when doing final render on the display GPU.
Fixes T95837: Regression: GPU memory accumulation in Cycles render
Fixes T95733: Cycles Cuda/Optix error message with multi GPU devices. (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95651: GPU error (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T95631: VRAM is not being freed when rendering (Invalid graphics context in cuGraphicsUnregisterResource)
Fixes T89747: Cycles Render - Textures Disappear then Crashes the Render
Maniphest Tasks: T95837, T95733, T95651, T95631, T89747
Differential Revision: https://developer.blender.org/D14146
Add check for `NULL` `from` pointer to `BLO_main_validate_shapekeys`,
and delete these shapekeys, as they are fully invalid and impossible to
recover.
Found in a studio production file (`animation
test/snow_parkour/shots/0040/0040.lighting.blend`, svn rev `1111`).
Would be nice to know how this was generated too...
This adds initial support for edit mode for the experimental new curves
object. For now we can only toggle in and out of the mode; no real
interaction is possible.
This patch also adds empty menus in edit mode. Those were added mainly
to quiet warnings as the menus are programmatically added to the edit
mode based on the object type and context.
Ref T95769
Reviewed By: JacquesLucke, HooglyBoogly
Maniphest Tasks: T95769
Differential Revision: https://developer.blender.org/D14136
This adds the boilerplate code that is necessary to use the tool/brush/paint
systems in the new sculpt curves mode.
Two temporary dummy tools are part of this patch. They do nothing and
only serve to test the boilerplate. When the first actual tool is added,
those dummy tools will be removed.
Differential Revision: https://developer.blender.org/D14117
Previous implementation had a copy of the image user, which doesn't
contain all the data to identify changes. This patch introduces a new
struct to store the data and can be extended with other data as well
(color spaces, alpha settings).
Check for a camera-view before checking if the view is locked
to the cursor/object since the camera-view takes priority,
it reads better to check that first.
Also reuse the event offset variable.
NDOF navigation in a camera view now behaves like orthographic pan/zoom.
Note that NDOF orbiting out of the camera view has been disabled,
see code comment for details.
Resolves T93666.
For the attribute search button, the tooltip was missing
if the input socket type has attribute toggle activated.
Differential Revision: https://developer.blender.org/D14142
This affected loading of EXR files with the colorspace set to Linear ACES, as
well as the sky texture in some custom OpenColorIO configurations.
Use the builtin OpenColorIO transform from ACES AP0 to XYZ D65 to fix this.
The idea is to keep `is_any_zero` in the `blender::math` namespace,
so instead of trying to be clever, just move it there and expand the
function where it was used in the class.
I noticed that there were a few variables that should not be visible by default.
It seems to me to simply be an oversight, so I went ahead and cleaned them up.
Reviewed By: Sybren, Ray molenkamp
Differential Revision: http://developer.blender.org/D14132
This commit should suffice to make the shader API agnostic now (given that
all users of it use the GPU API).
This makes the shaders not trigger a false positive error anymore since
the binding slots are now guaranteed by the backend and not changed
after compilation.
This also bundles all uniforms into UBOs. Making them extendable without
limitations of push constants. The generated uniforms from OCIO are not
densely packed in the UBO to avoid complexity. Another approach would be to
use GPU_uniformbuf_create_from_list but this requires converting uniforms
to GPUInputs which is too complex for what it is.
Reviewed by: brecht, jbakker
Differential Revision: https://developer.blender.org/D14123
This is a bug on the Blender side, where the depsgraph does not have proper
relations for text object duplis and fails to include the required materials
in the dependency graph. But at least Cycles should not crash.
- No need for `normal_tx` array if we normalize the planes in `plane_tx`.
- No need to calculate the distance squared to a plane (with `dist_signed_squared_to_plane_v3`) if the plane is normalized. `plane_point_side_v3` gets the real distance, accurately, efficiently and also signed.
So normalize the planes of the member `CameraViewFrameData::plane_tx`.
FindOpenImageIO was updated to link to separate OpenImageIO_Util for new
versions, where it is required. For older versions, we can not link to it
because there will be duplicated symbols.
Ref D14128
FindOpenEXR was updated to find new lib names and separate Imath. It's all
added to the list of OpenEXR include dirs and libs.
This keeps it compatible with both version 2 and 3 for now, and doesn't
require changes outside the find module.
Ref D14128
Coarse meshes with a high polygon count would show as corrupted when GPU
subdivision is used with AMD cards. This was caused by the OpenSubdiv
library not taking `GL_MAX_COMPUTE_WORK_GROUP_COUNT` into account when
dispatching computes. AMD drivers tend to set the limit lower than
NVidia ones (2^16 for the former, and 2^32 for the latter, at least
on my machine).
This moves the `GLComputeEvaluator` from the OpenSubdiv library into
`intern/opensubdiv` and modifies it to compute a dispatch size in a
similar way as for the draw code: we split the dispatch size into a 2
dimensional value based on `GL_MAX_COMPUTE_WORK_GROUP_COUNT` and
manually compute an index in the shader.
We could have patched the OpenSubdiv library and sent the fix upstream
(which can still be done), however, moving it to our side allows us to
better control the `GLComputeEvaluator` and in the future remove some
redundant work that it does compared to Blender (see T94644) and
probably prepare the ground for Vulkan support. As a matter of fact,
this patch also removes the OpenGL initialization that OpenSubdiv would
do here. This removal is not related to the bug fix, but necessary to not
have to copy more files/code over.
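A rough sketch of the dispatch-splitting idea (hypothetical helper, not the actual GLComputeEvaluator code): split a 1D work group count into two dimensions so neither exceeds the driver limit, then recover the linear index inside the shader.

```
#include <cstdio>

struct DispatchSize {
  unsigned int x, y;
};

/* Split `total_groups` into a 2D dispatch so that neither dimension exceeds
 * `max_groups_per_dim` (e.g. GL_MAX_COMPUTE_WORK_GROUP_COUNT on some drivers). */
static DispatchSize split_dispatch(unsigned int total_groups, unsigned int max_groups_per_dim)
{
  if (total_groups <= max_groups_per_dim) {
    return {total_groups, 1};
  }
  const unsigned int x = max_groups_per_dim;
  const unsigned int y = (total_groups + x - 1) / x; /* Ceiling division. */
  return {x, y};
}

int main()
{
  const DispatchSize size = split_dispatch(100000, 65535);
  /* In the compute shader, the linear group index would be recovered as:
   *   index = gl_WorkGroupID.y * gl_NumWorkGroups.x + gl_WorkGroupID.x;
   * with any extra invocations past `total_groups` discarded. */
  std::printf("dispatch: %u x %u\n", size.x, size.y);
  return 0;
}
```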
Differential Revision: https://developer.blender.org/D14131
Previously, the number of action map subactions was limited to two per
action (identified by user_path0, user_path1), however for devices with
more than two user paths (e.g. Vive Tracker) it will be useful to
support a variable amount instead.
For example, a single pose action could then be used to query the
positions of all connected trackers, with each tracker having its own
subaction tracking space.
NOTE: This introduces breaking changes for the XR Python API as follows:
- XrActionMapItem: The new `user_paths` collection property
replaces the `user_path0`/`user_path1` properties.
- XrActionMapBinding: The new `component_paths` collection property
replaces the `component_path0`/`component_path1` properties.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D13949
This fixes VR pink screen issues when using the DirectX backend, caused
by `wglDXRegisterObjectNV()` failing to register the shared
OpenGL-DirectX render buffer. The issue is mainly present on AMD
graphics, however, there have been reports on NVIDIA as well.
A limited workaround for the SteamVR runtime (AMD only) was provided
in rB82ab2c167844, however this patch provides a more complete solution
that should apply to all OpenXR runtimes. For example, with this patch,
the Windows Mixed Reality runtime that exclusively uses DirectX can now
be used with AMD graphics cards.
Implementation-wise, a `GL_TEXTURE_2D` render target is used as a
fallback for the shared OpenGL-DirectX resource in the case that
registering a render buffer (`GL_RENDERBUFFER`) fails. While using a
texture render target may be less optimal than a render buffer, it
enables proper display in VR using the OpenGL/DirectX interop (tested
on AMD Vega 64).
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14100
After running the breakdown operator for the graph editor,
the factor property in the redo panel didn't reflect the value you chose.
To mitigate that issue down the line, there is a
new helper function to get the factor value and
store it at the same time.
Reviewed by: Sybren A. Stüvel
Differential Revision: https://developer.blender.org/D14105
Ref: D14105
Brew's Python framework's site-packages is a symlink so the assumption
that Resources and site-packages would be in the same directory
doesn't hold. So install scripts etc relative to bpy.so. Part of D14111
StringGrid has been deprecated in openvdb 9.0.0 and will be removed soon
Reviewed By: Brecht
Differential Revision: http://developer.blender.org/D14133
The general idea here is to wrap the `CurvesGeometry` DNA struct
with a C++ class that can do most of the heavy lifting for the curve
geometry. Using a C++ class allows easier ways to group methods, easier
const correctness, and code that's more readable and faster to write.
This way, it works much more like a version of `CurveEval` that uses
more efficient attribute storage.
This commit adds the structure of some yet-to-be-implemented code,
the largest thing being mutexes and vectors meant to hold lazily
calculated evaluated positions, tangents, and normals. That part might
change slightly, but it's helpful to be able to see the direction this
commit is aiming in. In particular, the inherently single-threaded
accumulated lengths and Bezier evaluated point offsets might be cached.
Ref T95355
Differential Revision: https://developer.blender.org/D14054
Finding the greatest and/or smallest element in an array is a common
need. This commit refactors the point cloud bounds code added in
6d7dbdbb44 to a more general header in blenlib.
This will allow reusing the algorithm for curves without duplicating it.
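A minimal sketch of such a shared helper in plain C++ (not the actual blenlib header): a single pass keeps the running minimum and maximum, and an empty array yields no result.

```
#include <algorithm>
#include <cstdio>
#include <optional>
#include <utility>
#include <vector>

/* Return the smallest and largest element, or nothing for an empty array. */
template<typename T>
static std::optional<std::pair<T, T>> min_max(const std::vector<T> &values)
{
  if (values.empty()) {
    return std::nullopt;
  }
  T min = values[0];
  T max = values[0];
  for (const T &value : values) {
    min = std::min(min, value);
    max = std::max(max, value);
  }
  return std::make_pair(min, max);
}

int main()
{
  const std::vector<float> x = {0.5f, -2.0f, 3.25f, 1.0f};
  if (const auto bounds = min_max(x)) {
    std::printf("min=%f max=%f\n", bounds->first, bounds->second);
  }
  return 0;
}
```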
Differential Revision: https://developer.blender.org/D14053
This is meant to complement the `blender::math` functions recently
added by D13791. It's sometimes desired to template an operation to work
on vector types, but also basic types like `float` and `int`. This patch
adds that ability with a new `BLI_math_base.hh` header.
The existing vector math header is changed to use the `vec_base` type
more explicitly, to allow the compiler's generic function overload resolution
to determine which implementation of each math function to use.
This is a relatively large change, but it also makes the file significantly
easier to understand by reducing the use of macros.
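A small sketch of the overload-resolution idea with simplified types (not the actual contents of `BLI_math_base.hh` or `BLI_math_vec_types.hh`): the scalar version is defined once, the vector type provides its own overload, and callers can use either.

```
#include <algorithm>
#include <cstdio>

namespace math {

/* Scalar version, usable with float, int, double, ... */
template<typename T> T clamp(T value, T min, T max)
{
  return std::min(std::max(value, min), max);
}

struct float3 {
  float x, y, z;
};

/* Vector version: overload resolution picks this one for float3 arguments. */
inline float3 clamp(const float3 &v, float min, float max)
{
  return {clamp(v.x, min, max), clamp(v.y, min, max), clamp(v.z, min, max)};
}

}  // namespace math

int main()
{
  const float a = math::clamp(1.5f, 0.0f, 1.0f);
  const math::float3 b = math::clamp(math::float3{-1.0f, 0.5f, 2.0f}, 0.0f, 1.0f);
  std::printf("%.2f, (%.2f %.2f %.2f)\n", a, b.x, b.y, b.z);
  return 0;
}
```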
Differential Revision: https://developer.blender.org/D14113
Fix possible overflow of Modifier UUID
The code prior to this change was re-generating the modifier's session UUID
prior to copying this ID from the source. This approach has a higher
risk of the modifier's session UUID overflowing and starting to collide with
existing modifiers.
This change makes it so that the modifier copy does not re-generate the
session UUID unless it is needed.
Differential Revision: https://developer.blender.org/D14125
GLUT does not support offscreen contexts, which is required for the new
display driver. So we use SDL instead. Note that this requires using a
system SDL package, the Blender precompiled SDL does not include the video
subsystem.
There is currently no text display support, instead info is printed to
the terminal. This would require adding an embedded font and GLSL shaders,
or using a GUI library.
Another improvement to be made is supporting OpenColorIO display transforms,
right now we assume Rec.709 scene linear and display.
All OpenGL, GLEW and SDL code was moved out of core Cycles and into
app/opengl. This serves as a template for apps that want to integrate
Cycles interactive rendering, with a simple OpenGLDisplayDriver example.
In general this would be adapted to the graphics API and color management
used by the app.
Ref T91846
Due to recent changes there have been reports of incorrect loading of
GPU textures. This fix reverts a part of {D13238} that might be the
source of the issue.
This does not happen with **any** image, but with images that have ID
properties.
ID properties are used to store view projection matrices (e.g. for
reprojection with `Image from View` or `Quick Edit` -- these are the
ones we are interested in), but of course they can be used for anything
else, too. The images in the file from the report have ID properties from
an add-on, for example.
So the crash can reliably be reproduced with **any** image doing the
following:
```
bpy.data.images['myImage']['myIDprop'] = "foo"
```
This would lead code in `texture_paint_camera_project_exec` to think the
needed `view_data` is on the image (but in reality it was just some
other IDprop).
Solution is simple: just check `view_data` is really valid after getting
it from the IDprops.
Maniphest Tasks: T95787
Differential Revision: https://developer.blender.org/D14116
Having this setting stored in the image space caused low level selection
logic to have to pass around the image space (which could be NULL
in some cases). Use the tool-settings instead since there doesn't seem
to be much/any advantage in having this setting per-space.
Although rB56407432a6a did fix missing subdivision in some cases, in
other cases it did not return the mesh wrapper (like when using
autosmooth, which requires a copy of the mesh), so the non-subdivided
mesh was still returned.
Not all coarse vertices were used to compute the center value (off by
one), and the interpolation for the current patch would always start at the
base corner of the base face instead of the base corner of the current
patch.
This patch reverses the dependency between `BLI_math_vec_types.hh` and
`BLI_math_vector.hh`. Now the higher level `blender::math` functions
depend on the header that defines the types they work with, rather than
the other way around.
The initial goal was to allow defining an `enable_if` in the types header
and using it in the math header. But I also think this operations-to-types
dependency is more natural anyway.
This required changing the includes some files used from the type
header to the math implementation header. I took that change a bit
further removing the C vector math header from the C++ header;
I think that helps to make the transition between the two systems
clearer.
Differential Revision: https://developer.blender.org/D14112
This commit renames `mesh_validate.cc` to `mesh_calc_edges.cc`.
I would like to move `mesh_validate.c` to C++, but it makes sense to
keep this specific algorithm in a smaller file.
When running `./blender/build_files/build_environment/install_deps.sh` on Ubuntu 20.04.3 LTS the following error can be seen:
```
./blender/build_files/build_environment/install_deps.sh: line 1266: [: too many arguments
```
This error results from the call:
```
check_package_version_ge_DEB $CLANG_FORMAT $CLANG_FORMAT_VERSION
```
with `CLANG_FORMAT_VERSION` being undefined.
Also, as `clang-format` 13 is already released and hopefully didn't break anything, `CLANG_FORMAT_VERSION_MEX` could use a version bump.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D13924
This adds a new sculpt mode to the experimental new curves object.
Currently, this mode can only be entered and exited, nothing else.
The main initial purpose of this mode will be to use it for hair grooming.
The patch also adds the `editors/curves/` directory for the new curves
object, which will be necessary for many other things as well.
I added a completely new mode (`OB_MODE_SCULPT_CURVES`), because
`OB_MODE_SCULPT` seems to be rather specific to meshes, and reusing
it doesn't seem worth the trouble. The tools/brushes used in mesh vs.
curves sculpt mode are quite distinct as well.
I had to add DNA_userdef_enums.h to make the patch compile with C++
(forward declaration of enums isn't allowed). This follows the same
pattern that we use for other enums in DNA.
Differential Revision: https://developer.blender.org/D14107
The path calculation method for animation players (frame-cycler, rv &
mplayer) would fail when the number of digits exceeded the range of a
32bit int, causing RenderData.frame_path() to raise a ValueError.
Use a simpler method of extracting the range that uses the sign to
detect the beginning of the number.
Since the output file stays unmodified for most developer builds,
install step installed it redundantly.
Create readme.html using `configure_file`:
- Now it's modified only if final output changes (handled by CMake).
- If input file (from git) or blender version changes,
it //will// be modified.
Also don't re-implement what CMake can do.
Reviewed By: campbellbarton, LazyDodo
Differential Revision: https://developer.blender.org/D13863
- Increment the argument index at the end of the loop.
Otherwise using the index after incrementing required subtracting 1.
- Move error prefix creation into a function: `pyrna_func_error_prefix`
so it's possible to create an error prefix without duplicate code.
This simplifies further changes for argument parsing from D14047.
Set "view2d_edge_pan" to true for the NODE_OT_translate_attach operator,
which is used by the duplication operator. This is done in the keymap so
that it's not hard-coded.
Differential Revision: https://developer.blender.org/D13934
The root issue was caused by a mistake in modifier copy data which was
wrongly re-generating the source modifier data identifier.
Commit c8cca88851 simply exposed a bug in code which had always been there
since the modifiers' session UUID was introduced.
Shows the importance of the const qualifier :)
Since now we delegate the evaluation of the last subsurf modifier in the stack
to the draw code, Cycles does not get a subdivided mesh anymore. This is because
the subdivision wrapper for generating a CPU side subdivision is never created
as it is only ever created via `BKE_object_get_evaluated_mesh` which Cycles does
not call (rather, it accesses the Mesh either via `object.data()`, or via
`object.to_mesh()`).
This ensures that a subdivision wrapper is created when accessing the object data
or converting an Object to a Mesh via the RNA/Python API.
Reviewed by: brecht
Differential Revision: https://developer.blender.org/D14048
This is requested by artists for some animation styles where it is necessary to fill the area, but create a gap between fill and stroke.
Also some code cleanup and a fix for a bug in dilate for the top area.
Reviewed By: pepeland, mendio
Differential Revision: https://developer.blender.org/D14082
The crash was caused because we did not check whether the RNA pointer is null
before trying to use it. This moves the existing checks from the
modifier panels into the template functions so the logic is a bit more
centralized.
The issue has two causes: on one hand origin indices were not handled
properly, on the other hand the extraction type (Mesh, BMesh, or mapped)
was not detected correctly.
For the second case reuse the MeshRenderData creation from the coarse
code path so that we make the same decisions. Loose geometry extraction
had to be updated to properly handle the BMesh cases.
For the origin indices, in some cases (for edges and faces), the arrays
used by the subdivision code already have the origin indices baked into
them, so mapping them a second time through the origin index layer is
wrong, and could cause out of bounds accesses.
For vertices especially, we would use two arrays: one for mapping
subdivision vertices to coarse vertices, and another one to map coarse
vertices to subdivision loops used for the selection index buffer. The
second one is now removed (which saves a bit of memory) as it did not
have the proper data setup for use with the origin indices and we can
easily compute it using the first array anyway.
Fix segfault when calling `some_id.id_properties_ui("propname").update()`,
i.e. call the `update()` function without any keyword arguments. In such
a case, Python passes `kwargs = NULL`, but `PyDict_Contains()` is not
`NULL`-safe.
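For reference, a minimal Python sketch of the call pattern involved (the object and property names here are hypothetical):
```
import bpy

obj = bpy.data.objects["Cube"]        # hypothetical object with a custom property
obj["my_prop"] = 1.0

ui = obj.id_properties_ui("my_prop")
ui.update(min=0.0, max=10.0)          # typical usage, kwargs describe the UI data
ui.update()                           # no kwargs: this call used to segfault
```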
The Viewer marked the GPU texture as out of date. But it should have used
mark_full_update, as the GPU textures
are only used by the render/draw engines.
The image/node editor uses the image engine, which has its own GPU textures.
Currently only a single texture slot is used to update the screen.
The current design is implemented to use multiple textures;
for now, limit the number of texture slots to 1.
This node is a bit of a weird case, because it uses the value stored in an
output socket as an input. So when we want to determine if the Dot
changed, we also have to check if the Normal output changed.
A cleaner solution would be to refactor this by either storing the normal
on the node directly (instead of in an output socket), or by exposing it
via a separate input. This refactor should be done separately though.
Currently whenever gl queries are performed for the viewport, a large
1024 byte array is allocated to store the query results (256 of them).
Unfortunately, if any gizmo using a `draw_select` callback is active
(e.g. the transform gizmos), these queries (and allocations) will occur
during every mouse move event.
Change the vector to allow for up to 16 query results before making an
allocation. This provides enough space for every built-in gizmo except
Scale Cage (which needs 27 queries). It also removes unnecessary
allocations from two other related vectors used during query processing.
Differential Revision: https://developer.blender.org/D13784
This is a regression partially introduced in rB0a6f428be7f0.
Bones being transformed in edit mode were snapping to themselves.
And the bones in pose mode weren't even snapping.
(Curious that this was not reported).
This fix avoids the drift by checking if the previous position is equal to the new one; in this case, the pen has not moved and the operation can be canceled.
Differential Revision: https://developer.blender.org/D13870
The animation playback did not take into account individual stereoscopic views.
This patch fixes this by playing back the active view render.
Reviewed By: campbellbarton
Maniphest Tasks: T91423
Differential Revision: https://developer.blender.org/D14070
Workaround for a compilation issue preventing kernels compiling for AMD GPUs: Avoid problematic use of templates on Metal by making `gpu_parallel_active_index_array` a wrapper macro, and moving `blocksize` to be a macro parameter.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14081
A zero length vector was normalized and the resulting NaN used in further calculations.
This caused trouble on some compilers when using fast math.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D14058
* Apple Silicon support enabled on macOS 12.2+
* AMD support enabled on macOS 12.3+
This patch also fixes a device enumeration crash on certain AMD configs which
was caused by over-release of MTLDevice objects.
Differential Revision: https://developer.blender.org/D14090
Improve the nodes' drop shadow by making it scale with the view
and replace the loop for the alpha calculation with something more
explicit.
The amount of drop shadow softness was scaled with the zoom level
and therefore had a fixed screen space size. DPI and UI scale
weren't taken into account either. This patch fixes both issues by
basing the shadow softness on the `widget_unit` that scales correctly
in zoomable views and takes UI scale etc. into account.
Differential Revision: https://developer.blender.org/D13356
* Replace license text in headers with SPDX identifiers.
* Remove specific license info from outdated readme.txt, instead leave details
to the source files.
* Add list of SPDX license identifiers used, and corresponding license texts.
* Update copyright dates while we're at it.
Ref D14069, T95597
Drivers make it way too easy to create dependency loops between IDs, so
we need to use the same trick as in other dependency-following code in this
file to prevent those infinite loops.
It is hard to predict for sure how bad of a hierarchy root this can end up
producing, but in general cases this should be OK.
The rbit instruction is only available starting with ARMv6T2 and
the register prefix is different from what AARCH64 uses.
Separate the 32 and 64 bit ARM branches, add missing ISA checks.
Made sure the code works as intended on macMini with Apple silicon,
and on Raspberry Pi 4 B running 32bit Raspbian OS.
Differential Revision: https://developer.blender.org/D14056
Reduce compute effort of liboverrides resync process by only re-syncing
the parts of the override hierarchy that actually need it.
The main change compared to existing code (which was systematically resyncing
a whole override hierarchy), is that resyncing now operates over several
sub-hierarchies at once, each defined by their own 'resync root' ID.
This ensures that we do not get several new overrides for the same data inside
of the same hierarchy.
Implements T95682.
Differential Revision: https://developer.blender.org/D14079
This patch increases the performance when remapping data.
{D13615} introduced a mechanism to remap multiple items in a single go.
This patch uses the same mechanism when remapping data inside ID datablocks.
Benchmark results when loading the village scene of Sprite Fright on an AMD Ryzen 7 3800X 8-Core Processor:
- Before this patch: 115 seconds
- With this patch applied: less than 43 seconds
There is still some room for improvement by porting relink code.
Reviewed By: mont29
Maniphest Tasks: T95279
Differential Revision: https://developer.blender.org/D14043
Adds helper functions to debug IDRemapper data structure.
`BKE_id_remapper_result_string` converts a given IDRemapperApplyResult
to a readable form for logging purposes.
`BKE_id_remapper_print` prints out the rules inside an IDRemapper struct.
Instead of creating and destroying threads when starting and stopping renders,
keep a single thread alive for the duration of the session. This makes it so all
display driver OpenGL resource allocation and destruction can happen in the same
thread.
This was implemented as part of trying to solve another bug, but it did not
help. Still, I prefer this behavior, to eliminate potential future issues with
graphics drivers or with future Cycles display driver implementations.
Differential Revision: https://developer.blender.org/D14086
For reasons unclear, destroying and then recreating a vertex buffer in the
render OpenGL context is affecting the immediate mode vertex buffer in the
draw manager OpenGL context.
Instead just create a single vertex buffer and use it for the lifetime of
the render OpenGL context. There's not really any need to have a separate
one per tile as far as I can tell.
Differential Revision: https://developer.blender.org/D14084
This adds support for exporting attributes from a Blender Curves object to Cycles.
The implementation follows that of the Mesh object. This also creates motion blur
data if the "velocity" attribute is present on the Curves.
Ref T94193
Reviewed By: brecht
Maniphest Tasks: T94193
Differential Revision: https://developer.blender.org/D14088
Due to the freeing and re-creation of textures performed when binding
offscreen viewports, VR viewport textures would be needlessly
re-created every drawing iteration, leading to a negative impact on VR
frame rate.
This was brought to light by 6738ecb64e, which introduced an
additional texture clear operation on initialization and was
prohibitively costly on some systems when performed every frame.
Now, the textures for VR viewports will no longer always be re-created
during offscreen binding, but only when necessary, using a pre-drawing
step (`wm_xr_session_surface_offscreen_ensure()`).
Reviewed By: jbakker, fclem
Differential Revision: https://developer.blender.org/D14059
When using a RGBA16 (`GL_RGBA16`, `DXGI_FORMAT_R16G16B16A16_UNORM`)
swapchain format with Quest 2, no image is presented to the headset.
This can occur when using the SteamVR runtime with an AMD graphics card
(ex. T95374).
Workaround is to move this format after the Quest 2-compatible RGBA16F
formats in the candidates list so that the RGBA16F formats are chosen
instead.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D14024
Crash was caused since the function pointers
`s_xrGetOpenGLGraphicsRequirementsKHR_fn`/
`s_xrGetD3D11GraphicsRequirementsKHR_fn` were static and were not
updated with the correct proc address after being set the first time.
As stated in the OpenXR spec: "function pointers returned by
xrGetInstanceProcAddr using one XrInstance may not be valid when used
with objects related to a different XrInstance".
Although it would seem reasonable that the proc address would not
change if the instance was the same (hence the `static XrInstance s_instance;`),
in testing, repeated calls to `xrGetInstanceProcAddress()`
with the same instance still can result in changes (at least for the
SteamVR runtime) so the workaround is to simply set the function pointers
every time, essentially trivializing their `static` designations.
Reviewed By: Severin
Maniphest Tasks: T94268
Differential Revision: https://developer.blender.org/D14023
For the majority of node groups created in Blender 3.0 the behavior does not change.
So far we only found a single file where this setting has an effect.
Differential Revision: https://developer.blender.org/D14078
For an upcoming project we want to match multiple ID types in a
single go. To not replicate the implementation using other types, we
introduce `BKE_library_id_can_use_filter_id`, which returns all supported
types as a filter.
Not all ID types have a filter_id (ID_LI, ID_KE, ID_SCR); these
exceptions are not available in the filter_id function.
Reviewed By: mont29
Maniphest Tasks: T95279
Differential Revision: https://developer.blender.org/D14061
Use a shorter/simpler license convention, stops the header taking so
much space.
Follow the SPDX license specification: https://spdx.org/licenses
- C/C++/objc/objc++
- Python
- Shell Scripts
- CMake, GNUmakefile
While most of the source tree has been included:
- `./extern/` was left out.
- `./intern/cycles` & `./intern/atomic` are also excluded because they
use different header conventions.
doc/license/SPDX-license-identifiers.txt has been added to list all
used SPDX identifiers.
See P2788 for the script that automated these edits.
Reviewed By: brecht, mont29, sergey
Ref D14069
Complex Solidify creates edge bevel weights on the rim if the
corresponding vertex has some vertex bevel weight. If there are no
edge bevel weights, they were left disabled even if vertex bevel
weights are used.
Some curve objects don't have an evaluated mesh at all, but line art
currently assumes that all curve objects have one before converting
it to a mesh internally. Fix this by checking if the curve object has an
evaluated mesh before skipping it.
The remaining problem is that evaluated meshes from non-mesh objects or
evaluated curves from non-curve objects, etc. will be ignored if
"Allow Duplicates" is off. That's a different problem though.
Differential Revision: https://developer.blender.org/D14036
For curve-heavy scenes, memory consumption regressed when we switched from MetalRT to BVH2. Allow users to opt in to MetalRT to work around this.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14071
Disable binary archives on Apple Silicon (issue stems from instancing multiple PSOs from the same binary archive). Pipeline creation still filters through the OS shader cache, mitigating any impact on setup times after the initial render.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14072
This is part of the project of converting `MVert` into `float3`
(more details in T93602). The PBVH update flag is removed and
replaced with a bitmap stored in the PBVH structure. This
patch is similar to D13878. This is mainly setup for an eventual
performance improvement by removing the extra data from
mesh vertices, but if it's consistent with testing in the other patch
doing the same thing for another "temp tag", then it may actually
increase the speed of sculpt code slightly, since less memory needs
to be loaded when checking/changing the flags.
Differential Revision: https://developer.blender.org/D14000
The main issue is that the image and image user is not updated correctly
in `rna_ImageUser_update`. `BKE_image_user_frame_calc` does not set the
correct frame, because the image is null. Also `IMA_GPU_REFRESH` is not
set for the same reason.
When gpu materials are first created, it is expected that the frame is set
correctly, and the flag is set if necessary. Therefore, somewhere during
depsgraph evaluation, those have to be updated. The depsgraph node
to do the update existed already. Now there is a new relation so that it is
executed when the node tree changed, not only when the frame changed.
This is partially caused by a stupid mistake in cfa53e0fbe
where I missed initializing the `vert_normals` pointer in
`MResolvePixelData`. It's also caused by questionable assumptions
from DerivedMesh code that vertex normals would be valid.
The fix used here is to create a temporary mesh with the data necessary
to compute vertex normals, and ensure them here. This is used because
normal calculation is only implemented for `Mesh` and edit mesh, not
`DerivedMesh`. While this might not be great for performance, it's
potentially aligned with future refactoring of this code to remove
`DerivedMesh` completely. Since this is one of the last places the data
structure is used, that would be a great improvement.
Differential Revision: https://developer.blender.org/D13960
While applying liboverrides on linked data, RNA code (in
`rna_property_override_check_resync`) would detect a lot of false positive
regarding IDs needing resync, because the temporary ID used to apply
overrides was always local, breaking the libraries comparison check.
NOTE: that whole 'apply liboverrides' code could use some refreshment,
it did not change much from the initial version several years ago, and we now
have better tools and control over non-main data.
But for now the 'trick' in that commit should do the job; ultimately
those temp IDs should never be put in Main at all.
The crash was happening when the mesh had loose edges.
Loose edges are not part of OpenSubdiv topology and hence should not be
communicated to the refiner. Pass a boolean flag indicating whether an
edge is loose or not in the mesh foreach routines, which seems to be
the easiest way.
Decouple the reference (linked) root ID and the hierarchy (override) root ID.
Previous code was assuming that the reference ID was always also tagged
for override creation, which is true in current master, but cannot be
assumed in general (and won't be true with partial resync anymore).
Also add asserts to validate conditions that the reference/hierarchy_root
variables must meet.
There are two things achieved by this change:
- No possible downcast of size_t to int when calculating motion steps.
- Disambiguate call to `min()` which was for some reason considered
ambiguous on 32bit platforms `min(int, unsigned int)`.
- Do the same for the `max()` call to keep them symmetrical.
On an implementation side the `min()` is defined for a fixed width
integer type to disambiguate uint from size_t on 32bit platforms,
and yet be able to use it for 32bit operands on 64bit platforms without
upcast.
This ended up being a bit bigger change, as the conditional compile-in of
functions is easiest if the functions are templated. Making the functions
templated required removing the other source of ambiguity, which is
`algorithm.h`, which was pulling min/max from std.
Now it is the `math.h` which is the source of truth for min/max.
It was only one place which was relying on `algorithm.h` for these
functions, hence the choice of `math.h` as the safest and least
intrusive.
Fixes 32bit platforms (such as i386) in Debian package build system.
Differential Revision: https://developer.blender.org/D14062
This implements the update cache described in T95401.
The cache is currently only used for drawing strokes and
sculpting (using the push brush).
**Note: Making use of the cache throughout grease pencil will
have to be done incrementally in other patches.**
The update cache stores what elements have changed in the
original data-block since the last time the eval object
was updated. Additionally, the update cache can store multiple
updates to the data and minimizes the number of elements
that need to be copied.
Elements can be tagged using `BKE_gpencil_tag_full_update` and
`BKE_gpencil_tag_light_update`. A full update means that the element
itself will be copied but also all of the content inside. E.g. when a
layer is tagged for a full update, the layer, all the frames inside the
layer and all the strokes inside the frames will be copied.
A light update means that only the properties of the element are copied
without any of the content. E.g. if a layer is tagged with a light
update, it will copy the layer name, opacity, transform, etc.
When the update cache is in use (e.g. elements have been tagged) then
the depsgraph will not trigger a copy-on-write, but an update-on-write.
This means that the update cache will be used to determine what elements
have changed and then only those elements will be copied over to the
eval object.
If the update cache is empty or the data block was tagged with a full
update, we always fall back to a copy-on-write.
Currently, the update cache is only used by the active depsgraph. This
is because we need to free the update cache after an update-on-write so
it's reset and we need to make sure it is not freed or read by other
depsgraphs.
Co-authored-by: @yann-lty
This patch was contributed by The SPA Studios.
Reviewed By: sergey, antoniov, #dependency_graph, pepeland, mendio
Maniphest Tasks: T95401
Differential Revision: https://developer.blender.org/D13984
Simple upgrade of OpenXR to 1.0.22, following the steps from
https://wiki.blender.org/wiki/Source/OpenXR_SDK_Dependency and
rBb69ab42982a1. No changes to Blender code were necessary, only a version
bump.
The primary motivation for this upgrade is to utilize the
`XR_HTCX_vive_tracker_interaction` extension introduced in ver. 1.0.20.
However, the latest release (1.0.22) also adds a number of potentially
useful extensions such as:
- `XR_FB_render_model`
- `XR_HTC_facial_expression`
- `XR_HTC_vive_focus3_controller_interaction`
Ref T95206
Reviewed By: LazyDodo, sybren, mont29
Maniphest Tasks: T95206
Differential Revision: https://developer.blender.org/D13950
Turn off "Sync Length" when splitting an NLA strip.
The NLA Strip-Action Clip has a setting called "Sync Length" that is on
by default and helps to update the length of the clip to the current
action's keyframes so the strip shows the entire action's stored
animation.
When you split one of these strip-action clips in the NLA to trim it
shorter or to move it somewhere else in the NLA tracks to blend or work
with, the "Sync Length" setting stays checked on. You can have many
strips in the NLA that all look to the same action; if you split one
strip, you now have two strips showing or linked to the same action.
To see or edit keyframes on a strip, you enter tweak mode. When you exit
tweak mode, if "Sync Length" is active on the strip-action clip
settings, the strip length is changed/reset to match the action keys.
When you have many clips, this causes all of them to evaluate and update
no matter where they are in the NLA. This destroys/undoes all the user's
work to trim down and place the clips in the NLA.
**Description of the proposed solution:**
This patch/change turns off the "Sync Length" setting when the
split tool is used on a strip/action clip, to help protect the user's
choice to trim and keep the clip at that length. It doesn't change the
ability to turn the setting back on or off, it just makes sure that the
user doesn't accidentally or unknowingly lose work.
The same process happens when an action is created that has a "manual
range" set that is different from the length of the action's start-end
keyframes. It makes sense to do the same thing for the split tool.
**Alternative solutions:**
While the user could know this and turn off this setting by hand, it is
easy to forget, and/or the user might think that it only happened to the
one clip they were editing and not realize it happened to all the
trimmed versions, changing the user's choice without the user knowing it
happened.
**Limitations of the proposed solution:**
It only fixes the split tool; if another tool were created that somehow
impacted the clip length, we would need to have that tool also take the
sync length into account.
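As a rough Python illustration of the setting involved (assuming it is exposed on strips as `use_sync_length`; the object name is hypothetical):
```
import bpy

obj = bpy.data.objects["Armature"]                 # hypothetical animated object
for track in obj.animation_data.nla_tracks:
    for strip in track.strips:
        # After using the split tool, this is now disabled automatically so the
        # trimmed strip length is preserved when leaving tweak mode.
        print(strip.name, strip.use_sync_length)
```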
Reviewed By: sybren
Maniphest Tasks: T84486
Differential Revision: https://developer.blender.org/D10168
From a strict language point of view the code required braces around
the `trgba` initialization. But it is easier to rely on the fact that fields
which are not specified are zero-initialized.
Under some circumstances, simply adding a curve object and going
to edit mode would cause a crash. This is because the evaluated
`CurveEval` was accessed but also freed by the dependency graph.
The fix reverts the part of b76918717d that uses the
`CurveEval` for the curve object bounds. While this isn't ideal,
it was the previous behavior, and some unexpected behavior
with object bounds is much better than a crash. Plus, given the plans
of using the new "Curves" data-block for evaluated curves, this
situation will change relatively soon anyway.
Strip aligning wasn't done when the lengths of 2 strips are equal, but since
these strips are aligned according to their position in the stream, this can
produce an offset.
There are two things achieved by this change:
- No possible downcast of size_t to int when calculating motion steps.
- Disambiguate call to min() which was for some reason considered
ambiguous on 32bit platforms `min(int, unsigned int)`.
On an implementation side the `min()` is defined for a fixed width
integer type to disambiguate uint from size_t on 32bit platforms,
and yet be able to use it for 32bit operands on 64bit platforms without
upcast.
Fixes 32bit platforms (such as i386) in Debian package build system.
Differential Revision: https://developer.blender.org/D13992
This lib allows any shader to use `print()` like functions for
logging and debugging shaders.
Usage is described in the comment at the top of the file.
Added a new "Sharpen Less" kernel to the filter compositor node. The intent here is to provide a much less aggressive sharpening filter that can't simply be solved by toning down the factor on the existing sharpen filter.
The existing "Sharpen" filter uses a "box" kernel:
```
-1 -1 -1
-1 9 -1
-1 -1 -1
```
The new "Sharpen Less" filter uses a "diamond" kernel:
```
0 -1 0
-1 5 -1
0 -1 0
```
The difference between the two is clear to see in the following side-by-side:
{F12847431}
Below shows the difference between the filtering kernels as applied to a B&W render of Suzanne with the UV grid as a texture. The left side of the render uses the existing "Sharpen" filter, and the right side shows the new "Sharpen Less" filter. Notice that the left side is more aggressive in accentuating localized contrasts across the image. This can lead to what appears to be aliasing or striations in the resulting image:
{F12847429}
https://developer.blender.org/T95275
https://blender.community/c/rightclickselect/57Kq/?sorting=hot
{F12847428}
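For illustration only (this is not Blender code), a small NumPy/SciPy sketch comparing the two kernels listed above on a grayscale image:
```
import numpy as np
from scipy.ndimage import convolve

# Existing "Sharpen" (box) kernel and the new "Sharpen Less" (diamond) kernel.
sharpen = np.array([[-1, -1, -1],
                    [-1,  9, -1],
                    [-1, -1, -1]], dtype=np.float32)
sharpen_less = np.array([[ 0, -1,  0],
                         [-1,  5, -1],
                         [ 0, -1,  0]], dtype=np.float32)

image = np.random.rand(64, 64).astype(np.float32)   # placeholder grayscale image

strong = convolve(image, sharpen, mode='nearest')       # aggressive sharpening
soft = convolve(image, sharpen_less, mode='nearest')    # milder sharpening
```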
Reviewed By: #compositing, jbakker
Differential Revision: https://developer.blender.org/D14019
This is almost always meaningless as most code has changed
since the comment was added. Besides this, version control can be used
to check if/when a file was modified.
Some cases of this were kept when they contain details
about the original copyright holder.
Order copyright immediately after the license block,
this was done almost everywhere with a few exceptions.
Remove authors from a few files (we had already removed "Contributors"
section however with old patches being applied this gets added back in).
Also move descriptive text into the doxygen comment block under \file.
In some cases remove the text as it was accidentally copied.
GHOST_ISystem::toggleConsole had a somewhat misleading name:
it could be fed 4 different values, so it was not so much a
toggle as a "set console window state".
This change renames `toggleConsole` to the more appropriately
named `setConsoleWindowState` and replaces the integer it took
with an enum, so it's easy to tell what is being asked of it at
the call site.
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D14020
Symmetrize Armature now also symmetrizes the transform of custom bone
shapes.
Adds a new property to `bpy.ops.armature.symmetrize`:
//Parameters//:
- **direction **(enum in ['NEGATIVE_X', 'POSITIVE_X'], (optional)) –
**Direction**, Which sides to copy from and to (when both are selected)
- **custom_shape **(enum in ['SYMMETRIZE_SAME', 'SYMMETRIZE_ALL',
'SYMMETRIZE_NONE'], (optional)) – **Custom Shapes**, Whether to
symmetrize non-symmetric custom bone shapes, all custom shapes, or
none at all
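A minimal usage sketch, assuming the operator exposes the parameters exactly as listed above (the final property names may differ):
```
import bpy

# In armature Edit Mode, with the relevant bones selected:
bpy.ops.armature.symmetrize(direction='POSITIVE_X',
                            custom_shape='SYMMETRIZE_ALL')
```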
//Rationale//:
Reviewed By: #animation_rigging, Mets, sybren
Maniphest Tasks: T91871
Differential Revision: https://developer.blender.org/D13416
Add files with extension ".woff" and ".woff2" to FILE_TYPE_FTFONT
file type. Allows selecting and using these types of font files.
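For example, a .woff font can now be loaded and assigned from Python (the path and names here are hypothetical):
```
import bpy

font = bpy.data.fonts.load("/path/to/MyFont.woff")   # hypothetical path
text = bpy.data.curves.new("Caption", type='FONT')    # text data-block
text.font = font
```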
See D13822 for more details.
Differential Revision: https://developer.blender.org/D13822
Reviewed by Campbell Barton
Avoid using the uint64_t as an intermediate cast since it complicates
behavior for signed types (which first need to be cast to an int64_t).
Assign both old_value_i & old_value_f from the original value to avoid
the need for different handling of signed/unsigned types.
Reviewed By: JacquesLucke
Ref D14039
A Bump node without a Height input is meaningless and does nothing.
As such, it is used in an old workaround that allows making Node
Group inputs that default to the normal when not connected, by routing
via a no-op Bump node before doing math.
Cycles specifically recognizes this use case and either bypasses
the node, or converts it into a Geometry Normal node, but Eevee
was still evaluating it as usual. That incurred performance cost,
and also normalized the vector unlike Cycles.
This implements the same bypass logic for Eevee. Since I'm not
sure if it's possible to totally remove the node at this stage,
it emits a no-op function call to copy the input vector.
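A hedged Python sketch of the workaround pattern being optimized here: inside a node group, a vector input is routed through a Bump node whose Height input is left unconnected, so the group input defaults to the geometry normal when nothing is plugged in (group and socket names are hypothetical, API as in the 3.x series):
```
import bpy

group = bpy.data.node_groups.new("NormalDefaultInput", 'ShaderNodeTree')
group.inputs.new('NodeSocketVector', "Normal")
group.outputs.new('NodeSocketVector', "Vector")

group_in = group.nodes.new('NodeGroupInput')
group_out = group.nodes.new('NodeGroupOutput')
bump = group.nodes.new('ShaderNodeBump')   # Height input left unconnected on purpose

group.links.new(group_in.outputs["Normal"], bump.inputs["Normal"])
group.links.new(bump.outputs["Normal"], group_out.inputs["Vector"])
```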
Differential Revision: https://developer.blender.org/D14045
Viewport cull bones during selection to avoid depth-picking
reading the depth buffer for bones that aren't in the viewport.
Files with thousands of bones could hang blender for seconds while
selecting. The issue could still happen with overlapping bones or when
zoomed out so all bones are under the cursor, however in practice this
rarely happens.
Now files with many bones select quickly.
Related changes include:
- Split `BKE_pchan_minmax` out of `BKE_pose_minmax`.
- Add `mat3_to_size_max_axis` to return the length of the largest
axis (used for scaling the radius).
Reviewed By: sybren
Maniphest Tasks: T91253
Ref D13990
This new info simplifies greatly the handling of resync and deleting
liboverride hierarchies, and makes those operations more robust to
complex dependencies cases between different hierarchies.
Related to T95283.
This change will make handling of liboverrides hierarchies (especially
resyncing) much easier and efficient. It should also make it more
resilient to 'degenerate' cases, and allow proper support of things like
parenting an override to another override of the same linked data (e.g.
a override character parented to another override of the same
character).
NOTE: this commit only implements minimal changes to add that data and
generate it for existing files on load. Actual refactor of resync code
to take advantage of this new info will happen separately.
Previously, node selection made no distinction between a frame node and
other nodes. So a frame node would be selected by its whole rect or
center (depending on box/lasso/circle select). As a consequence of this,
box and lasso select could not practically be started inside a frame node (with
the intention to select a subset of contained child nodes) because the
frame would be selected immediately and tweak-transforming started.
Circle selecting would always contain the frame node as well (making
transforming a subset of nodes without also transforming the whole frame
impossible).
Now change selection behavior so that for all selection modes only the
border [the margin area that is automatically added around all nodes,
see note below] of a frame node is considered in selection. This makes
for a much more intuitive experience when arranging nodes inside frames.
note: to make the area of interest for selection/moving more obvious,
the cursor changes when hovering over it (as is done for resizing).
note: this also makes the resize margin consistent with other nodes.
note: this also fixes the right resize border (it was exclusive instead of
inclusive like every other border).
Also fixes T46540.
This adds an option to the "Select Similar" operator in edit mode to
select vertices based on vertex crease similarity. The implementation
follows that of the edge crease, with a 1-dimensional KD-tree used to
store and retrieve vertex indices based on crease values.
To maintain compatibility with old files (scripts), the `SIMEDGE_CREASE`
enumeration identifier remains `CREASE`, while the one for the new
`SIMVERT_CREASE` is `VCREASE` to follow the naming convention of other
enum values.
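A minimal usage sketch from Python, assuming the new value is exposed as `'VCREASE'` as described above:
```
import bpy

# In mesh Edit Mode (vertex select), with a vertex of the desired crease active:
bpy.ops.mesh.select_similar(type='VCREASE', threshold=0.01)
```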
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D14037
This Diff changes the computation of the input and output socket
positions so they are always aligned with their corresponding labels.
Reviewed By: campbellbarton
Ref D14025
Regression in 51befa4108
caused negative values to overflow into a uint64_t.
Resolve by casting to a signed int from originally signed types.
This caused D14033 not to work properly.
Assign the actual value before casting to large uint64_t/double types.
This improves readability, especially in cases where both pointer
and integer casts were used in one expression, to make matters worse
clang-format treated these casts as a multiplication.
This also made debugging/printing the values more of a hassle.
No functional changes (GCC produced identical output).
Account for `CurveEval`, which stores the proper deformed and
procedurally created data, unlike the `nurb` list, which has always
just meant a copy of the original curve.
Also account for the case when the curve is empty by using a -1 to 1
fallback bounding box in that case, just like mesh objects.
The problem was that nullptr was returned which is a valid value for
Mesh * and hence the returned optional was treated as having some value.
There was no check for point clouds so that was fixed as well.
Differential Revision: https://developer.blender.org/D14026
Based on discussions from T95355 and T94193, the plan is to use
the name "Curves" to describe the data-block container for multiple
curves. Eventually this will replace the existing "Curve" data-block.
However, it will be a while before the curve data-block can be replaced
so in order to distinguish the two curve types in the UI, "Hair Curves"
will be used, but eventually changed back to "Curves".
This patch renames "hair-related" files, functions, types, and variable
names to this convention. A deep rename is preferred to keep code
consistent and to avoid any "hair" terminology from leaking, since the
new data-block is meant for all curve types, not just hair use cases.
The downside of this naming is that the difference between "Curve"
and "Curves" has become important. That was considered during
design discussons and deemed acceptable, especially given the
non-permanent nature of the somewhat common conflict.
Some points of interest:
- All DNA compatibility is lost, just like rBf59767ff9729.
- I renamed `ID_HA` to `ID_CV` so there is no complete mismatch.
- `hair_curves` is used where necessary to distinguish from the
existing "curves" plural.
- I didn't rename any of the cycles/rendering code function names,
since that is also used by the old hair particle system.
Differential Revision: https://developer.blender.org/D14007
This patch makes it possible to set the UI color of a node's
header bar and override the default from the node's typeinfo.
Currently the color is taken from the `.nclass`
member of the node's bNodeType->TypeInfo struct.
This is created once when registering the node.
The TypeInfo is used for both UI and non-UI functionality.
Since the TypeInfo is shared, the header bar for the node
can't be changed without changing all nodes of that type.
The Map Range node is shown as a `Converter` or blue color by default.
This patch allows this to be changed dynamically to `Vector` or purple.
This is done by adding a `ui_class` callback to node typeinfo struct.
Reviewed By: HooglyBoogly
Differential Revision: https://developer.blender.org/D13936
If the console was hidden on Windows, it would
be made visible during shutdown in an effort
not to hide a potentially active interactive console
session. This however did not take into account
whether Blender was actually launched from an interactive
console, causing the console window to briefly flash
during shutdown even when launched from the new
launcher; the brief flash concerned some users.
This change adjusts the behaviour to restore the
console only when Blender was started from the
console.
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D14016
If the "Asset Indexing" option was disabled in Preferences > Experimental
> Debugging, the asset-view template (used by pose libraries for
example) would still use the indexing.
Previously, the nearest interpolation filter was used for preview, because
it offered good performance, and bilinear was used for rendering. This
is not always desirable behavior, so the filter method can now be chosen by
the user. The chosen method will be used for both preview and rendering.
Reviewed By: ISS
Differential Revision: https://developer.blender.org/D12807
Dashed line display was changed since 2.7x, using white on black
instead of grey on black.
This made selection difficult to see as white is brighter
than the default selected color.
Match the grey used in 2.7x.
This patch from Henrik Dick improves the continuity between the
grids forming corners and the edge polygons on multi-segment bevels.
For details, see patch D13867.
Fixes T95384. New exporter was missing a fix for T94516 that recently got applied to the python exporter.
Also changed the obj export tests code so that when save_failing_test_output is requested and MTL result is different from the golden expectation, it is saved as well, similar to how it's done for the OBJ file result.
This change from Aras further parallelizes within large meshes (the previous one
just parallelized over objects).
Some stats on a Windows machine, AMD Ryzen (32 threads):
(one mesh) Monkey subdivided to level 6: 4.9s -> 1.2s (blender 3.1 was 6.3s; 3.0 was 49.4s).
(one mesh) "Rungholt" minecraft level: 8.5s -> 2.9s (3.1 was 10.5s; 3.0 was 73.7s).
(lots of meshes) Blender 3 splash: 6.2s -> 5.2s (3.1 was 48.9s; 3.0 was 392.3s).
On a Linux machine (Threadripper, 48 threads, writing to SSD):
Monkey - 5.08s -> 1.18s (4.2x speedup)
Rungholt - 9.52s -> 3.22s (2.95x speedup)
Blender 3 splash - 5.91s -> 4.61s (1.28x speedup)
For details see patch D14028.
Caused by rBf75449b5f2b04b79, which was missing a null check when
attempting to extract a `CustomData` pointer from a mesh that might
be null if the object isn't a mesh object. The commit added null checks
elsewhere, so simply adding them here is a straightforward fix.
Fixes T95526, T95539
Also fixed conflicts due to the change in file writing in the new obj exporter
in master, and fixed one of the tests that was added in master but not 3.1.
The new wavefront .obj exporter in 3.1 was producing slightly invalid parm line syntax (missing u), and was not setting first/last N params to zeroes and ones for curves with "endpoint" flag properly.
Removes unnecessary calls to blf_glyph_cache_find, simplifies
blf_font_size, and reduces calls to it. blf_glyph_cache_new
and blf_glyph_cache_find made static.
See D13374 for more details.
Differential Revision: https://developer.blender.org/D13374
Reviewed by Campbell Barton
Allowing setting and storing of the default font size as float.
See D13230 for more details.
Differential Revision: https://developer.blender.org/D13230
Reviewed by Campbell Barton
Removal of declaration of unused blf_font_draw_ascii
See D13624 for more details.
Differential Revision: https://developer.blender.org/D13624
Reviewed by Campbell Barton
Cache the font size's ideal fixed width column size in the glyph cache
rather than the font itself to improve performance.
See D13226 for more details.
Differential Revision: https://developer.blender.org/D13226
Reviewed by Campbell Barton
blf_glyph_cache_free does not need to be public.
See D13395 for more details.
Differential Revision: https://developer.blender.org/D13395
Reviewed by Campbell Barton
`viewops_data_alloc` allocates and stores some pointers in
`ViewOpsData` while `viewops_data_create` reuses already stored
pointers and also stores others in `ViewOpsData`.
The similar names and usages can confuse and in this case it also
creates a dependency on the order in which these functions are called.
Merging these functions simplifies usage and deduplicates code.
Since we have a node that sets a mesh's auto smooth angle
(unfortunately, in retrospect), we generally can't assume at all
that value is the same as whatever input mesh. Similar asserts
were removed previously in 8216b759e9. While the attempt
at assertions to clarify assumptions is noble, this one doesn't
make sense anymore.
I found this while investigating T95479.
Differential Revision: https://developer.blender.org/D14009
- Fix image.format conversion to string
- Fix warnings about ARB_conservative_depth not found even if GL > 4.2
- Add `array(type)` define for portable array definition
Thanks to the new `ShaderCreateInfo` we now include source files without
any modification. This lets us query which source files are passed to the
`print_log` function. The log will now include a file with row and column
numbers, which is interpreted as a link in most IDEs.
DEBUG_CONTEXT_LINES will add more lines around the error lines for more
context. This is also useful if the error line is imprecise (because of
driver bugs) and the reported line is not sufficient to know the location
of the error.
The DEBUG_DEPENDENCIES option will display the list of included files in
the shader sources. Note that it will not print generated source.
This commit also fixes some issues with unhelpful logs, bogus row & column
numbers, other error formats, and a bug when the row was 0.
Source files are now only referenced and listed for the driver to ingest.
Shader sources now includes generated data if any.
Also cleans up gpu_shader_dependency_get_builtins casts.
This includes multiple commits:
- Fix crash when using std::cerr for error output
- Add auto_resource_location which overrides all resources location (not vert input)
- Improve codestyle of error reporting.
- Add type conversion to string and to `eGPUType`
- Add comparison operator (will be used for hash collision resolution).
- Add members related to generated code (codegen)
Detected on `amdgpu-pro` libGL implementation. The workaround is to not
use explicit location for vertex attributes. This is not a real problem
as we don't rely on them for now.
This was an oversight in the patch that added this node,
the default merge distance is meant to be the same as the weld
modifier's, 0.001m, meaning that in most situations it removes
vertices generally at the same location.
- Compute ref lets the size of the dispatch be modified just before drawing.
- Barrier call makes it possible to chain multiple compute passes in one pass.
- DRW_shgroup_vertex_buffer_ref is the analog of DRW_shgroup_uniform_block_ref.
This commit adds infrastructure for 8 bit signed integer attributes.
This can be useful given the discussion in T94193, where we want to
store spline type, Bezier handle type, and other small enums as
attributes.
This is only exposed in the interface in the attribute lists, so it
shouldn't be an option in geometry nodes, at least for now.
I expect that this type won't be used directly very often, it
should mostly be cast to an enum type. However, with support
for 8 bit integers, it also makes sense to add things like mixing
implementations for consistency.
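A hedged Python sketch of creating such an attribute, assuming the new type is exposed under the RNA identifier `'INT8'` (the attribute name is hypothetical):
```
import bpy

mesh = bpy.data.objects["Cube"].data                     # hypothetical mesh object
attr = mesh.attributes.new(name="small_enum", type='INT8', domain='POINT')
attr.data[0].value = 3                                   # stored as 8 bit signed ints
```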
Differential Revision: https://developer.blender.org/D13721
Required to solve a crash on Windows (T95367)
Mostly an uneventful update, except for FreeType
giving its cmake options a rename.
Reviewed By: brecht, sybren
Differential Revision: https://developer.blender.org/D13968
Every translation unit that included the modified headers generated
some extra code, even though it was not used. This adds unnecessary
compile time overhead and is annoying when investigating the
generated assembly.
Initialization with `MEM_new()` depends a lot on the initialization rules
of C++, which are not obvious. Calling it with no arguments to be passed
to the constructor may cause the resulting object to be implicitly 0
initialized (or parts of it). This can have an impact on performance
sensitive code, so it's something to document.
Alternatively we could enforce default initialization (as opposed to the
value initialization that happens now), but this could cause
uninitialized memory in a way that isn't obvious either. This is a
possible source of bugs, so Jacques and I decided against it.
Use the global F2 rename panel for the NLA editor to rename NLA strips.
Reviewed By: sybren, RiggingDojo
Differential Revision: https://developer.blender.org/D12300
It was missing framework flags added in `setup_platform_linker_flags`.
Keep it off until QuickLook Thumbnailing is implemented.
Differential Revision: https://developer.blender.org/D13997
The check for the existence of custom data layers did not take the wrapper
nature of the mesh into account.
The quickest and safest solution for 3.1 is to handle the branching of the
checks in the draw manager.
Ideally both wrapper and mesh access will happen via the same public API
without branching in the "user" code. That is something outside of the fix
for the coming release though.
Differential Revision: https://developer.blender.org/D14013
Instead of using a manual list of dependencies, the new implementation
scans all shader files beginning with `gpu_shader_material_` and extracts
all function declarations.
This way we can deduce the internal dependencies between these files.
This new implementation is merged with the manual pragma dependency system
used by other shader files. This way it is compatible with the shader
logging system and does not require any string duplication during shader
building.
This callback was only needed to allow specific handling of proxies; now
that these have been removed, the generic
`BKE_lib_id_make_local_generic` code works for objects as well.
Byte images are converted to float. Due to an issue with how the VSE cache
frees its images, we cannot store these float buffers, which leads
to recalculating them for each change in the image editor.
This fix will reduce the slowdown to areas that have the root cause of
the memory leak, so the buffers can be reused between refreshes.
NOTE: The root cause should still be fixed.
Thanks for reporting Sybren!
Didn't cause visible issues, because the layout uses spacers to
right-align text, which happens to use the region size with pixel-size
applied for calculations.
Some navigation operators check flags like `RV3D_LOCK_ROTATION` in the
invoke function to see if the operation can be performed.
As the comment indicates, these checks should be in the poll function.
This avoids redundant initialization.
Note that this brings functional changes as now operators with context
`EXEC_DEFAULT` will also be affected by the flag.
(There doesn't seem to be a problem with the current code).
Differential Revision: https://developer.blender.org/D14005
This introduces `DEBUG_DEPENDENCIES` (not a CMake flag but a local define)
which, when set to 1, will list all the original files included in this
shader while omitting the generated / non-original code.
This is the layout used by the amdgpu-pro GL implementation.
This also adds some sanitizing of the parse output because the bespoke
implementation has bogus errors when it comes to compute shaders.
The strings in the `get_description` functions for operators need
translation, they are not found by the translation system automatically,
and there is no translation applied afterwards either (as far as I could
tell). Some used `N_` before, but most did nothing.
Differential Revision: https://developer.blender.org/D14011
The case where Y rotation is mapped to Y rotation was not handled.
This is now fixed.
Also added an automated test to make sure that the symmetrize operator
functions as intended.
Reviewed By: Sybren
Differential Revision: http://developer.blender.org/D9214
This was caused by macros being interpreted as recursive. Work around this by
not using macros at all and just defining local variables, which
will hopefully be optimized out.
Crash introduced by {rB0cb5eae}.
When switching between drawing modes the region.draw_buffer could be
uninitialized when the gizmo depth test is performed. When the mouse was
placed on top of a gizmo part that could be highlighted, it would crash.
This fix adds an early exit when depth testing is requested but there
isn't a draw_buffer. Not sure this is a root cause fix.
Reported by multiple animators in Blender Studio.
Technically, this can't be relied upon in the long term. It worked more or
less accidentally before. It was broken by a previous fix accidentally. I mainly
bring it back because rBa985f558a6eb16cd6f0 was not expected to have
this side effect.
Note, this change can result in slower performance. Writing to a vertex
group is less efficient than using a generic attribute.
Previously, these methods used the more generic substring-finding
algorithm, which is more complex and slower.
Using the more specialized methods results in a noticeable speedup
in the obj importer (D13958).
Differential Revision: https://developer.blender.org/D14012
When viewing the backdrop on top of the node grid, the grid would be
rendered black when the mode wasn't set to RGBA. This patch fixes this by
reverting the previous fix for drawing the backdrop and implementing a
different one that recomputes the UV coordinates at the screen edges.
DNAstr was assumed to be 4-byte aligned which is not necessarily
the case for byte-arrays.
Use a compiler attribute to ensure this is the case.
Thanks to @mtasaka for investigating and providing a patch.
Part of T91671.
Not much else to say, this is mainly a massive deletion of code.
Note that a few cleanups possible after this proxy removal were kept out
of this commit to try to reduce a bit its size.
Reviewed By: sergey, brecht
Maniphest Tasks: T91671
Differential Revision: https://developer.blender.org/D13995
This doesn't make a user visible difference since it's only used for
brackets at the moment, this is more for general correctness as the
width calculation for mono-spaced text drawing is different
(as it uses BLI_wcwidth).
ED_transverts_create_from_obedit expected an evaluated object.
Add flag to request TX_VERT_USE_MAPLOC to be set, which avoids having to
calculate this data when it's not used as well as the requirement
that the input object be evaluated from the depsgraph.
This fixes the crash by removing the `do_view3d_header_buttons` handler.
The code can work at a higher level here, using the operator for setting
the select mode, which makes this patch a cleanup as well.
The operator now has a description callback to add the custom
description used for the behavior in its invoke method.
Differential Revision: https://developer.blender.org/D13660
Since f59767ff97, these hair layer types are unused. Since DNA
compatibility was broken with any files that would contain them, the
indices can be reused to avoid growing custom data's typemap.
In the future when we have a docs staging area it will be
important to change where this JSON is pulled from.
For now, always pull from the "Production" versions
This commit adds a version switch similar to the one on the user manual;
in the future it would be nice to refactor both of these into more generic
code that works for both. Maybe develop this into a Sphinx extension.
As part of this change I had to change how the Blender hash is displayed:
instead of showing the version hash in the top left, it has been moved to the page footer.
This change will also be backported to 2.93 LTS and 3.0.
This patch refactors the "Hair" data-block, which will soon be renamed
to "Curves". The larger change is switching from an array of `HairCurve`
to find indices in the points array to simply storing an array of offsets.
Using a single integer instead of two halves the amount of memory for that
particular array.
Besides that, there are some other changes in this patch:
- Split the data-structure to a separate `CurveGeometry`
DNA struct so it is usable for grease pencil too.
- Update naming to be more aligned with newer code and the style guide.
- Add direct access to some arrays in RNA
-- Radius is now retrieved as a regular attribute in Cycles.
-- `HairPoint` has been renamed to `CurvePoint`
-- `HairCurve` has been renamed to `CurveSlice`
- Add comments to the struct in DNA.
The next steps are renaming `Hair` -> `Curves`, and adding support
for other curve types: Bezier, Poly, and NURBS.
Ref T95355
Differential Revision: https://developer.blender.org/D13987
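As a rough illustration of the offsets layout (plain Python, not Blender's actual data structures or API):
```
# Hypothetical illustration only; not Blender's API.
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),                   # curve 0: 2 points
          (0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)]  # curve 1: 3 points

# One offset per curve plus a final entry; curve i owns the points in the
# range offsets[i] .. offsets[i + 1] - 1. Storing a single integer per curve
# instead of a (start, size) pair halves the memory for this array.
offsets = [0, 2, 5]

def curve_points(curve_index):
    start, end = offsets[curve_index], offsets[curve_index + 1]
    return points[start:end]

assert curve_points(0) == points[0:2]
assert curve_points(1) == points[2:5]
```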
This is partially caused by a stupid mistake in cfa53e0fbe
where I missed initializing the `vert_normals` pointer in
`MResolvePixelData`. It's also caused by questionable assumptions
from DerivedMesh code that vertex normals would be valid.
The fix used here is to create a temporary mesh with the data necessary
to compute vertex normals, and ensure them here. This is used because
normal calculation is only implemented for `Mesh` and edit mesh, not
`DerivedMesh`. While this might not be great for performance, it's
potentially aligned with future refactoring of this code to remove
`DerivedMesh` completely. Since this is one of the last places the data
structure is used, that would be a great improvement.
Differential Revision: https://developer.blender.org/D13960
Iterating over a scene's objects while we modify them (through proxy to
override conversion code) is asking for problems (use after free, etc.).
Instead, all proxy objects need to be gathered first in a temporary
list, and processed all at once in a second loop.
`BKE_collection_object_add` ensures the given object is added to an editable
collection, and not e.g. a linked or override one.
However, some processes like do_version also manipulate collections from
libraries, i.e. linked collections; in those cases we need a version of
the code that unconditionally adds the given object to the given
collection.
This is from patch D13988. It removes the "- New" from the menu of the
new obj exporter, changes the default addon to just io_import_obj,
and does the right versioning thing.
Also disables the python tests for the old python exporter.
This patch reverts the normal behavior of the spotlights. In the last fix,
the returned normal of a spot light was equal to its direction. This broke
some texturing methods used by artists.
Differential Revision: https://developer.blender.org/D13991
The view3d_edit.c file is already getting big (5436 lines) and mixes
operators of different uses.
Splitting the code makes it easier to read and simplifies the
implementation of new features.
Differential Revision: https://developer.blender.org/D13976
Those cases are fairly hard to track down... Added some more checks,
also at lower, more generic levels of object editing, and fixed a
core check in liboverride (previously the code was assuming that an override
of a collection could only have overrides of objects or linked objects,
but this is not necessarily true).
This is from patch D13988. It removes the "- New" from the menu of the
new obj exporter, changes the default addon to just io_import_obj,
and does the right versioning thing.
Also disables the python tests for the old python exporter.
This displays the error source so that an IDE can identify it
as a path and let the programmer follow a direct link instead of
searching for the string manually.
This only works for shaders compiled from unaltered sources as
it uses the source `char*` as key for filename search.
Use the ID.recalc flag to detect when updates after a frame-change are
needed, since comparing the last calculated frame doesn't take undo into
account (see code-comment for details).
`ID_RECALC_AUDIO_SEEK` has been renamed to `ID_RECALC_FRAME_CHANGE`
since this is not only related to audio; internally, however, this flag is
still categorized in `NodeType::AUDIO`.
Reviewed By: sergey
Ref D13942
The issue was happening with a specific file where the ID management
code was not fully copying all modifiers because of the extra check
in `BKE_object_support_modifier_type_check()`.
While it is arguable that copy-on-write should be a 1:1 copy, there is
no real need to maintain the per-modifier pointer to its original.
Use its SessionUUID to perform the lookup in the original datablock.
The downside of this approach is that it is a linear lookup instead of
direct pointer access, but the upside is that there are fewer pointers
to manage and that the file with unsupported modifiers behaves
correctly without any asserts.
Differential Revision: https://developer.blender.org/D13993
The modifiers are mapped between original and evaluated objects based on
their session IDs. The pointer to original modifier is no longer needed
for the backup: it remained from the initial implementation which was
rewritten at some point.
This is a preparation for removal of the pointer to original modifier.
c9d9bfa84a caused a regression when
the right-mouse select action was set to "Select & Tweak" (default).
Now the fallback tool works with RMB select as it did before.
For some reason, on some GL implementations (amdgpu) this particular
syntax shifts the error lines.
Remove the context lines by default as they are not useful anymore.
We now use ShaderCreateInfo as a way to set up the custom material
implementation.
This is more versatile and flexible while not requiring parsing of
snippets of code.
This will allow to keep the access to deprecated DNA proxy data in that
specific file, instead of allowing deprecated accesses in the whole
override kernel code.
Part of T91671.
Part of the resynching code would access collections' objects base
cache, which can be invalid at that point (due to previous ID remapping
and/or deletion). Use a custom recursive iterator over collections'
objects instead, since those 'raw' data like collection's objects list,
and collection's children lists, should always be valid.
Found while investigating a studio production file.
This is a temp fix for a memory leak where the VSE isn't aware that a
float representation of the image could exist. The VSE somehow doesn't
clear it (refcounter is still 1).
The workaround is just to let the image engine clean up all the data it
created. Potentially this adds more overhead when buffers are needed
more than once.
This refactors how output attributes are computed in the geometry
nodes modifier. Previously, all output attributes were computed one
after the other. Every attribute was stored on the geometry directly
after computing it. The issue was that other output attributes might
depend on the already overwritten attributes, leading to unexpected
behavior.
The solution is to compute all output attributes first before changing the
geometry. Under specific circumstances, this refactor can result in a speedup,
because output attributes on the same domain are evaluated together now.
Overwriting existing attributes might have become a bit slower, because we write
the attribute into a new buffer instead of using the existing one.
Differential Revision: https://developer.blender.org/D13983
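The ordering problem and the fix can be sketched roughly like this (plain Python with dictionaries standing in for the attribute API; all names are made up):
```
# Illustrative sketch only (plain dicts, not the real attribute API): computing
# all outputs before storing them avoids one output reading another output
# that has already been overwritten.
geometry = {"height": [1.0, 2.0, 3.0]}

outputs = {
    # Both outputs are meant to read the *input* "height" attribute.
    "height": lambda geo: [h * 2.0 for h in geo["height"]],
    "offset": lambda geo: [h + 1.0 for h in geo["height"]],
}

# Evaluate everything first, then write back in a second step. Storing each
# result immediately would make "offset" see the already doubled heights.
computed = {name: fn(geometry) for name, fn in outputs.items()}
geometry.update(computed)

assert geometry["offset"] == [2.0, 3.0, 4.0]  # based on the original heights
```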
Mistake in 974981a637: if the edit data is not present then the
origindex codepath is to be used. Added a brief note about it at the
top of the file.
Ideally the edit mesh would be removed from non-bmesh-wrappers,
but this would require changes in the draw manager to make a proper
decision about drawing edit mode overlays.
Now all proxies will always be converted to library overrides. If
conversion fails, they are simply 'disabled'.
This should be the last 'user-visible' step of proxies removal.
Remaining upcoming commits will remove internal ID management, depsgraph
and evaluation code related to proxies.
Also bump the blendfile subversion.
Part of T91671.
So far linked proxies were just kept as-is; this is no longer an option.
Attempt to convert them into liboverrides as much as possible, though
some cases won't be supported:
- Appending proxies has been broken for a long time, so conversion will fail
here as well.
- When linking data, some cases will fail to convert properly, in
particular if the linked proxy object is not instanced in a scene
(e.g. when linking a collection containing a proxy as an
empty-instanced collection instead of a view-layer-instanced collection).
NOTE: conversion when linking/appending is done unconditionally; the option
to not convert on file load will be removed in the next commit anyway.
Part of T91671.
This will help when dealing with liboverrides from other library files,
e.g for resync or proxies conversion.
This commit only affects proxy conversion.
Part of T91671.
We have this node for shader and geometry nodes. Compositor can also
work with vectors, and this can help with that.
Reviewed By: manzanilla
Maniphest Tasks: T95385
Differential Revision: https://developer.blender.org/D12919
The geometry nodes modifier currently always adds a dependency
relation from the evaluated geometry to the object transform. However,
that can be avoided unless there is a collection or object info node in
"Relative" mode.
In order to avoid requiring dependency graph relations updates often
when editing a node tree, this patch doesn't check if the node is muted
or if the data-block sockets are empty before adding the dependency.
Fixes T95265
Differential Revision: https://developer.blender.org/D13973
Function `IMB_indexer_get_seek_pos()` can return a non-zero seek position for
frame index 0. This causes seeking to an incorrect GOP and scanning ends
with failure.
Hard-code the first frame index seek position to 0.
Differential Revision: https://developer.blender.org/D13974
Some items on the Quick Setup screen can be truncated with some
languages and/or with High DPI monitors. This patch adjusts column
sizes and turns off the expand on Spacebar options, making everything
fit a bit better.
See D9853 for more details.
Differential Revision: https://developer.blender.org/D9853
Reviewed by Julian Eisel
Use opt-in instead of opt-out to make the code easier to read.
Add a combined flag enum.
Make restrict an inverse flag option because it is so rarely
used.
This adds the possibility to use the `gpu_BaryCoord[NoPersp]`
builtin to support barycentric coordinates without geometry shader.
The `BuiltinBits::LAYER` builtin needs to be manually added
to the `GPUShaderCreateInfo` in order to use this feature.
Note: This is only available for shaders using `GPUShaderCreateInfo`.
A geometry shader fallback is generated if the extension
`AMD_shader_explicit_vertex_parameter` is not available.
`NV_fragment_shader_barycentric` was not considered because it is not
present inside the `glew.h` we use and seems to only be available
with Vulkan.
This adds the possibility to use the `gpu_Layer` builtin to
support layered rendering without geometry shader.
The `BuiltinBits::LAYER` builtin needs to be manually added
to the `GPUShaderCreateInfo` in order to use this feature.
Note: This is only available for shaders using `GPUShaderCreateInfo`.
A geometry shader fallback is generated if the extension
`AMD_shader_explicit_vertex_parameter` is not available.
- Scan all static shaders for builtins on startup.
- Add possibility to manually add builtins.
- `ShaderCreateInfo.builtins_` contain builtins from all stages.
If an entire cyclic stroke was selected, calling "Separate by Points"
would leave a gap in the new object (making the new stroke non-cyclic).
The patch makes sure that if we separate by points and all points are
selected, we fall back to separate by stroke.
Reviewed By: antoniov
Maniphest Tasks: T91463
Differential Revision: https://developer.blender.org/D12527
This patch fixes the error that pops up
(`Error: Unable to execute '... Mode Toggle', error changing modes`)
when trying to switch to e.g. draw mode from a grease pencil object
that was saved in draw mode in an inactive scene when the file was loaded.
Note that this does not fix the bigger issue described in T91243.
The fix makes sure that we reset all the mode flags on the grease pencil
data when we set the mode to object mode.
Reviewed By: antoniov
Maniphest Tasks: T89514
Differential Revision: https://developer.blender.org/D12419
When adding an asset library in the Preferences, set the name of the new
library to the chosen directory's name by default. That avoids having to
set it manually which can be annoying. Previously I thought it would be
nice to show the name button in red then, making the user aware that
they have to give it a name, but that appears to be more annoying than
useful/practical after all.
The issue was that the code only looked at `dob->ob`
instead of `dob->ob_data` which is necessary since
rB5a9a16334c573c4566dc9b2a314cf0d0ccdcb54f.
This now uses the same pattern that is used in other places
where `BKE_object_replace_data_on_shallow_copy` is used.
Blender would crash when renaming a bone in Edit Mode, saving, and
then selecting/deselecting.
Caused by a mistake in 0f89bcdbeb: we cannot "short-circuit" the
CoW update if it was explicitly requested.
Safest for now solution seems to be to store whether the CoW component
has been explicitly tagged, so that the following configuration can be
supported:
DEG_id_tag_update(id, ID_RECALC_GEOMETRY);
DEG_id_tag_update(id, ID_RECALC_COPY_ON_WRITE);
Differential Revision: https://developer.blender.org/D13966
Since splitting the depth and the color shader in the image engine the
backdrop wasn't visible anymore. The reason is that the min/max UV
coordinates were never working for the node editor backdrop, which uses
its own coordinate space.
This partial fix will ignore the depth test when drawing the color part
of the backdrop. This will still have artifacts that are visible when
showing options other than RGBA.
A proper fix would be to calculate the UV VBO in UV space and not in
image space.
Can also happen in other places when the overlay engine is active. Some
parts of the overlay engine use builtin shaders, but disable the color
space conversion to the target texture.
Currently the overlay engine has its own set of libraries it can
include and defines a macro to pass through the color space conversion.
The library include mechanism currently fails when it can't find the
builtin library in the libraries of the overlay engine. This only
happened in debug mode.
This change will not fail, but warns the developer if a library could
not be included. In the future this should be replaced by a different
mechanism that can disable the builtin library. See {T95382}.
Since d9c6ceb3b8 partial updates to
normals in sculpt-mode were accumulating into the current normal
instead of a zeroed value.
Zero vertex normal values tagged for calculation before accumulation.
Reviewed By: HooglyBoogly
Ref D13975
From investigating T95185, it's important that the normal returned by
SCULPT_vertex_normal_get always matches the PBVH normal array.
Since this is always initialized in the PBVH, there is no advantage
in storing the normal array in two places; it only adds the possibility
that future changes cause different mesh normals to be used.
Split out from D13975.
Since 0ea0ccc4ff, `AV_PIX_FMT_YUV444P` pixel format was used for
lossless renders, which did override `AV_PIX_FMT_YUVA420P` format when
"RGBA" output is chosen. VP9 encoder doesn't seem to support
`AV_PIX_FMT_YUVA444P` pixel format, so use `AV_PIX_FMT_YUVA420P` for
lossless RGBA output instead.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D13947
Currently, audio and video strips are synchronized based on data from
the media stream, which is nice, but this causes gaps between strips.
This synchronization was implemented by moving the movie strip position
relative to the sound, which doesn't make much sense for users, who are
mostly interested in editing video.
The code was a bit hard to read, so it has been simplified. Ideally the video
stream time would be easily accessible so synchronization could be done
at any time, but this is not necessary at this point.
Reviewed By: zeddb
Differential Revision: https://developer.blender.org/D13948
Correct corner radius of the node outline to prevent a noticeable gap in
some cases.
---
Currently we make a small mistake in the creation of the node outline:
We offset the rectangle describing the outline by the outline thickness,
but we don't adjust the corner radius accordingly.
Therefore the rounded corner of the outline and the node body are not
concentric which can sometimes lead to a visible gap at the corner.
How noticeable it is depends on the theme, the screen's dpi and the
line thickness set in the preferences.
Simply adjusting the corner radius for the outline to also be increased
by the outline thickness fixes this small issue.
| display, line thickness | **patch** | **master** |
| --- | --- | --- |
| 1080p, default/thin | {F12835304} | {F12835305} |
| retina, thin | {F12835306} | {F12835307} |
The issue was mentioned by @hitrpr
Reviewed By: Blendify
Differential Revision: https://developer.blender.org/D13955
Use the evaluated mesh to generate the Adjacent Faces margin.
Baking used the evaluated mesh, but generating the margin used the base
mesh. This would lead to generating the margin from a stale UV map when the
UV editor was open and the UV map was changed. Fix it by passing the same
mesh as used for baking through to the margin generation.
Differential Revision: https://developer.blender.org/D13938
The new Adjacent Faces method border lookup fails in some directions around
45 degrees.
* Use 8 Dijkstra directions (also diagonally) to determine which polygon is the
closest to each pixel. Using only Manhattan distance led to large parts of
the texture being matched with the wrong polygon.
* Use neighbouring polygons for the edge search. The Adjacent Faces algorithm needs
to determine the closest edge, in UV space, for each pixel. To speed this up,
first a map is built which finds the closest polygon for each pixel along
horizontal, vertical and diagonal steps. Because this can sometimes be one
edge off, we first look in the polygon from the map, and if that fails also
check the edges of its neighbouring UV polygons.
Differential Revision: https://developer.blender.org/D13935
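The idea of the closest-polygon map can be sketched roughly as follows (illustrative Python only, not the actual bake code; the grid size, seeds and neighbour handling are simplified assumptions):
```
# Rough sketch: propagate seed pixels (pixels covered by a polygon) to their
# 8 neighbours until the map stabilizes, keeping the nearest seed per pixel.
import math

W, H = 8, 8
NEIGHBOURS = [(-1, -1), (0, -1), (1, -1), (-1, 0),
              (1, 0), (-1, 1), (0, 1), (1, 1)]

seeds = {(1, 1): 0, (6, 6): 1}                            # pixel -> polygon index
closest = {p: (poly, p) for p, poly in seeds.items()}     # pixel -> (poly, seed pixel)

changed = True
while changed:
    changed = False
    for x in range(W):
        for y in range(H):
            for dx, dy in NEIGHBOURS:
                n = (x + dx, y + dy)
                if n not in closest:
                    continue
                poly, seed = closest[n]
                dist = math.dist((x, y), seed)
                if (x, y) not in closest or dist < math.dist((x, y), closest[(x, y)][1]):
                    closest[(x, y)] = (poly, seed)
                    changed = True

# closest[(x, y)][0] is the polygon used as the starting point for the
# per-pixel closest-edge search described above.
```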
For those EEVEE passes a bit of trickery with pointer offsets allows us to
get the owning view layer, so path generation is not too bad.
Also moved the ViewLayer path generation itself into a public utility, to
avoid duplicating code.
NOTE: Doing the same for AOVs would be needed, but since pointer offsets
won't help us here to find the owning view layer, not sure how to do it
nicely yet (the only solution I can think of is to loop over all AOVs of all
ViewLayers of the scene to find it :( ).
Reported by Beau Gerbrands (@Beaug), thanks.
The image buffer was visible but the buffer wasn't available. In the case
where the color-only overlay of the render result was displayed, the image
buffer was not checked to be valid.
This patch adds a null-pointer check in `IMB_alpha_affects_rgb`
to solve this crash.
This commit specifies the exact Python version which is included in the
package name, thereby allowing `install_deps.sh` to suggest
"`-D PYTHON_VERSION=3.10`" correctly.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D13925
Empty (UDIM) tiles were drawn with a transparency checkerboard. They
should be rendered with a border background. The cause is that the image
engine would select a single area that contained all tiles and draw them
as being part of an image.
The fix is to separate the color and depth part of the image engine
shader and only draw the depths of tiles that are enabled.
While this didn't cause any user visible bugs, this wouldn't
have behaved as intended since the timer would never run again once
wmTimer.ntime was set to NAN.
GPU_select originally used GL_SELECT which defined the format for
storing the selection result.
Now this is no longer the case, define our own struct - making the code
easier to follow:
- Avoid having to deal with arrays in both `uint*` and `uint(*)[4]`
multiplying offsets by 4 in some cases & not others.
- No magic numbers for the offsets of depth & selection-ID.
- No need to allocate unused members to match GL_SELECT
(halving the buffer size).
Remove functions and function pointers that were never set or never
used at all. The "tessface" original index handling in `subsurf_ccg.c`
can be removed because the data was never used.
This function and flags weren't used outside of DerivedMesh
code, and since the plan is to remove the data structure, it makes
sense to remove complexity where possible.
This is a patch from Aras Pranckevicius, D13927. See that patch for full
details. On Windows, the many small fprintfs were taking up a large amount
of time and significant speedup comes from using snprintf into chained buffers,
and writing them all out later.
On both Windows and Linux, parallelizing the processing by Object can also lead
to a significant increase in speed.
The 3.0 splash screen scene exports 8 times faster than the current C++ exporter
on a Windows machine with 32 threads, and 5.8 times faster on a Linux machine
with 48 threads.
There is admittedly more memory usage for this, but it is still using 25 times
less memory than the old python exporter on the 3.0 splash screen scene, so
this seems an acceptable tradeoff. If use cases come up for exporting obj files
that exceed the memory size of users, a flag could be added to not parallelize
and write the buffers out every so often.
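The buffering/parallelism idea can be sketched roughly like this (illustrative Python, not the actual C++ exporter; the object data, helper names and output path are made up, and real parallel speedup comes from the C++ threads in the exporter):
```
# Sketch: format every object into its own in-memory buffer (possibly in
# parallel), then write all buffers out in one pass instead of issuing many
# small fprintf-style writes.
from concurrent.futures import ThreadPoolExecutor
from io import StringIO

objects = {"Cube": [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
           "Plane": [(0, 0, 1), (1, 0, 1)]}

def format_object(item):
    name, verts = item
    buf = StringIO()
    buf.write(f"o {name}\n")
    for x, y, z in verts:
        buf.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
    return buf.getvalue()

with ThreadPoolExecutor() as pool:
    buffers = list(pool.map(format_object, objects.items()))

with open("example.obj", "w") as f:
    f.writelines(buffers)  # single sequential write of the chained buffers
```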
Previously, the new obj exporter was only exporting per-vertex normals for faces
marked as "smooth". But a face can have custom normals, as soon as the normals
data layer exists. This change makes it follow the behavior of USD & Collada
exporters and the old Python one, which also export per-vertex normals as soon
as the layer is there. (From Patch D13957.)
This is similar to e032ca2e25 which removed the
callback for volumes. Now that we have geometry sets, there is
no need to define a callback for every data type, and this wasn't
used. Procedural curves/hair editing will use nodes rather than new
modifier types anyway.
Since `event->xy` is now an array these functions can be
simplified to accept the sample point as an array.
Clarifications were also made to variable names:
- `eye->last_x/y` --> `eye->cursor_last`
- `mx/my` --> `cursor`
Differential Revision: https://developer.blender.org/D13671
Previously, macros were ifdef'd using the CMake option `WITH_INTERNATIONAL`.
However, this is unnecessary as the functions themselves have checks for building without internationalization.
This also means that many `add_definitions(-DWITH_INTERNATIONAL)` are also unnecessary.
Reviewed By: mont29, LazyDodo
Differential Revision: https://developer.blender.org/D13929
This fixes a regression from rBa0edee712a79239133ff840f911f6416d4c41855.
Issue being the curve map not being initialized in the GPU shader function.
Fixes T95221
This fixes a regression from rBa0edee712a79239133ff840f911f6416d4c41855.
Issue being the curve map not being initialized in the GPU shader function.
Fixes T95221
As part of the project of converting `MVert` into `float3`
(more details in T93602), this is an easy step, since it
is only locally used runtime data. In the six places it was
used, the flag was replaced by a local bitmap.
By itself this change has no benefits other than making some
code slightly simpler. It only really matters when the other
flags are removed and it can be removed from `MVert`
along with the bevel weight.
Differential Revision: https://developer.blender.org/D13878
For every spline, *all* of the normals and tangents in the output
were normalized. The node is multithreaded, so sometimes a thread
overwrote the normalized result from another thread.
Fixing this problem also made the node orders of magnitude
faster when there are many splines.
Show a more instructive error message for users who have plugged
multiple monitors into multiple display adapters. And do not exit
if unable to open a child window when in this state.
See D13885 for more details
Differential Revision: https://developer.blender.org/D13885
Reviewed by Ray Molenkamp
The GLSL defines used to make the uniform names unusable for local variables
are being interpreted as recursive on some implementations.
This avoids it by creating a second macro that avoids the recursion.
Allow Windows IME Pinyin, when in Chinese mode, to use numpad decimal
key to enter decimal point.
See D13902 for more details.
Differential Revision: https://developer.blender.org/D13902
Reviewed by Brecht Van Lommel
Convert Ideographic Full Stop, used in Simplified Chinese and Japanese,
to Decimal Point when entering numbers into numerical inputs.
See D13903 for more details
Differential Revision: https://developer.blender.org/D13903
Reviewed by Brecht Van Lommel
This patch adds a "OneDrive" icon to the File Manager System list for
Windows (only!).
See D11133 for more details.
Differential Revision: https://developer.blender.org/D11133
Reviewed by Julian Eisel
Initialize m_Bar, m_dropTarget, & m_hWnd members of GHOST_WindowWin32
in constructor's member initializer list. This ensures they are are
set or NULL in destructor if constructor does not complete.
See D13886 for more details.
Differential Revision: https://developer.blender.org/D13886
Reviewed by Jesse Yurkovich
Buttons can hold context and it's very useful to use this as a way to
let buttons provide context for drop operators.
For example, with this D13549 can make the material slot list set the
material-slot pointer for each row, and the drop operator can just query
that.
At some point during development some checks got lost in a refactor.
This change makes it so that if OIDN is not supported on the current
CPU, Cycles will report an error and stop rendering. This behavior is
similar to when an OptiX denoiser is requested and there is no OptiX
compatible device available.
The easiest way to verify this change is to force return false from
the `openimagedenoise_supported()`.
Fixes the Cycles part of T94127.
Differential Revision: https://developer.blender.org/D13944
The parameter was used to stay compatible with the previous drawing mode.
The previous mode isn't available anymore, so the parameter can be
removed.
After showing the alpha in the image editor the setting was not reset
so all images in the editor showed as being transparent.
This commit fixes this by resetting the flag before updating.
In the Outliner, 'Make Override Hierarchy' on indirectly linked data would
fail in case some items higher up in the hierarchy that also needed to be
overridden were also indirectly linked.
In the Outliner, 'Make Override Hierarchy' on indirectly linked data would
fail in case some items higher up in the hierarchy that also needed to be
overridden were also indirectly linked.
Adding better support for drawing huge images in the image/UV editor. Also solves tearing artifacts.
The approach is that for each image/UV editor a screen-space GPU texture is created that only contains
the visible pixels. When zooming or panning, the GPU texture is rebuilt.
Although the solution isn't memory intensive, other parts of Blender's memory usage scale with
the image size.
* Due to complexity we didn't implement partial updates when drawing images tiled (wrap repeat).
This could be added, but is complicated as a change in the source could mean many different
changes on the GPU texture. The workaround for now is to tag all GPU textures as dirty when
changes are detected.
The original plan was to have 4 screen-space images to support panning without GPU texture creation.
For now we don't see the need to implement it as the solution is already fast, especially when
GPU memory is shared with CPU RAM.
Reviewed By: fclem
Maniphest Tasks: T92525, T92903
Differential Revision: https://developer.blender.org/D13424
This patch reimplements the image partial updates. The biggest design motivation for the redesign
is that currently GPUTextures must be owned by the image. This reduces flexibility and adds
complexity to a single component, especially when we want to have different structures.
The new design is not limited to GPUTextures and can also be used to reduce overhead in image
operations like scaling, or for partial image updating in Cycles.
The use case at hand is that we want to support virtual images in the image editor so we can
work with images that don't fit in a single GPUTexture.
Using `BKE_image_partial_update_mark_region` or `BKE_image_partial_update_mark_full_update`
a part of an image can be marked as dirty. These regions are stored per ImageTile (UDIM).
When a part of the code wants to receive partial changes it needs to construct a `PartialUpdateUser`
by calling `BKE_image_partial_update_create`. As long as this instance is kept alive the changes can
be received.
When a user wants to update its own data it will call `BKE_image_partial_update_collect_changes`.
This will collect the changes since the last time the user called this function. When partial changes
are available, each change can be read by calling `BKE_image_partial_update_get_next_change`.
It can happen that the introduced mechanism no longer has the data to construct the
changes since the last time a PartialUpdateUser requested them. In this case it will get a request
to perform a full update.
Maniphest Tasks: T92613
Differential Revision: https://developer.blender.org/D13238
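The overall pattern can be illustrated roughly like this (plain Python sketch with made-up class names; the real implementation is the `BKE_image_partial_update_*` C API described above):
```
# Rough sketch of the partial-update pattern: a register records dirty
# regions, and each user keeps its own cursor into that change log.
class PartialUpdateRegister:
    def __init__(self):
        self.changes = []            # dirty regions, in the order they were marked

    def mark_region(self, region):
        self.changes.append(region)

    def new_user(self):
        # A new user starts "in sync" with everything marked so far.
        return PartialUpdateUser(self, len(self.changes))

class PartialUpdateUser:
    def __init__(self, register, cursor):
        self.register = register
        self.cursor = cursor         # index of the last change already seen

    def collect_changes(self):
        # In the real design, a user may instead be told to do a full update
        # when the register no longer holds the history it needs.
        pending = self.register.changes[self.cursor:]
        self.cursor = len(self.register.changes)
        return pending               # empty list means "nothing to do"

register = PartialUpdateRegister()
user = register.new_user()
register.mark_region((0, 0, 64, 64))
assert user.collect_changes() == [(0, 0, 64, 64)]
assert user.collect_changes() == []
```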
Assert when "//" prefixed relative paths are passed to
BLI_path_cmp_normalized as this can't be expanded
and it's possible the paths come from different blend files.
Changes to recent addition: c85c52f2ce.
Having both BLI_paths_equal and BLI_path_cmp made it ambiguous
which should be used, as `BLI_paths_equal` wasn't the equivalent to
`BLI_path_cmp(..) == 0` as it is for string equals macro `STREQ(..)`.
It's also a more specialized function which is not used for path
comparison throughout Blender's internal path handling logic.
Instead rename this `BLI_path_cmp_normalized` and return the result of
`BLI_path_cmp` to make it clear paths are modified before comparison.
Also add comments about the conventions for Blender's path comparison
as well as a possible equivalent to Python's `os.path.samefile`
for checking if two paths point to the same location on the file-system.
The `OUTLINER_OT_item_activate` operator, although it detects when
something changes, always returns `OPERATOR_FINISHED` and thus induces
the creation of undo steps.
So return `OPERATOR_CANCELLED` when nothing changes.
Ref T94080
Reviewed By: Severin
Maniphest Tasks: T94080
Differential Revision: https://developer.blender.org/D13638
- Fix assert on size.
- Fix void * casting.
- Pass extent by values.
- Add swap function to avoid letting the types copyable.
- Add back the GPUTexture * operator on TextureFromPool.
Makes two main changes:
* Handle regions in the order as visible on screen. Practically this
just means handling overlapping regions before non-overlapping ones.
* Don't handle any other regions after having found one containing the
mouse pointer.
Fixes: T94016, T91538, T91579, T71899 (and a whole bunch of duplicates)
Addresses: T92364
Differential Revision: https://developer.blender.org/D13539
Reviewed by: Campbell Barton
Asserts that such events always lead to a handler return value
that actually keeps the event passing.
Reviewed by Campbell Barton as part of
https://developer.blender.org/D13539.
Though the edge vertices aren't really meant to have an order,
it can make a difference in operations when there isn't any other
information to make decisions from, like extruding a circle of
loose edges (the situation in the report). This commit changes
the order of the vertices in the final cyclic edge to go in the
same direction as all of the other edges.
The vertex and face normals from the input mesh
were used to calculate the normals on the result,
which could cause a crash because the result should
be about twice as large.
Also remove an unnecessary dirty tag, since it is handled
automatically when creating a new mesh or in the case
of the mirror modifier, when calculating the new custom
face corner normals.
Swap "active" and "selected" in the tooltip if the `use_reverse_transfer`
option is activated.
Reviewed By: mont29
Maniphest Tasks: T85233
Differential Revision: https://developer.blender.org/D13499
Also fixes similar issues regarding some liboverride menu entries.
Reviewed By: mont29
Maniphest Tasks: T93766
Differential Revision: https://developer.blender.org/D13513
Add `USD Preview Surface From Nodes` export option, to convert a
Principled BSDF material node network to an approximate USD Preview
Surface shader representation. If this option is disabled, the original
material export behavior is maintained, where viewport settings are saved
to the Preview Surface shader.
Also added the following options for texture export.
- `Export Textures`: If converting Preview Surface, export textures
referenced by shader nodes to a 'textures' directory which is a
sibling of the USD file.
- `Overwrite Textures`: Allow overwriting existing texture files when
exporting textures (this option is off by default).
- `Relative Texture Paths`: Make texture asset paths relative to the
USD.
The entry point for the new functionality is
`create_usd_preview_surface_material()`, called from
`USDAbstractWriter::ensure_usd_material()`. The material conversion
currently handles a small subset of Blender shading nodes,
`BSDF_DIFFUSE`, `BSDF_PRINCIPLED`, `TEX_IMAGE` and `UVMAP`.
Texture export is handled by copying texture files from their original
location to a `textures` folder in the same directory as the USD.
In-memory and packed textures are saved directly to the textures folder.
This patch is based, in part, on code in Tangent Animation's USD
exporter branch.
Reviewed By: sybren, HooglyBoogly
Differential Revision: https://developer.blender.org/D13647
Downgrade the Python zstandard from 0.17.0 to 0.16.0. The Python package
should be linked against the exact same version of libzstd as Blender is,
otherwise it will refuse to load from within the Blender executable.
Python zstandard 0.17.0 links to 1.5.1, whereas we need 1.5.0.
`chardet` was replaced by `charset_normalizer` for modern `requests`.
With this change, `{make,ninja} install` will also copy the latter into
Blender's install directory.
This opt-in functionality enables developers to keep track of unused
resources present in the `GPUShaderCreateInfo` descriptors of their
shaders.
The output is pretty noisy at the moment so we do not enforce its usage.
While install_deps tries to stay as close as possible to the official
Blender versions of the libraries, it also strives to use as many distro
packages as possible.
OSL 1.11.16.0 is the minimal version that builds with llvm13, which is
the default llvm/clang version in e.g. Debian testing.
Adds two new attribute outputs:
"Line" outputs the line number of the character.
"Pivot Point" outputs the selected pivot point position per char.
Some refactoring of the text layout code.
Differential Revision: https://developer.blender.org/D13694
`node_exec` had some code that was specific to texture/shader nodes.
These functions aren't used outside their module, so limit their declarations.
Also make a function static that is only used in `node_exec.c`.
Reviewed By: JacquesLucke
Differential Revision: https://developer.blender.org/D13899
The `TreeElement.rnaptr` was only needed for RNA tree-elements. Now it
can be gotten through the new type specific classes, e.g.
`TreeElementRNAProperty.getPointerRNA()`.
Plan is to remove things like `TreeElement.directdata` and to instead
expose specific queries in the new type specific tree-element classes.
e.g. like here: `TreeElementSequence.getSequence()`
For now uses `tree_element_cast<>()` to get the new type specific
tree-element, later these should replace `TreeElement` all together.
Add function to safely request the type-specific C++ element from a
C-style `TreeElement`. Looks like this:
```
TreeElementFoo *te_foo = tree_element_cast<TreeElementFoo>(te);
```
The "cast" will return null if the tree-element doesn't match the
requested type.
This is useful for the transition from the C-style type to the new ones.
This uses a light parser / string modification pass to convert
C++ enum declaration syntax to GLSL compatible one.
GLSL having no support for enums, we are forced to convert the
enum values to a series of constant uints.
The parser (not really one by the way), being stupidly simple,
will not change anything to the values and thus make some C++
syntax (like omitting the values) not work.
The string replacement happens on all GLSL files on startup.
I did not measure significant changes in Blender startup speed.
There are plans to do all of this at compile time.
We limit the scope of the search to `.h` and `.hh` files to prevent
confusing syntax in `.glsl` files.
There is basic error reporting with file, line and char logging
for easy debuggability.
The requirements to use this enum sharing system are already listed in
`gpu_shader_shared_utils.h` and repeated on top of the preprocessor
function.
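A naive sketch of the kind of string transform described above (illustrative Python, not the actual preprocessor; the enum is a made-up example, and values must be written out explicitly, matching the limitation mentioned above):
```
# Turn a C++ enum declaration into GLSL-compatible constant uints,
# using a deliberately simple regex-based transform.
import re

src = """
enum eDebugMode : uint32_t {
  DEBUG_NONE = 0u,
  DEBUG_BVH = 1u,
};
"""

match = re.search(r"enum\s+(\w+)\s*:\s*uint32_t\s*\{(.*?)\};", src, re.S)
name, body = match.group(1), match.group(2)

lines = [f"#define {name} uint"]
for item in body.split(","):
    item = item.strip()
    if item:
        ident, value = [s.strip() for s in item.split("=")]
        lines.append(f"const uint {ident} = {value};")

print("\n".join(lines))
# #define eDebugMode uint
# const uint DEBUG_NONE = 0u;
# const uint DEBUG_BVH = 1u;
```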
Remove small ray offsets that were used to avoid self intersection, and leave
that to the newly added primitive object/prim comparison. These changes together
significantly reduce artifacts on small, large or far away objects.
The balance here is that overlapping primitives are not handled well and should
be avoided (though this was already an issue). The upside is that this is
something a user has control over, whereas the other artifacts had no good
manual solution in many cases.
There is a known issue where the Blender particle system generates overlapping
objects and in turn leads to render differences between CPU and GPU. This will
be addressed separately.
Differential Revision: https://developer.blender.org/D12954
Caused by 0f89bcdbeb where it was needed for cage and evaluated mesh
to have same behavior in respect of having edit_mesh pointer assigned.
This change makes it so that edit_data is not implied to exist when the
edit_mesh pointer is not null. This was already the case in some other
code.
Those cases are almost always symptoms of either a bug in code or broken
files. Re-doing resync on them only costs time and causes extra trash
data as a result, without really helping in any way.
Both new normals (from rBb7fe27314b25) and vpaint (from rBf7bbc7cdbb6c)
RNA arrays were missing the `PROPOVERRIDE_IGNORE`. Those huge blobs of
geometry data should never be processed by liboverride code.
This enables support for node group assets. Previously, node group
assets only worked when the "extended asset browser" experimental
features is enabled.
Differential Revision: https://developer.blender.org/D13748
For node groups there is no good default preview generation.
Nevertheless, it would be useful to generate a preview image for a
node group by rendering an object in some cases.
This commit adds a new operator that allows updating the preview
image for the active asset by rendering the active object.
Note, the operator can also be used for other asset types, not just
node groups.
The operator can be found in a menu right below the refresh-preview
button. Currently it is the only operator in that menu. In the future,
more operators to create previews may be added.
Differential Revision: https://developer.blender.org/D13747
The VSE and node editor only use an overlay buffer to draw to the screen. The
GPUViewport assumes that platforms clear all textures during creation, but
some platforms do not, which would lead to drawing from
uncleared memory.
This patch fixes this by clearing all viewport textures during creation.
Also adds a few things to GPUShader to easily create shaders.
Heavy usage of macros to compose the createInfo and avoid
duplications and copy-paste bugs.
This makes the link between the shader request functions
(in workbench_shader.cc) and the actual createInfo a bit
obscure since the names are composed and not searchable.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D13910
Continuation of work started in 2e221de4ce and 249e4df110.
Adds new tree-element classes for RNA structs, properties and array
elements. This isn't exactly a copy and paste, even though logic should
effectively be the same. Further cleanups are included to share code in
a nice way, improve code with simple C++ features, etc.
This fixes failing test cases when using `make test`.
See {D13615} for more information.
The fix will perform the ID remapping one item at a time. Although not
really nice, this isn't a bottleneck.
The failing test cases happen because space_node stores pointers multiple
times and didn't update all pointers. It was not clear why it didn't do
so, but changing the behavior back towards the previous behavior fixes the
issue at hand.
I would prefer to remove the double storage of the node tree pointers (in
snode and path) to reduce pointer management complexity.
During Sprite Fright, loading of complex scenes would spend a long time remapping IDs.
The remapping process is done per ID instance, which resulted in a very time consuming
process that goes over every possible ID reference to find out if it needs to be updated.
If there are N references to ID blocks and there are M ID blocks that need to be remapped,
it would take N*M checks. These checks are scattered around the place and memory.
Each reference would only be updated at most once, but most of the time no update is needed at all.
Idea: grouping the changes together reduces the number of checks, resulting in improved performance.
This would only require N checks. An additional benefit is improved data locality, as data is only loaded once
into the L2 cache.
It has been implemented for the resyncing process and UI editors.
On an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz 16Gig the resyncing process went
from 170 seconds to 145 seconds (during hotspot recording).
After this patch has been applied we could add a similar approach
to references (references between data blocks) and functionality (tagged deletion).
In my understanding this could reduce the resyncing process to less than a second.
Opening the village production file takes between 10 and 20 seconds.
Flame graphs showing that UI remapping isn't visible anymore (`WM_main_remap_editor_id_reference`)
* Master {F12769210 size=full}
* This patch {F12769211 size=full}
Reviewed By: mont29
Maniphest Tasks: T94185
Differential Revision: https://developer.blender.org/D13615
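The N*M versus N idea can be sketched in a toy form like this (illustrative Python, not Blender's data structures):
```
# Instead of walking all references once per remapped ID (N * M checks),
# group the remaps into one old -> new mapping and walk the references a
# single time (N checks, each an O(1) lookup).
references = ["ob_a", "ob_b", "ob_a", "mat_x", "ob_c"]          # N references
remaps = {"ob_a": "ob_a.001", "mat_x": "mat_x.001"}             # M remapped IDs, grouped

def remap_all(refs, mapping):
    # One pass over the references; unchanged IDs pass through untouched.
    return [mapping.get(ref, ref) for ref in refs]

print(remap_all(references, remaps))
# ['ob_a.001', 'ob_b', 'ob_a.001', 'mat_x.001', 'ob_c']
```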
This reverts commit 086f191169.
There was apparently a problem using APPEND which wasn't referenced
in the commit log.
Added comment noting the reason for the discrepancy.
Use more efficient logic for detecting when gizmos are under the cursor.
Even though this isn't a bottleneck, it runs on cursor motion in the
3D viewport, so avoiding any lag here is beneficial.
The common case for cursor motion without any gizmos was always
drawing two passes (one small, then again if nothing was found).
Now a single draw call at the larger size is used.
In isolation this gives around 1.2x-1.4x speedup.
When there are multiple gizmos, depth-buffer picking is used
(similar to object / bone selection) which is more involved but
still only performs 2x draw calls since the result is cached for reuse.
See note in gizmo_find_intersected_3d for a more detailed explanation.
Also restore the depth values in the selection result as they're
needed for gizmos to use selection bias.
Broken since support for GL_SELECT was removed.
Early on in 2.8x development gizmo-depth used GL_SELECT,
which has been removed. Bind the depth buffer so occlusion queries
use the front-most gizmo.
While this report only mentions face-maps, gizmo depth was ignored in
all cases. This wasn't noticeable in most cases though since the
transform gizmo for example was placed so gizmos didn't overlap.
When calling GPU_select_cache_begin, checking the selection mode used
the last used selection mode, not the one about to be used.
Using border select, then picking would not use the selection cache.
This wasn't noticeable by users as failing to use cache just completes
the selection without it (drawing the depth buffer unnecessarily).
The OSL image compilation step needed to be taught about the new UVTILE
format for UDIM textures.
A small missing feature from OIIO[1] means this is a bit uglier than it
needs to be. Once we update to a version of OIIO with the fix we can
remove the string replace part.
[1] 35cb6a83e2
Differential Revision: https://developer.blender.org/D13912
This was already done for APPLE & WIN32, which would
reference these libraries twice.
Now append BROTLI_LIBRARIES to FREETYPE_LIBRARIES when they're
required for linking.
No functional changes as all references to FREETYPE_LIBRARIES also
used BROTLI_LIBRARIES.
When LIBDIR existed, searching for system libraries would always
first search 'LIBDIR'.
This meant "WITH_SYSTEM_*" would still prefer LIBDIR versions of
libraries if they exist.
The presence of LIBDIR also ignored the setting for WITH_STATIC_LIBS
which is now restored to the cached value once pre-compiled libraries
have been handled.
Fix formula in function `SEQ_sound_update_length`.
The formula for sound strip length was changed in commit ded68fb102 for when a
strip is added to the timeline, but it was not changed in the function
mentioned above.
This change applies only to automatic proxy building, when a strip
is added to the timeline. The manual building process is not affected.
Don't build a proxy file if the movie is already fast enough to seek.
To determine seek performance, check if a whole GOP can be decoded
in 100 milliseconds.
To account for some variation in GOP size, a large number of packets is
read, assuming that each packet will produce 1 frame. While this is not
technically correct, it does give a quite accurate estimate of maximum GOP
size.
This test will ensure consistent performance on a wide array of machines.
The check should complete within a few milliseconds.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D11671
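The heuristic can be sketched roughly like this (illustrative Python; `decode_one_gop` is a hypothetical stand-in for decoding the packets of the largest GOP found):
```
# Time how long decoding one GOP takes; if it finishes within the threshold,
# the movie is considered fast enough to seek and no proxy is built.
import time

SEEK_OK_THRESHOLD = 0.1  # 100 milliseconds

def movie_seeks_fast_enough(decode_one_gop):
    start = time.perf_counter()
    decode_one_gop()
    return (time.perf_counter() - start) < SEEK_OK_THRESHOLD

# A proxy would only be built automatically when this returns False.
print(movie_seeks_fast_enough(lambda: time.sleep(0.02)))  # True in this toy case
```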
This will allow building most deps with VS2019
SDL has some linker issues that are resolved in
a newer version, but that would be better handled
in a separate change.
VS2013 and VS2015 support, which was broken, has
been removed.
In order to use a workaround builtin uniform, we need to count it
just like other uniforms and give it some space in the name buffer.
This also fixes extensions being added after the uniform declaration.
All `#extension` directives are now part of the gl backend.
This commit moves the weld modifier code to the geometry module
so that it can be used in the "Merge by Distance" geometry node
from ec1b0c2014. The "All" mode is exposed in the node
for now, though we could expose the "Connected" mode in the future.
The modifier itself is responsible for creating the selections from
the vertex group. The "All" mode takes an `IndexMask` for the
selection, and the "Connected" mode takes a boolean array,
since it actually iterates over all edges.
Some disabled code for a BVH mode has not been copied over,
it's still accessible through the patches and git history anyway,
and it made the port slightly simpler.
Differential Revision: https://developer.blender.org/D13907
Previously it was only part of experimental features in beta, however now
renderers can render point clouds generated by geometry nodes. Adding or
converting a point cloud object directly is still hidden by default, since
there is no good way to edit it.
This implements a merge by distance operation for point clouds.
Besides the geometry input, there are two others-- a selection
input to limit the operation to certain points, and the merge
distance. While it would be a reasonable feature, the distance
does not support a field currently, since that would make
the algorithm significantly more complex.
All attributes are merged to the merged points, with the values
mixed together. This same generic method is used for all attributes,
including `position`. The `id` attribute uses the value from the
first merged index for each point.
For the implementation, most of the effort goes into creating a
merge map to speed up attribute mixing. Some parts are inherently
single-threaded, like finding the final indices accounting for the
merged points. By far most of the time is spent balancing the
KD tree.
Mesh support will be added in the next commit.
Differential Revision: https://developer.blender.org/D13649
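The merge-map approach can be sketched roughly like this (illustrative Python using SciPy's KD tree, not Blender's implementation; the positions and merge distance are made-up example data):
```
# Find close pairs with a KD tree, build a merge map pointing each merged
# point at its surviving point, then mix attributes (here just position)
# into the survivors.
import numpy as np
from scipy.spatial import cKDTree

positions = np.array([[0.00, 0.0, 0.0],
                      [0.05, 0.0, 0.0],   # within merge distance of point 0
                      [1.00, 0.0, 0.0]])
merge_distance = 0.1

tree = cKDTree(positions)
merge_map = np.arange(len(positions))
for i, j in sorted(tree.query_pairs(merge_distance)):
    # Map the later point onto the earliest point it merges with.
    merge_map[j] = merge_map[i]

survivors = np.unique(merge_map)
new_positions = np.array([positions[merge_map == s].mean(axis=0) for s in survivors])
# Points 0 and 1 are merged at their average position; point 2 is unchanged.
print(new_positions)
```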
Only show options that are valid for the used device (CPU, GPU, Multi).
Note: The panel isn't shown for OPTIX anymore, unless Multi device is used.
Reference: https://developer.blender.org/D13592
Make the Embree RTC_SCENE_FLAG_COMPACT flag optional and enabled per default.
Disabling it makes CPU rendering a bit faster in some scenes at the cost of a higher memory usage.
Barbershop renders about 3% faster, victor about 4% on CPU with compact BVH disabled.
Differential Revision: https://developer.blender.org/D13592
Several sub-commands tried on their own
to locate Python; given that I wanted to look
in several locations for broader libdir
compatibility, this is best done in a
central location.
Python 3.9 is still preferred, but if
3.10-3.12 are available they will be accepted
as well.
note: this is about the python version
make.bat uses to run various python helper
scripts, this change has no influence on
the python version blender itself uses.
With (center) position, radius and random value outputs.
Eevee does not yet support rendering point clouds, but an untested
implementation of this node was added for when it does.
Ref T92573
Movies with variable frame rate can cause a mismatch of the displayed frame
when proxies are used. Since proxies are not used for rendering, this
means that output may be different than expected. This problem can be
avoided when timecodes are used.
Set used timecode to Record Run. Timecodes are built with proxies at
the same time, therefore if proxies are built and used this will
resolve possible mismatch of output.
Record run is chosen, because it will show frames based on time they
were encoded by encoder and should match behavior as if movie was
played back at normal speed. This change is done only for new strips
in order to not overwrite user defined settings.
Other minor changes:
- When proxies are enabled, size 25% is no longer set by default. It was mostly annoying anyway.
- Silence warning when the timecode file is not present. This was introduced in 4adbe31e2f.
Previously use of timecodes was hard-coded in the sequencer, and this error would spam the console if timecodes were
enabled by default and proxies were never built.
ref: T95093
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D13905
Using a negative linesize to flip an image vertically is supported in
ffmpeg but not for every function.
This method treats frames that need and those that do not need alignment
the same. An RGBA frame buffer with alignment that ffmpeg decides is
optimal for the CPU and build options is allocated by ffmpeg.
The `sws_scale` does the colorspace transformation into this RGBA frame
buffer without flipping. Now the image is upside down and aligned.
The combined unaligning and vertical flipping is then done by
`av_image_copy_to_buffer` which seems to handle negative linesize
correctly.
Reviewed By: ISS
Differential Revision: https://developer.blender.org/D13908
Layer resync code did not yet fully and properly deal with all possible
invalid statuses of ViewLayer coming from those older files.
Now put 2.80-doversion specific fixes into their own dedicated
function, so that they do not affect actual regular layer resync code
anymore. Also added some sanity-checks in main
`BKE_layer_collection_sync` code.
The Brotli library only needs to be explicitly linked when using the
statically linked libraries. When using system libs they're shared, and
the .so loading mechanism takes care of dependencies.
Currently the Boolean Math node only has 3 basic logic gates:
AND, OR, and NOT. This commit adds 6 additional logic gates
for convenience and ease of use.
- **Not And (NAND)** returns true when at least one input is false.
- **Nor (NOR)** returns true when both inputs are false.
- **Equal (XNOR)** returns true when both inputs are equal.
- **Not Equal (XOR)** returns true when both inputs are different.
- **Imply (IMPLY)** returns true unless the first input is true and
the second is false.
- **Subtract (NIMPLY)** returns true when the first input is true and
the second is false.
Differential Revision: https://developer.blender.org/D13774
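For reference, the six added operations expressed as plain truth functions (illustrative Python restating the descriptions above):
```
# Truth functions matching the descriptions of the six added gates.
gates = {
    "NAND":   lambda a, b: not (a and b),   # true when at least one input is false
    "NOR":    lambda a, b: not (a or b),    # true when both inputs are false
    "XNOR":   lambda a, b: a == b,          # true when both inputs are equal
    "XOR":    lambda a, b: a != b,          # true when both inputs are different
    "IMPLY":  lambda a, b: (not a) or b,    # true unless a is true and b is false
    "NIMPLY": lambda a, b: a and not b,     # true when a is true and b is false
}

for name, fn in gates.items():
    table = [int(fn(a, b)) for a in (False, True) for b in (False, True)]
    print(f"{name:7s} {table}")
```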
ssize_t is a POSIX type that pyconfig.h previously
supplied for MSVC; it appears to have stopped
doing this in the Python 3.10 headers.
Py_ssize_t is the type of the field this macro
actually returns, so it is best to use that in our
code as well.
This reverts commit 948211679f.
This commit introduced some regressions in the test suite.
As this change is a core part of blender Bastien and I decided to revert
it as the solution isn't clear and needs more investigation.
The following tests FAILED:
62 - blendfile_liblink (SEGFAULT)
63 - blendfile_library_overrides (SEGFAULT)
It fails in (id_us_ensure_real)
This makes optional the use of a different interface for the geometry
shader stage output. When the vertex and geometry interface instance names
match, a `_in` and `_out` suffix is added to the end of the instance name.
This makes it easier to have optional geometry shader stages.
During Sprite Fright, loading complex scenes would spend a long time remapping IDs.
The remapping process is done per ID instance, which resulted in a very time-consuming
process that goes over every possible ID reference to find out whether it needs to be updated.
If there are N references to ID blocks and M ID blocks that need to be remapped,
it would take N*M checks. These checks are scattered around the place and memory.
Each reference would only be updated at most once, but most of the time no update is needed at all.
Idea: grouping the changes together reduces the number of checks, resulting in improved performance.
This would only require N checks. An additional benefit is improved data locality, as data is only loaded once
into the L2 cache.
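A conceptual sketch of the idea (plain Python, not the actual blenkernel code): instead of scanning all references once per remapped ID, build one mapping and walk the references a single time:
```python
# Conceptual sketch only, names are hypothetical.
def remap_references_batched(references, remapping):
    """references: objects with an `id` attribute; remapping: {old_id: new_id}."""
    for ref in references:              # N checks total, instead of N * M
        new_id = remapping.get(ref.id)
        if new_id is not None:
            ref.id = new_id
```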
It has been implemented for the resyncing process and UI editors.
On an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz 16Gig the resyncing process went
from 170 seconds to 145 seconds (during hotspot recording).
After this patch has been applied we could apply a similar approach
to references between data blocks and to functionality such as tagged deletion.
In my understanding this could reduce the resyncing process to less than a second,
and opening the village production file to between 10 and 20 seconds.
Flame graphs show that UI remapping isn't visible anymore (`WM_main_remap_editor_id_reference`):
* Master {F12769210 size=full}
* This patch {F12769211 size=full}
Reviewed By: mont29
Maniphest Tasks: T94185
Differential Revision: https://developer.blender.org/D13615
This patch migrates the draw manager hair refine compute shader to use
GPUShaderCreateInfo.
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D13915
Allows performing correction of a coordinate delta/displacement in a
similar way to how sculpt mode handles sculpting on a deformed mesh.
An example use case is allowing riggers and scripters to
improve the corrective shape key workflow.
The usage consists of pre-processing and access. For example:
object.crazyspace_eval(depsgraph, scene)
# When we have a difference between two vertices and want to convert
# it to a space to be stored, say, in a shape key:
delta_in_orig_space = object.crazyspace_displacement_to_original(
    vertex_index=i, displacement=delta)
# The reverse of the above.
delta_in_deformed_space = object.crazyspace_displacement_to_deformed(
    vertex_index=i, displacement=delta)
object.crazyspace_eval_clear()
A fuller explanation with actual use cases and studio examples is written in
the comment:
https://developer.blender.org/D13892#368898
Differential Revision: https://developer.blender.org/D13892
Brotli seems to add a custom postfix to its static libraries by default,
but in Debian at least libraries are just named the same for both shared
and static versions, as usual.
So add the standard name after the static-specific ones.
Follow-up to rB4c617c06e9cb and rBa000de7c2a4d.
The evaluated mesh is a result of evaluated modifiers, and referencing
other evaluated IDs such as materials.
It cannot be stored in the EditMesh structure, which is intended to be
re-used by many areas. Such sharing was causing ownership errors and
bugs like
T93855: Cycles crash with edit mode and simultaneous viewport and final render
The proposed solution is to store the evaluated edit mesh and its cage in
the object's runtime field. The motivation goes as following:
- It allows avoiding ownership problems like the ones in the linked report.
- The object level is chosen over the mesh level because the evaluated mesh
is affected by modifiers, which are on the object level.
This patch allows the modifier stack of an object which shares a mesh with
an object in edit mode to be properly taken into account (before the change
the modifier stack from the active object was used for all objects which
share the mesh).
There is a change in the way copy-on-write is handled in edit mode to allow
proper state updates when changing the active scene (or having two windows
with different scenes). Previously, the copy-on-write would have been ignored
by skipping tagging of the CoW component. Now it is ignored from within the CoW
operation callback. This allows updating edit pointers for objects which are
not from the current depsgraph and where the edit_mesh was never assigned, in
the case when the depsgraph was evaluated prior to the active depsgraph.
There are no user-level changes expected from the CoW handling changes: they
should affect neither performance nor memory consumption.
Tested scenarios:
- Various modifiers configurations of objects sharing mesh and be part of the
same scene.
- Steps from the reports: T93855, T82952, T77359
This also fixes T76609, T72733 and perhaps other reports.
Differential Revision: https://developer.blender.org/D13824
The Equalize Handles operator allows users to make selected handle
lengths uniform: either respecting their original angle from the key
control point or by flattening their angle (removing the overshoot
sometimes produced by certain handle types).
Design: T94172
Reviewed by: sybren
Differential Revision: https://developer.blender.org/D13702
Some IDs (like text ones) can be linked and only kept around thanks to
editors, allow making such IDs local in `BKE_lib_id_make_local_generic`.
Also refactor logic checking whether ID should be made directly local or
copied into its own util function, so that we can remain sure all
special-cases 'make local' code still uses the same logic here.
Currently there are many function declarations in `BKE_node.h` that
don't actually have implementations in blenkernel. This commit moves
the declarations to `NOD_composite.h`, `NOD_texture.h`, and
`NOD_shader.h` instead. This helps to clarify the purpose of the
different modules.
Differential Revision: https://developer.blender.org/D13869
Fixes T93680
For current drivers of Intel HD Graphics 4400 and 4600, various Program Introspection functions appear broken and return incorrect values, causing crashes in the current handling of SSBOs. Disable use of this feature on those devices. Add checks to features that use SSBOs (Hair and Subdivision Modifier).
Reviewed By: fclem, jbakker
Maniphest Tasks: T93680
Differential Revision: https://developer.blender.org/D13806
The point IBO should only have data for coarse vertices (or in general,
the vertices in the original mesh). As it is used for displaying the
vertices for selection in edit mode, and as it indexes into the VBOs for
the positions and edit data, it is itself only indexed by coarse/
original vertex index.
For the subdivision case, this would allocate space for the final
subdivision vertex count and reallocate to make room for loose geometry,
although only the first coarse-vertex-count worth of data would be used.
Now just allocate the required memory. Also reuse index buffer APIs
instead of doing manual work.
Unity launches Blender in background mode to do some
file conversions; ever since the launcher got introduced
this process has been broken.
The root cause here is: Unity looks up the default program
to launch .blend files with, which is now the launcher, then
launches it in background mode with a script to export the data.
The launcher however was designed to exit as quickly as
possible so there would not be an extra background process
lingering. It does not wait for blender to exit and does not
pass back any error codes.
This broke Unity's workflow since it assumed that if the process
exits and succeeds the data *must* be ready for reading, which
no longer holds true.
This change keeps the launcher design as it was previously,
*except* when launching in background mode: then it
waits and passes back any error codes, thus restoring
Unity's workflow.
Differential Revision: https://developer.blender.org/D13894
Reviewed by: LazyDodo, Brecht
Previously weight paint wasn't hooked up to the "Smooth" and "Invert" modes.
With this patch it is now possible to use the "Smooth" and "Invert"
modifiers for the draw keybindings.
Reviewed By: Campbell Barton
Differential Revision: http://developer.blender.org/D13857
Add `libbrotlidec-static.a` and `libbrotlicommon-static.a` to the CMake
`$FREETYPE_LIBRARIES` variable; they'll be required when the Linux libs
for the FreeType upgrade lands (D13448).
The order of libraries is different compared to the similar lines in the
Windows and Apple CMake files, to prevent linker errors on Linux.
Some drivers/glsl compilers will not warn about multiple resources using
the same binding, creating silent errors.
This patch checks for this case and outputs a descriptive error message if
a particular createInfo merge error is found.
Other validation can be added later.
This patch introduces an extrude node with three modes. The vertex mode
is quite simple, and just attaches new edges to the selected vertices.
The edge mode attaches new faces to the selected edges. The faces mode
extrudes patches of selected faces, or each selected face individually,
depending on the "Individual" boolean input.
The default value of the "Offset" input is the mesh's normals, which
can be scaled with the "Offset Scale" input.
**Attribute Propagation**
Attributes are transferred to the new elements with specific rules.
Attributes will never change domains for interpolations. Generally,
boolean attributes are propagated with "or": any connected "true" value
that is mixed in (where other types would be averaged) will cause the
new value to be "true" as well. The `"id"` attribute does not have any
special handling currently.
Vertex Mode
- Vertex: Copied values of selected vertices.
- Edge: Averaged values of selected edges. For booleans, edges are
selected if any connected edges are selected.
Edge Mode
- Vertex: Copied values of extruded vertices.
- Connecting edges (vertical): Average values of connected extruded
edges. For booleans, the edges are selected if any connected
extruded edges are selected.
- Duplicate edges: Copied values of selected edges.
- Face: Averaged values of all faces connected to the selected edge.
For booleans, faces are selected if any connected original faces
are selected.
- Corner: Averaged values of corresponding corners in all faces
connected to selected edges. For booleans, corners are selected
if one of those corners is selected.
Face Mode
- Vertex: Copied values of extruded vertices.
- Connecting edges (vertical): Average values of connected selected
edges, not including the edges "on top" of extruded regions.
For booleans, edges are selected when any connected extruded edges
were selected.
- Duplicate edges: Copied values of extruded edges.
- Face: Copied values of the corresponding selected faces.
- Corner: Copied values of corresponding corners in selected faces.
Individual Face Mode
- Vertex: Copied values of extruded vertices.
- Connecting edges (vertical): Average values of the two neighboring
edges on each extruded face. For booleans, edges are selected
when at least one neighbor on the extruded face was selected.
- Duplicate edges: Copied values of extruded edges.
- Face: Copied values of the corresponding selected faces.
- Corner: Copied values of corresponding corners in selected faces.
**Differences from edit mode**
In face mode (non-individual), the behavior can be different from the
extrude tools in edit mode: this node doesn't handle keeping the back-
faces around in the cases that the edit mode tools do. The planned
"Solidify" node will handle that use case instead. Keeping this node
simpler and faster is preferable at this point, especially because that
sort of "smart" behavior is not that predictable and makes less sense
in a procedural context.
In the future, an "Even Offset" option could be added to this node
hopefully fairly simply. For now it is left out in order to keep
the patch simpler.
**Implementation**
For the implementation, the `Mesh` data structure is used directly
rather than converting to `BMesh` and back like D12224. This optimizes
for large extrusion operations rather than many sequential extrusions.
While this is potentially more verbose, it has some important benefits:
First, there is no conversion to and from `BMesh`. The code only has
to fill arrays and it can do that all at once, making each component of
the algorithm much easier to optimize. It also makes the attribute
interpolation more explicit, and likely faster. Only limited topology
maps must be created in most cases.
While there are some necessary loops and allocations with the size of
the entire mesh, I tried to keep everything I could on the order of the
size of the selection rather than the size of the mesh. In that respect,
the individual faces mode is the best, since there is no topology
information necessary, and the amount of work just depends on the size
of the selection.
Modifying an existing mesh instead of generating a new one was a bit
of a toss-up, but has a few potential benefits:
- Avoids manually copying over attribute data for original elements.
- Avoids some overhead of creating a new mesh.
- Can potentially take advantage of future amortized mesh growth.
This could be changed easily if it turns out to be the wrong choice.
Differential Revision: https://developer.blender.org/D13709
When moving to C++, field initialization was removed.
Favor assignments to named fields as it reads better and avoids bugs if
fields are ever re-arranged, as well as mistakes (see T94784).
Note that the generated optimized output is identical with GCC11.
This adds a selection field input to the node, faces that are selected and
meet the minimum vertex count threshold will be triangulated.
Differential Revision: https://developer.blender.org/D13804
Add a boolean option to have the Curve Handle Position input node return the
position of the handle relative to each point position.
Differential Revision: https://developer.blender.org/D12947
A large polygon in the file from the report caused `alloca`
to exceed the maximum stack size, causing a crash. Instead
of using `alloca`, use `blender::Array` with an inline buffer.
Based on a patch by Germano Cavalcante (@mano-wii).
Differential Revision: https://developer.blender.org/D13898
When enabled, it will keep the contour around the object instead of hiding it by the face mark rule,
so the object can always have a full contour while filtering out some of the feature lines inside certain regions.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Differential Revision: https://developer.blender.org/D13847
Option to discard back-facing triangles; this speeds up calculation, especially when you only want to show visible feature lines.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Differential Revision: https://developer.blender.org/D13848
Instead of splitting it at each occlusion change, it tolerates short segments of "zig-zag" occlusion incoherence and doesn't split the chain at these points, thus creating a much smoother result.
Reviewed By: Antonio Vazquez (antoniov), Aleš Jelovčan (frogstomp)
Differential Revision: https://developer.blender.org/D13851
This gives a modest speedup as calculating tessellation and face
normals at the same time can be more efficiently multi-threaded.
Also avoids calculating face normals twice,
oversight in d590e223da.
This commit improves NURBS knot generation by adding proper support
for the combination of the Bezier and cyclic options. In other cases
the resulting knot doesn't change. This cyclic Bezier knot is used to
create accurate "Nurbs Circle" and "Nurbs Cylinder" primitives.
"Nurbs Sphere" and "Nurbs Torus" primitives are also improved by
tweaking the spin operator.
The knot vector in 3rd order NURBS curve with Bezier option turned on
(without cyclic) is changed in comparison to previous calculations,
although it doesn't change the curve shape itself.
The accuracy of the NURBS circle is fixed, which can be checked by
comparing with a mesh circle. Tessellation spacing differences in
circular NURBS are also fixed, which is observable with the NURBS
cylinder and sphere primitives. These were causing seam-like effects.
This commit contains comments from Piotr Makal (@pmakal).
Differential Revision: https://developer.blender.org/D11664
Normal layers currently aren't stored in the undo step
mesh storage, since they are not stored in files at all.
However, the edit mesh expects normals to be fully
calculated, and does not keep track of a dirty state.
This patch updates the normals in the BMesh created
by loading an undo step.
Another option would be calculating the normals on
the undo mesh first, which might be better if Mesh
normal calculation is faster than BMesh calculation,
but the preferred method to access vertex normals fails
in this case, because the mesh runtime mutexes are not
initialized for undo-state meshes.
Differential Revision: https://developer.blender.org/D13859
Due to an error in rBcfa53e0fbeed, the vertex normals in `SculptSession`
seem to be used, but in the case when no "pbvh" is used, the value of
the pointer is never assigned.
Normals were not generally dirty before this "ensure" function with
regular sculpting operations, so this addition shouldn't have any cost.
Differential Revision: https://developer.blender.org/D13854
All other nodes with data type and domain choices have the domain
below the data type. Generally that order makes sense, because it's
consistent with nodes that have no domain drop-down.
Because this operator is used on original objects, it's best to tag
the normals dirty instead of explicitly calculating them, to avoid
unnecessarily storing normal layers on an original object (since
they might have to be recalculated during evaluation anyway).
There may be other places this change is helpful, but being
conservative is likely better for now.
Related to T95125
The new OBJ exporter did not handle object instances.
The fix is to use a dependency graph iterator, asking for instances.
Unfortunately that iterator makes a temporary copy of instance objects
that does not persist past the iteration, but we need to save all the
objects and meshes to write later, so the Object has to be copied now.
This changed some unit tests. Even though the tests don't have instancing,
the iterator also picks up some Text objects as Mesh ones (which is a good
thing), resulting in two more objects in the all_objects.obj file output.
With object collection properties open there was a huge delay when
switching active objects in a large scene (~10k objects, ~5m vertices).
This is due to a non-optimal function to query all the collections the object is in.
To solve this, the code can be simplified by using `bpy.types.Object.users_collection`.
This returns all the collections the object is in, removing the need to compute this in Python.
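For example (a minimal sketch, assuming an active object in the current scene):
```python
import bpy

ob = bpy.context.active_object
# One RNA lookup instead of iterating every collection in the file:
for collection in ob.users_collection:
    print(collection.name)
```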
The UI team requested adding woff2 support to FreeType;
this required a new dependency, brotli.
This change adds brotli to the builder and bumps
FreeType to version 2.11.0.
As FreeType now depends on other libraries, for consistency
all use of ${FREETYPE_LIBRARY} in CMake has been updated to
use ${FREETYPE_LIBRARIES}. Adjustments have been made in the
Windows platform file; all other platforms use CMake's
FindFreeType.cmake which already sets this variable.
reviewed by: brecht
Differential Revision: https://developer.blender.org/D13448
This node can scale individual edges and faces. When multiple selected
faces/edges share the same vertices, they are scaled together.
The center and scaling factor are averaged in this case.
For some examples see D13757.
Differential Revision: https://developer.blender.org/D13757
Currently there is no way to flip normals in geometry nodes. This node
makes that possible by flipping the winding order of selected faces.
The node is purposely not called "Flip Normals", because normals are
derived data, changing them is only a side effect. The real change is
that the vertex and edge indices in the face corners of every selected
polygon are reversed, and face corner attribute data is reversed.
While there are existing utilities to flip a polygon and its custom
data, this node aims to process an attribute's data together instead
of processing all attributes separately for each index.
Differential Revision: https://developer.blender.org/D13809
Adds a second output to the Mesh Islands node that shows the total
number of islands as a field.
Differential Revision: https://developer.blender.org/D13700
This is not currently working, with an internal compiler error. However
we are currently using BVH2 instead of Metal RT. So this has no effect for
users, it's being committed to avoid the code getting outdated.
Ref T92573, T92212
Differential Revision: https://developer.blender.org/D13632
This shader needs to use the same interface as the OCIO shader. On Linux
and Windows this seems to be the case. On MacOS however there is a
mismatch that makes the overlay texture completely black when
switching to workbench in the Shader workspace.
This is just a temporary work-around, as this should be solved properly. Due to
the poor GPU debugging facilities on Mac we haven't been able to
pin-point the root cause.
Editing of generic attributes on the original objects in edit modes is
still very limited. However, when applying a geometry nodes modifier
that generates new attributes, these attributes will show up in the
Attributes panel.
Currently, our exporters are not capable of exporting generic attributes.
Therefore, for the time being, a workaround is to apply geometry nodes
and then convert a generic attribute to a task specific attribute like a
uv map, vertex group or vertex color layer. Once more parts of Blender
support generic attributes, this will become less important.
Currently, only meshes are supported by the operator. However, it would
be relatively easy to extend it to other geometry types.
Differential Revision: https://developer.blender.org/D13838
Allow any non-modifier keyboard events to be used.
Note that the existing check for >= EVT_AKEY allowed NDOF and other
non-keyboard events (including modifiers, which didn't work).
- Use defines instead of magic numbers for F-Key & NDOF range checks.
- Use explicit values for NDOF event types.
- Minor clarification to doc-strings.
- Use doxy-sections.
There is probably a better solution that's possible, but the few other
things I tried didn't work, and the build error should be resolved for
the buildbots. This is similar to the "breaks" in the namespace for
functions declared in `ED_node.h`.
The code here was using velocity-based interpolation copied from particles.
However there is no velocity cached in the rigid body point cache, so the
results were nonsensical.
This adds a new curve primitive to generate arcs.
Radius mode (default): Generates a fixed radius arc on XY plane
with controls for Angle, Sweep and Invert.
Points mode: Generates a three point curve arc from Start to End
via Middle with an Angle Offset and option to invert the arc.
There are also outputs for arc center, radius and normal direction
relative to the Z-axis.
This patch is based on previous patches
D11713 and D13100 from @guitargeek. Thank you.
Reviewed By: HooglyBoogly
Differential Revision: https://developer.blender.org/D13640
This fixes a similar issue as the previous commit, but this time the
continuous notifiers would be sent after redoing. E.g. after moving an
object, and then modifying the transform in the "Adjust Last Operation"
panel.
The thumbnail caching continuously sends `ND_SPACE_FILE_PREVIEW`
notifiers via a timer. But this timer was never ended properly after
thumbnails are fully loaded into the cache.
This wouldn't actually cause a refresh or redraw, but would still send and
process the notifiers.
I already tried to avoid this for the asset view template, but
apparently that wasn't working correctly. For the File/Asset Browser I
never applied that fix to avoid possible regressions before the release.
This commit moves code in all node editor files to the
`blender::ed::space_node` namespace, except for C API
functions defined in `ED_node.h`, which can only be moved
once all areas calling them are moved to C++.
The change is fairly straightforward, I just moved a couple
of "ED_" code blocks around to make the namespace more
contiguous, and there's the method for adding a pointer to
a struct in a C++ namespace in DNA.
Differential Revision: https://developer.blender.org/D13871
This patch fixes a correctness issue discovered in the `int4 select(...)` function on Apple Silicon machines, which causes bad bvh2 builds. Although the generated bvh2s give correct renders, the resulting runtime performance is terrible. This fix allows us to switch over to bvh2 on Apple Silicon giving a significant performance uplift for many of the standard benchmarking assets. It also fixes some unit test failures stemming from the use of MetalRT, and trivially enables the new pointcloud primitive.
Ref T92212
Reviewed By: brecht
Maniphest Tasks: T92212
Differential Revision: https://developer.blender.org/D13877
Custom bones are drawn by instancing the GPUBatch of the base object. To
access the mesh and its GPUBatch, `BKE_object_get_evaluated_mesh` was
used. However, since GPU subdivision support, this will return a
subdivision wrapper which will never be drawn, and thus will have an
invalid batch, which caused the crash.
`BKE_object_get_evaluated_mesh_no_subsurf` should be used instead, to
return the mesh that will be drawn, and have the subdivision evaluated
on the GPU. Note that the rest of the draw code is already using this
function.
This adds vertex creasing support for OpenSubDiv for modeling, rendering,
Alembic and USD I/O.
For modeling, vertex creasing follows the edge creasing implementation with an
operator accessible through the Vertex menu in Edit Mode, and a parameter in
the properties panel. The option in the Subsurf and Multires modifiers to use edge
creasing also affects vertex creasing.
The vertex crease data is stored as a CustomData layer, unlike edge creases
which for now are stored in `MEdge`, but will in the future also be moved to
a `CustomData` layer. See comments for details on the difference in behavior
for the `CD_CREASE` layer between edges and vertices.
For Cycles this adds sockets on the Mesh node to hold data about which vertices
are creased (one socket for the indices, one for the weights).
Viewport rendering of vertex creasing reuses the same color scheme as for edges
and creased vertices are drawn bigger than uncreased vertices.
For Alembic and USD, vertex crease support follows the edge crease
implementation, they are always read, but only exported if a `Subsurf` modifier
is present on the Mesh.
Reviewed By: brecht, fclem, sergey, sybren, campbellbarton
Differential Revision: https://developer.blender.org/D10145
This brush fixes the random spikes that
occasionally happen in multires models.
These spikes can be nearly impossible to
fix manually and can make working with
multires a nightmare.
Since vertex and face normals are now stored on the mesh when necessary,
we can expose them as contiguous arrays of vectors in the Python API.
This gives render engines and other add-ons easy and fast access to the
data through a regular collection property.
While "Mesh Vertex" still has a "normal" property in RNA, that is only
maintained in order to avoid breaking the existing API, and accessing
it is less efficient than accessing the normals directly.
I made the normal arrays read-only, because modifying them could
put them in an invalid state. This is in line with how we treat the data
internally, and helps keep relationships between data clear.
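A rough sketch of reading the new arrays through the Python API (assumes Blender 3.1+ and an active mesh object):
```python
import bpy
import numpy as np

mesh = bpy.context.object.data
# Read all vertex normals into one contiguous float32 buffer.
normals = np.empty(len(mesh.vertices) * 3, dtype=np.float32)
mesh.vertex_normals.foreach_get("vector", normals)
normals.shape = (-1, 3)  # one normalized vector per vertex
```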
Differential Revision: https://developer.blender.org/D13839
When toggling to a File Browser from an Asset Browser, the asset indexer
would be used to load files. I couldn't spot issues with that on a
quick look, but this should still be corrected.
Adds an "Asset Indexing" option (enabled by default) to Preferences >
Experimental > Debugging. This is useful when working on the asset
library loading.
An external CMake script can be used to debug CMake code, modify/read
target properties, inter-target dependencies, non-cache variables etc.
Reviewed by: LazyDodo, brecht
Differential Revision: https://developer.blender.org/D13830
This patch fixes crash T94736 on Metal in which the launch_params were not being updated to reflect destruction of MetalMem objects.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D13875
When copying strips between 2 scenes, it wasn't possible to copy
animation curves along with strips.
In this patch curves are copied into the clipboard `ListBase`. When pasted,
original curves are moved into a temporary `ListBase` and curves in the
clipboard are moved into the scene action. This is because when strips from the
clipboard have to be renamed, the function `SEQ_ensure_unique_name()` fixes the
RNA paths of curves, but this is done globally for all curves within the
action. After strips are renamed, the original curves are restored from the backup.
Note: This patch handles only fcurves. Drivers and actions are currently
not handled anywhere in VSE.
Fixes T77530
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D13845
- Move functions that handle animation to own file - `animation.c`
- Refactor `SEQ_offset_animdata` and `SEQ_free_animdata` functions
- Add function `SEQ_fcurves_by_strip_get` to provide more granular
and explicit way for operators to handle animation
- Remove function `SEQ_dupe_animdata`, do curve duplication explicitly
in operator code, which makes more sense to do. Further this function
was also used for renaming strips which makes no sense.
- Refactor usage of function `SEQ_free_animdata` and remove XXX comment.
Now this function is no longer called when `Sequence` data is freed
implicitly, it is done explicitly in high level function
`SEQ_edit_remove_flagged_sequences`
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D13852
The ustring is not a trivially copyable object from the C++ standard
point of view, so using memcpy on it is strictly wrong. In practice,
however, this is OK since it is just a thin wrapper around char*.
For now use explicit cast to void* same as it was done in other places
of ccl::array implementation. But also localize the place where memory
copy happens to make it easier to support proper non-trivial C++
objects in the future.
This merges the description into only one struct that can be more easily
copied during `finalize()`.
The in and out layout parameters are better named and extended with the
invocation count (with fallback support)
Root cause was a data alignment mismatch between GPU and CPU. This
mismatch would not allow us to bind the UBO where data wasn't available
on the GPU.
Fixed by using float4 instead of float2. This could eventually be
packed, but that would lead to less readable code.
Cause was incorrect logic when generating the resource layout. The
explicit_location_support setting was ignored and the bindings were
generated for images, uniform buffers and storage buffers.
- Add BM_mesh_debug_print & BM_mesh_debug_info.
- Report flags in Mesh.cd_flag in BKE_mesh_debug_print
- Move custom data printing into customdata.cc (noted as a TODO).
Note that the term "runtime" has been removed from
`BKE_mesh_runtime_debug_print` since these are useful for debugging any
kind of mesh data.
Code that handled merging & initializing custom-data from other
meshes sometimes missed checks for this flag, causing bevel weights to
be lost when the mesh was converted to a BMesh.
The following changes are a more general fix for T94197.
- Add BM_mesh_copy_init_customdata_from_mesh_array which initializes
custom-data from multiple meshes at once.
As well as initializing custom-data layers from Mesh.cd_flag.
This isn't essential for boolean, however it avoids the overhead of
resizing custom-data layers.
- Loading mesh data into a BMesh now respects Mesh.cd_flag
instead of only checking if the BMesh custom-data-layer exists.
Without this, the order of meshes passed to BM_mesh_bm_from_me could
give different (incorrect) results.
- Copying mesh data now copies `cd_flag` too. This is a precaution
as in my tests evaluating modifiers these values always matched.
Nevertheless it's correct to copy this value, as the custom-data itself
is being copied.
The buffer passed as an argument to `GPUFrameBuffer.read_color` is also used
as the return value of the function; returning it without incrementing its
refcount effectively leaves the refcount decremented by one.
So be sure to increment the refcount of the already existing objects that
will be used in the return of a function.
This was the case for multi input sockets that have a link already.
Since we have multi input sockets, the way we use `socket_is_available`
is not really giving the expected result on these.
When used for input sockets the intention is to find a free socket
(either for noodle **replacement**, then it is always available, or just
the next free available socket).
Now I would think without the intention to replace an existing link, a
multi input socket should still be available.
From the inside of the function, the `replace` argument turns [namewise]
to `allow_used`, which sounds a little different (so one might argue
that if `allow_used` is `False` this should also trigger for already
connected multi input sockets).
In the end, this is an issue with the variable naming though; I can't think
of a use case where the patch change would really go against intentions.
Maniphest Tasks: T93413
Differential Revision: https://developer.blender.org/D13866
Bug possibly introduced in {rBc57e4418bb85aec8bd3615fd775b990badb43d30}.
Interestingly, the orientation set before (NORMAL), even different from
the orientation that was actually used, was allowing the use of
"orient_matrix" ("orient_matrix_type" should have been NORMAL in that
case too).
In any case, make sure the "orient_matrix_type" and "orient_type" are the
same so that the "orient_matrix" is used.
These operations (sorting and selecting all nodes) should generally
be handled by the node editor and not outside code. They were not
called outside of the node editor, so they can be moved to the editor's
`intern` header.
This file was added nine years ago, and was unused then.
Now with active tools we use a different approach to create
toolbars, so the file is not relevant.
This node allows accessing data of other elements in the context geometry.
It is similar to the Transfer Attribute node in Index mode. The main difference
is that this node does not require a geometry input, because the context
is used.
The node can e.g. be used to generalize what the Edge Vertices node is doing.
Instead of only being able to get the position of the vertices of an edge,
any field/attribute can be accessed on the vertices.
Differential Revision: https://developer.blender.org/D13825
Adds a second output to the edge angle node that shows the signed angle
between the two faces, where convex angles are positive and concave angles
are negative. This calculation is slower than the unsigned angle, so both
are kept for cases where the unsigned angle will suffice.
Differential Revision: https://developer.blender.org/D13796
It's now easier than before to do the interpolation of attributes
only for the elements that are actually used in some cases.
This can result in a speedup because unnecessary computations
can be avoided. See the patch for a simple performance test.
Differential Revision: https://developer.blender.org/D13828
Cause of the issue isn't that clear, but the NVIDIA GLSL compiler
complained that it couldn't find an overloaded function when the second
parameter is an integer. This change fixes it by using a float.
Display exact integer values of floating point fields without
a fraction if the step is also an exact integer. This is intended
for cases when the value can technically be fractional, but most
commonly is supposed to be integer.
This handling is not applied if the field has any unit except frames,
because integer values aren't special for quantities like length.
The fraction is discarded in the normal display mode and when copying
the value to clipboard, but not when editing to remind the user that
the field allows fractions.
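A hypothetical sketch of the display rule (illustration only, not the actual UI code):
```python
def format_button_value(value: float, step: float, precision: int = 3) -> str:
    # Hypothetical illustration: drop the fraction only when both the value
    # and the step are exact integers.
    if float(value).is_integer() and float(step).is_integer():
        return str(int(value))
    return f"{value:.{precision}f}"

# format_button_value(12.0, 1.0) -> "12"
# format_button_value(12.5, 1.0) -> "12.500"
```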
Differential Revision: https://developer.blender.org/D13753
Fix a precision issue where stepping down from 1 to 0 via the left
decrement button with step 100 results in a small nonzero value.
The reason is that 0.01 can't be precisely represented in binary
and converting to double before multiplication reveals this.
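An illustration of the issue (hypothetical, using numpy's float32 to mimic the single-precision step stored in the button):
```python
import numpy as np

step = np.float32(0.01)            # 0.01 has no exact binary representation
result = 1.0 - float(step) * 100   # convert to double before multiplying
print(result)                      # ~2.2e-08 instead of exactly 0.0
```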
Ref D13753
This patch converts GPU_SHADER_2D_IMAGE_MULTI_RECT_COLOR shader to use
the GPUShaderCreateInfo pattern. It can be used as a reference when
converting other shaders.
In this special case the flat uniform vector cannot be used anymore as it
doesn't fit as push constants. To solve this a uniform buffer is used.
The previous optimization did not work in general yet, unfortunately.
This change makes the code more correct, but also brings back
some unnecessary updates (e.g. when creating a node group).
For boolean operations only one of the meshes was checked to determine
if bevel weights should be created.
Now initialize custom data from both meshes' flags.
Note that this is a localized fix to be back-ported, further changes
will be made so edit-mode conversion accounts for this
without the caller needing explicit checks for custom-data flags.
Asset indexing was disabled as ID property indexing wasn't supported.
Now that ID property support is added we can enable asset indexing.
Check {T91406} for more information about asset indexing.
Object/collection asset workflow would need the bounding box for snapping.
The bounding box is stored using ID properties in the scene. Currently ID properties
aren't stored in the asset index, which would break object snapping. For this reason
asset indexing is turned off in master. This patch will introduce the indexing of ID
properties, which will allow the indexing to be turned on again.
## Data Mapping ##
For data mapping we store the internal structure of IDProperty to the indexer (including meta-data) to be able to deserialize it back.
```
[
{
"name": ..,
"value": ..,
"type": ..,
/* `subtype` and `length` are only available for IDP_ARRAYs. */
"subtype": ..,
},
]
```
| **DNA** | **Serialize type** | **Note** |
| IDProperty.name | StringValue| |
| IDProperty.type | StringValue| "IDP_STRING", "IDP_INT", "IDP_FLOAT", "IDP_ARRAY", "IDP_GROUP", "IDP_DOUBLE"|
| IDProperty.subtype | StringValue| "IDP_INT", "IDP_FLOAT", "IDP_GROUP", "IDP_DOUBLE" |
| IDProperty.value | StringValue | When type is IDP_STRING |
| IDProperty.value | IntValue | When type is IDP_INT |
| IDProperty.value | DoubleValue | When type is IDP_FLOAT/IDP_DOUBLE |
| IDProperty.value | ArrayValue | When type is IDP_GROUP. Recursively uses the same structure as described in this section. |
| IDProperty.value | ArrayValue | When type is IDP_ARRAY. Each element holds a single element as described in this section. |
NOTE: IDP_ID and IDP_IDARRAY aren't supported. The entry will not be added.
Example
```
[
{
"name": "MyIntValue,
"type": "IDP_INT",
"value": 6,
},
{
"name": "myComplexArray",
"type": "IDP_ARRAY",
"subtype": "IDP_GROUP",
"value": [
[
{
"name": ..
....
}
]
]
}
]
```
## Considered alternatives ##
- Add conversion functions inside `asset_indexer`; makes generic code part of a specific solution.
- Add conversion functions inside `BLI_serialize`; would add data transformation responsibilities inside a unit that is currently only responsible for formatting.
- Use direct mapping between IDP properties and Values; leads to missing information and edge cases (empty primitive arrays) that could not be de-serialized.
Reviewed By: Severin, mont29, HooglyBoogly
Maniphest Tasks: T92306
Differential Revision: https://developer.blender.org/D12990
Motion paths themselves aren't getting saved (not sure if they are
without overrides), but being able to override options makes them
usable even if it's necessary to regenerate every edit session.
Differential Revision: https://developer.blender.org/D13842
Some new obj exporter tests were disabled because the normals were different
in the last decimal place on different platforms.
The old python exporter deduped normals with their coordinates rounded to
four decimal places. This change does the same in the new exporter.
On one test, this produced a file 25% smaller and even ran 10% faster.
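A small sketch of the de-duplication idea (illustrative Python, not the exporter's actual C++ code):
```python
def deduplicate_normals(normals):
    # Key each normal by its coordinates rounded to four decimal places so
    # that platform-level floating point noise maps to the same entry.
    index_by_key = {}
    unique = []
    indices = []
    for normal in normals:
        key = tuple(round(c, 4) for c in normal)
        index = index_by_key.setdefault(key, len(unique))
        if index == len(unique):
            unique.append(key)
        indices.append(index)
    return unique, indices
```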
This can simplify iterating through all of the indices in the vector,
which is fairly common, since one of the benefits of the data structure
is that all values are contiguous.
This node's UI uses a multi-select enum to allow adjusting the
type of both handle sides with the same node. Since usually the
user wants to affect both handles, and the multi-select behavior
isn't obvious, selecting both by default is an improvement.
This significantly reduces discontinuities on UV seams, by giving a better
match of the texture filtered colors on both sides of the seam. It works by
using pixels from adjacent faces across the UV seam.
This new option is called "Adjacent Faces" and is the default. The old option
is called "Extend", and extends border pixels outwards.
Differential Revision: https://developer.blender.org/D13303
Use GPU-side scaling to speed up the scaling itself, and to avoid having
to copy the image buffer using the CPU. Mipmapping is used to get decent
filtering when downscaling without ugly artifacts.
In my comparisons, there was barely any difference between the methods
for DPIs >= 1. Below that, the result looks a bit different due to the
different filtering method.
See D13144 for screen-recordings showing the difference.
Part of T92922.
Differential Revision: https://developer.blender.org/D13144
Reviewed by: Jeroen Bakker
Typo in rB605cdc4346e5f82, both `eBlendfileLinkAppendForeachItemFlag`
flags had the same value, effectively preventing filtering out directly
vs. indirectly appended items.
Override layers are a standard feature of Alembic, where archives can override
data from other archives, provided that the hierarchies match.
This is useful for modifying a UV map, updating an animation, or even creating
some sort of LOD system where low resolution meshes are swapped for high resolution
versions.
It is possible to add UV maps and vertex colors using this system, however, they
will only appear in the spreadsheet editor when viewing evaluated data, as the UV
map and Vertex color UI only show data present on the original mesh.
Implementation wise, this adds a `CacheFileLayer` data structure to the `CacheFile`
DNA, as well as some operators and UI to present and manage the layers. For both
the Alembic importer and the Cycles procedural, the main change is creating an
archive from a list of filepaths, instead of a single one.
After importing the base file through the regular import operator, layers can be added
to or removed from the `CacheFile` via the UI list under the `Override Layers` panel
located in the Mesh Sequence Cache modifier. Layers can also be moved around or
hidden.
See differential page for tests files and demos.
Reviewed by: brecht, sybren
Differential Revision: https://developer.blender.org/D13603
This is a first part of what the Shader Create Info system could be.
A shader create info provides a way to define shader structure, resources
and interfaces. This makes for a quick way to provide backend agnostic
binding information while also making shader variations easy to declare.
- Clear source input (only one file). Cleans up the GPU api since we can create a
shader from one descriptor
- Resources and interfaces are generated by the backend (much simpler than parsing).
- Bindings are explicit from position in the array.
- GPUShaderInterface becomes a trivial translation of enums and string copy.
- No external dependency to third party lib.
- Cleaner code, less fragmentation of resources in several libs.
- Easy to modify / extend at runtime.
- No parser involved, very easy to code.
- Does not hold any data, can be static and kept on disk.
- Could hold precompiled bytecode for static shaders.
This also includes a new global dependency system.
GLSL shaders can include other sources by using #pragma BLENDER_REQUIRE(...).
This patch already migrated several builtin shaders. Other shaders should be migrated
one at a time, and could be done inside master.
There is a new compile directive `WITH_GPU_SHADER_BUILDER`; this is an optional
directive for linting shaders to improve turnaround time.
What is remaining:
- pyGPU API {T94975}
- Migration of other shaders. This could be a community effort.
Reviewed By: jbakker
Maniphest Tasks: T94975
Differential Revision: https://developer.blender.org/D13360
For an upcoming refactoring of library remapping we want to be able to test that the logic doesn't change.
Writing these tests also increased my experience with the remapping codebase and helped find out what exactly
needs to be refactored.
This patch adds test cases for the core functionality of `foreach_libblock_remap_callback`. The test cases
don't cover all the branches. Also pre-/post-processing, referencing and proxies are not tested.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D13815
Allows conveniently selecting an inverse of a collection.
Reviewed By: Antonio Vazquez (antoniov)
Differential Revision: https://developer.blender.org/D13846
This patch improves conversion method from NURBS to Bezier curves,
resulting in exact shape between those two types when provided with
a 3rd degree NURBS curve. Part of T86086.
See the differential revision for more comparisons.
The node still cannot account properly for a NURBS "order" other
than 4 and it does not take into account control point weights.
Differential Revision: https://developer.blender.org/D13546
Fix the description for WORKSPACE_OT_reorder_to_back to say "last" in
list rather than "first"
See D13696 for more details.
Differential Revision: https://developer.blender.org/D13696
Reviewed by Aaron Carlisle
The crash is due to the fact that GPU subdivision extraction routines
for edit data (including UVs) only worked for BMesh. However, a Mesh
based version is still needed for texture painting. This adds the
missing components. This also ensures all data are properly initialized
(at least the ones revealed by the bug).
This puts the loop over the final subdivision quads outside of the mesh
iteration callback. This can also allow for easier parallel execution in
the future if need be.
Since the options to enable linkers are booleans,
it's possible to enable them all at once.
Now only the first enabled and available linker is used
(with priority given to linkers with better performance).
Set the linker using CMAKE_*_LINKER_FLAGS instead of {C/CXX}FLAGS.
There is no advantage in using the CFLAGS to set the linker, it has the
downside of triggering a full rebuild when changing the linker.
Tested building Blender and the bpy.so Python module.
Ref D13833
Reviewed by: sergey, brecht
Conceptually, this is the geometry that data is taken from,
not the target of an operation, so rename it from "Target"
to "Source". This was common user feedback and agreed
on in a recent sub-module meeting.
Before rB644e6c7a3e99ae1d43ed, `fill` was used in the error
cases, but now `fill_indices` is used, which doesn't work when
the span is empty (when only one output is used). The fix is just
to check for that case.
The search list only displayed the "Result" output socket in this
case, which is unexpected since dragging from an input gives the
operations in the list as well. Also use integer mode when
connecting to boolean sockets.
I noticed these when doing final cleanup on rBcfa53e0fbeed.
One use was removed in that commit, the others were unused
going further back a few years.
Differential Revision: https://developer.blender.org/D13834
`TreeElement` isn't a trivial type anymore, so `MEM_delete()` should be
called, which calls the destructor.
AFAICS this would cause a memory leak, since the contained `unique_ptr`
is allocated but not destructed correctly - but it's not using the
guarded allocator so it wouldn't be reported.
Smart pointers should be the default choice for C++ owning pointers,
since they let you manage memory using RAII.
Also moved type factory methods into static class functions.
Basically this removes any C <-> C++ glue code. C++ types are accessed
directly via the public C++ APIs.
Contains some related changes like, moving functions that were
previously declared in a now removed header to a different file, whose
header is the more appropriate place (and the source file as well).
But generally I tried to avoid other changes.
The switch to how normals are kept has led to tiny differences in
the normal output values on different platforms. Disabling the failing
tests while working on a solution to this problem.
Part of a5cb7c1e62 is reverted since it
created an unknown pragma warning on Windows.
Use a trick to do self-assigning.
Reviewed by Jacques Lucke in chat.
`BKE_layer_collection_sync` was missing a specific handling for one of
those pre-master collection cases,
NOTE: It is a bit unfortunate to have to do 'do-version' code in BKE...
At some point we might look into moving this into an actual `do_version` file,
but this is not a fully trivial nor critical improvement for now.
Caused by rBa5c59fb90ef9.
Since Group Input and Output sockets happen to be of type `SOCK_CUSTOM`
[and since rBa5c59fb90ef9 custom py defined sockets are too :)] a check
introduced in rB513066e8ad6f that prevents connections for `SOCK_CUSTOM`
was triggered.
Now refine the check, so it specifically looks for NODE_GROUP_INPUT /
NODE_GROUP_OUTPUT, too (this keeps the intention intact to not connect
group inputs to group outputs and vice versa, but allows custom py
defined sockets to connect again) and put it in new utility function.
Maniphest Tasks: T94827
Differential Revision: https://developer.blender.org/D13817
Blender.xcodeproj User-supplied CFBundleIdentifier value
'org.blenderfoundation.blender' in the Info.plist must be the same as
the PRODUCT_BUNDLE_IDENTIFIER build setting value ''.
Reviewed By: #platform_macos, brecht
Differential Revision: https://developer.blender.org/D13826
Didn't remove the key-value pair since old Xcode behavior is not
known.
warning: LSMinimumSystemVersion of '10.9.0' is less than the value of
MACOSX_DEPLOYMENT_TARGET '10.13' - setting to '10.13'. (in target
'blender' from project 'Blender')
Reviewed By: #platform_macos, brecht
Differential Revision: https://developer.blender.org/D13831
Fix assignment warning
source/blender/blenlib/tests/BLI_any_test.cc:56:5: warning: explicitly
assigning value of variable of type 'blender::Any<void, 8, 8>'
to itself [-Wself-assign-overloaded]
c = c;
Reviewed By: JacquesLucke
Differential Revision: https://developer.blender.org/D13835
2-point-curves are treated separately from 3-plus-point-curves (assuming a
lot of the twisting reduction can be skipped, so there is a dedicated
function for single segment curves).
And while using the 3plus-point-curves function [`make_bevel_list_3D`]
would actually work in this case, the dedicated function
`make_bevel_list_segment_3D` would only consider the tilt of the second
point and would just copy over the quat to the first point as well. Don't
see a reason for this, now consider the first point's tilt as well.
Maniphest Tasks: T94837
Differential Revision: https://developer.blender.org/D13813
Can give considerably faster linking, especially on systems with many
cores.
The mold linker recently reached 1.0, see:
https://github.com/rui314/mold
The current stable release of GCC can't use this linker via
-fuse-ld=mold, so this patch uses the "-B" argument to add a binary
directory containing an alternate "ld" command that points to
"mold" (which is part of the default mold installation).
Some timing tests for linking full builds for AMD TR 3970X:
- BFD: 20.78 seconds.
- LLD: 12.16 seconds.
- GOLD: 7.21 seconds.
- MOLD: 2.53 seconds.
Ref D13807
Reviewed by: sergey, brecht
The mask is only used if it's not zero. Adding the normal mask made
it not zero, but it didn't include anything else, so all custom data
layers except normals were removed. The fix is to only add normals
to the mask when it should be used.
As described in T91186, this commit moves mesh vertex normals into a
contiguous array of float vectors in a custom data layer, matching how face
normals are currently stored.
The main interface is documented in `BKE_mesh.h`. Vertex and face
normals are now calculated on-demand and cached, retrieved with an
"ensure" function. Since the logical state of a mesh is now "has
normals when necessary", they can be retrieved from a `const` mesh.
The goal is to use on-demand calculation for all derived data, but
leave room for eager calculation for performance purposes (modifier
evaluation is threaded, but viewport data generation is not).
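A generic sketch of the "calculate on demand and cache" idea described above (hypothetical Python, not the actual BKE_mesh implementation):
```python
class CachedNormals:
    """Hypothetical illustration of the ensure/tag-dirty pattern."""

    def __init__(self, calculate):
        self._calculate = calculate
        self._normals = None

    def ensure(self):
        # Calculate lazily the first time the data is requested, then reuse it.
        if self._normals is None:
            self._normals = self._calculate()
        return self._normals

    def tag_dirty(self):
        # Invalidate so the next ensure() recalculates.
        self._normals = None
```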
**Benefits**
This moves us closer to a SoA approach rather than the current AoS
paradigm. Accessing a contiguous `float3` is much more efficient than
retrieving data from a larger struct. The memory requirements for
accessing only normals or vertex locations are smaller, and at the
cost of more memory usage for just normals, they now don't have to
be converted between float and short, which also simplifies code.
In the future, the remaining items can be removed from `MVert`,
leaving only `float3`, which has similar benefits (see T93602).
Removing the combination of derived and original data makes it
conceptually simpler to only calculate normals when necessary.
This is especially important now that we have more opportunities
for temporary meshes in geometry nodes.
**Performance**
In addition to the theoretical future performance improvements by
making `MVert == float3`, I've done some basic performance testing
on this patch directly. The data is fairly rough, but it gives an idea
about where things stand generally.
- Mesh line primitive 4m Verts: 1.16x faster (36 -> 31 ms),
showing that accessing just `MVert` is now more efficient.
- Spring Splash Screen: 1.03-1.06 -> 1.06-1.11 FPS, a very slight
change that at least shows there is no regression.
- Sprite Fright Snail Smoosh: 3.30-3.40 -> 3.42-3.50 FPS, a small
but observable speedup.
- Set Position Node with Scaled Normal: 1.36x faster (53 -> 39 ms),
shows that using normals in geometry nodes is faster.
- Normal Calculation 1.6m Vert Cube: 1.19x faster (25 -> 21 ms),
shows that calculating normals is slightly faster now.
- File Size of 1.6m Vert Cube: 1.03x smaller (214.7 -> 208.4 MB),
Normals are not saved in files, which can help with large meshes.
As for memory usage, it may be slightly more in some cases, but
I didn't observe any difference in the production files I tested.
**Tests**
Some modifiers and cycles test results need to be updated with this
commit, for two reasons:
- The subdivision surface modifier is not responsible for calculating
normals anymore. In master, the modifier creates different normals
than the result of the `Mesh` normal calculation, so this is a bug
fix.
- There are small differences in the results of some modifiers that
use normals because they are not converted to and from `short`
anymore.
**Future improvements**
- Remove `ModifierTypeInfo::dependsOnNormals`. Code in each modifier
already retrieves normals if they are needed anyway.
- Copy normals as part of a better CoW system for attributes.
- Make more areas use lazy instead of eager normal calculation.
- Remove `BKE_mesh_normals_tag_dirty` in more places since that is
now the default state of a new mesh.
- Possibly apply a similar change to derived face corner normals.
Differential Revision: https://developer.blender.org/D12770
Some of the message-bus macros are not safe to use in C++. This has come
up before, but no good solution was found. Now @LazyDodo, @HooglyBoogly
and I concluded this is the best duct tape "solution" for the moment.
The message-bus API should address this.
Today many users seem to think the output from
this node is a single curve with multiple splines.
This patch renames the geometry output socket
from "Curves" to "Curve Instances" to avoid confusion.
Differential Revision: https://developer.blender.org/D13693
We want to refactor quite some of the Outliner code using C++, this is a
logical step to help the transition to a new architecture.
Includes plenty of fixes to make this compile without warnings, trying
not to change logic. The usual stuff (casts from `void *`, designated
initializers, compound literals, etc.).
There were a couple of function name collisions which were caused
by sharing code with the mask modifier. I just removed the dependence
on the mask modifier now. The code that I duplicated for that purpose
is only in a legacy node, so it can be expected to be removed soonish.
And change install_deps.sh to build shared (instead of static) FFMPEG
libraries, for consistency with other library dependencies and to simplify
the logic. This may require users of install_deps.sh to rebuild FFMPEG.
This is the last step that lets us get rid of LIBPATH variables and
link_directories() entirely, as recommended by the CMake docs.
Some fixes were needed in the find FFMPEG module to make it actually work,
this code was unused up to now.
Followup to D8855.
Differential Revision: https://developer.blender.org/D9177
The fundamental limitation is that we can only have one instance
("dupli") generator at a time. Because the mesh output of a curve
object is output as an instances, the geometry set instances existed,
replacing the object as font instances. The "fix" is to reverse the
order. The behavior won't be perfect still, but at least the old
behavior will be preserved, which is really what matters for a
feature like this.
One way to take this change further would be completely disabling
regular geometry evaluation while this option is active. However,
it doesn't seem like that would actually improve the state of the code.
Differential Revision: https://developer.blender.org/D13768
When drawing windows on monitors that differ in DPI, we can sometimes
have UI elements draw at an incorrect scale. This patch just ensures
that `wm_window_make_drawable` always updates DPI.
See D10483 for more details.
Differential Revision: https://developer.blender.org/D10483
Reviewed by Brecht Van Lommel
Allow area Split to be initiated in any area and give better feedback
when not allowed.
See D13599 for more details and usage examples.
Differential Revision: https://developer.blender.org/D13599
Reviewed by Campbell Barton
This is my attempt of adding defaults for the space clip editor struct
(in line with https://developer.blender.org/T80164).
It adds the default allocation for `SpaceClip` and
`node_composite_movieclip.cc`. This also solves the error below (for
C++ files using the DNA_default_alloc), which was put forward by
Sergey Sharybin.
Differential Revision: https://developer.blender.org/D13367
Reviewed by: Julian Eisel
While theoretically fairly generic, the current code is only enabled for
blendfile and liboverride views, and only used to display messages from
library IDs.
Reviewed By: Severin
Differential Revision: https://developer.blender.org/D13766
SVG files contained specific detailed pathnames on developers'
computers. These included full local user profile and path and should
not be in the release.
This patch corrects those lines. It also removes unused gradients from
the private icons SVG.
Differential Revision: https://developer.blender.org/D13344
Reviewed by: Yevgeny Makarov, Julian Eisel
This adds wrapper classes that make it easier to use GPU objects in C++.
####Motivations:####
- Easier handling of GPU objects.
- EEVEE rewrite already makes use of similar wrappers.
- There is the ongoing effort to use more C++ in the codebase
and plans to port more engines to it.
- The shader code refactor will make use of many UBOs with shared
struct declarations. This helps manage them.
- Safer handling of `TextureFromPool` which can't be bound as normal
texture (only texture ref) and can be better tracked in the future.
####Considerations:####
- I chose the `blender::draw` namespace because `blender::gpu` already has private classes (i.e: `gpu::Texture`).
- These are wrappers that manage a GPU object internally. They might be confused with an actual `Texture`. However, the name `TextureWrapper` is a bit too verbose in my opinion. I'm open to suggestions for a better name.
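For illustration, the general shape of such a wrapper could look like this (a minimal sketch with hypothetical names, not the actual `blender::draw` classes):
```
/* Minimal sketch only: HandleT and FreeFn stand in for a GPU object type and its
 * destruction function; this is not Blender's real API. */
template<typename HandleT, void (*FreeFn)(HandleT *)> class GPUObjectWrapper {
 private:
  HandleT *handle_ = nullptr;

 public:
  GPUObjectWrapper() = default;
  /* Owning wrapper, so copying is disallowed but moving is cheap. */
  GPUObjectWrapper(const GPUObjectWrapper &) = delete;
  GPUObjectWrapper(GPUObjectWrapper &&other) : handle_(other.handle_)
  {
    other.handle_ = nullptr;
  }
  ~GPUObjectWrapper()
  {
    if (handle_) {
      FreeFn(handle_);
    }
  }
  /* Allow passing the wrapper wherever the raw handle is expected. */
  operator HandleT *() const
  {
    return handle_;
  }
  void reset(HandleT *new_handle)
  {
    if (handle_) {
      FreeFn(handle_);
    }
    handle_ = new_handle;
  }
};
```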
Reviewed By: jbakker
Differential Revision: http://developer.blender.org/D13805
This patch implements the vector types (i.e:`float2`) by making heavy
usage of templating. All vector functions are now outside of the vector
classes (inside the `blender::math` namespace) and are not vector size
dependent for the most part.
In the ongoing effort to make shaders less GL centric, we are aiming
to share more code between GLSL and C++ to avoid code duplication.
####Motivations:
- We are aiming to share UBO and SSBO structures between GLSL and C++.
This means we will use many of the existing vector types and others
we currently don't have (uintX, intX). All these variations would
have required much more code duplication.
- Deduplicate existing code which is duplicated for each vector size.
- We also want to share small functions. Which means that vector
functions should be static and not in the class namespace.
- Reduce friction to use these types in new projects due to their
incompleteness.
- The current state of the `BLI_(float|double|mpq)(2|3|4).hh` is a
bit of a letdown. Most classes are incomplete, out of sync with each
other, with different code styles, and some functions that should be
static are not (i.e: `float3::reflect()`).
####Upsides:
- Still support `.x, .y, .z, .w` for readability.
- Compact, readable and easily extendable.
- All of the vector functions are available for all the vector types
and can be restricted to certain types. Template specialization also
lets us define exceptions for special classes (like mpq).
- With optimization ON, the compiler unrolls the loops and performance
is the same.
####Downsides:
- Might impact debuggability. Though I would argue that bugs are
rarely caused by the vector class itself (since the operations are
quite trivial) but by the type conversions.
- Might impact compile time. I did not see a significant impact since
the usage is not really widespread.
- Functions need to be rewritten to support arbitrary vector length.
For instance, one can't call `len_squared_v3v3` in
`math::length_squared()` and call it a day.
- Type cast does not work with the template version of the `math::`
vector functions. Meaning you need to manually cast `float *` and
`(float *)[3]` to `float3` for the function calls.
i.e: `math::distance_squared(float3(nearest.co), positions[i]);`
- Some parts might lose some readability:
`float3::dot(v1.normalized(), v2.normalized())`
becoming
`math::dot(math::normalize(v1), math::normalize(v2))`
But I propose, when appropriate, to use
`using namespace blender::math;` at function or file scope to
increase readability.
`dot(normalize(v1), normalize(v2))`
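For example, the proposed local `using` pattern could read like this (a hedged sketch; the include path is an assumption, and only functions mentioned above are used):
```
/* Sketch only; the header name is assumed, not verified against the actual tree layout. */
#include "BLI_math_vector.hh"

static float cos_angle(const blender::float3 &v1, const blender::float3 &v2)
{
  /* Pull the free functions into local scope for readability, as proposed above. */
  using namespace blender::math;
  return dot(normalize(v1), normalize(v2));
}
```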
####Consideration:
- Include back `.length()` method. It is quite handy and is more C++
oriented.
- I considered the GLM library as a candidate for replacement. It felt
like too much for what we need and would be difficult to extend / modify
to our needs.
- I used macros to reduce code in operator declarations and to avoid potential
copy-paste bugs. This could reduce debuggability and could be reverted.
- This touches `delaunay_2d.cc` and the intersection code. I would like
to know @howardt's opinion on the matter.
- The `noexcept` on the copy constructor of `mpq(2|3)` is being removed.
But according to @JacquesLucke it is not a real problem for now.
I would like to give a huge thanks to @JacquesLucke who helped during this
and pushed me to reduce the duplication further.
Reviewed By: brecht, sergey, JacquesLucke
Differential Revision: https://developer.blender.org/D13791
Fixes issue T94603
It adds a new compositor node called Scene Time, which is already present as a geometry node; having the same basic nodes available in all node trees is a nice thing to have.
Renames the "Time" node to "Time Curve" to avoid confusion between the Time node and the Scene Time node.
Reviewed By: jbakker
Maniphest Tasks: T94603
Differential Revision: https://developer.blender.org/D13762
The issue was caused by rBd09b1d2759861aa012ab2e7e4ce2ffa2.
Since this commit, the image users in gpu materials were updated
during depsgraph evaluation as well. However, there was a race
condition when one thread is deleting gpu materials in `BKE_material_eval`
while another thread is updating the image users at the same time.
The solution is to make sure that deleting gpu materials is done before
iterating over all gpu materials, by adding a new depsgraph relation.
The main issue was the use of `G_MAIN` during file load.
This patch refactors the code so that iterating over `G_MAIN`
is not necessary anymore. See D13800 for more details.
Differential Revision: https://developer.blender.org/D13800
The issue was caused by Cycles display driver not being able to restore
window's OpenGL context after disposing Cycles-side OpenGL context.
This is due to the window OpenGL re-activation needing to access window
manager which gets cleared out from global main during file reading.
Defer clearing window manager from the global main to until after all
screens are "exited". This allows Cycles to properly stop rendering,
dispose its OpenGL context, and restore window's drawable context.
It is unclear why it was required to clear the window manager list early
on. The guess is that it comes from the original code in a1c8543f2a where
there was an early return, which then got replaced with actual logic
without changing the order of de-initialization and window manager list
clearing.
Differential Revision: https://developer.blender.org/D13799
The direct cause of the bug in question was passing in the raw memory
buffer to sscanf. It should be called with a null-terminated buffer,
which isn't guaranteed when blindly trusting the file data.
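The general defensive pattern looks roughly like this (a sketch of the idea, not the exact patch; names are illustrative):
```
#include <algorithm>
#include <cstdio>
#include <cstring>

/* Parse an int from raw file data that may not be null-terminated. */
static bool parse_int_safely(const char *data, std::size_t data_len, int *r_value)
{
  char buf[64];
  const std::size_t len = std::min(data_len, sizeof(buf) - 1);
  std::memcpy(buf, data, len);
  buf[len] = '\0'; /* Guarantee termination regardless of the file contents. */
  return std::sscanf(buf, "%d", r_value) == 1;
}
```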
When attempting to fuzz this code path, a variety of other crashes were
discovered and fixed.
Differential Revision: https://developer.blender.org/D11952
Remove code that very slightly darkened line on bottom of timeline, when
backdrop is enabled. The purpose of the code wasn't documented, and 2.79
doesn't seem to produce this darkened line.
Rename drawing functions to appropriate names.
Add maximum string length argument to UI_fontstyle_draw to reduce usage
of BLF_DRAW_STR_DUMMY_MAX. Reorders arguments to UI_fontstyle_draw_ex.
See D13794 for more details.
Differential Revision: https://developer.blender.org/D13794
Reviewed by Campbell Barton
Reduction of the number of uses of the define BLF_DRAW_STR_DUMMY_MAX
by using actual sizes of static character arrays.
See D13793 for more details.
Differential Revision: https://developer.blender.org/D13793
Reviewed by Campbell Barton
Blender's compositor code already makes extensive use of
namespaces, which makes it very simple to enable a unity build.
There was one duplicated function that has since been moved
to a common header.
I saw roughly a 3x speedup of bf_compositor using ninja on
linux using i5 8250u (1:34 down to 0:34).
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D13792
With this change, compilation saw a 2.4x improvement.
This can be combined with a unity build to give an overall 4x improvement.
Depends on D13797
Reviewed By: LazyDodo
Differential Revision: https://developer.blender.org/D13798
Removal of unused tmp member of GHOST_TEventImeData. Not used now,
nor was it used by the commit that added it to begin with.
Differential Revision: https://developer.blender.org/D11799
Reviewed by Ray Molenkamp
Our UNUSED macro is essentially a no-op for MSVC, which led to
the situation where this well-meant macro was emitting the
following warning:
C4189: 'UNUSED_i': local variable is initialized but not referenced
However since we have been on c++17 for a while now the UNUSED
macro can be replaced with the standard [[maybe_unused]] attribute
in cpp files.
This change cleans up the use of the UNUSED macro in the
bf_nodes_geometry project.
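For illustration, the change looks roughly like this (a sketch only, not taken from the actual diff):
```
struct bNodeTree;
struct bNode;

/* Previously written as: static void node_update(bNodeTree *UNUSED(tree), bNode *node).
 * With C++17, the standard attribute does the same job on every compiler: */
void node_update([[maybe_unused]] bNodeTree *tree, bNode *node);
```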
Differential Revision: https://developer.blender.org/D12915
Reviewed by: JacquesLucke, Severin, Sergey, HooglyBoogly
Since CMake 3.16, CMake has native precompiled header (PCH) support.
This change swaps Blender's own PCH implementation with the native implementation.
Previously, PCH was only enabled on Windows; however,
this new implementation works on all platforms.
For more information see https://cmake.org/cmake/help/latest/command/target_precompile_headers.html
On my system, Linux with ninja running on an i5 8250U
I saw a 60% reduction in compile times for `bf_freestyle` + linking time.
Reviewed By: LazyDodo, brecht
Differential Revision: https://developer.blender.org/D13797
The problem was that the points were selected in edit mode and then sampled. Now, in draw mode, the points are always unselected to avoid this effect in the auto-merge process.
The new triangulation mode for quads is the opposite of the current default
shortest diagonal mode. It is optimal for cloth simulations using quad meshes.
Differential Revision: http://developer.blender.org/D13777
Issue introduced in rB1d49293b80446b89b5b12fa0eeefaf14e5051e48
`drw_manager_init` must be called after `drw_context_state_init` as
`DST.draw_ctx.sh_cfg` (indicating when the view is clipped) must be set
first.
Differential Revision: https://developer.blender.org/D13795
After disconnecting hair on an object, if you then hide the particle system, and try connecting the hair again, the operator is cancelled due to `remap_hair_emitter` returning `false` because `target_psmd->mesh_final` is NULL, but `connect_hair` will still strip the `PSYS_GLOBAL_HAIR` flag, which will cause the hair in the hidden particle system to be positioned incorrectly. The correct behavior is to strip the flag only if `remap_hair_emitter` succeeds.
Differential Revision: https://developer.blender.org/D13703
When the 'threshold' is not used in the type we are comparing, just hide
it. This was obvious for some types (e.g. Materials), but maybe not so
on others (e.g. Polygon Sides) and potentially confusing.
Reported by @hitrpr in chat.
Differential Revision: https://developer.blender.org/D13760
This is an update to the correct OCIO role.
It changes `SceneReference` to `scene_linear`
See https://opencolorio.readthedocs.io/en/latest/guides/authoring/overview.html#config-roles
> - reference - the color space against which the other color spaces are defined
>NOTE: The reference role has sometimes been misinterpreted as being the space in which “reference art” is stored in.
>
> - scene_linear - the scene-referred linear-to-light color space, often the same as the reference space
The current OCIO UX working group doc says:
>reference: This role has had multiple interpreted meanings over the years and is a common point of confusion. It is kept in OCIO for backwards compatibility, but the recommendation is that it is not used by apps.
Reviewed By: jbakker
Differential Revision: https://developer.blender.org/D11398
This puts all static functions in composite node files into a new
namespace. This allows using unity build which can improve
compile times significantly.
This is a follow up on rB1df8abff257030ba79bc23dc321f35494f4d91c5
but for compositor nodes.
The namespace name is derived from the file name.
That makes it possible to write some tooling that checks the names later on.
The filename extension (`cc`) is added to the namespace name as well.
This also possibly simplifies tooling, but also makes it more obvious that this namespace is specific to a file.
Reviewed By: JacquesLucke, HooglyBoogly, jbakker
Differential Revision: https://developer.blender.org/D13466
If the timeline contains a scene strip outside of the edited meta strip,
this will cause a crash. This is because prefetching ignored meta strips
being edited when rendering, but only checked for scene strips inside the
edited meta strip.
Change active seqbase pointer when entering meta strip. This makes it
possible to prefetch only content that is being presented to user.
rBeed45d2a239a introduced a GPU backend for OpenSubDiv which lets us do
the subdivision at render time. However, some tools might still need to
have the subdivision data available on the CPU side. For this a
subdivision mesh wrapper was also introduced, and is computed whenever a
CPU side mesh is needed. The subdivision settings for this wrapper are
stored during modifier evaluation if GPU subdivision can be done.
The performance regression is due to the fact that although the
subdivision mesh was already computed on the CPU, and no subdivision
wrapper is generated, some checks for creating subdivision data in
`BKE_mesh_wrapper_ensure_subdivision` were still run, one of which is
very expensive.
To fix this we first check the runtime settings of the mesh to see if
subdivision is needed at all.
This commit adds topology information from mesh data structs to the
spreadsheet when the debug value `4001` is set. Eventually we could
expose these. For now it can be a useful tool for developers when
working on mesh algorithms.
Differential Revision: https://developer.blender.org/D13735
The value of this flag was only retrieved in `nodeGetActiveID`, which
wasn't used anywhere. Other than that, the `NODE_ACTIVE_ID` and
related functions seem to come from the Blender internal renderer.
Differential Revision: https://developer.blender.org/D13770
This commit moves the normal field input to `BKE_geometry_set.hh`
from the node file so that normals can be used as an implicit input to
other nodes.
Differential Revision: https://developer.blender.org/D13779
Slight change to our processing of Ctrl-C, Ctrl-V, and Ctrl-X so that
they will not be triggered if Alt is also pressed. This allows entry
of AltGr-C, -V, -X when using International keyboard layouts.
See D13781 for more details
Differential Revision: https://developer.blender.org/D13781
Reviewed by Brecht Van Lommel
Treat "/" as a key that should be evaluated by the Win IME system when
the input language is Chinese. This fixes a duplication of the input
character and results in the expected output of a Chinese wide comma.
See D13771 for more details.
Differential Revision: https://developer.blender.org/D13771
Reviewed by Brecht Van Lommel
Consider the temporary directory to be a variant part of the session configuration
which gets communicated to the tile manager on render reset.
This makes it possible to render with one temp directory, change the
directory, render again, and have a proper render result even with
persistent data enabled.
For the ease of access to the temp directory expose it via the render
engine API (engine.temp_directory).
Differential Revision: https://developer.blender.org/D13790
The issue was caused by the recent changes in the way the
render result is drawn: the display driver could now hold
OpenGL resources. Those resources are not shared across contexts
so whenever OpenGL context is destroyed those resources are to
be destroyed as well (and not attempted to be re-used for a next
render).
Do such destruction and entire driver re-creation since it
simplifies things from the API usage point of view without causing a
measurable slowdown.
Steps to reproduce the issue:
- Set the render resolution to 2x of Full HD
- Enable persistent data
- Render (F12)
- Render again
Observe OpenGL state being corrupted. Easy to see in debug mode
where IMM abstraction level reports issues about the buffer size
not being the proper size. This was caused by the display driver
trying to use VAO from the previous OpenGL context.
Differential Revision: https://developer.blender.org/D13789
The core issue is that flushing dependencies are created from an object
to a node tree when it contains e.g. a Texture Coordinate node.
That is an issue because the evaluation of the node tree itself does not
depend on the object (node tree evaluation is essentially a no-op).
Only other systems that parse and evaluate the node tree in a specific
context actually depend on e.g. the position of the referenced object.
It can even be the case that the node tree depends on objects that
the actual evaluator (geometry nodes modifier/material) does not depend
on, because a node is not connected to the output.
Geometry nodes makes the distinction between dependencies to the
node tree and to the evaluator already. Shader nodes do not.
Therefore, shader nodes need a flushing relation from node groups
to their parent node groups.
This brings back some unnecessary updates from rB7e712b2d6a0d
(e.g. when creating a node group from nodes that are not connected
to the output). This is a bit unfortunate, but refactoring how
dependencies work with shader nodes is out of scope for this fix.
Since rBf9ccd26b037d, calling `data.path_resolve()` on custom properties
with a `None` value does not cause a `ValueError` exception anymore. This
is now taken into account in the keying sets targeting custom
properties.
Reviewed By: sybren
Differential Revision: https://developer.blender.org/D13787
rBd6891d9bee2b introduced a way to apply a single constraint from the
constraint stack. For this we want to work in the evaluated domain, in
particular the constraint target should be evaluated (the shrinkwrap
constraint needs to have access to the target's evaluated mesh).
Thx a lot to @sergey for handholding here!
Maniphest Tasks: T94600
Differential Revision: https://developer.blender.org/D13765
This moves the clear paths button ("X") to the same line as "Update All Paths",
and makes it visible at all times.
1. The clear button affects all objects (by default). However the
Calculate/Update Paths only works on the selected objects/bones.
Better to not have them both on the same line.
2. The operator to clear object and pose paths can run even if the active
object/bone has no motion path. However the UI was not showing the button in
those cases.
Before:
{F12757500, size=full}
After:
{F12757502, size=full}
Differential Revision: https://developer.blender.org/D13609
Compositor node to convert between color spaces.
Conversion is skipped when converting between the same color spaces or to or from data spaces.
Implementation done for tiled and full frame compositor.
Reviewed By: Blendify, jbakker
Differential Revision: https://developer.blender.org/D12481
Switched populating GHOST_WintabInfoWin32 vector from resizing and
assigning to reserving and pushing.
Removed unnecessary state tracking for multiple button presses in a
single packet.
Paired initialization with definition, and added default initialization
for GHOST_WintabInfoWin32.
Currently the node link ui template only works with a few socket types.
This commit adds support for the rest of the socket type declarations.
As pointed out in D13776, after recent refactors
shader nodes no longer display in the menu.
In the future more socket types will be used in the shader nodes,
and this also makes the UI template work better for other node trees.
Differential Revision: https://developer.blender.org/D13778
Weightpaint gradient tool panel showed in other modes (and as a separate
panel).
Fix for fix, see
- rBf8a0e102cf5e
- rBe549d6c1bd2d
So now, check mode again and restrict to topbar (prevents an additional
panel since this is already included in the brush settings).
ref rB0837926740b3 in sculpt-dev branch, so thx @joeedh as well!
Maniphest Tasks: T94243
Differential Revision: https://developer.blender.org/D13630
Currently, most node buttons are defined in `drawnode.cc`; however,
this is inconvenient because it requires editing many files when adding new nodes.
The goal is to minimize the number of files needed to add or update a node.
This commit moves most of the node layout functions for shader nodes into their respective
source/blender/nodes/shader/nodes files.
In the future, these functions will be simplified to node_layout.
Some nodes were left in `drawnode.cc` as moving them would require duplicating code;
while this is likely fine, it is best to leave that to a separate commit.
Some software or processing tools (videogrammetry in this case) may
export malformed files with velocity data even when the frame is empty
for some reason. We need to explicitly compare the data size with the
vertex size, and refuse to load the attribute if there is a data size
mismatch.
The dangling pointer caused errors further down the line.
The solution is to simply delete an internal link when one
of the corresponding sockets is removed (just like normal
links are removed as well).
We can't include `BLI_utildefines.h` in `RNA_types.h` since Cycles includes
that, but duplicates some of the util defines. So you'd have duplicated
definitions.
Fixes a bug introduced in rB5dedb39d447b. `mesh_original` is not set if the
mesh has no generative modifiers, in which case we can use `mesh_final`, which
would seem to be consistent with the rest of the particle code. An alternative
approach would be to make sure that `mesh_original` is always set in
`deformVerts`.
Differential Revision: https://developer.blender.org/D13754
Must take into account SD_OBJECT_TRANSFORM_APPLIED to determine if the normal
was already in world space.
Differential Revision: https://developer.blender.org/D13639
The root of the issue is caused by Cycles ignoring OpenGL limitation on
the maximum resolution of textures: Cycles was allocating texture of the
final render resolution. It was exceeding limitation on certain GPUs and
driver.
The idea is simple: use multiple textures for the display, each of which
will fit into OpenGL limitations.
There is some code which allows the display driver to know when to start
the new tile. Also added some code to allow forcing graphics interop to be
re-created. The latter ended up not being used in the final version of the
patch, but it might be helpful for other driver implementations.
The tile size is limited to 8K now as it is the safest size for textures
on many GPUs and OpenGL drivers.
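The splitting itself is simple arithmetic; a hedged sketch of the idea (the 8192 value is the conservative per-side size mentioned above):
```
/* Number of display textures needed along one axis when each texture is
 * capped at a conservative maximum size per side. */
int num_tiles_for_axis(int render_size, int max_texture_size = 8192)
{
  return (render_size + max_texture_size - 1) / max_texture_size;
}
```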
This is an updated fix with a workaround for freezing with the NVIDIA
driver on Linux.
Differential Revision: https://developer.blender.org/D13385
This patch fixes a couple of new Metal kernel compilation errors: 1) a kernel parameter count overflow, and 2) missing address space qualifiers.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D13763
Simplify signature of `BKE_lib_override_library_resync` and make it a
shallow wrapper around new internal `lib_override_library_resync` that
can then be easily extended for other internal needs.
No functional changes expected here.
`DEG_OBJECT_ITER_FOR_RENDER_ENGINE_BEGIN` creates temporary objects that
correspond to duplicates or instances.
These temporary objects can share the same pointers with the original object,
as in the case of the bounding box.
Bound Box of temporary objects is marked dirty in
`BKE_object_replace_data_on_shallow_copy` since `ob->data` is different.
This causes the original Bounding Box, calculated for the evaluated
geometry, to be lost.
The solution in this commit is to change the boundbox reference of the
temporary objects, so the boundbox of the non-temporary object (with
the data curve) is not marked dirty.
Differential Revision: https://developer.blender.org/D13581
It is rather an old experiment now which didn't pay off.
The initial idea was to have the main and job threads on fast
nodes of TR2 processors. This didn't really work reliably
because in Blender we need to be able to create nested
threads without their affinity set. This is not how some
OSes create nested threads, and we don't always have
access to child threads to reset their affinity.
So the overall complexity of the initial idea's implementation
became too much compared to the performance gain.
Request from the studio, to help quickly identify libs that need updating.
NOTE: Currently only outputting an INFO log in the console; display of this info
in the outliner will come in a separate commit.
No need for it now since all the threading queries and
scheduling is done via TBB.
Should be no functional changes as all the removed code
is supposed to be unused.
Query TBB for the maximum allowed concurrency, which is free from a bug
in our own concurrency detection code. One thing to keep in mind is that now
Cycles is limited by the number of threads in the TBB arena from which
Session is created. This isn't a problem for Blender since we do not limit
arena on Blender side. Could be something to watch out for in other Cycles
integrations.
Differential Revision: https://developer.blender.org/D13658
Add a data boundary check in the flipping code.
This code now also communicates the number of mipmap levels
it processed, with the intent of preventing the GPU texture from using
more levels than there are in the DDS data.
Differential Revision: https://developer.blender.org/D13755
This was missing from rB3e92b4ed2408eacd126c0.
Before only the Separate Geometry node was fixed, because that
node was used in the file from the bug report. The same issue
existed in the Delete Geometry node as well though.
Regression in 7972785d7b that caused
Python callback arguments to be de-referenced twice - potentially
accessing freed memory. Making a new-file with a circle-select
tool active triggered this (for example).
Now arguments aren't de-referenced when Blender itself has already
removed the callback handle.
Fix IMB_flip[xy] to handle cases where integer overflow might occur when
given sufficiently large image dimensions.
All of these fixes were of a similar class where the intermediate
sub-expression would overflow silently. Widen the types as necessary.
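The class of fix looks roughly like this (illustrative only; not the actual IMB_flip code):
```
#include <cstddef>

/* The multiplication happens in `int` and can silently overflow for large images,
 * even though the result is assigned to a wide type. */
std::size_t pixel_offset_overflowing(int x, int y, int width, int channels)
{
  return std::size_t(y * width * channels + x);
}

/* Widening the intermediate sub-expression first avoids the overflow. */
std::size_t pixel_offset_widened(int x, int y, int width, int channels)
{
  return std::size_t(y) * std::size_t(width) * std::size_t(channels) + std::size_t(x);
}
```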
Differential Revision: https://developer.blender.org/D13744
Assert that only the file name component is passed in
since special handling for UDIM should only be applied to the file name.
Also remove an unnecessary NULL check on the filename argument.
For the sake of consistency with other node tree types, create its own CMake module.
This change helps keep `bf_nodes` focused on generic nodes files.
Texture nodes are end of life and hopefully for Blender 4.0 they can be removed.
It is not expected that these will see the updates that the other nodes are getting.
This change also helps isolate the end-of-life files; we may move some texture-specific
node tree execution code out of `node_exec` and into `node_texture_exec` files.
Differential Revision: https://developer.blender.org/D13743
Sculpt Smooth in Surface mode (as opposed to Laplacian) needs a cache
initialized on first time. In anchored stroke mode with spherical falloff
this was skipped though (because this starts off with no PBVH nodes and
an early return checks for this) and `first_time` was set to false before
cache initialization.
Now move the cache initialization to happen earlier (same as the cache
initialization for automasking).
Maniphest Tasks: T94635
Differential Revision: https://developer.blender.org/D13746
The `lines_adjacency` IBO build in the GPU subdivision case was missing
edges at the boundaries of open meshes. As it is used for the shadow
pass, the shadows were then not clipped properly.
This would also make X-Ray mode render differently in those cases.
To fix this, we can simply reuse the buffer finalization routine from the
non-subdivision case, as such edges are handled there.
This relation is intended to ensure that the properties of the IK
constraint are ready by the time the IK solver tree is built. This
however can cause spurious dependency cycles, because there is only
one init tree node for the whole armature, and the relation actually
implies dependency on all properties of the bone.
This patch reduces spurious dependencies by only creating the relation
if any properties of the IK constraint specifically are animated.
Differential Revision: https://developer.blender.org/D13714
When weight painting the bone overlay is extremely intrusive,
effectively requiring either extensive use of hiding individual
bones, or disabling the whole bone overlay between selections.
This addresses the issue by adding a bone opacity slider that
is used for the 'wireframe' armature drawing mode. It directly
controls the uniform opacity as a straightforward option.
Differential Revision: https://developer.blender.org/D11804
If multiple bones have a custom property with the same name,
depsgraph didn't distinguish between them, potentially leading
to spurious cycles.
This patch moves ID_PROPERTY operation nodes for bone custom
properties from the parameters component to individual bone
components, thus decoupling them.
Differential Revision: https://developer.blender.org/D13729
Oversight in {rB9cb5f0a2282a}.
Above commit made an entry in `rna_Space_refine()`, but the entry in
`rna_Space_refine_reverse()` was missing (and this is what python uses
for the Space callbacks).
Maniphest Tasks: T94685
Differential Revision: https://developer.blender.org/D13751
The crash is caused as the data is only for the first frame, but the mesh
changes topology, so reading the data in subsequent frames causes a
buffer overflow. To fix this, we check that the data size matches the
mesh's vertex count.
Remove `const` from pass-by-value parameters in function declarations.
The variables passed as parameters can never be modified by the function
anyway, so declaring them as `const` is meaningless. Having the
declaration there could confuse, especially as it suggests it does have
a meaning, training people to write meaningless code.
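For example (a generic sketch, not a specific Blender function):
```
/* Declaration: `const` on a by-value parameter means nothing to callers, so omit it. */
float clamp_factor(float factor);

/* Definition: `const` may still be used locally, where it only promises that the
 * parameter is not reassigned inside the body. */
float clamp_factor(const float factor)
{
  return factor < 0.0f ? 0.0f : (factor > 1.0f ? 1.0f : factor);
}
```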
Error in ffc4c126f5,
which moved doc-strings from implementation into headers.
Some changes in BKE_animsys.h needed to be done manually as there
were already doc-strings in both the header and implementation
(with overlapping information).
When making these changes some doc-strings were removed unintentionally.
Thanks to @sybren for the heads up.
MSVC2017 and early 2019 versions are under
the impression struct OGLRender is a non-trivial
type due to the ThreadCondition field; not
entirely sure why, but it is what it is.
Differential Revision: https://developer.blender.org/D13742
Reviewed by: JacquesLucke
Just disable these tests on macOS for now as fixing seems hard, and we want to
be able to cross-compile and test x86_64 on Arm machines on the buildbot.
An important check to reject edge linehits when a vertex of that edge
was already hit was accidentally removed in
rB6e77afe6ec7b6a73f218f1fef264758abcbc778a
From what I can tell these are left over from Blender Internal render.
Tests still pass locally; also tested a couple of EEVEE scenes.
Reviewed By: JacquesLucke, brecht
Differential Revision: https://developer.blender.org/D13732
When tiled rendering was used the render result was
allocated at the end of every view layer render, as
opposed to at the intended end of all rendering.
Modify the render_result_end so that it only ensures
pixels are allocated if pixels are actually copied
over.
Overrides that are not created as part of an override hierarchy should
not be handled through (auto)resync at all. Users are responsible for
handling those updates if they need it.
This is achieved by flagging overrides created outside of a hierarchical
process accordingly, and skipping them during resync process.
The current preview generation is more confusing than useful.
Therefore it is better to disable it until better preview generation
methods are found.
Differential Revision: https://developer.blender.org/D13728
The crash was caused by using `modify_geometry_sets` to modify
instances, which does not generally work unfortunately.
The intended behavior was wrong anyway. In instances mode,
only top level instances should be deleted.
Also removed the old error handling because it doesn't look like it
ever worked: `all_is_error` remained false all the time.
Furthermore, updating it was not thread safe.
Differential Revision: https://developer.blender.org/D13736
Take the Use Modifier Stack setting into account when connecting hair, and
fix wrong results when using deforming modifiers as well.
Differential Revision: https://developer.blender.org/D13704
Enables the `bpy.ops.cycles.denoise_animation()` operator again and modifies it to support
temporal denoising with OptiX. This requires renders that were done with both the "Vector"
and "Denoising Data" passes.
Differential Revision: https://developer.blender.org/D11442
This adds support to render PointCloud motion blur from a standard
"velocity" attribute.
This implementation is similar to that of the Mesh geometry, and
perhaps some code could be deduplicated through a more generic API.
`mesh_need_motion_attribute` was renamed `object_need_motion_attribute`
as it does not really require a mesh and moved to `util.h` so that
it can be shared.
This fixes T94622.
Reviewed By: brecht
Maniphest Tasks: T94622
Differential Revision: https://developer.blender.org/D13719
The DWAB compression was disabled in the d59721c2c3 due to
a bug in the OpenEXR library which is now resolved.
Re-enable the DWAB compression for OpenEXR output. It is a
simple change, and DWAB often behaves better than DWAA.
Differential Revision: https://developer.blender.org/D13713
If a mirror object is used in a mirror modifier, sculptmode did not take
this into account (and instead always clipped on the sculpt objects
local axis).
Now take this into account by storing a matrix in the preparation
function `sculpt_init_mirror_clipping` and use that later in
`SCULPT_clip`.
Maniphest Tasks: T94564
Differential Revision: https://developer.blender.org/D13711
This case wasn't handled in rBf5ce243a56a22d718 correctly.
Now `object_get_evaluated_geometry_set` just returns a geometry
set that contains the collection instance for collection instance objects.
Just an oversight in rBe9607f45d85d.
Now add notifier that toolsettings changed.
Maniphest Tasks: T94366
Differential Revision: https://developer.blender.org/D13723
Previously operations for the math node when connecting to
outputs weren't added. It also used a different method to
check whether the link would be valid.
Do not temporarily change U.pixelsize while creating object previews
in object_preview_render. It does nothing to the render, but the change
in line width can affect other UI drawing since it is done in a thread.
See D13717 for details.
Differential Revision: https://developer.blender.org/D13717
Reviewed by Julian Eisel
Along with the general changes to C++, this commit does the following:
- Use static casts where possible
- Use new CPP MEM library functions instead of cast
- Use listbase macros where possible
- Declare variables where initialized
Reviewed By: HooglyBoogly, JacquesLucke
Differential Revision: https://developer.blender.org/D13718
Dragging from a color socket would hit an assert in a debug build.
The node does not have a color mode currently, so use the vector mode
instead when connecting to a color socket.
std::min was used without including the algorithm
header. Seems to be implicitly included by
something in newer MSVC versions and GCC; however,
VS 16.4 needed a little help here.
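The fix is simply to include the header explicitly instead of relying on a transitive include (sketch):
```
#include <algorithm> /* Provides std::min / std::max; do not rely on transitive includes. */

int smaller_dimension(int width, int height)
{
  return std::min(width, height);
}
```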
Since 2.8, background images are tied to cameras (in 2.79 these were
tied to a View3D I think).
Code in `BKE_library_id_can_use_idtype` wasn't taking this relation
between `Camera` and `Image` into account, thus leading to ID deletion/
unlinking not working properly -- in particular `libblock_remap_data`
not doing its thing (and leaving the camera as a user of the image),
then things went downhill from there...
Now make the "Camera-can-use-an-Image" relation clear in
`BKE_library_id_can_use_idtype`.
Maniphest Tasks: T94544
Differential Revision: https://developer.blender.org/D13722
Currently the crop higher limits are inclusive too which contradicts
the documentation as it says that if Left and Right are both 50, it
will result in a zero-sized image. And the result is one pixel out of
the crop gizmo, which is another hint that this is not intended.
In "Full Frame" experimental mode it's two pixels short because of
a misuse of `BLI_rcti_isect_pt` as it considers max limits inclusive.
Reviewed By: jbakker
Maniphest Tasks: T90830
Differential Revision: https://developer.blender.org/D12786
The frame indicator/controller is not displayed when in the Graph or Dopesheet view of the Movie Clip Editor.
To solve this we could call the function ED_time_scrub_draw_current_frame in clip_draw_dopesheet_main and graph_region_draw in space_clip.c
Reviewed By: jbakker
Maniphest Tasks: T91160
Differential Revision: https://developer.blender.org/D12659
Enable unity builds for `bf_nodes_shader`, gives about a 2.7x speedup
of total compile times when just building `bf_nodes_shader`.
On my machine, this equates to saving about 30 seconds.
Differential Revision: https://developer.blender.org/D13720
Function `blend_color_softlight_float` used math different from the compositor and
produced a result that had abrupt value changes.
Use math based on modified screen blend mode as compositor does.
This flag is only used in a few small cases, so instead
of setting the flag for every node, only set the
required flag for the nodes that require it.
Mostly the flag is used to set `ntype.flag = NODE_PREVIEW`
For nodes that should have previews by default which
is only some compositor nodes and some texture nodes.
The frame node also sets the `NODE_BACKGROUND` flag.
All other nodes were setting a flag of 0 which has no purpose.
Reviewed By: JacquesLucke
Differential Revision: https://developer.blender.org/D13699
This was originally written by Ankit Meel as a GSoC 2020 project.
Howard Trickey added some tests and made some corrections/modifications.
See D13046 for more details.
This commit inserts a new menu item into the export menu called
"Wavefront OBJ (.obj) - New".
For now the old Python exporter remains in the menu, along with
the Python importer, but we plan to remove it soon (leaving the
old addon bundled with Blender but not enabled by default).
Compare the start of the range to zero to figure out whether the
indices for the instances to keep starts at zero. Also rename the
selection argument, since it made it seem like the selected indices
should be removed rather than kept.
The activation of the text button is a bit special, since it happens during
drawing, the layout isn't computed yet then. Comparable cases where the button
is added on top don't use the layout system, so this didn't become an issue
until now. Trigger a delayed call to `UI_but_ensure_in_view()`.
This completes 1a721c5dbe by versioning old files to correct the
region type. The "tools" region type is relatively standard for this type
of region and doesn't require any changes to the theme, unlike
the "nav bar" type, which would have been a reasonable choice.
Calculates the angle in radians between two faces that meet at an edge.
The angle is 0 to PI in either direction, with flat being 0 and folded over on itself being PI.
If there are not 2 faces on the edge, the angle will be 0.
For valid edges, the angle is the same as the 'edge angle' overlay.
For the Face and Point domain, the node uses simple interpolation to calculate a value.
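In terms of the two face normals, the computed value is essentially the following (a simplified sketch; the actual node works on the mesh data directly):
```
#include <algorithm>
#include <cmath>

/* Unsigned angle between two faces meeting at an edge, from their unit normals:
 * 0 when the faces are coplanar, PI when folded back onto each other. */
float edge_angle(const float n1[3], const float n2[3])
{
  const float d = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
  return std::acos(std::clamp(d, -1.0f, 1.0f));
}
```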
Differential Revision: https://developer.blender.org/D13366
Occurred because "PATH_RAY_SHADOW_CATCHER_BACKGROUND" is expressed as an unsigned
integer, because it is too large for a signed integer, while the "PathRayFlag" enum type
still defaulted to a signed integer.
Allow overriding simple properties of cloth simulations, colliders
and force fields. Vertex group and shape key selectors in cloth are
still not overridable since they are tied to mesh data.
Force fields have a number of physical fields shared between multiple
RNA fields. Until they are decoupled, they will produce redundant
overrides, and cannot have different hard range limits.
Differential Revision: https://developer.blender.org/D13710
Regression introduced in rB098008f42d8127d9b60717c7059d3c55a3bfada7
Previously the selected geometry was ignored along with the hidden one.
The mentioned commit caused neither the hidden nor the selected one to be ignored.
But hidden geometry needs to be ignored.
Continuation of the D13404 which finished the design of not having
geometry-level nodes dependent on object-level.
Differential Revision: https://developer.blender.org/D13405
Weirdly enough, our 'mono' font already had it, but not the main one.
Copied from the DejaVu Sans font.
CC @Tamuna who started the translation for that language.
This implements the design detailed in T92696 to support virtual
filenames for UDIM textures. Currently, the following 2 substitution
tokens are supported:
| Token | Meaning |
| ----- | ---- |
| <UDIM> | 1001 + u-tile + v-tile * 10 |
| <UVTILE> | Equivalent to u<u-tile + 1>_v<v-tile + 1> |
Example for u-tile of 3 and v-tile of 1:
filename.<UDIM>_ver0023.png --> filename.1014_ver0023.png
filename.<UVTILE>_ver0023.png --> filename.u4_v2_ver0023.png
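The `<UDIM>` substitution in the table above is just this arithmetic (e.g. u-tile 3, v-tile 1 gives 1014):
```
/* Conventional 1001-based UDIM number from zero-based u/v tile indices. */
int udim_number(int u_tile, int v_tile)
{
  return 1001 + u_tile + v_tile * 10;
}
```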
For image loading, the existing workflow is unchanged. A user can select
one or more image files, belonging to one or more UDIM tile sets, and
have Blender load them all as it does today. Now the <UVTILE> format is
"guessed" just as the <UDIM> format was guessed before.
If guessing fails, the user can simply go into the Image Editor and type
the proper substitution in the filename. Once typing is complete,
Blender will reload the files and correctly fill the tiles. This
workflow is new as attempting to fix the guessing in current versions
did not really work, and the user was often stuck with a confusing
situation.
For image saving, the existing workflow is changed slightly. Currently,
when saving, a user has to be sure to type the filename of the first
tile (e.g. filename.1001.png) to save the entire UDIM set. The number
could differ if they start at a different tile etc. This is confusing.
Now, the user should type a filename containing the appropriate
substitution token. By default Blender will fill in a default name using
the <UDIM> token but the user is free to save out images using <UVTILE>
if they wish.
Differential Revision: https://developer.blender.org/D13057
It is common to have fields that contain a constant value. Before this
commit, such constants were represented by operation nodes which
don't have inputs. Having a special node type for constants makes
working with them a bit cheaper.
It also allows skipping some unnecessary processing when evaluating
fields, because constant fields can be detected more easily.
This commit also generalizes the concept of field node types a bit.
Currently, a node either supports laziness during execution (like the Switch
node), or it doesn't. If it does support laziness, then every input is computed
lazily. However, usually not all inputs actually have to be computed lazily.
E.g. the boolean switch input is always required, while the other inputs
should be computed lazily.
Better support for such sockets can avoid unnecessary round trips through
the node execution function.
Exposes compare operations via RNA enums.
This uses the rna enum to build the search list using
named operations linked to socket A.
This also weights the Math Node comparison operations lower
for geometry node trees.
Differential Revision: https://developer.blender.org/D13695
The logic used to be:
"if collection doesn't have child collection, check if ob is from this one"
The correct logic should be:
"if collection child does not have this ob, then check this collection".
In the past that worked because the `GPUMaterial` referenced the
`ImageUser` from the image node. However, that design was incompatible
with the recent node tree update refactor (rB7e712b2d6a0d257d272e).
Also, in general it is a bad idea to have references between data that is
owned by two different data blocks.
This incompatibility was resolved by copying the image user from the node
to the `GPUMaterial` (rB28df0107d4a8). Unfortunately, eevee depended
on this reference, because the image user on the node was updated when the
frame changed. Because the image user was copied, the image user in the
`GPUMaterial` did not receive the frame update anymore.
This frame update is added back by this commit. The main change is that
the image user iterator now also iterates over image users in `GPUMaterial`s
on material and world data blocks. An issue is that these materials don't
exist on the original data blocks and that caused the check in
`build_animation_images` in the depsgraph to give the wrong answer.
Therefore the check is extended.
Right now the check is not optimal, because it results in more depsgraph
nodes than are necessary. This can be improved when it becomes cheaper
to check if a node tree contains any references to a video texture.
The node tree update refactor mentioned before makes it much easier
to construct this kind of run-time data from the bottom up, instead of
scanning the entire node tree recursively every time some information
is needed.
This just skips the entire algorithm when there are cycles.
In the future, cycles could be handled more gracefully in the
algorithm, but for now that's not worth it and is not necessary
to fix the bug.
As @hooglyboogly suggested in D13680, this patch adds weighting
to the search results. Dragging from a vector/rgba socket weights
the Vector Math node higher than a float Math node, and vice versa.
Reviewed By: HooglyBoogly
Differential Revision: https://developer.blender.org/D13691
When the material is used in several objects, the filter by material was not working as expected because the internal pointers differ due to the evaluated versions.
Now, the original version of the material is compared instead, so the same address is used.
This is analogous to 6a71b2af66, which did the same
thing for mesh data. Two differences are that here the coordinates
are simply `float3`, and we account for the radius if it's available.
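Conceptually, the per-point step is just the following (a hedged sketch in plain C++, not the actual parallel reduction):
```
#include <algorithm>

struct Float3 {
  float x, y, z;
};

/* Expand the running min/max by one point, padded by its radius when available
 * (pass 0.0f when there is no radius attribute). */
static void minmax_with_radius(const Float3 &co, float radius, Float3 &r_min, Float3 &r_max)
{
  r_min.x = std::min(r_min.x, co.x - radius);
  r_min.y = std::min(r_min.y, co.y - radius);
  r_min.z = std::min(r_min.z, co.z - radius);
  r_max.x = std::max(r_max.x, co.x + radius);
  r_max.y = std::max(r_max.y, co.y + radius);
  r_max.z = std::max(r_max.z, co.z + radius);
}
```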
Here I observed a similar performance increase, from 50ms
average to 10ms average, with 16 million points, a 5x speedup.
The calculation is about 1.4 times faster when no radius is used, down
to 7.3ms average. Before, the calculation was only 1.2 times faster.
This was caused by a mistake in eb0eb54d96, which removed
the clearing of the curve edit mode pointers that are set when creating
the temporary data for the conversion. If they are not cleared, the
generic ID free function will also free the edit mode data, which is
wrong when the source curve is in edit mode.
At the time of allocating the buffer with vertices in context, we don't
know exactly how many vertices are affected, but we do know that it is
less than or equal to twice the number of vertices killed.
The defrag shader makes sure the free heap is free of holes, making
the allocation more straightforward.
Since we now only reference the pages using the tiles, we introduce
a debug shader that produces an image visualizing the page data.
This replaces the debug 8 option.
This also fixes some bugs that were still present in the pipeline.
This separates the handling of directional lights (sun) into their
own loops. This will help reduce register pressure and remove some
pollution of the local light culling.
All sun lights are packed at the start of the light array.
We now scan the depth buffer after the prepass to tag the needed
shadow tiles.
This is much more precise than the bounding box tagging, which is now
reserved for transparent objects.
This also:
- fixes the pixel radius size.
- adds a dedicated info buffer to avoid having one unused tile.
Until now the LOD selection was based on distance from camera.
Now it is based on receiver distance ratio. We compute the world
size of one view pixel along with the world size of one shadow texel.
By knowing a point's distance to the light or to the view, we can
compute the pixel density ratio and deduce the corresponding LOD.
We use this to compute the min LOD during the visibility selection phase
and the "mean" LOD for usage tagging by BBoxes.
The tagging LOD is a crude approximation as it only uses the BBox
center.
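As a rough illustration of the receiver distance ratio idea (the helper names and parameters below are assumptions, not the actual engine code), the LOD can be derived from the log2 ratio of the two world-space footprints:

```cpp
#include <algorithm>
#include <cmath>

/* Hypothetical sketch: the world footprint of one shadow texel and of one
 * view pixel both grow linearly with distance, so their ratio maps to a
 * log2 LOD. All names are illustrative. */
int shadow_lod_select(float dist_to_light, float dist_to_view,
                      float texel_size_per_unit_dist,
                      float pixel_size_per_unit_dist, int lod_max)
{
  float texel_world_size = texel_size_per_unit_dist * dist_to_light;
  float pixel_world_size = pixel_size_per_unit_dist * dist_to_view;
  /* If a view pixel covers more world space than a LOD 0 shadow texel,
   * a coarser LOD (larger texels) is enough. */
  float lod = std::log2(pixel_world_size / texel_world_size);
  return std::clamp(static_cast<int>(std::floor(lod)), 0, lod_max);
}
```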
This makes every shadow setup pass aware of the LOD chain of the tilemap
for each cubemap face.
In the free phase, we mask any LOD page that is completely covered by
higher LOD. This avoids committing memory twice or more per area.
In the allocation phase, we check for the last valid LOD and set it
in the LOD 0 metadata. We also store the actual page location in LOD 0 but
do not mark it as allocated as the LOD tile has the ownership of the page.
This removes the light count limit for forward shaded objects. This
also provides a more efficient way of computing the culling directly on
the GPU. Moreover, this avoids doing multiple lighting passes for high
light counts in the deferred pipeline, improving performance.
This continues the effort to implement virtual shadow mapping.
This includes:
- Spot cone culling of tile.
- Tile vs. view frustum tagging.
- Shadowmap Page allocation / freeing.
- Rendering to the 4K buffer only the tiles that need it.
- Copying to shadow atlas.
This debug buffer is automatically bound if a shader is including
`common_debug_lib.glsl`. One buffer is created for each shading group
using such a shader.
The shader can then use the functions from that file to draw debug
lines. There is a hardcoded limit on the number of lines one buffer can contain. Make
sure to only output lines for a few threads at most.
Under the hood this uses a vertex buffer bound as SSBO that contains
the number of verts and all the positions and colors, each packed into one vec4.
We draw by simply rendering the whole buffer.
All unused vertices are initialized with NaN positions and will not be
drawn.
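A rough CPU-side sketch of what such a debug buffer could look like (field names, capacity and the plain counter are assumptions; the GPU version uses an SSBO and would bump the count with an atomic increment):

```cpp
#include <array>
#include <cmath>
#include <cstdint>

/* Illustrative mirror of the debug line buffer layout. */
struct DebugVert {
  std::array<float, 4> pos;   /* xyz position, w unused. */
  std::array<float, 4> color; /* rgba. */
};

template<int MaxVerts = 4096> struct DebugLineBuf {
  uint32_t vert_count = 0;
  std::array<DebugVert, MaxVerts> verts;

  DebugLineBuf()
  {
    /* Unused vertices get NaN positions so they are not drawn. */
    for (DebugVert &v : verts) {
      v.pos = {NAN, NAN, NAN, NAN};
    }
  }

  void draw_line(const std::array<float, 4> &a, const std::array<float, 4> &b,
                 const std::array<float, 4> &color)
  {
    if (vert_count + 2 > MaxVerts) {
      return; /* Hardcoded limit reached; extra lines are dropped. */
    }
    verts[vert_count++] = {a, color};
    verts[vert_count++] = {b, color};
  }
};
```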
This is a total refactor of how shadows are handled.
We use virtual shadow maps with different levels of detail to
ensure a somewhat evenly distributed precision.
The shadow test is a really crude one that will be
improved in a further commit.
There is a pool of 4096 tilemaps that are distributed between
shadowed lights. These tilemaps are 16x16 each and reference
shadow map pages that are allocated in an atlas. Pages are only
allocated if needed (i.e. visible for rendering an object).
Page management is done on the GPU using compute shaders to reduce
CPU workload.
On the CPU, only one draw pass per updated tilemap is issued.
This reduces the memory requirement of shadowmapping large scenes
with many lights.
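For illustration only, a minimal data-layout sketch matching the numbers above (4096 tilemaps of 16x16 tiles referencing atlas pages); all field names are invented:

```cpp
#include <array>
#include <cstdint>

constexpr int SHADOW_TILEMAP_RES = 16;
constexpr int SHADOW_MAX_TILEMAP = 4096;

struct ShadowTile {
  bool is_used = false;            /* Tagged as needed by a visible receiver. */
  bool is_allocated = false;       /* Has a physical page in the atlas. */
  uint16_t page_x = 0, page_y = 0; /* Page coordinates inside the atlas. */
};

struct ShadowTileMap {
  std::array<ShadowTile, SHADOW_TILEMAP_RES * SHADOW_TILEMAP_RES> tiles;
};

struct ShadowTileMapPool {
  std::array<ShadowTileMap, SHADOW_MAX_TILEMAP> maps;
  /* Pages are only allocated for tiles tagged as used, which is what keeps
   * memory usage low for large scenes; page management runs on the GPU. */
};
```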
Denoising makes use of more memory to store and reproject the result of
the previous frame to reduce noise. This only works in the viewport.
There is a final bilateral filter for cleaning up noise even more.
Screen space raytracing is supported by alpha blended surfaces.
However, only opaque surfaces are visible to the rays, which means
alpha blended surfaces cannot reflect or refract themselves.
Denoising is not possible on alpha blended surfaces; many samples
are needed for noise-free results.
Since the cost of tracing can be very high, raytracing will only be
enabled on demand, on a per-material basis.
This simply reuses the reflection raytracing pipeline but with another
ray distribution. Only direct lighting, distant lighting and emissive
light are visible to diffuse rays.
Subsurface effect is not visible but transmittance effect is visible
to diffuse rays.
Indirect diffuse light is processed by the SSS filter.
The new pipeline is now cleaner and allows for deferred refraction.
The refractions are more accurate but are not denoised for now. More
research needs to be done in this area.
There is no feedback buffer for now, so reflections of metallic surfaces
will appear black.
The same restriction on refractive materials still holds true. They will
not appear in screen space tracing of other non refractive surfaces.
However, refractive surfaces (non-blended) can now reflect themselves
and the other surfaces with screen space reflections.
Half resolution tracing has not been brought back yet.
This is to automate the generation of reuse sample tables, and maybe more
in the future. It is designed so as not to make compilation significantly
longer.
Same as SSS, this has been rewritten to support a varying SSS radius.
Instead of relying on a shadowmap hack to improve the transmittance
artifact (previously called translucency), we expose a minimum thickness
output that reduces the maximum amount of light bleeding that can happen
at the shading point. This is far from perfect but at least it is
tweakable.
The effect is now cheaper and the option to enable it is gone.
It can always be artificially disabled by making the thickness bigger
than the SSS radius.
The effect is always enabled for all SSS surfaces and is even
applied to forward shaded objects (alpha blend mode).
This only adds the output but the output is not yet used.
This thickness output is meant to control the look of subsurface,
refraction, absorption and volume shaders.
The value expected is the mean thickness inside the object at the
shading point. The source can be a vertex color or a texture map baked
from a raytracer.
This new implementation follows the technique described in
"Efficient screen space subsurface scattering" (SIGGRAPH 2018).
Compared to the old implementation it fixes a lot of issues at
the cost of being slower. It fixes:
- Light leaking between different objects.
- Light leaking between different surfaces with different depths.
- SSS radii are now "texturable" per pixel. No limit on the number of SSS surfaces.
- Noise should be lower.
- Precomputation is only done once for all SSS surfaces, which lowers
per-material storage and precomputation time.
The implementation is also simpler as it is a single-pass process.
We differ from the reference presentation by not precomputing the
RGB weights per sample. We compute them on the fly in order
to support varying SSS radii.
Notes:
- SSS IOR and SSS anisotropy are not supported.
- Object level light leak prevention might not work for a high number of
objects in the scene (> 1024). In this case light leaks might occur.
Adding or deleting (hiding) objects in the scene might change which
objects can leak.
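To illustrate the "weights computed on the fly" idea, here is a minimal sketch using a simplified Gaussian falloff as a stand-in for the actual diffusion profile (the names and the profile itself are assumptions):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

/* Sketch of computing per-channel (RGB) sample weights on the fly so the
 * SSS radius can vary per pixel. The Gaussian below is a simplified
 * stand-in for the real diffusion profile. */
std::array<float, 3> sss_sample_weights(float sample_dist,
                                        const std::array<float, 3> &radius_rgb)
{
  std::array<float, 3> w;
  for (int c = 0; c < 3; c++) {
    float sigma = std::max(radius_rgb[c], 1e-4f);
    w[c] = std::exp(-(sample_dist * sample_dist) / (2.0f * sigma * sigma));
  }
  return w;
}

/* The accumulated result is divided by the accumulated weights per channel,
 * so no per-sample precomputation is required. */
```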
This was caused by the StructArrayBuffer wrapper not being tagged as NonMovable.
The UBO was in fact being freed at creation time in debug build, but the
pointer was kept as valid in the copied wrapper.
Changing the higher level structure to not use the copy constructor to avoid this.
This is a needed change for the viewport compositor. The compositor
needs to draw to `dtxl->color` to have correct overlay / background
composition.
The solution here is to have a separate buffer that keeps the first
sample we blend from. This increases VRAM usage but it is the most
elegant option.
This integer division by zero evaluated to 0 on all platforms but Apple,
where it yields 1. The world lighting would then sample the single sample of the
first grid instead of its own sample.
This was caused by the blend mode being used even with full opacity.
This caused issues when the viewport was resized and the color of the
framebuffer became undefined, leading to undefined values in the blend
equation.
Another fix would be to clear the viewport color on resize inside the
GPUViewport.
This is a necessary step for EEVEE's new arch. This moves more data
to the draw manager. This makes it easier to have the render or draw
engines manage their own data.
This makes more sense and cleans up what the GPUViewport holds.
Also rewrites the Texture pool manager to be in C++.
This also moves the DefaultFramebuffer/TextureList and the engine-related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is set up.
We now use a temporary DRWData instead of creating a dummy viewport.
Differential Revision: https://developer.blender.org/D11966
This changes the gbuffer layout to make better use of the hardware when
converting data back and forth. Normals are encoded as two 16 bit components
and colors in the R11G11B10F format.
This was motivated by the need for better quality normals. The issue is
that this increases the GBuffer size considerably. In order to balance
this we chose to merge the refraction and Diffuse/SSS data into the
same buffer. This means we need to stochastically choose one of these
layers (so noise appears). Given that Glass BSDFs are rarely mixed
with Diffuse BSDFs, we think this is a good tradeoff.
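The commit only states that normals use two 16 bit components; as one plausible illustration, a common octahedral packing looks like the sketch below (an assumption, not necessarily the encoding used here):

```cpp
#include <array>
#include <cmath>

/* Pack a unit normal into two components via octahedral mapping. */
std::array<float, 2> normal_encode_octahedral(std::array<float, 3> n)
{
  float l1 = std::abs(n[0]) + std::abs(n[1]) + std::abs(n[2]);
  n = {n[0] / l1, n[1] / l1, n[2] / l1};
  if (n[2] < 0.0f) {
    /* Fold the lower hemisphere over the upper one. */
    float x = (1.0f - std::abs(n[1])) * (n[0] >= 0.0f ? 1.0f : -1.0f);
    float y = (1.0f - std::abs(n[0])) * (n[1] >= 0.0f ? 1.0f : -1.0f);
    n[0] = x;
    n[1] = y;
  }
  /* Remap from [-1, 1] to [0, 1] before quantizing to 16 bits. */
  return {n[0] * 0.5f + 0.5f, n[1] * 0.5f + 0.5f};
}
```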
The functions need to be declared before main as prototypes.
The appended libs will use the resources (textures, UBOs) defined at
global scope.
This removes a bit of code duplication and some long macros.
Instead of appending using `BLENDER_REQUIRE`, shaders can now ask for
libs to be added after the shader's `main()` by using the
`BLENDER_REQUIRE_POST` pragma.
Use view space instead of world space to compute the pixel projection.
This fixes issues where the camera is far from the origin and float precision
would produce artifacts.
This ports the facing "flat" normal trick used by the gpencil engine
to EEVEE, as well as the thickness mode.
The object parameters are passed via the objectInfos UBO to avoid
much boilerplate code. However if this UBO grows too much we might have
to split it.
The normal trick for planar surfaces is quite simple to port to the
vertex shader even if it is less efficient.
However, to compute it we need the object bounds. This is passed as a
scale only through the orco factors. This will need a bit of cleaning
at some point, with the bounding box computed at the object level.
Nothing much different compared to the previous implementation.
The transparent BSDF and principled BSDF now detect when the material
is potentially transparent to select the best way to render it.
This makes it possible to have AA and correct blending of the
forward rendered spheres.
However, to avoid distorted spheres we do not support Lookdev
in panoramic projection mode.
Also remove support for LookDev when using render border for now.
This differs a bit from the old implementation.
- Instead of manually adjusting the viewport we correctly place the
sphere in the vertex shader.
- Rendering happens after TAA accumulation: This is because we now
support panoramic cameras and TAA would distort the spheres.
This exposes the capability of having no light and no probe (except the
world one) for specific views / code paths.
The caller just needs to pass 0 as extent to the `set_view()` function.
This is useful for lookdev.
This does not include reference spheres rendering.
The approach is a bit different than before.
Now we use a `bNodeTree` to control the rendering of lookdev. This
generates a `GPUMaterial` that is stored per `Instance`. This way
rendering lookdev is just updating the temp light cache using this
material as the world material, removing the use of a custom shader.
This introduces a small hack in order to bind the studiolight hdri after
the nodetree glsl parsing.
The background display however is still using a custom shader in order
to sample the world cubemap with different roughness.
The view space option of the studiolight is now faster by using a
transform before shading instead of rebaking the lightprobe constantly.
This should not have any particular impact on render time.
When evaluating surfaces, the deferred passes need to sample the
depth buffer, but they also test against the stencil buffer.
Moreover the sampler needs to be a 2D sampler, which is not the case
for cubemaps and texture2Darrays.
To overcome this we simply copy the gbuffer depth to another
temp texture using framebuffer blitting.
Some things differ from the old implementation.
- Object visibility is filtered correctly without using a visibility
callback (which is to be removed).
The implementation is also more high level, using fewer low level tricks.
A dedicated LightProbeView is created for each lightprobe cubeface to
render using all pipelines (deferred and forward).
There are still a few things not working.
Only world probe is supported for now.
The new implementation diverges from the original by randomly
selecting one lightprobe instead of sampling them all.
This speeds up rendering a bit.
This is a small convenience. It lets the render engine use this
default world if the scene has no world.
The world is black to keep the same behavior as before.
Shading groups are now created by the material_array_get functions
instead of passing a reference to be filled later. This avoids having
to wait later to maybe create a sub shading group.
This also simplifies handling of the different geometry types.
This adds a new closure selection method.
- In a first pass, weights are accumulated per output type (diffuse,
reflection, refraction).
- A random threshold is then generated before evaluating the BSDF nodes
again.
- During the evaluation pass the random threshold is decremented until
it reaches 0. At this moment the current BSDF is sampled.
For this to work, I split the evaluation and the weighting into two
functions for all BSDFs. The `*_eval` nodes are generated as dangling
nodes from the graph and only serialized after the rest of the graph.
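A minimal sketch of the selection logic described above, with invented names; the real version runs inside the generated shader rather than on CPU-side vectors:

```cpp
#include <vector>

/* Pass 1 accumulates the weights of every closure; a random threshold in
 * [0, total_weight) is then "consumed" during the second evaluation pass,
 * and the closure that brings it below zero is the one that gets sampled. */
struct ClosureChoice {
  int index = -1;
  float pdf = 0.0f;
};

ClosureChoice closure_select(const std::vector<float> &weights, float rand01)
{
  float total = 0.0f;
  for (float w : weights) {
    total += w; /* Pass 1: accumulate per-output weights. */
  }
  ClosureChoice choice;
  if (total <= 0.0f) {
    return choice;
  }
  float threshold = rand01 * total;
  for (int i = 0; i < int(weights.size()); i++) {
    threshold -= weights[i]; /* Pass 2: decrement until it crosses zero. */
    if (threshold < 0.0f) {
      choice.index = i;
      choice.pdf = weights[i] / total;
      break;
    }
  }
  return choice;
}
```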
The recalc flag on the Material ID being unavailable to render engines, this
adds a simple way to detect material updates by detecting shader creation
or update.
This constructs a "mirror" nodetree that feeds the closure "shader"
nodes with their respective final weight.
The tree is mirrored using simple math nodes. This is quite messy but
this is the only way to proceed without introducing special nodes.
The other issue with this method is that inputs are all uniforms, even
for unplugged sockets on temporary math nodes, which adds bloat to the
shader uniform buffer structure.
Only the part relevant to the weighting is duplicated. Other connections
with the shading tree are reused.
All shader nodes are updated to receive a `Weight` hidden parameter.
The original shader mixing tree is preserved to leave the choice of using
either way to weight the output.
For now this is only done for the output nodes. This will need to be
extended to the Closure-to-RGBA sub-tree.
This is the first step towards the new evaluation scheme of EEVEE
closures.
This commit contains:
- Removal of the GPU_SOURCE_BUILTIN type, preferring globals instead. This
avoids a lot of boilerplate code since most of the old builtins are now
data that is always present (i.e. view matrices, normals).
- Rewriting of codegen in C++ to use `std::stringstream`.
- Added a callback to let the engine decide what to do with the codegen code.
This removes a lot of the need for defines caused by code order
dependency. The engine can insert the nodetree code in custom ways
to create advanced effects (i.e. add displacement or vertex lighting).
The engine now returns final shader strings.
- Closure nodes evaluation replacement is a placeholder for now.
This is a port of the old material grouping. This is a bit cleaner
as we use containers for each pass and other structures.
The nodetree is generated without major errors for simple materials but
it is not yet used, as closures are not output.
This adds the transparency and volume handling in the deferred
render pipeline.
Implementation is still unfinished.
For a better naming convention, I renamed the object shader to surface.
This introduces a fat Gbuffer layout that groups closure data in groups
of similar BSDFs. The goal is to have at least one sample for each
group to avoid too much code complexity and the expected worse performance.
There is a lot of room for buffer reuse to reduce memory usage but it is
not considered a priority for now.
Add a smooth transition to avoid flickering of stochastic effects such
as soft shadows.
This uses a simple blend method to progressively reveal the render
after some low sample count to avoid most of the flickering.
Parameters are hardcoded for now.
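As a tiny illustration of such a reveal blend (the sample counts below are placeholders, since the actual parameters are hardcoded elsewhere):

```cpp
#include <algorithm>

/* Fully hide the first few noisy samples, then blend the accumulated
 * render in linearly until it is fully revealed. */
float reveal_factor(int sample_index, int start_sample = 4, int full_sample = 32)
{
  float t = float(sample_index - start_sample) / float(full_sample - start_sample);
  return std::clamp(t, 0.0f, 1.0f);
}
```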
We use a new RNG to avoid correlation artifacts between Anti-Aliasing
and Shadow samples (see T68594).
The new sequence is a leap Halton sequence. This makes it good with a
low number of samples and yields fewer correlation issues.
Another change is that we directly jitter the projection matrix instead
of rotating the view matrix. This improves convergence time and
avoids passing a second matrix to the shader.
However this can lead to discontinuity artifacts at face borders.
We might want to revert to the old rotation method for this
reason even if convergence is slower.
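For reference, a plain Halton sequence with a leap (stride) applied could look like the sketch below; the base and leap values are placeholders, not the ones used by the engine:

```cpp
#include <cstdint>

/* Radical-inverse Halton sequence in a given base. */
double halton(uint32_t index, uint32_t base)
{
  double result = 0.0;
  double inv_base = 1.0 / double(base);
  double fraction = inv_base;
  while (index > 0) {
    result += double(index % base) * fraction;
    index /= base;
    fraction *= inv_base;
  }
  return result;
}

double leap_halton(uint32_t sample, uint32_t base = 3, uint32_t leap = 7)
{
  /* Only every `leap`-th entry of the underlying sequence is used. */
  return halton(sample * leap + 1, base);
}
```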
Now the shadows are linked to a `Light` object. The `Light` object is
linked to an `ObjectKey` to ensure persistence and deletion tracking.
The uniform data is packed so that there is 1 `ShadowPunctualData`
per light in a `LightBatch`. This means the number of `Shadow`s in a scene
is only limited by the shadowmap itself.
Differences from the previous implementation:
- Better texture space usage of cone and area light shadow.
- Shadows are packed in an atlas. Reducing requirements for future
features.
- Sampling is simpler because shadow matrix does everything.
This follows closely the implementation of 2.5D tiled light
culling described in the presentation:
"Improved Culling for Tiled and Clustered Rendering"
from Michal Drobot
http://advances.realtimerendering.com/s2017/2017_Sig_Improved_Culling_final.pdf
I chose the tile + Z binning approach for its high depth range support
and low CPU overhead & low memory consumption compared to the cluster
based culling. The downside is that the culling is a bit less precise in
some aspects, but it is quite balanced.
The culling is done by the `Culling` object, which is templated to easily
be reused for light probe culling.
The Z-binning process is described starting from slide 20 in the
reference pdf.
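A simplified sketch of the Z-binning half of the technique (illustrative names; the real implementation lives in shaders and packs the bins tightly):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

/* Lights are sorted by view-space depth; the depth range is split into bins,
 * each bin storing the min/max index of the lights that touch it. */
struct ZBin {
  int light_min = INT32_MAX;
  int light_max = -1;
};

int z_bin_index(float view_z, float z_near, float z_far, int bin_count)
{
  /* Logarithmic distribution handles large depth ranges better. */
  float t = std::log(view_z / z_near) / std::log(z_far / z_near);
  return std::clamp(int(t * bin_count), 0, bin_count - 1);
}

/* At shading time, only lights whose index lies inside the pixel's bin range
 * (intersected with the screen-space tile mask) are evaluated. */
```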
I also implemented a debug pass to visualize false negatives (lights
culled when they shouldn't be) and light evaluation density.
This is useful to detect failure cases and hotspots. This could be exposed
as a developer-only render pass in the future.
Some optimizations of the reference implementation require extensions
not yet added to the GPU module and will be added later.
This has the basis of clustered light culling but does not yet do
it. The lights are only culled by frustum.
It's the same as if there were only one cell for the entire viewport.
This also wraps GPUFrameBuffer & GPUTexture inside eevee::Framebuffer
and eevee::Texture to improve management.
Another cleanup was to make all members of `Instance` public to
avoid much complexity in accessing the data across module
dependencies.
Also split velocity View related data into `class Velocity` and
rename the previous `Velocity` to `VelocityModule`.
Support an infinite light count by dividing rendering into chunks of
LIGHT_MAX. Forward passes are just rendered again, and deferred passes
(not implemented yet) will just have to have multiple light evaluation
passes.
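A minimal sketch of the chunking idea, with LIGHT_MAX and the render callback as placeholders:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr int LIGHT_MAX = 128; /* Placeholder value. */

/* Process the scene light list LIGHT_MAX lights at a time, submitting the
 * forward pass once per chunk with only that chunk's lights bound. */
template<typename Fn>
void foreach_light_chunk(const std::vector<int> &lights, Fn &&render_pass)
{
  for (size_t start = 0; start < lights.size(); start += LIGHT_MAX) {
    size_t end = std::min(lights.size(), start + size_t(LIGHT_MAX));
    render_pass(start, end); /* Render the pass again for lights [start, end). */
  }
}
```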
This is almost the same thing as the old implementation.
Differences:
- We clamp the motion vectors to their maximum when sampling the velocity buffer.
- Velocity rendering (and the data manager) is separated from motion blur. This allows
outputting the motion vector render pass and, in the future, using motion vectors to
reproject older frames.
- Vector render pass support (only if motion blur is disabled, just like Cycles).
- Velocity tiles are computed in one pass (simpler code, less CPU overhead, less
VRAM usage, maybe a bit slower but imperceptibly so (< 0.3 ms)).
- Two velocity passes are output, one for the motion blur fx (applied per shading view)
and one for the vector pass. This could be optimized further in the future.
- No current support for deformation & hair (to come).
Bonus addition, support for shutter curve.
Compared to the old implementation, the per time step sync function
is lighter and localized. Also it does not require a full engine
"reboot" in order to work.
Also modifies camera setup to be compatible with future camera motion
blur.
Pretty much identical to the previous implementation, with the exception
of a temporary noise function and some simplification of the CoC
computation. This also fixes issues with the Ortho depth of field.
Most of the files were modified to comply with the new shader code style.
This also adds partial support of panoramic cameras (bokeh and
anamorphic is still buggy).
This cleans up a lot of confusion / complexity in the setup code.
Setup is closer to what Cycles does now.
Also duplicates some buggy behavior of Cycles for now until this
is fixed.
This moves view resolution handling to the `Camera` class that will
in the future clip and trim each view in panoramic projection.
There is a new `CameraView` that contains the `DRWView` and subview.
This way each `ShadingView` is associated with a unique `CameraView`.
`ShadingView` & `CameraView` are all allocated & defined at creation time
but only the ones activated by `Camera` will be rendered.
This option makes accumulation happen in a pre-exposed logarithmic
color space. This reduces the importance of bright pixels in the pixel
filter, which results in less aliasing in these areas.
There are a few cases where one might want to disable this option to
match Cycles better.
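As an illustration of accumulating in a pre-exposed log space (the exact transfer function is not stated in the message, so the log2 curve below is an assumption):

```cpp
#include <cmath>

/* Encode before summing samples, decode after resolving. */
float film_encode(float linear, float exposure)
{
  /* Pre-expose first so the curve is centered on the final exposure. */
  return std::log2(linear * exposure + 1.0f);
}

float film_decode(float encoded, float exposure)
{
  return (std::exp2(encoded) - 1.0f) / exposure;
}
```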
Render mode is really close to what the viewport render does.
Film output is done by resolving the data to the next (double buffered)
framebuffer and reading it back.
This also includes a bit of cleanup regarding the naming of init() and sync()
functions.
This commit adds the Film class that handles accumulation of color and
non-color data using arbitrary projection and filter size.
A weighted accumulation (sum) is done into a data buffer with an
additional weight buffer. The sum being per pixel, it allows input
textures that are not aligned with the output pixel grid.
Panoramic projection works by rendering a cubemap (6 views) of the scene
at the camera position. The Film filter pass then gathers the pixels
using the correct panoramic projection, ensuring correct anti-aliasing.
For Non-color data (depth, normals) we only keep the closest value to
the target pixel center (simulating a filter size of 0).
Color data is accumulated in log space to improve anti-aliasing output.
This is hardcoded for now.
Larger filters have poor performance but are very fast to converge.
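A minimal sketch of the weighted accumulation and resolve described above (single-channel, invented names):

```cpp
#include <vector>

/* Every input sample adds filter_weight * value to the data buffer and
 * filter_weight to the weight buffer; the resolve step divides the two. */
struct FilmBuffers {
  std::vector<float> data;   /* Sum of weight * value, one float per pixel. */
  std::vector<float> weight; /* Sum of weights, one float per pixel. */
};

void film_accumulate(FilmBuffers &film, int pixel, float value, float filter_weight)
{
  film.data[pixel] += value * filter_weight;
  film.weight[pixel] += filter_weight;
}

float film_resolve(const FilmBuffers &film, int pixel)
{
  float w = film.weight[pixel];
  return (w > 0.0f) ? film.data[pixel] / w : 0.0f;
}
```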
Code-wise, this commit renames some modules to avoid possible confusion
and have better meaning, and uses namespaces instead of prefixes.
Added a new eevee_shared.hh file to share structure and enum definitions
between GLSL and C++.
Same idea as the previous commit. This cleans up the interface and puts all
viewport-related data inside the `DRWData` struct.
The draw manager is responsible for freeing it. That is the main point
of this all. In the future, we can have custom freeing method for each
engine.
This also moves the DefaultFramebuffer/TextureList and the engine-related
data to a new `DRWViewData` struct. This struct manages the per view
(as in stereo view) engine data.
There is a bit of cleanup in the way the draw manager is set up.
We now use a temporary DRWData instead of creating a dummy viewport.