Very easy fix, the bug seemed to be a result of a typo on the right-most margin.
Old: {F13013777}
New: {F13013782}
Maniphest Tasks: T97497
Differential Revision: https://developer.blender.org/D14711
This was projecting the unnormalized z scale axis onto the plane defined
by the view vector. If object scale was very small, this made the empty
still visible at viewing angles far from the object axis.
Now use the normalized z scale axis to make this work the same at all
object scales.
Fixes T97004.
Maniphest Tasks: T97004
Differential Revision: https://developer.blender.org/D14557
This was possible with the gizmo, but not using the operator (e.g. from
the sidebar).
Now store the original offsets and reset to these on cancel (ESC or RMB).
Maniphest Tasks: T97465
Differential Revision: https://developer.blender.org/D14704
Text inserted via TEXT_OT_insert would always have auto-close
logic applied, which interfered with any insertion of literal strings
containing brackets.
Now auto-closing brackets is restricted to characters read from
events in the operator's invoke function.
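For context, a minimal sketch of the affected case (assuming an active Text Editor to provide the operator context; the inserted text is only an example):
```
import bpy

# Inserting a literal string containing brackets through the operator; with
# this change, no extra closing brackets are auto-inserted.
bpy.ops.text.insert(text='items = ["a", "b", "c"]')
```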
Sculpt paint tools now pop up an error message if
dynamic topology or multires are enabled.
Implementation notes:
* SCULPT_vertex_colors_poll is now a static function in sculpt_ops.c.
It is now used solely by the legacy color attribute conversion
operators (SCULPT_OT_vertex_to_loop_colors and SCULPT_OT_loop_to_vertex_colors)
and should be deleted when they are.
* There is a new method, SCULPT_handles_colors_report, that returns true if
the sculpt session can handle color attributes; otherwise it returns false
and displays an error message to the user.
- Vertex paint mode has been refactored into C++ templates.
It now works with both byte and float colors and point
& corner attribute domains.
- There is a new API for mixing colors (also based
on C++ templates). Unlike the existing APIs, byte
and float colors are interpolated identically.
Interpolation happens in a squared RGB space;
this may be changed in the future.
- Vertex paint now uses the sculpt undo system.
Reviewed By: Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D14179
Ref D14179
Python 3.10 has added "soft keywords" [0] to its list of identifiers.
This patch adds these soft keywords to the list of builtin functions
that the text editor searches for when highlighting Python code.
The only soft keywords that Python 3.10 currently has are: `match`,
`case`, and `_`, but more will likely be added in future versions.
Currently, the `_` soft keyword is ignored from highlighting. It is a
wildcard matching pattern when used with `case` (similar to `default`
for `switch`es in C/C++), but `_` is far more often used in other
contexts where highlighting the `_` might seem strange. For example,
ignoring elements when unpacking tuples (`_, g, _, a = color`).
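As a brief illustration (a minimal sketch, not taken from the patch), the new soft keywords appear in code like this:
```
# Python 3.10 structural pattern matching; `match` and `case` are the soft
# keywords now highlighted, while `_` acts as the wildcard pattern.
def describe(point):
    match point:
        case (0, 0):
            return "origin"
        case (x, 0):
            return f"on the x axis at {x}"
        case _:  # similar to `default` in a C/C++ switch
            return "somewhere else"
```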
This patch also updates the commented Python code for creating the list
of keywords, for convenience.
Before:
{F13012878}
After:
{F13012880}
Example from PEP-636 [1]
Note: These soft keywords are only reserved under specific contexts.
However, in order for the text editor to know when the keywords are used
in the appropriate contexts, the text editor would need a full-blown
Python grammar [2] parser. So, for now, these keywords are simply added
in along with the other keywords in order to highlight them in the text
editor.
[0]: https://docs.python.org/3/reference/lexical_analysis.html#soft-keywords
[1]: https://peps.python.org/pep-0636/#matching-specific-values
[2]: https://docs.python.org/3/reference/grammar.html
Ref D14707
Currently the "exclude" option for autopep8 isn't as reliable as it
should be since passing in absolute paths to autopep8 causes
the paths not to match. See: D14686 for details.
So explicitly disable autopep8 in this generated file (the generator
has already been updated).
Also use spaces for indentation otherwise autopep8 re-indents them.
This seems like a bug in autopep8 since it's changing lines with
autopep8 disabled. Use a workaround instead of looking into a fix since
it's simpler for all our Python files to use spaces instead of tabs and
there isn't much benefit mixing indentation for scripts.
Sometimes, when moving a strip with a 2-input effect attached and causing
strip overlap, this overlap is resolved incorrectly - too big an offset
is applied. This is because effects are not taken into consideration
when "shuffling" to resolve overlap.
To fix the usual cases (transitions), overlap between a strip and its effect
is considered to be impossible.
There are edge cases, but these would be much more complicated to
implement and could have negative impact on performance.
Reviewed By: ISS
Differential Revision: https://developer.blender.org/D14578
The crash was caused by incorrectly assuming that the rendered strip is a meta
when editing code in bulk, but this wasn't true and the strip had no
channels initialized, which caused a crash on NULL dereference.
The correct approach is to find the list of channels on the same level as the
rendered strip, similar to how the `SEQ_get_seqbase_by_seq` function works
for the list of strips.
This reverts commit 390b9f1305. It seems to
break things on Linux for unknown reasons, so leave it out for now. A solution
to this will be required for Vega cards though.
There is not much point in having those editable in overrides, and since
those are pointers, their value always differs from ref linked ID vs.
local override one, generating 'noise' in Outliner's override property
view.
Building against the existing 3.1 libraries should continue to work, until
the precompiled libraries are committed for all platforms.
* Enable WebP by default.
* Update Windows for new library file names.
* Automatically clear outdated CMake cache variables when upgrading to new
libraries.
* Fix static library linking order issues on Linux for OpenEXR and OpenVDB.
Implemented by Ray Molenkamp, Sybren Stüvel and Brecht Van Lommel.
Ref T95206
The "PROP" in the name reflects its generic status, and removing
"LOOP" makes sense because it is no longer associated with just
mesh face corners. In general the goal is to remove extra semantic
meaning from the custom data types.
For some reason, the rework of liboverride handling of Collection items
insertion (rB33c5e7bcd5e5) completely missed updating the default
liboverride apply code accordingly...
Many thanks to Wayde Moss (@GuiltyGhost) for the investigation and
proposed solution.
This was caused by the use of a reserved keyword macro that is not
directly used but causes an error on some compilers.
Change the occurrences to not match the macros.
This file was skipped by source/tools/utils/autopep8_clean.py
since it doesn't have a .py extension, running the autopep8 tool
recursively detects Python scripts without extensions.
- Increase the stack level so the reported line number references
script authors code (not Blender's wrapper function).
- Include the operator name and poll/call usage in the warning.
Support a way to temporarily override the context from Python.
- Added method `Context.temp_override` context manager.
- Special support for windowing variables "window", "area" and "region",
as well as other context members such as "active_object".
- Nesting context overrides is supported.
- Previous windowing members are restored when the context exits unless
they have been removed.
- Overriding context members by passing a dictionary into operators in
`bpy.ops` has been deprecated and warns when used.
This allows the window in a newly loaded file to be used, see: T92464
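A minimal usage sketch of the new context manager (assuming the first window contains a 3D Viewport area):
```
import bpy

# Operators inside the block see the overridden window/area; the previous
# windowing members are restored when the context manager exits.
window = bpy.context.window_manager.windows[0]
area = next(a for a in window.screen.areas if a.type == 'VIEW_3D')
with bpy.context.temp_override(window=window, area=area):
    bpy.ops.screen.screen_full_area()
```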
Reviewed by: mont29
Ref D13126
These functions can be used with PyArg_ParseTupleAndKeywords
(and related functions) to take typed RNA arguments without
having to extract and type-check them separately.
No functional changes, extracted from D13126.
There's a small typo in the tooltip for applying the Parent Inverse. This patch fixes that typo.
old:
{F13010751}
new:
{F13010749}
Reviewed By: Blendify
Maniphest Tasks: T97437
Differential Revision: https://developer.blender.org/D14693
This is mostly a cleanup to avoid hardcoding the eager calculation of
normals when it isn't necessary, by reducing calls to `BKE_mesh_calc_normals`
and by removing calls to `BKE_mesh_normals_tag_dirty` when the mesh
is newly created and already has dirty normals anyway. This reduces
boilerplate code and makes the "dirty by default" state more clear.
Any regressions from this commit should be easy to fix, though the
lazy calculation is solid enough that none are expected.
This adds support for rendering motion blur for volumes, using their
velocity field. This works for fluid simulations and imported VDB
volumes. For the latter, the name of the velocity field can be set per
volume object, with automatic detection of velocity fields that are
split into 3 scalar grids.
A new parameter is also added to scale velocity for more artistic control.
Like for Alembic and USD caches, a parameter to set the unit of time in
which the velocity vectors are expressed is also added. For Blender gas
simulations, the velocity unit should always be in seconds, so this is
only exposed for volume objects which may come from external OpenVDB
files.
These parameters are available under the `Render` panels for the fluid
domain and the volume object data properties respectively.
Credits: kernel advection code from Tangent Animation's Blackbird based
on earlier work by Geraldine Chua
Differential Revision: https://developer.blender.org/D14629
The fix ensures that the reference count for `IShellItem *pSI` is decremented,
preventing a memory leak. For `IFileOperation *pfo` the decrement of the
reference count is only attempted when `CoCreateInstance` is successful.
Additionally, the gotos have been replaced with nested if/else statements.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14681
Since shader sources are now parsed on demand via `GPUShaderCreateInfo`,
sources are not available to be read via
`GPU_shader_get_builtin_shader_code`.
Currently this results in a crash as the code tries to read `NULL`
pointers.
`GPU_shader_get_builtin_shader_code` was created with the intention of
informing the user how a builtin shader works, thus "replacing"
detailed documentation.
Therefore this function doesn't really have a practical use in an addon.
So, instead of updating the function (which would require several
changes to the gpu module), remove it and improve the documentation.
Release Notes: https://wiki.blender.org/wiki/Reference/Release_Notes/3.2/Python_API#Breaking_Changes
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D14678
Noted as part of T94775 investigation by Wayde Moss (@GuiltyGhost),
thanks!
NOTE: this mistake probably did not have any practical impact in current
code, at least for overrides.
This is to make the codegen and shading nodes object type agnostic. This
is essential for the flexibility of the engine to use the nodetree as it sees
fit.
The essential volume attribute struct properties are moved to the
`GPUMaterialAttribute`, which sees its final input name set on creation.
The binding process is centralized into `draw_volume.cc` to avoid
duplicating the code between multiple engines. It mimics the hair attribute
process.
Volume object grid transforms and other per-object uniforms are packed into
one UBO per object. The grid transform is now based on the object, which
simplifies the matrix preparations.
This also gets rid of the double transforms and uses object info orco factors
for volume objects.
Tagging @brecht because he did the initial implementation of Volume Grids.
Noticed while looking into oneAPI patch.
Seems to be unused, without clear indication why/when it might be
needed. Removing the function simplifies adding the new backend.
Differential Revision: https://developer.blender.org/D14652
Continuing the refactors described in T93602, this commit moves
the face dot tag set by the subdivision surface modifier out of
`MVert` to `MeshRuntime`. This clarifies its status as runtime data
and allows further refactoring of mesh positions in the future.
Before, `BKE_modifiers_uses_subsurf_facedots` was used to check
whether subsurf face dots should be drawn, but now we can just check
if the tags exist on the mesh. Modifiers that create new geometry
or modify topology will already remove the array by clearing mesh
runtime data.
Differential Revision: https://developer.blender.org/D14680
The following methods weren't included in API docs.
- BlendDataLibraries.load
- BlendDataLibraries.write
- Text.region_as_string
- Text.region_from_string
This doesn't cause any functional change as the RNA property
of Bone wasn't overridden by the _GenericBone's property,
however the `children` property was documented twice, causing a warning.
The UI description for the `bpy.types.ActionFCurves.remove` was incorrect;
seemingly a copy-paste typo from the `rna_Action_groups_remove` function.
Reviewed By: sybren, Blendify
Differential Revision: https://developer.blender.org/D14659
Preserve multi socket link order when copying nodes or adding new
group input sockets by linking directly to multi inputs from the group
input node's extension socket.
This is done by also copying the `multi_input_socket_index` when
the new links are created by copying existing or temporary links.
Reviewed By: Hans Goudey
Differential Revision: http://developer.blender.org/D14535
Keep the existing Rec.709 fit and convert to other colorspace if needed, it
seems accurate enough in practice, and keeps the same performance for the
default case.
Reimplement copy geometry node groups in C. The version implemented in
Python could also manually copy the animation data, but it's more
standard to do this with `BKE_id_copy_ex` and `LIB_ID_COPY_ACTIONS`.
Differential Revision: https://developer.blender.org/D14615
`rna_NodeSocket_refine` and `rna_Node_refine` take significant time
when building the `NodeTreeRef` acceleration data structure, but they
aren't used at all. This commit removes their eager calculation and
instead creates them on-demand in the `rna()` functions. They also
aren't inlined to avoid including `RNA_prototypes.h` in the header.
Differential Revision: https://developer.blender.org/D14674
Continued improvements to the new C++ based OBJ importer.
Performance: about 2x faster.
- Rungholt.obj (several meshes, 263MB file): Windows 12.7s -> 5.9s, Mac 7.7s -> 3.1s.
- Blender 3.0 splash (24k meshes, 2.4GB file): Windows 97.3s -> 53.6s, Mac 137.3s -> 80.0s.
- "Windows" is VS2022, AMD Ryzen 5950X (32 threads), "Mac" is Xcode/clang 13, M1Max (10 threads).
- Slightly reduced memory usage during import as well.
The performance gains are a combination of several things:
- Replacing `std::stof` / `std::stoi` with C++17 `from_chars`.
- Stop reading the input file char-by-char using `std::getline`, and instead read in 64kb chunks and parse from there, taking care of possibly handling lines split mid-way due to chunk boundaries (see the sketch after this list).
- Removing abstractions for splitting a line by some character.
- Avoid tiny memory allocations: instead of storing a vector of polygon corners in each face, store all the corners in one big array, and per-face only store indices "where do corners start, and how many". Likewise, don't store full string names of material/group names for each face; only store indices into overall material/group names arrays.
- Stop always doing mesh validation, which is slow. Do it just like the Alembic importer does: only do validation if found some invalid faces during import, or if requested by the user via an import setting checkbox (which defaults to off).
- Stop doing "collection sync" for each object being added; instead do the collection sync right after creating all the objects.
Cleanup / Robustness:
This reworking of the parser (see the "removing abstractions" point above) means that all the functions that were in the `parser_string_utils` file are gone, and replaced with a different set of functions. However they are not OBJ specific, so as pointed out during review of the previous differential, they are now in the `source/blender/io/common` library.
Added gtest coverage for said functions as well; something that was only indirectly covered by obj tests previously.
Rework of some bits of parsing made the parser actually better able to deal with invalid syntax. E.g. previously, if a face corner were a `/123` string, it would have incorrectly treated that as a vertex index (since it would get "hey that's one number" after splitting a string by a slash), instead of properly marking it as invalid syntax.
Added gtest coverage for .mtl parsing; something that was not covered by any tests at all previously.
Reviewed By: Howard Trickey
Differential Revision: https://developer.blender.org/D14586
- Was not exporting "Poly" curves at all,
- Had a crash when a single object contains multiple curves of different types -- it had a check for "is this nurbs compatible?" only for the first curve, and then proceeded to treat the other curves as nurbs as well, without checking for validity.
Fixed both issues by doing the same logic as in the old python exporter:
- Poly curves are supported,
- Treat object as "nurbs compatible" only if all the curves within it are nurbs compatible.
Added test coverage in the gtest suite. While at it, made "all_curves" test use the "golden obj file template" style test, instead of a manually coded test that checks intermediate objects but does not check the final exported result.
Reviewed By: Howard Trickey
Differential Revision: https://developer.blender.org/D14611
The new 3.1 OBJ exporter code had incorrect code to determine which vertex group a polygon belongs to -- for each vertex, it was only looking at the first vertex group it has, and not using the group weight either.
This 99% fixes T96824, but not 100% on the user's submitted mesh -- exactly two faces from that mesh get assigned a different group compared to the old exporter. Either choice is "correct" given that on these two faces there are two vertex groups with equal contribution. The old Python exporter was picking the group based on internal python group name map order, whereas the new C++ exporter is picking the group with the lowest index, in case of ties. I'm not sure if it's possible to fix this TBH, will have to wait until the importer is also C++.
While at it, the new vertex group calculation code was doing a lot of redundant work for each and every face (traversing group lists several times, allocating & freeing memory), so I fixed that. Exporting a 6-level subdivided Monkey mesh with 30 vertex groups was taking 810ms, now takes 330ms.
Reviewed By: Howard Trickey
Differential Revision: https://developer.blender.org/D14500
This adds a basic unit test to check USD has been correctly
built with imaging components. To support building with both
the old and new libs, it automatically adds the test when it
detects a library with imaging enabled. (Platform devs will
have to pay attention that it runs the test to validate the libs
build correctly.)
For future use in the code it also defines a USD_HAS_IMAGING
define that one can check to see if we're building against a USD lib
that has it (just because we build/ship with it doesn't
mean downstream builds will ship with it, so we'll have
to be a little pro-active there).
Reviewed By: sybren
Differential Revision: https://developer.blender.org/D14456
In some circumstances singular files with numbers in their name (like
turntable-1080p.png or frame-1042.png) might be detected as a UDIM.
The root cause in this particular instance was because `BKE_image_get_tile_info`
believed this file to be a tiled texture and replaced the filename with
a tokenized version of it. However, later on, the code inside `image_open_single`
did not believe it was tiled because only 1 file was detected and our
tiled textures require at least 2. This discrepancy led to the broken
filename situation.
This was a regression since rB180b66ae8a1f as that introduced the
tokenization changes.
Differential Revision: https://developer.blender.org/D14667
Also, as a suggestion, this patch changes Mask By Color and Color Filter to be the same shade of green as paint and smear tool icons
{F12998856}
{F12998857}
{F12998858}
Reviewed By: Julian Kaspar & Joseph Eagar
Differential Revision: https://developer.blender.org/D14632
Ref D14632
If the `wpoly` vector was small, the `wpoly_new` pointer could point
to part of its inline buffer on the stack, which becomes invalid out of
that scope. Instead, store `wpoly_new` as a span, and assign it properly
from the moved vector.
The HIG mentions that redundant words like "Enables" or "Activates"
shouldn't be used for tooltips of boolean properties. In this case
"When checked" was the redundant language that was implied by
the checkbox itself-- convention is to just state what the property
does when it's on.
Also change a few conjugations to the imperative and simplify
wording slightly, in order to be more consistent with language
elsewhere in Blender, and to be a bit more direct.
Differential Revision: https://developer.blender.org/D14644
Introduced by my recent commit: {rB3acbe2d1e933}
Led to a crash when insert_keyframe_direct() was called. Keyframing
crashed for NLA special properties (influence, animated_time),
driven properties, etc.
This commit changes the Curve to Mesh node to work with `Curves`
instead of `CurveEval`. The change ends up basically completely
rewriting the node, since the different attribute storage means that
the decisions made previously don't make much sense anymore.
The main loops are now "for each attribute: for each curve combination"
rather than the other way around, with the goal of taking advantage
of the locality of curve attributes. This improvement is quite
noticeable with many small curves; I measured a 4-5x improvement
(around 4-5s to <1s) when converting millions of curves to tens of
millions of faces. I didn't observe any change in performance compared
to 3.1 with fewer curves though.
The changes also solve an algorithmic flaw where any interpolated
attributes would be evaluated for every curve combination instead
of just once per curve. This can be a large improvement when there
are many profile curves.
The code relies heavily on a function `foreach_curve_combination`
which calculates some basic information about each combination and
calls a templated function. I made assumptions about unnecessary reads
being removed by compiler optimizations. For further performance
improvements in the future that might be an area to investigate.
Another might be using a "for a group of curves: for each attribute:
for each curve" pattern to increase the locality of memory access.
Differential Revision: https://developer.blender.org/D14642
This is a partial fix to the fact that rendering with EEVEE or other GL
render engines is currently blocking the whole UI when asking to redraw
a viewport.
This patch just bypasses the viewport bind (containing the Draw Context
lock) and the following drawing. There is update tagging so as to not
lose a viewport update if one was requested.
Other queries other than view redraw (such as selection depth drawing or
offscreen drawing) will still block the whole UI as they need immediate
data feedback.
Ping @Severin for the change in `WM_draw_region_viewport_bind()`.
I'm assuming this is not an issue because it's highly unlikely to
bring up this operator during rendering. But in this case, it would just
lock as usual.
The bypassing in `DRW_notify_view_update` might be a bit overparanoid.
The ported normal calculation from ceed37fc5c neglected to
use the tilt attribute to rotate the normals around the tangents.
This commit adds that behavior back, adding a new math header file
to avoid duplicating the rotation function for normalized axes.
Differential Revision: https://developer.blender.org/D14655
This patch contains an initial pixel extractor for PBVH and an initial paint brush implementation.
PBVH is an acceleration structure Blender uses internally to speed up 3d painting operations.
At this moment it is extensively used by sculpt, vertex painting and weight painting.
For the 3d texturing brush we will be using the PBVH for texture painting.
Currently PBVH is organized to work on geometry (vertices, polygons and triangles).
For texture painting this should be extended to use pixels.
{F12995467}
Screen recording has been done on a Mac Mini with a 6 core 3.3 GHZ Intel processor.
# Scope
This patch only contains extending UV seams to fix UV seams. This is not actually what we want, but it was easy to add
to make the brush usable.
Pixels are placed in the PBVH_Leaf nodes. We want to introduce a special node for pixels, but that will be done
in a separate patch to keep the code review small. This reduces the painting performance when using
low and medium poly assets.
In Workbench, textures aren't forced to be shown. For now use Material/Rendered view.
# Rasterization process
The rasterization process will generate the pixel information for a leaf node. In the future those
leaf nodes will be split up into multiple leaf nodes to increase the performance when there
isn't enough geometry. For this patch this was left out of scope.
In order to do so every polygon should be uniquely assigned to a leaf node.
    For each leaf node:
        for each polygon:
            if polygon not assigned:
                assign polygon to node.
Polygons are too complicated to be used directly, so we have to split the polygons into triangles.
    For each leaf node:
        for each polygon:
            extract triangles from polygon.
The list of triangles can be stored inside the leaf node. The list of polygons isn't needed anymore.
Each triangle has:
    - poly_index
    - vert_indices
    - delta barycentric coordinate between x steps
Each triangle is rasterized in rows. Sequential pixels (in UV space) are stored in a single structure:
    - image position
    - barycentric coordinate of the first pixel
    - number of pixels
    - triangle index inside the leaf node
During the performed experiments we used a fairly simple rasterization process by
finding the UV bounds of a triangle and calculating the barycentric coordinates per
pixel inside the bounds. Even for complex models and huge images this process
normally finishes within 0.5 seconds. It could be that we want to change this algorithm
to reduce hiccups when nodes are initialized during a stroke.
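A hedged sketch of that rasterization idea (an illustration only, not Blender's implementation):
```
# Walk the pixel bounding box of a triangle in UV space and yield the pixels
# whose barycentric coordinates are all non-negative.
def rasterize_triangle(uv0, uv1, uv2, width, height):
    xs = [uv[0] * width for uv in (uv0, uv1, uv2)]
    ys = [uv[1] * height for uv in (uv0, uv1, uv2)]
    denom = (ys[1] - ys[2]) * (xs[0] - xs[2]) + (xs[2] - xs[1]) * (ys[0] - ys[2])
    if denom == 0.0:
        return  # degenerate triangle
    for y in range(max(int(min(ys)), 0), min(int(max(ys)) + 1, height)):
        for x in range(max(int(min(xs)), 0), min(int(max(xs)) + 1, width)):
            px, py = x + 0.5, y + 0.5
            a = ((ys[1] - ys[2]) * (px - xs[2]) + (xs[2] - xs[1]) * (py - ys[2])) / denom
            b = ((ys[2] - ys[0]) * (px - xs[2]) + (xs[0] - xs[2]) * (py - ys[2])) / denom
            c = 1.0 - a - b
            if a >= 0.0 and b >= 0.0 and c >= 0.0:
                yield x, y, (a, b, c)
```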
Reviewed By: brecht
Maniphest Tasks: T96710
Differential Revision: https://developer.blender.org/D14504
* Curves objects now support the geometry nodes modifier.
* It's possible to use the curves object with the Object Info node.
* The spreadsheet shows the curve data.
The main thing holding this back currently is that the drawing code
for the curves object is very incomplete. E.g. it resamples the curves
always in the end, which is not expected for curves in general.
Differential Revision: https://developer.blender.org/D14277
**Relevant to Artists:** This patch adds an option to the Parenting
menu, `Object (Keep Transform Without Inverse)`, and Apply menu, `Parent
Inverse`. The operators preserve the child's world transform without
using the parent inverse matrix. Effectively, we set the child's origin
to the parent. When the child has an identity local transform, then the
child is world-space aligned with its parent (scale excluded).
**Technical:** In both cases, the hidden parent inverse matrix is
generally set to identity (cleared or "not used") as long as the parent
has no shear. If the parent has shear, then this matrix will not be
entirely cleared. It will contain shear to counter the parent's shear.
This is required, otherwise the object's local matrix cannot be properly
decomposed into location, rotation and scale, and thus cannot preserve
the world transform.
If the child's world transform has shear, then its world transform is
not preserved. This is currently not supported for consistency in the
handling of shear during the other parenting ops: Parent (Keep
Transform), Clear [Parent] and Keep Transform. If it should work, then
another patch should add the support for all of them.
Reviewed By: sybren, RiggingDojo
Differential Revision: https://developer.blender.org/D14581
The new created strokes were not setting the right `orig` pointers, so the select operator could not select the strokes and points.
Now, the original pointers are set in the new strokes. To set the values it is necessary to assign the original pointer from the reference stroke because in modifiers the stroke is evaluated, so we need to go back two levels: Eval->Eval->Orig
Issue introduced in {rB80859a6cb272}
Only seen on Windows.
The `keyword_parse` lambda function code did not consider `\r` as a
whitespace char, resulting in a wrong parse.
(More investigation needs to be done).
Assume that only one layer matches the id and return instead
of continuing to iterate over attributes after the layers have
been potentially reallocated.
This commit removes all EEVEE specific code from the `gpu_shader_material*.glsl`
files. It defines a clear interface to evaluate the closure nodes leaving
more flexibility to the render engine.
Some of the long standing workarounds are fixed:
- bump mapping support no longer duplicates a lot of nodes and is instead
compiled into a function call.
- bump rewiring to the Normal socket is no longer needed as we now use a global
`g_data.N` for that.
Closure sampling with upstream weight evaluation is now supported if the engine needs
it.
This also makes all the material GLSL sources use `GPUSource` for better
debugging experience. The `GPUFunction` parsing now happens in `GPUSource`
creation.
The whole `GPUCodegen` now uses the `ShaderCreateInfo` and is object type
agnostic. It has also been rewritten in C++.
This patch changes a few behaviors for EEVEE:
- Mix shader node factor input is now clamped.
- Tangent Vector displacement behavior now matches Cycles.
- The chosen BSDF used for SSR might change.
- Hair shading may have very small changes on very large hairs when using hair
polygon stripes.
- The ShaderToRGB node will remove any SSR and SSS from a shader.
- The SSS radius input is no longer a scaling factor but defines an average
radius. The SSS kernel "shape" (radii) is still defined by the socket default
values.
Apart from the listed changes no other regressions are expected.
This adds a structure, `ABCReadParams`, to store some parameters passed
to `ABC_read_mesh` so we avoid passing too many parameters, and makes it
easier to add more parameters in the future without worrying about
argument order.
Differential Revision: https://developer.blender.org/D14484
Box, Circle and Lasso select were not taking into account if a
mask (point) was parented; selection was only succeeding in the original
place.
Now check coordinates from evaluated mask (points) instead while
setting selection flags and DEG tagging still happens on the original ID.
Maniphest Tasks: T97135
Differential Revision: https://developer.blender.org/D14651
Extremely subtle bug that would only appear in some specific
circumstances, would cause memfile undo writing code to falsely detect
some ID as changed because it would get the wrong 'starting point' of
comparison with existing previous memfile step.
See T85756 for detailed explanation and reproducible case.
This adds a new node editor overlay that helps users to see where
named attributes are used. This is important, because named
attributes can have name collisions between independent node
groups which can lead to hard to find issues.
Differential Revision: https://developer.blender.org/D14618
When GPU subdivision is enabled the mesh objects remain unsubdivided on
the CPU side, and subdivision should be requested somewhat manually (via
`BKE_object_get_evaluated_mesh`).
When referencing an object, the Object Info node calls
`bke::object_get_evaluated_geometry_set` which first checks if a Geometry
Set is present on the object (unless we have a mesh in edit mode). If so
it will return it, if not, the object type is discriminated, which will
return a properly subdivided mesh object via `add_final_mesh_as_geometry_component`.
The unsubdivided mesh is returned because apparently the check on the
Geometry Set always succeeds (as it is always set in `mesh_build_data`).
However, the mesh inside this Geometry Set is not subdivided.
This adds a check for a MeshComponent in this Geometry Set, and if one
exists, calls `add_final_mesh_as_geometry_component` which will ensure
that the mesh is subdivided on the CPU side as well.
Differential Revision: https://developer.blender.org/D14643
Originally was noticed when using a linked background scene and a scene
camera from another (local) scene.
The root issue was that relation from view layer to object's base flags
evaluation was using wrong view layer. This is because the relation was
created between object and currently built view layer, and it only was
happening once (since the object-level relations are only built once).
Depending on order in which `build_object` was called it was possible
that relation from a wrong view layer was used.
Now the code is better split to indicate which parts of object relations
are built when object comes from a base in the view layer, and which ones
are built on indirect linking of object to the dependency graph.
This patch makes relations correct in the cases when the same object is
used as a base in both active and set scenes. But, the operation which
handles object-level flags might not behave correctly as there is no
known design of what is the proper thing to do in this case. Making a
clear design and implementation of case when object is shared between
active and set scene is outside of the scope of this patch.
Differential Revision: https://developer.blender.org/D14626
Prefer using immVertex3f when 3D shaders are used for 2D rendering due to overhead of vertex padding in hardware. CPU overhead is negligible.
Authored by Apple: Michael Parkin-White
Ref T96261
Reviewed By: fclem
Maniphest Tasks: T96261
Differential Revision: https://developer.blender.org/D14494
MSL follows C++ standard convention, and such variable declarations within switch statements must have their scope localised within each case. Adding braces in all cases ensures correct behaviour and avoids 'case ... bypass initialization of local variable' compilation error.
Struct initialisation now follows C++ rules for Metal. Implicit constructors are replaced with either explicit constructors or list-initialization where appropriate.
Ref T96261
Authored by Apple: Michael Parkin-White
Reviewed By: fclem
Maniphest Tasks: T96261
Differential Revision: https://developer.blender.org/D14451
Add a new operator, "Start Tweaking Strip Actions (Full Stack)", which
allows you to insert keyframes and preserve the pose that you visually
keyed while upper strips are evaluating.
The old operator has been renamed from "Start Tweaking Strip Actions" to
"Start Tweaking Strip Actions (Lower Stack)" and remains the default for
the hotkey {key TAB}.
**Limitations, Keyframe Remapping Failure Cases**:
1. For *transitions* above the tweaked strip, keyframe remapping will
fail for channel values that are affected by the transition. A work
around is to tweak the active strip without evaluating the upper NLA
stack.
It's not supported because it's non-trivial and I couldn't figure it
out for all transition combinations of blend modes. In the future, it
would be nice if transitions (and metas) supported nested tracks
instead of using the left/right strips for the transitions. That
would allow the transitioned strips to overlap in time. It would also
allow N strips to be part of the (previously) left and right strips,
or perhaps even N strips being transitioned in sequence (similar to a
blend tree). Proper keyframe remapping through all that is currently
beyond my mathematical ability. And even if I could figure it out,
would it make sense to keyframe remap through a transition?
//This case is reported to the user for failed keyframe insertions.//
2. Full replace upper strip that contains the keyed channels.
//This case is reported to the user for failed keyframe insertions.//
3. When the same action clip occurs multiple times (colored Red to
denote it's a linked strip) and vertically overlaps the tweaked
strip, then the remapping will generally fail and is expected to
fail.
I don't plan on adding support for this case as it's also non-trivial
and (hopefully) not a common or expected use case so it shouldn't be
much of an issue to lack support here.
For anyone curious on the cases that would work, it works when the
linked strips aren't time-aligned and when we can insert a keyframe
into the tweaked strip without modifying the current frame output of
the other linked strips. Having all frames sampled and the strip
non-time aligned leads to a working case. But if all key handles are
AUTO, then it's likely to fail.
//This case is not reported to the user for failed keyframe
insertions.//
4. When using Quaternions and a small strip influence on the tweaked
Combine strip. This was an existing failure case before this patch
too but worth a mention in case it causes confusion. D10504 has an
example file with instructions.
//This case is not reported to the user for failed keyframe insertions. //
5. When there is an upper Replace strip with high influence and the animator
keys to Quaternion Combine (Replace is fine). This case is similar to (4)
where Quaternion 180 degree rotation limitations prevent a solution.
//This case is not reported to the user for failed keyframe insertions.//
Reviewed By: sybren, RiggingDojo
Differential Revision: https://developer.blender.org/D10504
Fixes for undefined behaviour in divergent control-flow, replacement of partial vector references, and resolution of a number of calculation precision issues occurring on macOS.
Authored by Apple: Michael Parkin-White
Ref: T96261
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D14437
This patch fixes a typo in commit e59f754c16 which incorrectly uses `GPU_TEXTURE_ARRAY` instead of `GPU_FORMAT_COMPRESSED`.
`GPU_FORMAT_COMPRESSED` and `GPU_TEXTURE_ARRAY` both currently evaluate to 16, so this patch does not change anything functionally; however, this patch will prevent issues from arising in the future.
Reviewed By: fclem
Differential Revision: https://developer.blender.org/D14384
Add operator to select markers left/right of the current frame
(including the current frame).
`bpy.ops.marker.select_leftright(mode='LEFT', extend=False)`
`mode` can be either 'LEFT' or 'RIGHT'.
The naming and defaults of the above variables match similar operators
(e.g., `bpy.ops.nla.select_leftright`)
This also adds a new sub-menu to the Marker menu found in animation
editors, exposing both the new `bpy.ops.marker.select_leftright`
operator as well as the `bpy.ops.marker.select_all` operator.
Despite the name "Before Current Frame" and "After Current Frame", it
also selects a marker that falls on the current from for both of the
modes. This is to match the behavior found in the `nla.select_leftright`
operator.
RCS: https://blender.community/c/rightclickselect/OgmG/
Reviewed by: sybren, looch
Differential Revision: https://developer.blender.org/D14176
F2 allows renaming lots of different types of active items, and now it
also works for markers.
Before, Ctrl+M was used, and it's context-sensitive: you often get
"Mirror Keys" instead, when your cursor isn't on the markers region, and
that operator has nothing to do with either renaming or markers.
**What this commit does:**
- Replace Ctrl+M shortcut with F2.
- Adds the `TOPBAR_PT_name_marker` panel which is implemented similar
to the global rename panel. This avoids having to press enter twice to
confirm or escape twice to cancel, which would happen if the
`marker.rename` operator was called directly.
- Replace usages of `marker.rename` in the UI with `wm.call_panel` (see the sketch after this list).
- To make the Industry Compatible keymap consistent with Blender
Default, the rename shortcut only works when hovering the markers
area.
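For reference, a hedged sketch of how the rename panel can be invoked from a key binding or script (the property values shown are assumptions):
```
import bpy

# Call the marker rename panel directly, as the F2 binding does.
bpy.ops.wm.call_panel(name="TOPBAR_PT_name_marker", keep_open=False)
```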
Reviewed By: ChrisLend, sybren
Differential Revision: https://developer.blender.org/D12298
There are no real difference from the previous icons, but since some
manual changes were introduced in the icons file, we still needed
a final sync between them.
Unlike regular selection cycling that is activated when clicking again
in the same location, object mode would cycle to another object
if the object that was selected happened to already be active.
This made it impossible to click-drag to tweak the active object
if there were other objects behind it as those would be activated first.
Resolves T96752.
Particles baked into memory would never load the final frame because
of an off-by-one error calculating the particles `dietime`.
This value indicates the frame at which the particle ceases to exist but
was being set to the end-frame, which caused this bug as the scene's
end-frame is inclusive.
While the last frame was properly written and read from memory,
the `dietime` was set to the last frame causing all the particles to be
considered dead when calculating the cached particle system.
The GPU evaluation for curves will have to change significantly from the
current particle hair drawing code, due to its more general use cases
and support for more curve types. To simplify that process and avoid
introducing regressions for the rendering of hair particle systems,
this commit splits drawing functions for the curves object and
particle hair.
The changes are just inlining of functions and copying code
where necessary.
Differential Revision: https://developer.blender.org/D14576
Problem is that the orco layer was not taken care of by the GPU
subdivision routines. This only handles the issues for EEVEE/Workbench.
For Cycles, this would need to be handled at the wrapper level somehow.
Use Alt-Slash to remove objects from local-view (was M prior to [0]),
following the convention of using Alt to perform the reverse of an
action. Also remove the confirmation menu for this key, as it can be
undone and it's not likely to be pressed by accident.
This can be useful to quickly subtract items from a complex selection
with items that only become visible when entering local-view.
The M key was originally used in 2.4x since moving between layers wasn't
possible. Now that moving between collections is possible in local-view,
the keys collided.
[0]: cf5d582b77
When wireframe mode is turned on, the subdivision edges not originating
from coarse edges were also drawn as regular edges, which would confuse
users trying to select them. These should not be drawn in edit mode,
only in object mode when optimal display is turned off (matching the CPU
subdivision case).
Fix word-wrapped tooltip text not showing by aligning to pixel grid.
See D14639 for more details.
Differential Revision: https://developer.blender.org/D14639
Reviewed by Campbell Barton
The "dir" argument to `BKE_where_on_path` was only actually
used in a few places. It's easier to see where those are if there
isn't always a dummy argument.
When displaying a deform modifier in edit mode, a cached
array of positions is used. Parallelizing bounds calculation when
that array exists can improve the framerate when editing slightly
(a few percent). I observed an improvement of the min/max itself
of about 10x (4-5ms to 0.4ms).
If all of the curves are poly curves, the evaluated positions are the
same as the original positions. In this case just reuse the original
positions span as the evaluated positions.
Addresses T97257, to make it consistent with regular attributes tab
Review by: Julian Kaspar
Differential Revision: https://developer.blender.org/D14631
Ref D14631
`GPU_shader_get_uniform_block` is marked as deprecated and the value
returned does not match what `GPU_uniformbuf_bind` expects.
Also, small typo fix in python error message.
Differential Revision: https://developer.blender.org/D14638
This behaviour was introduced in a687d98e67 to bring the old obscure
"M" operator to remove objects from the local view. In order to avoid
the keymap clash with the Move to Collection operator, the Move to
Collection was artificially restricted to work in local view.
In retrospect, the "Remove from Local View" operator is in the menu anyways,
so it didn't even need to have a shortcut (back in 2.79 the operator was
not in a menu).
The changes introduced here are:
* No shortcut for "Remove from Local View"
* No more restrictions to "Move/Link to Collection" from local view.
Thanks to Philipp Oeser for digging up the old commit that introduced this
and for the rationale on the changes.
This can be useful to match transforms to what native Cycles
would see in Blender, as USD typically uses centimeters, but
Blender uses meters. This patch also fixes the hardcoded focal
length multiplier, which now uses the same units as
everything else. Default of "stageMetersPerUnit" is 0.01 to match
the USD default of centimeters.
Differential Revision: https://developer.blender.org/D14630
Print it as a "%s" so that possible percentage symbols in the
error message does not cause issues.
Use proper assert (assert(true) is a no-op).
Also use `empty()` instead of `length()`.
Reviewed with Clement in real life.
Solves compilation warning with Clang, and moves manipulation with
DNA structures to the designed way for C++.
The tests and few other places are update to the new code by Jacques.
Ref T96847
Maniphest Tasks: T96847
Differential Revision: https://developer.blender.org/D14625
After appending, new link/append code would delete linked IDs, even if
those were pre-existing. Note that this would actually lead to invalid
memory access later in append code (ASAN crash).
This was one of multiple placeholder brushes to simplify development.
Having it is not necessary anymore.
It was a brush that could add new curves according to a specific density.
This functionality will be brought back as a new brush later.
Ref T97255.
Implements T97163
Newly created meshes have all voxel remesher checkboxes aside from Fix Poles enabled.
Startup files updated with versioning.
Reviewed By @JulianKaspar
Differential Revision: https://developer.blender.org/D14608
Ref D14608
Regression in [0] which caused canceled PRESS events not to generate
CLICK_DRAG.
Resolve by checking for an active brush tool in poll instead of the
PARTICLE_OT_brush_edit invoke function.
[0]: 4d0f846b93,
- Add logging for CLICK_DRAG event handling to debug drag events.
- Use logging API for reporting the key-map, operator and event.
This command now prints useful information for investigating
key-map and event handling issues:
blender --log "wm.handler.*" --log-level 4
Internally many offsets for BLF were integers but exposed as floats;
since these are used in pixel-space, many callers were converting them
back to integers. Simplify the logic by using ints.
Support sub-pixel kerning and hinting for future support for improved
character placement. No user visible changes have been made.
- Calculate sub-pixel offsets, using integer maths.
- Use convenience functions to perform the conversions and hide the
underlying values.
- Use `ft_pix` type to distinguish values that use sub-pixel integer
values from freetype and values rounded to pixels.
This was originally based on D12999 by @harley with the user visible
changes removed so they can be applied separately.
In order to allow GLSL Cross Compilation across platforms, expose in
Python the `GPUShaderCreateInfo` strategy as detailed in
https://wiki.blender.org/wiki/EEVEE_%26_Viewport/GPU_Module/GLSL_Cross_Compilation
The new features can be listed as follows:
```
>>> gpu.types.GPUShaderCreateInfo.
define(
fragment_out(
fragment_source(
push_constant(
sampler(
typedef_source(
uniform_buf(
vertex_in(
vertex_out(
vertex_source(
>>> gpu.types.GPUStageInterfaceInfo.
flat(
name
no_perspective(
smooth(
>>> gpu.shader.create_from_info(
```
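A minimal usage sketch based on the interface listed above (the shader sources and names are illustrative only, not part of the patch):
```
import gpu

# Build a simple flat-color shader through the new create-info API.
info = gpu.types.GPUShaderCreateInfo()
info.push_constant('MAT4', "ModelViewProjectionMatrix")
info.push_constant('VEC4', "color")
info.vertex_in(0, 'VEC3', "position")
info.fragment_out(0, 'VEC4', "fragColor")
info.vertex_source(
    "void main() { gl_Position = ModelViewProjectionMatrix * vec4(position, 1.0); }"
)
info.fragment_source(
    "void main() { fragColor = color; }"
)
shader = gpu.shader.create_from_info(info)
```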
Reviewed By: fclem, campbellbarton
Differential Revision: https://developer.blender.org/D14497
Propagate the FP settings from the main thread to all the worker threads (the FP settings include the FZ settings among other things) - this guarantees consistency in the execution of floating point math regardless of whether it's executed in a TBB thread arena or on the main thread.
Add FZ mode to arm64/aarch64 in parallel to the way it's been done on Intel processors; currently, compiling for an ARM target does not set this mode at all, hence it potentially runs slower and with possible result mismatches with Intel x86.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D14454
Better handle NULL object pointers before doing layer collections
resync, otherwise said resync process has to deal with those NULL
pointers. By the look of it this mistake has been there since the origin
of the remapping/relinking code.
Also for safety (and optimization), do not perform layer collection
resync from `libblock_remap_data_postprocess_object_update` when
`libblock_remap_data_postprocess_collection_update` is called
immediately afterwards.
Also added same 'skip on NULL collection object pointer' check to
`layer_collection_local_sync` as the one in
`layer_collection_objects_sync`, since it's fairly hard to always
guarantee there is no such NULL pointer when calling that code.
Animation would still play in the viewport.
There are two ways to unlink an action from the Outliner:
[1] `Unlink Action` on the Animation Data context menu.
This does `outliner_do_data_operation` / `unlinkact_animdata_fn` and has
the correct DEG update.
[2] `Unlink` on the Action context menu
This does `outliner_do_libdata_operation` / `unlink_action_fn` and was
missing the DEG update.
Now add the missing DEG update to the second case.
Maniphest Tasks: T95679
Differential Revision: https://developer.blender.org/D14089
The issue reported was that the recently introduced manual framerange of
an action (see rB5d59b38605d6) was not having an immediate effect in
case the action was pushed down from the Action Editor and only showed
its effects after e.g. saving and reloading the file. However doing the
same thing (pushing down the action) was working fine when done from the
NLA.
Now bring pushdown in sync (in terms of DEG update tagging) between the
Action Editor and the NLA, meaning that now both the owner and the
action are tagged when pushdown happens from the Action Editor as well.
Fixes T96964.
Maniphest Tasks: T96964
Differential Revision: https://developer.blender.org/D14564
The new tab in the properties panel for "Color Attributes" shows the
data type and domain, just like the Attributes tab.
The issue is that the UI does not scale well and can only display the
full names when dragged to an extreme width.
This is due to the inclusion of the render icon in between the
attribute name and attribute type description.
This patch changes the order that items are displayed in the Color
Attributes Panel. Moving the render icon to the very right.
The result is consistent with other parts of the Blender UI and does
not take as much space to display the full text.
Reviewed By: @jbakker
Differential Revision: https://developer.blender.org/D14567
Ref D14567
Color filter sharpening now clamps the output.
The sharpening delta is now calculated from the
difference of two levels of vertex averaging instead
of one smooth iteration and the base color.
TODO: Sharpen in a different color space;
SRGB-linear has saturation artifacts. I
tried HSL but it had value artifacts. I'd
like to try LAB but we don't seem to have
conversion functions for it (at least as far
as I could see).
- Replace SPACE_TYPE_LAST with SPACE_TYPE_NUM (adding 1).
- Rename RGN_TYPE_LEN to RGN_TYPE_NUM
This makes it possible to tag space-type/region-type combinations
with `bool tag[SPACE_TYPE_NUM][RGN_TYPE_NUM]` which reads more clearly
than `bool tag[SPACE_TYPE_LAST + 1][RGN_TYPE_LEN]`.
The type of the key was changed from string to integer in 66da2f537a, but
the GSet was still being created for string keys.
As long as the integers stay small enough, this even kind of works on
little-endian systems, but of course it should use an integer hash instead.
Currently, hovering over a socket itself shows no tooltip at all, while
hovering over its value field shows "Default value", which is not helpful.
This patch therefore implements socket tooltips following the proposal at
https://blender.community/c/rightclickselect/2Qgbbc/.
A lot of the basic functionality was already implemented for Geometry Nodes,
where hovering over the socket itself shows introspection info.
This patch extends this by:
- Supporting dynamic tooltips on labels, which is important for good tooltip
coverage in a socket's region of the node.
- Adding a function to set a dynamic tooltip for an entire uiLayout, which
avoids needing to set it manually for a wide variety of socket types.
- Hiding the property label field in a tooltip when dynamic tooltip is also
provided. If really needed, this label can be restored through the dynamic
tooltip, but in all current cases the label is actually pointless anyway
since the dynamic tooltip gives more accurate and specific information.
- Adding dynamic tooltips to a socket's UI layout row if it has a description
configured, both in the Node Editor as well as in the Material Properties.
Note that the patch does not add any actual tooltip content yet, just the
infrastructure to show them. By default, sockets without a description still
show the old "Default value" tooltip.
For an example of how to add socket descriptions, check the Cylinder node
in the Geometry Nodes.
Differential Revision: https://developer.blender.org/D9967
When using proportional editing, the 'project onto self' snap setting
is ignored since proportional editing does not allow snapping to
self. The UI should reflect this fact. This patch makes 'project onto
self' active only when proportional editing is off.
Reviewed By: mano-wii
Differential Revision: https://developer.blender.org/D14496
Simply add the few lines so that the Cycles renderable New Curves
(with a small temporary patch to output the new type to render engines)
would get assigned a shader (in particular a hair shader).
Differential Revision: https://developer.blender.org/D14622
* Don't assume the display colorspace name fully defines the transform
to display space; this is not true in OpenColorIO 2 where view transforms
may be defined in more complex ways than just specifying a colorspace.
* In places where we need to store the display colorspace name, resolve
<USE_DISPLAY_NAME> token manually.
Ref T96590
Adding nodes via `NodeAddOperator` could fail if the node's poll fails
in `rna_NodeTree_node_new`. Examples are trying to add a RenderLayers
node or a Cryptomatte node to a nodegroup.
Now catch the Python error and use Blender error reporting only, instead
of throwing full Python stack traces at the user.
Differential Revision: https://developer.blender.org/D14584
For some reasons(c) some base classes (like `bpy.types.Modifier`) are
not listed anymore by a call to
`bpy.types.ID.__base__.__subclasses__()`, unless they are first accessed
explicitly (e.g. by executing `bpy.types.Modifier` etc.).
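A small hedged illustration of the behavior described (the class is only an example):
```
import bpy

# Without touching bpy.types.Modifier first it may be missing from the
# listing below; accessing it explicitly makes it show up again.
bpy.types.Modifier
subclasses = bpy.types.ID.__base__.__subclasses__()
print(any(cls.__name__ == "Modifier" for cls in subclasses))
```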
Coordinate checks in `spline_under_mouse_get` need to take place with
the evaluated mask to get the right center. This is now more in line to
how this is done in `ED_mask_point_find_nearest`.
Note: similar issues are reported with box/circle/lasso selection in
T97135, will tackle these separately though.
Maniphest Tasks: T85467
Differential Revision: https://developer.blender.org/D14598
This implements the icon for snake hook, as well as tweak the
growth brush to make it bigger and with the triangles more
distinct for better reading of the icon.
Differential Revision: https://developer.blender.org/D14616
It was not possible to assign a shortcut to menu items in the insert
key-frame menu going back to version 2.7x. Doing so would replace the
current key that opens the insert keyframe menu (I-key by default),
instead of binding a key to insert a key-frame for the keying-set
referenced by the menu item.
Now each menu item can be bound to a key or added to the "Quick
Favorites" menu, directly inserting a key-frame for the corresponding
keying-set.
Note that users must use the operator `anim.keyframe_insert_by_name`
when setting up key-shortcuts as `anim.keyframe_insert` is only intended
to launch the menu.
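A hedged sketch of binding such a shortcut from an add-on (the key, keymap name and the 'Location' keying set identifier are placeholders):
```
import bpy

# Bind a key that directly inserts a keyframe for a specific keying set,
# bypassing the insert-keyframe menu.
wm = bpy.context.window_manager
km = wm.keyconfigs.addon.keymaps.new(name="Object Mode", space_type='EMPTY')
kmi = km.keymap_items.new("anim.keyframe_insert_by_name", type='K', value='PRESS')
kmi.properties.type = 'Location'  # keying set identifier (shown in the tooltip)
```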
Keymap Editor:
When editing these key-map items in the key-map editor, the keying-set
identifier must be used. At the moment the key-map editor doesn't
support showing a drop-down list. The identifiers can be used from the
tool-tip or the info editor.
{F12994924}
Details:
Use `ANIM_OT_keyframe_insert_by_name` instead of
`ANIM_OT_keyframe_insert_menu` for the insert keyframe popup menu to
resolve the following issues binding keys to keying sets:
- The index of the keying set isn't stable (adding/removing keying sets
may change it).
- Binding a key to items in the popup menu triggers a popup instead of
inserting a key using the keying set from the menu item.
While support for using the current operator could be improved, it will
still only work for built-in keying sets, so I'd prefer to use an
operator that is intended for key-bindings.
Besides supporting binding keys to menu items there are no functional
changes.
Reviewed By: sybren
Ref D14289
This simply adds access to the vertex crease layer from Python
code. Documentation was modified to remove "Edge" as it is
shared between edges and vertices, similarly to bevel weights.
Differential Revision: https://developer.blender.org/D14605
- Missing star prefix.
- Unnecessary indentation.
- Blank line after dot-points
(otherwise doxygen merges with the previous dot-point).
- Use back-slash for doxygen commands.
- Correct spelling.
These operators build a list of all lightgroups that are used by the view layer's
scene and either add all used lightgroups that are not part of the view layer yet
or remove all lightgroups in the view layer that are not being used.
Differential Revision: https://developer.blender.org/D14596
As far as I can see, it makes a lot of sense to have the alpha channel here: it matches the 2.x behavior and also matches what Eevee is doing.
Differential Revision: https://developer.blender.org/D14595
Use the Extend method for these, as these do not work correctly. For UVs
it's better to extend the UVs from the same face, and for tangent space
the normals should be encoded in a matching tangent space.
Later the Adjacent Faces method might be improved to support these cases.
Ref T96977
Differential Revision: https://developer.blender.org/D14572
Port the "Normal" and "Curve Tangent" nodes to the new curves data-block
to avoid the conversion to `CurveEval`. This should make them faster by
avoiding all that copying, but otherwise nothing else has changed.
This also includes a fix to move the normal mode as a built-in curve
attribute when converting to and from `CurveEval`. The attribute is
needed because the option is used implicitly in many nodes currently.
Differential Revision: https://developer.blender.org/D14609
Since effect strips can't be transformed directly, their selection had
to be forced in order to process them. This often failed in more
complicated scenarios, because there was no attempt to parse hierarchy
completely. In the worst case, only one effect in the chain would be selected.
This code was marked by `XXX_DURIAN_ANIM_TX_HACK`.
Instead of the solution described above, a collection of strips that depend
on non-effect strip positions is built by the function
`query_time_dependent_strips_strips` and stored in `TransSeq`.
In `flushTransSeq` this collection is iterated and transformation offset
is applied to effect strip animation. Strips in collection should be
consistent with true state of dependency and should be complete.
Functional changes:
- When a 2-input effect strip changes position, its animation is offset even
if only handles are moved. This only applies to 2-input effects, however.
- Selection is not extended to include effect strips anymore. If effects
are to be moved, they must be selected manually. This is because
previously it was very hard to reorganize effects in a chain, since moving
the first strip in a chain would always select anywhere from 1 to n effects.
So creating or filling a gap in a channel would almost always result in
a collision, especially if their order in the timeline doesn't perfectly
represent their order in the chain.
Previously, the new attributes were zero-initialized. However, sometimes
the default has to be something else. The old behavior led to unexpected
behavior in the Snap Curves to Surface operator in Deform mode, when the
curves were not attached to the surface before.
Differential Revision: https://developer.blender.org/D14588
This change moves the grid panel UI from the View tab up into the
Overlay panel.
Reasons to move to the Overlay panel include:
- Consistency with the grid options in the 3D viewport
- The grid has been drawn as an Overlay for quite some time already
Additional changes that now make sense to have:
- The grid responds to the main Overlay show/hide toggle
- Adds a toggle to show/hide the grid which is consistent with overlays in general
As before, these grid controls are only available for active UV edit
sessions.
Differential Revision: https://developer.blender.org/D11862
- Use "curve" instead of "spline" in comments
- Use non-plural variable names
- Tag topology dirty after resolution modified rather than positions
- Reorder enum values to change which value is zero (and the default)
- Remove a duplicate unused variable
draw_common.h was included in a C++ file,
leading to the linker looking for the
decorated (mangled) name of `G_draw`, which led
to a linker error.
Adding an `extern "C"` guard for C++ fixes
the issue.
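As a rough illustration (the declarations below are hypothetical, not the actual contents of draw_common.h), the guard pattern looks like this:
```C++
/* Hypothetical header sketch: inside the guard, C++ translation units see the
 * symbols with C linkage, so the linker stops looking for a mangled name. */
#ifdef __cplusplus
extern "C" {
#endif

struct ExampleDrawGlobals;               /* Placeholder for the real globals type. */
extern struct ExampleDrawGlobals G_draw; /* Same undecorated symbol from C and C++. */

void draw_common_example_function(void);

#ifdef __cplusplus
}
#endif
```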
lengths along a set of points. This can be used for the sample curves
node, or finding new points along a curve when extending
or shrinking it.
This commit uses it in the snake hook brush as an example.
The logic is similar to the uniform length sampling, but the next
sample length is retrieved from the input instead of multiplication.
For the sample node in the future, though this sort of sampling can be
potentially done more efficiently for specific curve types besides
poly curves, it's simpler, at least as a start, to work on a set of
evaluated points that can be treated like a poly curve.
Differential Revision: https://developer.blender.org/D14571
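A minimal sketch of the sampling logic described above, assuming a per-segment accumulated-length array as in the evaluated poly-curve case (names are illustrative, not Blender's API):
```C++
// Rough sketch: sample a poly-curve-like point sequence at arbitrary sorted
// lengths. The next sample length comes from the input, rather than from
// multiplying a fixed step as in uniform sampling.
#include <algorithm>
#include <vector>

struct LengthSample {
  int segment;   // Segment index the sample falls in.
  float factor;  // Interpolation factor within that segment (0..1).
};

static std::vector<LengthSample> sample_at_lengths(const std::vector<float> &accumulated_lengths,
                                                   const std::vector<float> &sample_lengths)
{
  std::vector<LengthSample> result;
  if (accumulated_lengths.empty()) {
    return result;
  }
  result.reserve(sample_lengths.size());
  int segment = 0;
  for (const float sample_length : sample_lengths) {
    /* Advance to the segment that contains this length. */
    while (segment + 1 < int(accumulated_lengths.size()) &&
           accumulated_lengths[segment] < sample_length) {
      segment++;
    }
    const float prev = (segment == 0) ? 0.0f : accumulated_lengths[segment - 1];
    const float segment_length = accumulated_lengths[segment] - prev;
    const float factor = (segment_length > 0.0f) ?
                             std::clamp((sample_length - prev) / segment_length, 0.0f, 1.0f) :
                             0.0f;
    result.push_back({segment, factor});
  }
  return result;
}
```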
Avoid installing all the ogg, theora, xvid etc. codec lib dev packages
unless we actually build ffmpeg itself. Otherwise they are not necessary
for the Blender build itself.
This adds support to show dots for the curves points when in edit mode,
using a specific overlay.
This also adds `DRW_curves_batch_cache_create_requested` which for now
only creates the point buffer for the newly added `edit_points` batch.
In the future, this will also handle other edit mode overlays, and
probably also replace the current curves batch cache creation.
Maniphest Tasks: T95770
Differential Revision: https://developer.blender.org/D14262
The menu with the options was not visible because the tool to check must be the sculpt tool, not the draw tool.
This was broken in an older version, but I cannot determine when, or whether it ever worked as expected.
The `frame_offset` used for creating `TimeSamplings` when exporting was
being clamped, which would make subframe sampling potentially fail, or
get out of sync.
Both the Alembic and USD libraries use double precision floating
point numbers internally to store time. However the Alembic I/O
code defaulted to floats even though Blender's Scene FPS, which is
generally used for look ups, is stored using a double type. Such
downcasts could lead to imprecise lookups, and would cause
compilation warnings (at least on MSVC).
This modifies the Alembic exporter and importer to make use of
doubles for the current scene time, and only downcasting to float
at the very last steps (e.g. for vertex interpolation). For the
importer, doubles are also used for computing interpolation weights,
as it is based on a time offset.
Although the USD code already used doubles internally, floats were used
at the C API level. Those were replaced as well.
Differential Revision: https://developer.blender.org/D13855
This patch adds color attributes to TexPaintSlot. This allows an easier selection
when painting color attributes.
Previously when selecting a paint tool the user had to start a stroke before the
UI reflected the correct TexPaintSlot. Now when switching the slot, the active
tool is checked and the UI is immediately drawn correctly.
In the future the canvas selector will also be used to select an image or image texture node
to paint on. Basic implementation has already been done inside this patch.
A limitation of this patch is that it isn't possible anymore to rename images directly from
the selection panel. This is currently allowed in master, but as CustomDataLayers
aren't IDs and are not owned by the material, supporting this wouldn't be easy.
{F12953989}
In the future we should update the create slot operator to also include color attributes.
Sources could also be extended to use other areas of the object that use image textures
(particles, geom nodes, etc... ).
Reviewed By: brecht
Maniphest Tasks: T96709
Differential Revision: https://developer.blender.org/D14455
Avoid Blender overwriting artist's choices. The automatic change from
"Hold" (i.e. bidirectional extrapolation) to "Hold Forward" (i.e. only
extrapolate forward in time) has been removed.
This patch does not change strip evaluation. Between two strips, the
first with `None` extrapolation and the next with `Hold`, neither strip
will evaluate, which matches previous behavior. A future patch can
change the evaluation behavior.
Reviewed By: RiggingDojo, sybren
Maniphest Tasks: T82230
Differential Revision: https://developer.blender.org/D14230
Previous commit (rBrB59681a7ccdcf) was effectively doing nothing, due to
weird hacks we had to do with OpenVDB 8.0 to 'integrate' NanoVDB.
Now OpenVDB 9.0 natively includes NanoVDB, which allows us to greatly
simplify that part of the code in install_deps.
The all_objects.blend test scene (in subversion tests repo) contained an
object with a subdivision surface, which changes vertex positions
slightly depending on the OpenSubdiv version and compile flags used. It
seems that the intent of the test was "test export of meshes that use
modifiers", so I changed that object to be a cube with a simple "taper"
modifier instead.
While at it, changed OBJ exporter test code to always print the
"expected and what we got" text difference details, when a test fails.
Much easier to see than just "the files are different" output. The code
to print that was behind an off by default flag for some reason.
This diff should get committed together with updated all_objects templates
in subversion tests repo.
Reviewed By: Sebastian Parborg
Differential Revision: https://developer.blender.org/D14597
The problem was that the original file had some vertex weight information, but the weights array was empty, so the duplication was not done and freeing the memory crashed.
To avoid this type of error, before duplicating weights the function now checks the pointer and also the number of weight elements in the array, to avoid duplicating empty data.
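A simplified sketch of the kind of check described (hypothetical types and names, not the actual Blender code):
```C++
/* Illustrative sketch: only duplicate the weights array when both the pointer
 * and the element count are valid, so empty data is never duplicated or freed. */
#include <cstdlib>
#include <cstring>

struct WeightArrayExample {
  float *weights;
  int totweight;
};

static void duplicate_weights_example(WeightArrayExample *dst, const WeightArrayExample *src)
{
  if (src->weights != nullptr && src->totweight > 0) {
    const size_t size = sizeof(float) * size_t(src->totweight);
    dst->weights = static_cast<float *>(malloc(size));
    memcpy(dst->weights, src->weights, size);
    dst->totweight = src->totweight;
  }
  else {
    dst->weights = nullptr; /* Nothing to duplicate; avoids freeing garbage later. */
    dst->totweight = 0;
  }
}
```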
With the increased use of multi-character format units and keyword-only
arguments these are increasingly difficult to make sense of.
Split the string onto multiple lines, one per argument.
While verbose it's easier to understand and add new arguments.
Currently, only Lightgroups that exist in the current view layer can be
selected from object or world properties.
The internal UI code already has support for search fields that accept
unknown input, so I just added that to the API and use it for lightgroups.
When a lightgroup is entered that does not exist in the current view layer
(e.g. because it's completely new, because the view layer was switched or
because it was deleted earlier), a new button next to it becomes active and
adds it to the view layer when pressed.
Differential Revision: https://developer.blender.org/D14540
* Add missing GLEW and hgiGL libraries for Hydra
* Fix wrong case sensitive include
* Fix link errors by adding external libs to static Hydra lib
* Work around weird Hydra link error with MAX_SAMPLES
* Use Embree by default for Hydra
* Sync external libs code with standalone
* Update version number to match Blender
* Remove unneeded CLEW/GLEW from test executable
None of this should affect Cycles in Blender.
Ref T96731
This seems to serve no purpose anymore, I don't see anywhere that
CD_MFACE is requested for modifier evaluation, and it's confusing
to have this in this final normals computation function.
Found while looking into D14579.
Differential Revision: https://developer.blender.org/D14580
The mechanism to instance meshes when there are no modifiers did not take
into account that modifiers might get re-evaluated from an operator that
requests loop normals. Now check for that case and no longer use the
instance then.
In the future, a better solution may be to compute loop normals on demand
as is already done for poly and vertex normals, but that would be a big
change.
Differential Revision: https://developer.blender.org/D14579
This frequently showed up in profiling but shouldn't.
This also updates the code to use atomics for more correctness and
adds multi-threading for better performance.
This implements two optimizations:
* Reduce virtual function call overhead when a non-standard virtual
array is used as input.
* Use a lambda in `type_conversion.cc`.
In my test setup, which creates a float attribute filled with the index,
the running time drops from `4.0 ms` to `2.0 ms`.
Differential Revision: https://developer.blender.org/D14585
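A rough sketch of the first optimization, using a hypothetical `VArrayExample` type (Blender's real virtual array API differs): when the input is not already a plain span, materialize it into a contiguous buffer with one virtual call, then run the tight loop over raw memory.
```C++
#include <cstdint>
#include <vector>

struct VArrayExample {
  virtual ~VArrayExample() = default;
  virtual int64_t size() const = 0;
  virtual float get(int64_t index) const = 0;
  /* Copies all values into `dst` in one call; a non-standard implementation can
   * still do this much faster than N separate virtual `get` calls. */
  virtual void materialize(float *dst) const
  {
    for (int64_t i = 0; i < this->size(); i++) {
      dst[i] = this->get(i);
    }
  }
  /* Returns a pointer when the data is already contiguous, otherwise null. */
  virtual const float *try_get_span() const { return nullptr; }
};

static void square_values_example(const VArrayExample &src, std::vector<float> &dst)
{
  const int64_t size = src.size();
  dst.resize(size_t(size));
  std::vector<float> buffer;
  const float *values = src.try_get_span();
  if (values == nullptr) {
    buffer.resize(size_t(size));
    src.materialize(buffer.data()); /* One virtual call instead of `size` calls. */
    values = buffer.data();
  }
  for (int64_t i = 0; i < size; i++) {
    dst[size_t(i)] = values[i] * values[i]; /* Tight loop over plain memory. */
  }
}
```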
I observed a 4-5x performance improvement (from 50ms to 12ms)
with five million points, though obviously the change depends on
the hardware.
In the future we may want to disable the parallelization in
`parallel_invoke` when there is a small amount of points.
Differential Revision: https://developer.blender.org/D14590
This patch adds an option to only use every n-th segment of the
envelope result. This can be used to reduce the complexity of the
result.
Differential Revision: http://developer.blender.org/D14503
Methods which override a base class's virtual methods are expected to
be marked with `override`. This also gives developers a better idea
about what is going on.
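For illustration (generic C++, not Blender code), `override` lets the compiler catch accidental signature mismatches:
```C++
struct CurveOperationExample {
  virtual ~CurveOperationExample() = default;
  virtual void execute(float factor) {}
};

struct SmoothOperationExample : public CurveOperationExample {
  void execute(float factor) override {} /* OK: matches the base signature. */
  /* void execute(double factor) override {}  -- would be a compile error. */
};
```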
The first element of the iterator was not being tested against the flag.
So in some cases it would lead to more objects being made
single-user than the active (or selected) ones.
The goal is to make the Add menu more convenient for the new curves object.
The following changes are done:
* Add `curves` submenu.
* Add an `Empty Hair` operator that also sets the surface object.
* Rename the old operator to `Random`. It's mostly for testing at this point.
Differential Revision: https://developer.blender.org/D14556
This operator snaps the first point of every curve to the corresponding
surface object. The shape of individual curves or their orientation is
not changed.
There are two different attachment modes:
* `Nearest`: Move each curve so that the first point is on the closest
point on the surface. This should be used when the topology of the
surface mesh changed, but the shape generally stayed the same.
* `Deform`: Use the existing attachment information that is stored
for curves to move curves to their new location when the surface
mesh was deformed. This generally does not work when the
topology changed.
The purpose of the operator is to help set up the "ground truth"
for how curves are attached to the surface. When the ground
truth surface changes, the original curves have to be updated
as well. Deforming curves based on an animated surface will be
done with geometry nodes independent of the operator.
In the UI, the operator is currently exposed in curves sculpt mode
in the `Curves > Snap Curves to Surface` menu.
Differential Revision: https://developer.blender.org/D14515
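As a rough illustration of the `Nearest` behavior (hypothetical helper; the closest-point lookup on the surface is omitted), the whole curve is translated so its first point lands on the closest surface point, leaving the shape unchanged:
```C++
#include <array>
#include <vector>

using float3 = std::array<float, 3>;

static void snap_curve_to_point_example(std::vector<float3> &curve_points,
                                        const float3 &closest_surface_point)
{
  if (curve_points.empty()) {
    return;
  }
  const float3 first = curve_points[0];
  const float3 offset = {closest_surface_point[0] - first[0],
                         closest_surface_point[1] - first[1],
                         closest_surface_point[2] - first[2]};
  for (float3 &point : curve_points) {
    point[0] += offset[0];
    point[1] += offset[1];
    point[2] += offset[2];
  }
}
```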
Avoid errors in the legacy Pose Library panel (in Armature properties)
when the Pose Library add-on is disabled.
It's unfortunate that a built-in panel now has knowledge of an add-on.
Then again, it's temporary (one or two Blender releases), and it now uses
feature detection instead of just assuming the add-on is enabled.
The operator could crash in case the context "object" was overridden
from python, but the "active_object" wasn't (and the active object was
not a mesh).
Reason for the crash is a mismatch between the operator's poll function
`data_transfer_poll` and `dt_layers_select_src_itemf` -- in the former,
the overridden "object" was respected (and if this was a mesh, the poll
was permissive), in the latter it wasn't and only the "active_object" was
used (if this was not a mesh, a crash would happen trying to get an
evaluated mesh).
Now rectify how the object which is used is being fetched -> use
`ED_object_active_context` everywhere (see also rBe560bbe1d584).
Maniphest Tasks: T96888
Differential Revision: https://developer.blender.org/D14552
This does two things:
* Introduce new `materialize_compressed` methods. Those are used
when the dst array should not have any gaps.
* Add materialize methods in various classes where they were missing
(and therefore caused overhead, because slower fallbacks had to be used).
This improves performance e.g. when creating an integer attribute
based on an index field. For 4 million vertices, I measured a speedup
from 3.5 ms to 1.2 ms.
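A hedged sketch of the `materialize_compressed` idea (illustrative names, not the actual API): the selected elements are written densely into the destination without gaps, instead of being scattered to their original indices.
```C++
#include <cstdint>
#include <vector>

template<typename T>
void materialize_compressed_example(const std::vector<T> &src,
                                    const std::vector<int64_t> &selection,
                                    T *dst)
{
  for (int64_t i = 0; i < int64_t(selection.size()); i++) {
    dst[i] = src[size_t(selection[i])]; /* dst is filled densely at 0, 1, 2, ... */
  }
}
```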
Add the ability to get/set the selected text.
**Calling the new methods:**
- `bpy.data.texts["Text"].region_as_string()`
- `bpy.data.texts["Text"].region_from_string("Replacement")`
Expose the "Connected" mode from the weld modifier in the
"Merge by Distance" geometry node. This method only merges
vertices along existing edges, but it can be much faster
because it doesn't have to build a KD Tree of all selected
points.
Differential Revision: https://developer.blender.org/D14321
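A hedged sketch of the "Connected" idea described above (types and names are illustrative, not the node's code): since only vertices joined by an edge may merge, it is enough to walk the edges and union endpoints whose distance is below the threshold, with no KD-tree involved.
```C++
#include <array>
#include <cmath>
#include <vector>

struct UnionFindExample {
  std::vector<int> parent;
  explicit UnionFindExample(int n) : parent(n)
  {
    for (int i = 0; i < n; i++) {
      parent[i] = i;
    }
  }
  int find(int v) { return parent[v] == v ? v : parent[v] = find(parent[v]); }
  void join(int a, int b) { parent[find(a)] = find(b); }
};

static UnionFindExample merge_connected_example(const std::vector<std::array<float, 3>> &positions,
                                                const std::vector<std::array<int, 2>> &edges,
                                                const float merge_distance)
{
  UnionFindExample groups(int(positions.size()));
  for (const std::array<int, 2> &edge : edges) {
    const std::array<float, 3> &a = positions[size_t(edge[0])];
    const std::array<float, 3> &b = positions[size_t(edge[1])];
    const float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    if (std::sqrt(dx * dx + dy * dy + dz * dz) <= merge_distance) {
      groups.join(edge[0], edge[1]); /* Vertices in one group collapse to a single vertex. */
    }
  }
  return groups;
}
```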
Change the modifier name in the modifier stack to "Curvature 3D"
to be consistent with the modifier name in the drop-down.
Differential Revision: https://developer.blender.org/D14476
If all islands had a size of zero, a division by zero would occur in
`GEO_uv_parametrizer_pack`, causing the UV coordinates to be set to
NaN. An alternative approach would be to skip packing islands with a
zero size, but if UV coordinates are for example outside the 0-1 range,
it's better if they get moved into that range.
Differential Revision: https://developer.blender.org/D14522
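A minimal sketch of the guard described above (illustrative, not the actual `GEO_uv_parametrizer_pack` code): fall back to a scale of 1.0 when the total island size is zero, so coordinates are only translated into range instead of becoming NaN through a division by zero.
```C++
static float pack_scale_example(const float total_island_size, const float target_size)
{
  if (total_island_size <= 0.0f) {
    return 1.0f; /* Nothing to scale; keep the coordinates finite. */
  }
  return target_size / total_island_size;
}
```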
Set the curve resolution on Bezier and NURBS curves when converting
data using `curves_to_curve_eval`. This was missed in 9ec12c26f1.
Differential Revision: https://developer.blender.org/D14577
Add "for_write" on function names that retrieve mutable data arrays.
Though this makes function names longer, it's likely worth it because
it allows more easily using the const functions in a non-const context,
and reduces cases of mistakenly retrieving with edit access.
In the long term, this situation might change more if we implement
attributes storage that is accessible directly on `CurvesGeometry`
without duplicating the attribute API on geometry components,
which is currently the rough plan.
Differential Revision: https://developer.blender.org/D14562
Change uses of "Hair" in Render Settings UI in the property editor
and the "Hair Info" node to use the "Curves" name to reflect the
design described in T95355, where hair is just a use case of a more
general curves data type.
While these settings still affect the particle hair system,
the idea is that if we have to choose one naming scheme to align
with, we should choose the option that aligns with future plans
and current development efforts, especially since the particle
system is considered a legacy feature.
A few notes:
- "Principled Hair BSDF" is not affected since it's meant for hair.
- Python API property identifiers are not affected.
Differential Revision: https://developer.blender.org/D14573
The last length value was not initialized, and all length values were
moved one position towards the front of each curve incorrectly.
Also fix an assert when a curve only had a single point.
This revision allows to specify CUDA host compiler (nvcc's -ccbin command
line option) when configuring the build. It addresses the case where the
C/C++ compiler to be used in CUDA toolchain should be different from the
default C/C++ compiler, for instance in case of compilers versions conflicts
or multiple installed compilers.
The new CMake option is named `CUDA_HOST_COMPILER` and can be used as follows:
`cmake -DCUDA_HOST_COMPILER=<path-to-host-compiler>`
If the option is not specified, the build configuration behaves as previously.
Differential Revision: https://developer.blender.org/D14248
When showing an action data-block added to a library overridden object
in the Graph Editor, the visibility toggles would be disabled.
Toggling the visibility should be possible still and works with the
shortcuts, just the button was incorrectly disabled.
Also added the usual disabled hint for the tooltip.
Differential Revision: https://developer.blender.org/D14568
Reviewed by: Bastien Montagne
The fast_float library provides fast header-only implementations for the C++ from_chars
functions for `float` and `double` types. These functions convert ASCII strings representing
decimal values (e.g., `1.3e10`) into binary types. We provide exact rounding (including
round to even). In our experience, these `fast_float` functions are many times faster than comparable number-parsing functions from existing C++ standard libraries.
Specifically, `fast_float` provides the following two functions with a C++17-like syntax (the library itself only requires C++11):
```C++
from_chars_result from_chars(const char* first, const char* last, float& value, ...);
from_chars_result from_chars(const char* first, const char* last, double& value, ...);
```
The return type (`from_chars_result`) is defined as the struct:
```C++
struct from_chars_result {
  const char* ptr;
  std::errc ec;
};
```
It parses the character sequence [first,last) for a number. It parses floating-point numbers expecting
a locale-independent format equivalent to the C++17 from_chars function.
The resulting floating-point value is the closest floating-point value (using either `float` or `double`),
using the "round to even" convention for values that would otherwise fall right in-between two values.
That is, we provide exact parsing according to the IEEE standard.
Given a successful parse, the pointer (`ptr`) in the returned value is set to point right after the
parsed number, and the `value` referenced is set to the parsed value. In case of error, the returned
`ec` contains a representative error, otherwise the default (`std::errc()`) value is stored.
The implementation does not throw and does not allocate memory (e.g., with `new` or `malloc`).
It will parse infinity and nan values.
Example:
```C++
#include "fast_float/fast_float.h"
#include <iostream>

int main() {
  const std::string input = "3.1416 xyz ";
  double result;
  auto answer = fast_float::from_chars(input.data(), input.data() + input.size(), result);
  std::cout << "parsed the number " << result << std::endl;
  return EXIT_SUCCESS;
}
```
Like the C++17 standard, the `fast_float::from_chars` functions take an optional last argument of
the type `fast_float::chars_format`. It is a bitset value: we check whether
`fmt & fast_float::chars_format::fixed` and `fmt & fast_float::chars_format::scientific` are set
to determine whether we allow the fixed point and scientific notation respectively.
The default is `fast_float::chars_format::general` which allows both `fixed` and `scientific`.
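For instance, a small usage sketch (not taken from the upstream README) that restricts parsing to fixed-point notation by passing the optional `chars_format` argument:
```C++
#include "fast_float/fast_float.h"
#include <string>
#include <system_error>

// Returns true only if `input` parsed successfully as a fixed-point number.
bool parse_fixed_only(const std::string &input, double &out)
{
  auto answer = fast_float::from_chars(input.data(), input.data() + input.size(), out,
                                       fast_float::chars_format::fixed);
  return answer.ec == std::errc();
}
```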
The library seeks to follow the C++17 (see [20.19.3](http://eel.is/c++draft/charconv.from.chars).(7.1)) specification.
* The `from_chars` function does not skip leading white-space characters.
* [A leading `+` sign](https://en.cppreference.com/w/cpp/utility/from_chars) is forbidden.
* It is generally impossible to represent a decimal value exactly as binary floating-point number (`float` and `double` types). We seek the nearest value. We round to an even mantissa when we are in-between two binary floating-point numbers.
Furthermore, we have the following restrictions:
* We only support `float` and `double` types at this time.
* We only support the decimal format: we do not support hexadecimal strings.
* For values that are either very large or very small (e.g., `1e9999`), we represent it using the infinity or negative infinity value.
We support Visual Studio, macOS, Linux, freeBSD. We support big and little endian. We support 32-bit and 64-bit systems.
## Using commas as decimal separator
The C++ standard stipulates that `from_chars` has to be locale-independent. In
particular, the decimal separator has to be the period (`.`). However,
some users still want to use the `fast_float` library in a locale-dependent
manner. Using a separate function called `from_chars_advanced`, we allow the users
to pass a `parse_options` instance which contains a custom decimal separator (e.g.,
the comma):
```C++
#include "fast_float/fast_float.h"
#include <iostream>

int main() {
  const std::string input = "3,1416 xyz ";
  double result;
  fast_float::parse_options options{fast_float::chars_format::general, ','};
  auto answer = fast_float::from_chars_advanced(input.data(), input.data() + input.size(), result, options);
  std::cout << "parsed the number " << result << std::endl;
  return EXIT_SUCCESS;
}
```
## Reference
- Daniel Lemire, [Number Parsing at a Gigabyte per Second](https://arxiv.org/abs/2101.11408), Software: Practice and Experience 51 (8), 2021.
## Other programming languages
- [There is an R binding](https://github.com/eddelbuettel/rcppfastfloat) called `rcppfastfloat`.
- [There is a Rust port of the fast_float library](https://github.com/aldanor/fast-float-rust/) called `fast-float-rust`.
- [There is a Java port of the fast_float library](https://github.com/wrandelshofer/FastDoubleParser) called `FastDoubleParser`.
- [There is a C# port of the fast_float library](https://github.com/CarlVerret/csFastFloat) called `csFastFloat`.
## Relation With Other Work
The fastfloat algorithm is part of the [LLVM standard libraries](https://github.com/llvm/llvm-project/commit/87c016078ad72c46505461e4ff8bfa04819fe7ba).
The fast_float library provides a performance similar to that of the [fast_double_parser](https://github.com/lemire/fast_double_parser) library but using an updated algorithm reworked from the ground up, and while offering an API more in line with the expectations of C++ programmers. The fast_double_parser library is part of the [Microsoft LightGBM machine-learning framework](https://github.com/microsoft/LightGBM).
## Users
The fast_float library is used by [Apache Arrow](https://github.com/apache/arrow/pull/8494) where it multiplied the number parsing speed by two or three times. It is also used by [Yandex ClickHouse](https://github.com/ClickHouse/ClickHouse) and by [Google Jsonnet](https://github.com/google/jsonnet).
## How fast is it?
It can parse random floating-point numbers at a speed of 1 GB/s on some systems. We find that it is often twice as fast as the best available competitor, and many times faster than many standard-library implementations.