Comments made by a tester on rB17b89f6dacba007bf suggested that baking
the spray maps would be useful. This commit adds that capability. Both
the spray map and its inverse are baked out in this change, for maximum
convenience and to avoid assuming what the user wants.
Differential Revision: https://developer.blender.org/D8470
The design for how we approach the "Everything Nodes" project
has changed. We will focus on a different part of the project initially.
While future me will likely refer back to some of the code I remove here,
there is no point in keeping this code around in master currently.
It would just confuse other developers working on the project.
This does not remove the simulation modifier and data block. Those are
just cleaned up, so that the boilerplate code can be reused in the future.
This changes how the simplify volumes setting works. Before, it only
affected viewport rendering. This was an issue, because all internal
computations would still have to happen on the high resolution volumes.
With this patch, the simplify setting is applied as early as file
loading and procedural generation of volumes.
Rendering does not have to care about the simplify option anymore,
it just gets the correct simplified version from the depsgraph.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D9176
Corrects incorrect usages of the fragment 'apart of' when 'a part of' was required.
Differential Revision: https://developer.blender.org/D9245
Reviewed by Campbell Barton
Corrects incorrect usages of the word 'loose' when 'lose' was required.
Differential Revision: https://developer.blender.org/D9243
Reviewed by Campbell Barton
Corrects incorrect usage of contraction for 'it is', when possessive 'its' was required.
Differential Revision: https://developer.blender.org/D9250
Reviewed by Campbell Barton
This modifier is the opposite of the recently added Mesh to Volume modifier.
It converts the "surface" of a volume into a mesh. The "surface" is defined
by a threshold value. All voxels with a density higher than the threshold
are considered to be inside the volume, while all others will be outside.
By default, the resolution of the generated mesh depends on the voxel
size of the volume grid. The resolution can be customized. It should be
noted that a lower resolution might not make this modifier faster. This
is because we have to downsample the openvdb grid, which isn't a cheap
operation.
Converting a mesh to a volume and then back to a mesh is possible,
but it does require two separate mesh objects for now.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D9141
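For reference, the core of such a conversion in OpenVDB's own API looks
roughly like this (a minimal sketch; the grid type and threshold value
are illustrative, not the modifier's actual code):

```cpp
#include <vector>

#include <openvdb/openvdb.h>
#include <openvdb/tools/VolumeToMesh.h>

/* Extract a quad mesh from a density grid: every voxel whose value is
 * above `threshold` counts as inside the volume. */
void volume_grid_to_mesh(const openvdb::FloatGrid &grid, const double threshold)
{
  std::vector<openvdb::Vec3s> points;
  std::vector<openvdb::Vec4I> quads;
  openvdb::tools::volumeToMesh(grid, points, quads, threshold);
  /* `points` and `quads` would then be copied into a Blender Mesh. */
}
```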
The time_base field of the video stream must be set for some
containers; otherwise avformat_write_header() will set it to default
values, and the rendered file won't play at the desired frame rate.
See init_muxer() in mux.c in the ffmpeg sources.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D9213
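Schematically, the fix amounts to the following (`ctx`, `st` and `fps`
are illustrative names, not the actual writer code):

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/rational.h>
}

/* Set the stream time base before writing the header, so that
 * avformat_write_header() doesn't replace it with container defaults
 * and change the effective frame rate of the file. */
void setup_video_stream(AVFormatContext *ctx, AVStream *st, const int fps)
{
  st->time_base = av_make_q(1, fps);
  avformat_write_header(ctx, nullptr);
}
```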
A general refactor/fix commit that should clear out the issues reported
on external forces and moving effectors (e.g. T79537, T81660, T80088).
Code could call CustomData_get_layer_index_n with a negative index (if
no active and/or render UV layers are found). This would assert since
rBe86785c51445.
Spotted while looking into T81398.
Differential Revision: https://developer.blender.org/D9212
The new parameter made it so that previously keyed Alpha values were
lost, and the new "Emission Strength" was keyed instead.
Issue introduced with the original commit of Emission Strength: b248ec9776
Note: files created since the issue was introduced (September 17) that
keyframed the Emission Strength will have to be fixed manually.
Differential Revision: https://developer.blender.org/D9221
Restore the old `correct_bezpart()` (pre-rBda95d1d851b4) function as
`BKE_curve_correct_bezpart()`, and use that where the old behaviour was
desired (that is, curve maps like used by the RGB Curves shader node).
The new (post-rBda95d1d851b4) function is also renamed to
`BKE_fcurve_correct_bezpart()` to avoid confusion.
Previously, all Face Set visibility logic was using mvert flags directly
to store the visibility state on the vertices while sculpting. As Face
Sets are a poly attribute, it is much simpler to use mpoly flags and let
BKE_mesh_flush_hidden_from_polys handle the vertex visibility, even for
Multires.
Now all operators that update the Face Set visibility state will always
copy the visibility to the mesh (using poly flags) and the grids, all
using the same code.
This should fix a lot of visibility glitches and bugs like the following:
- Sculpt visibility reset when changing multires levels.
- Multires visibility not updating in edit mode.
- A single face remaining visible when surrounded by a visible face
set, even when its own face set was hidden.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D9175
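A minimal sketch of the pattern all of these operators now share
(`hide_poly` is a hypothetical helper; `ME_HIDE` and
`BKE_mesh_flush_hidden_from_polys` are the existing flag and utility):

```cpp
extern "C" {
#include "BKE_mesh.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
}

/* Store visibility on the polys, the attribute Face Sets live on, then
 * let the flush utility derive the vertex/edge visibility. */
static void hide_poly(Mesh *mesh, const int poly_index)
{
  mesh->mpoly[poly_index].flag |= ME_HIDE;
  BKE_mesh_flush_hidden_from_polys(mesh);
}
```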
ME_POLY_LOOP_NEXT and ME_POLY_LOOP_PREV expect the offset of the loop
within the poly as an argument, in other words, the corner index within
the poly. This was violated in some places. It didn't cause issues when
the base mesh consisted of only quads, due to the way the modulus works
inside the macro. However, if the mesh had non-quad faces, the adjacency
information returned wrong vertex indices. This was causing multiple
brushes to work erratically, including brushes like Face Set, Boundary
automasking, mesh relax, and others.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D9173
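For illustration, the correct calling pattern (the surrounding loop is
schematic, not code from this commit):

```cpp
extern "C" {
#include "BKE_mesh.h"
#include "DNA_mesh_types.h"
#include "DNA_meshdata_types.h"
}

static void iterate_poly_corners(const Mesh *mesh, const int poly_index)
{
  const MPoly *poly = &mesh->mpoly[poly_index];
  for (int corner = 0; corner < poly->totloop; corner++) {
    /* Correct: pass the corner offset within the poly (0..totloop-1). */
    const MLoop *next = ME_POLY_LOOP_NEXT(mesh->mloop, poly, corner);
    /* Wrong: passing `poly->loopstart + corner` only appears to work on
     * all-quad meshes, because the modulus inside the macro lines up. */
    (void)next;
  }
}
```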
This adds an operator property to use the paint cursor radius and
position for the depth of the trimming shape created by the trimming
tools.
When enabled, the shape is located at the surface point where the
gesture started, and its depth matches the cursor radius. When the
cursor is not over the mesh, the shape is positioned at the center of
the depth of the whole object as seen from the viewport camera.
Reviewed By: dbystedt, sergey
Differential Revision: https://developer.blender.org/D9129
The voxel remesher was using the voxel size to limit the shrink-wrap
projection distance. Now that distance is increased to help preserve
more detail on hard surface edges.
Reviewed By: pablodp606
Differential Revision: https://developer.blender.org/D6204
Being in a render 'context' was not taken into account by the code
evaluating modifiers for meshes in Edit mode.
Reviewed By: #modeling, mont29
Differential Revision: https://developer.blender.org/D9217
When creating a particle system to display simulated particles, the phystype needs to be set to 'no physics' so that particle positions are just copied and not integrated.
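In code, that amounts to something like this (the helper is
hypothetical; `PART_PHYS_NO` is the existing DNA value):

```cpp
extern "C" {
#include "DNA_particle_types.h"
}

/* Display-only particle system: positions are copied from the
 * simulation output instead of being integrated by particle physics. */
static void configure_display_particles(ParticleSettings *part)
{
  part->phystype = PART_PHYS_NO;
}
```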
This makes the "Reset to Default Value" operator in the button
right-click menu work for the fluid modifier. Before, it always reset
the values to 0.
Differential Revision: https://developer.blender.org/D9206
With constructive + deform modifiers, loop-cut visualization
wasn't following the displayed mesh.
This now gets the coordinates from the cage when available.
Previously the softbody strength property was controlling the strength
of the constraints that pin all vertices to the original location. This
was causing problems when the forces were trying to deform the vertices
too much, like when using gravity or grab brushes.
Now softbody is implemented with plasticity, which creates constraints to
a separate coordinates array. These coordinates are deformed with the
simulation, and the plasticity parameter controls how much the
simulation moves the coordinates (plasticity 0), or the coordinates move
the simulation back to its previous position (plasticity 1).
This creates much better and more predictable results, and adding
softbody plasticity to the brushes can improve their control and the
stability of the simulation.
Reviewed By: sergey, zeddb
Differential Revision: https://developer.blender.org/D9187
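A rough sketch of that blend, using Blender's vector math helpers (the
function and variable names are hypothetical, not the solver's actual
code):

```cpp
extern "C" {
#include "BLI_math_vector.h"
}

/* plasticity = 0: the stored coordinates simply follow the simulation.
 * plasticity = 1: the stored coordinates pull the vertex fully back. */
static void softbody_plasticity_step(float sim_co[3], float plastic_co[3],
                                     const float plasticity)
{
  /* Let the simulation deform the stored plastic coordinates... */
  interp_v3_v3v3(plastic_co, sim_co, plastic_co, plasticity);
  /* ...and constrain the simulated vertex back towards them. */
  interp_v3_v3v3(sim_co, sim_co, plastic_co, plasticity);
}
```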
When we override a whole collection, we want to add non-instantiated
objects to a hidden sub-collection at the end of the process.
However, this makes no sense when instantiating an object: if other
dependency objects also get overridden in the process, we should just
add them to the same collection that owns the root object.
- BKE_bezt_subdivide_handles -> BKE_fcurve_bezt_subdivide_handles
- binarysearch_bezt_index -> BKE_fcurve_bezt_binarysearch_index
These functions are specific to F-Curves and don't make sense for other
uses of BezTriple (curve-object data, for example).
Also:
- Move detailed doxygen comment above code, following code-style.
- Mark bezt_add_to_cfra_elem unused.
Check selection state in `BKE_fcurve_active_keyframe_index()`, and only
return the active keyframe index when that keyframe is actually selected.
This is now also asserted in `BKE_fcurve_active_keyframe_set()`, which
is used when inserting a keyframe as well.
The code that restored collection flags after they were rebuilt when
moving a collection didn't take collection children into account. The
flag for the active collection was properly restored, but all of its
children would take on the exclude flag of the collection that the
active collection was dragged into.
This commit builds a temporary tree structure to store the flags for
the moving collection and its children. Then it reapplies these flags
after `BKE_main_collection_sync`.
Differential Revision: https://developer.blender.org/D9158
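A hypothetical illustration of that temporary structure (names invented
for the sketch):

```cpp
#include <vector>

/* One node per collection in the moved hierarchy, saving its flags so
 * they can be reapplied after `BKE_main_collection_sync` rebuilds the
 * layer collections. */
struct SavedCollectionFlags {
  const char *collection_name; /* identifies the collection after rebuild */
  int flag;                    /* saved flags, e.g. the exclude flag */
  std::vector<SavedCollectionFlags> children;
};
```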
This is a follow up commit for rB309c919ee9.
Clearing hash tables is now parallelized as well. Surprisingly, most of
the time is actually spent in `free` (a couple of milliseconds per call
in my test).
Benchmark of individual functions:
reserve_hash_maps: 17%
add_polygon_edges_to_hash_maps: 49%
serialize_and_initialize_deduplicated_edges: 12%
update_edge_indices_in_poly_loops: 14%
clear_hash_tables: 5%
`BKE_mesh_calc_edges` was the main performance bottleneck in D9141.
While openvdb only needed ~115ms, calculating the edges afterwards
took ~960ms. Now with some parallelization this is reduced to ~210ms.
Parallelizing `BKE_mesh_calc_edges` is not entirely trivial, because it
has to perform deduplication and some other things that have to happen
in a certain order. Even though the multithreading improves performance
with more threads, there are diminishing returns when too many threads
are used in this function.
The speedup is mainly achieved by having multiple hash tables that are
filled in parallel. The distribution of the edges to hash tables is based on
a hash (that is different from the hash used in the actual hash tables).
I moved the function to C++, because that made it easier for me to
optimize it. Furthermore, I added `BLI_task.hh` which contains some
light tbb wrappers for parallelization.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D9151
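A standalone sketch of the partitioning idea, using standard containers
and TBB instead of Blender's types (`distribution_hash` is a made-up
secondary hash):

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

#include <tbb/parallel_for.h>

using EdgeKey = std::pair<int, int>; /* (lower vertex, higher vertex) */

struct EdgeKeyHash {
  size_t operator()(const EdgeKey &key) const
  {
    return std::hash<int64_t>()((int64_t(key.first) << 32) | uint32_t(key.second));
  }
};

/* Secondary hash used only to distribute edges over the maps;
 * deliberately different from the hash used inside the maps. */
static size_t distribution_hash(const EdgeKey &key)
{
  return size_t(key.first) * 65599 + size_t(key.second);
}

void fill_edge_maps(const std::vector<EdgeKey> &polygon_edges,
                    std::vector<std::unordered_map<EdgeKey, int, EdgeKeyHash>> &maps)
{
  tbb::parallel_for(size_t(0), maps.size(), [&](const size_t map_i) {
    for (const EdgeKey &edge : polygon_edges) {
      /* Each task only inserts the edges that belong to its own map, so
       * no locking is needed; insertion deduplicates automatically. */
      if (distribution_hash(edge) % maps.size() == map_i) {
        maps[map_i].emplace(edge, -1); /* real edge index assigned later */
      }
    }
  });
}
```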
- Move some safety checks outside of the `interp` callbacks, namely
that we do get interpolation weights and have something to interpolate.
Some callbacks were not checking those anyway; it is safer to move that
up into the calling code.
- Clean up the usage of sub-weights; lots of interpolation callbacks
were actually using those completely wrong.
- Change the default behavior when no weights are given to the
higher-level API functions: previously, each callback was responsible
for handling that case (and one did not even do it!), and they would
then switch to purely additive behavior. Instead, we now default to the
expected simple average of the source values.
Note that the only really important change here is defaulting to an
actual average of the source values when no interpolation weights are
given (afaik, this only happens in the Weld modifier code).
Differential Revision: https://developer.blender.org/D9114
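Schematically, the new default behavior (a hypothetical helper, not the
actual CustomData API):

```cpp
/* With explicit weights, compute the weighted sum as before; without
 * weights, fall back to a simple average of the source values instead
 * of the old purely additive behavior. */
static float interp_values(const float *src, const int count, const float *weights)
{
  float result = 0.0f;
  for (int i = 0; i < count; i++) {
    result += src[i] * (weights ? weights[i] : 1.0f / float(count));
  }
  return result;
}
```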