* Fix incorrect handling of children collections being linked more than
once in the hierarchy (previous code would make a new copy for each
link, instead of just re-linking the first copy for each extra link).
* Simplify some aspects of it (we do not need a GHash for new objects,
we can use the ID->newid pointer instead, and some iterations can be done
directly on the existing linked lists of the old collection, instead of
making temp local copies of them) - see the sketch after this list.
* Move all copy logic into a single private recursive function (it was a
bit odd/disturbing to see the calling function being indirectly called again
by its recursive helper - not wrong, but that kind of code path can
quickly become problematic in recursive patterns).
* Added some comments about the expected behavior of
`BKE_collection_duplicate()` depending on its boolean options.
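The ID->newid idea, in a minimal sketch (the ID struct and helper names here are simplified stand-ins, not the actual BKE code): the first link duplicates the data and stores the copy in newid, every extra link simply re-uses that stored copy.

```
/* Hypothetical, simplified types - only meant to illustrate the pattern. */
#include <stdlib.h>
#include <string.h>

typedef struct ID {
  struct ID *newid; /* points to the copy once duplicated, NULL otherwise */
  int payload;
} ID;

static ID *id_copy(const ID *src)
{
  ID *copy = malloc(sizeof(*copy));
  memcpy(copy, src, sizeof(*copy));
  copy->newid = NULL;
  return copy;
}

/* First link creates the duplicate; extra links re-link the same copy. */
static ID *id_ensure_duplicate(ID *id)
{
  if (id->newid == NULL) {
    id->newid = id_copy(id);
  }
  return id->newid;
}
```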
Use 'size' instead of 'len' to represent the size of data in bytes;
'len' is used for the result of 'strlen' or the length of an array
in some parts of 'makesdna.c' & 'dna_genfile.c'.
Also clarify comments and some variable names, no functional changes.
As the 2D viewport background color is white, if the object default color is also white, the object is invisible when wireframe is enabled.
Now the grease pencil object default viewport color is black.
There was a bug when Solid mode was selected together with Material or Texture shading: the textures were not visible.
Now the mode is passed to the shaders to decide whether to use the solid color or the resulting texture color. The mode is passed using an array with the shading type and mode.
Also ensure elements fit evenly into the chunk size,
causing allocations to be slightly smaller in some cases.
In my own tests this reduces overall memory use by about 4.5%
for high-poly meshes in edit-mode.
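Roughly the idea, as a sketch (not the actual allocator code): pick the per-chunk element count from the target chunk size, then request only as many bytes as those elements need, so no chunk tail is wasted.

```
#include <assert.h>
#include <stddef.h>

/* Return a chunk size (in bytes) that a whole number of elements fits into. */
static size_t chunk_size_round_down(size_t chunk_size_target, size_t elem_size)
{
  assert(elem_size > 0 && elem_size <= chunk_size_target);
  const size_t elems_per_chunk = chunk_size_target / elem_size;
  /* Slightly smaller than the target in some cases, never larger. */
  return elems_per_chunk * elem_size;
}
```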
This enables static linking of libstdc++ by default when building using
`WITH_STATIC_LIBS`. This makes builds more portable for anyone making
static builds (in particular for older systems).
Reviewed By: brecht, campbellbarton, sergey
Differential Revision: https://developer.blender.org/D4393
When annotations were enabled, image borders were drawn around the whole area instead of around the preview image.
Reviewed by: Brecht
Differential revision: https://developer.blender.org/D4430
Revert "Outliner: Enable new faster 'Delete Hierarchy' code by default."
This reverts commit 491a98ca44.
It fails in the most basic of tests (see report). No point in leaving
this commit around until it passes the easy-to-test cases.
- Wireframe uses the background color when X-Ray is off.
- Added support for Solid mode.
- Solid mode shows the fill or not depending on X-Ray.
- Solid can use Single, Material, etc.
- Wireframe and Solid mode don't show FXs.
Bug introduced in 012483b6e4.
Since we notify similar things when changing active and selected
objects, I believe we didn't notice this was missing a ND_OB_SELECT
notification before the small refactor to use the messaging system
exposed that bug.
We are mixing bool and fancy 3-in-1 func-set buttons in the outliner,
so they would return a different pushed state in
ui_drag_toggle_but_pushed_state().
We now have a callback that allows a button to define its own
pushed_button_state function (see the sketch below).
Note: This is a bit of overkill since we are planning to change the
3-in-1 outliner buttons. That said, it may be nice to have, since in the
future we can mix those buttons for other things.
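A rough sketch of the callback shape (uiBut fields and names are simplified stand-ins for the real UI code): a button may provide its own pushed-state callback; when it is NULL the plain bool check is used.

```
#include <stdbool.h>

typedef struct uiBut uiBut;

struct uiBut {
  bool value; /* state used by the default bool check */
  bool (*pushed_button_state)(const uiBut *but); /* optional per-button override */
};

static bool ui_but_is_pushed(const uiBut *but)
{
  if (but->pushed_button_state) {
    /* e.g. the 3-in-1 outliner toggles can report their own state */
    return but->pushed_button_state(but);
  }
  return but->value; /* plain bool buttons */
}
```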
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4434
The issue was discovered only after recent changes, but roots back
to much older changes.
What was happening is that the scene's ID recalc flags were never cleared,
which caused ensure_view_layer() to always run copy-on-write on the
scene. This resulted in certain runtime data being cleared, without
a proper flag stored in the dependency graph.
This was caused by the ID recalc clear function checking whether any ID
was tagged for recalc in that graph or not. This was happening due
to all areas using DEG_id_type_tag(), which can only set flags on the
graph from viewport scenes and could not inform the render dependency
graph.
Now ID type tagging happens on a per-graph level, which avoids the
possibility of flags getting out of sync.
In a bit longer term we also need to get rid of two functions which
are clearing flags: DEG_id_type_tag() and deg_graph_clear_tags().
The object of the evaluated base is not yet copied, so we cannot know whether
it has animation on visibility or not.
This issue was reported in T56635#630383.
Now that we have better options (duplicate collection and duplicate linked) there is no
longer need for the original dupli operator.
In fact, as it was, it was of little use if you ever had nested collections.
As per the suggestion on T57064, this introduces two new options to duplicate collections.
We then have:
* Duplicate > Collection (New collection with linked content).
* Duplicate > Hierarchy (Duplicate entire hierarchy and make all contents single user).
* Duplicate > Linked Hierarchy (Duplicate entire hierarchy keeping content linked with original).
Development TODO: `single_object_users` can/should use the new functions.
Reviewers: brecht, mont29
Subscribers: pablovazquez, billreynish, JulienKaspar
Differential Revision: https://developer.blender.org/D4394
In short, the settings to expand/collapse the Particles Animation Dopesheet expander
were no longer getting exposed, so the F-Curves attached to the particle settings
were not showing up in the channels list as that section was collapsed and couldn't
be opened from the UI.
Early on during the development of 2.8, we originally wanted to completely remove
the Particle System. Eventually that decision got walked back, and so particles
were reinstated. Well... most of the relevant code was! One of the areas that was
the most messed up during this process was the animation editor support for these
channels. It seems that there was almost a two-step removal process here -
the first pass tried to keep the channel definitions while removing all references
to particle stuff, while the second pass tried to remove the definitions completely
and/or re-added them in the wrong places, etc. To say the removal/reverting
history here is "colourful" is an understatement...
This refreshes on cursor motion so it's worth avoiding redundant
updates, especially for multi-object edit-modes where many objects
aren't even near the object being selected.
This commit also moves to passing eSelectOp to circle select functions
in preparation for adding a select mode tool option.
This reverts commit b104b3cdcf and 04baefcc2f. Changes to core UI like
this should go through review, and doing them during Beta development is
not generally the right moment unless they fix an important problem.
Was caused by another fix in the area, and roots back to the wrong assumption
that transformation is only initialized from a fully evaluated dependency graph.
The latter is not the case when changing transformation mode.
Solved by copying transform to an evaluated object.
Two changes:
Removed the explicit version for the macOS SDK, recent
versions of Xcode have a symlink to the newest SDK.
Fixed the build script for OpenMP by removing extra ' marks that
install_name_tool took literally, and replaced INSTALL_PATH with
INSTALL_DIR.
Previously a modifier key-map type only worked when the same key was
enabled as a modifier as well.
This allows users to assign an action to double-tap-shift, for example.
Workaround for transform feedback not working on macOS.
On some systems it crashes (see T58489) and on others it outputs
garbage (see T60171).
So instead of using transform feedback we render to a texture,
read back the result to system memory and re-upload it as VBO data.
It is really not ideal performance-wise, but it is the simplest
and most local workaround that still uses the power of the GPU.
This should fix T59426, T60171 and T58489.
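In outline the workaround looks like this (an illustrative sketch, assuming a valid GL context with buffer-object entry points available through the platform loader; it is not the actual Blender GPU module code):

```
#include <GL/gl.h>
#include <stdlib.h>

/* Read a w*h RGBA float result back from the bound framebuffer and
 * re-upload it as vertex buffer data. */
static void readback_result_to_vbo(GLuint vbo, int w, int h)
{
  const size_t size = (size_t)w * (size_t)h * 4 * sizeof(float);
  float *data = malloc(size);

  /* Read the rendered result back to system memory... */
  glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, data);

  /* ...and re-upload it as VBO data. */
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)size, data, GL_STATIC_DRAW);
  glBindBuffer(GL_ARRAY_BUFFER, 0);

  free(data);
}
```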
Now it will be simpler for code just wanting to preserve custom normals
to set them, without having to add the same boilerplate code around the
actual code all the time.
Currently modifier stack assumes there are no poly normals data passed
around, so in case a modifier generates such data, it has to clean it up
after usage.
The whole handling of normals is a bit annoying and weak currently; we can
probably enhance that once fully per-mesh-item-type cddata masks are in?
Not a bug, but supporting preservation of custom normals in that
specific modifier makes sense, in game pipeline contexts.
Could also ease work of IO add-ons that want to export
triangulated geometry...
* Move Revert, Recover Last Session and Recover Auto Save to their own sub-menu.
We had three entries of the same category, and this solves user reports about "Revert" being dangerously accessible under Open.
* Move up Link, Append, Import, Export as they are used more often than e.g. Save Startup File.
Now always refresh when the material changes. Depsgraph tag moved out
of the refresh function since that gets called on depsgraph update,
which should not trigger a second depsgraph update.
* Remove single item Armature sub-menu. Add Armature straight away, unless the menu is expanded (like with Rigify enabled)
* Group Light and Light Probe between separators
* Move the lesser used Speaker item below Camera
Original optimization idea was wrong: it is possible that some other
ID would reference an object which is also used by a base.
Rolled back to a bit more fragile solution.
In the future would be nice to make it somewhat less duplicated with
the builder itself.
Fixes assert failure (and possibly crashes) when adding grease pencil
object and switching to a draw mode.
While it's kind of common to use camel case in C++ this is not
currently agreed style for C++ in Blender.
Got confused by working on other areas with 3rd party libraries.
- Rename 'Specials' menus to 'Context' menus for Grease Pencil
- Make Grease Pencil contextual menus follow the design of the regular contextual menus more closely
- Add more useful operators to the contextual menus in the paint modes
Almost every pulldown menu and popover has a little dropdown arrow shape.
Unfortunately it is a bit wonky. The top of the right side of it is wider than the top of the left side. And both sides are narrower at the bottom than the top. It might be hard to see, but this image should help:
{F6728281}
The patch fixes the symmetry of the shape while keeping the weight as similar as possible. In the following image you can see the outline of the current version in red and this new version in green.
{F6728298}
With patch applied the arrow looks perfect:
{F6728302}
Reviewers: brecht, billreynish
Reviewed By: billreynish
Subscribers: pablovazquez
Tags: #bf_blender, #bf_blender_2.8, #user_interface
Differential Revision: https://developer.blender.org/D4424
This allows updating base flags to a proper state when an object's restriction
flags are changed, without requiring re-evaluation of an entire tree of flags.
Some old unused flags were removed by this change, and disabling
menu items might not work the same as before. This is something we can bring
back if it's really needed (the way flags are handled did change since
that interface code was written, so the code was looking weird anyway).
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4420
This allows the dependency graph to evaluate drivers of those objects
and put them into a correct state. It will increase memory usage,
since now we can no longer save it by skipping copy-on-write for
such objects. It will also currently make things slower, because
we do not have granular enough visibility updates of components in
the dependency graph. Can do that later when the rest of the changes
are finished.
This commit does not update restriction flags on the base, since
that is somewhat tricky to do currently: we need to somehow see whether
an object is disabled due to flags on a collection or due to its own flags.
Differential Revision: https://developer.blender.org/D4419
The idea is to keep bases which are known for sure to be in the
dependency graph. Previously, this code was duplicating logic
around checking restriction flags, which becomes harder to
maintain as we move towards more comprehensive checks
about whether a base is part of the evaluated scene or not.
As discussed with @billreynish this makes little sense now that we don't
have a dedicated textured mode. We don't have a superior texture or shaded
mode anymore and we also cannot mix different engines together (workbench
with eevee/lookdev).
The only feature it removes is the possibility to hide textures for certain
objects in solid mode.
This actually makes more sense than removing them in the
load-post callback. During load, the file might register
timers that would be removed immediately.
Removed in aa7b013bd5 for performance reasons; however, highlights
can't always be seen against specular shading, see: T55456#510873.
Instead of having a highlighted inner-edge, use the active edge color.
Remove confirmation popup menu, just exit.
Note that this option is mainly for developers or people reviewing
blend files, see D4406 for discussion on reason for keeping this feature
while simplifying how it works.
Fix T61241
- Changing preview size does not affect the drawn image size, only the resolution.
-- Consistent behavior when changing full-size / proxy / scene render size.
-- Scopes are rendered at the *same size* as the source image.
-- Overall, the user does not have to readjust the preview zoom.
Reviewed by: Brecht
Differential revision: https://developer.blender.org/D4315
It was being drawn when any Gizmo was highlighted.
There are several ways to solve this (creating a new parameter to the operator, checking the gizmos in the invoke or creating a global context).
Also, for a micro-optimization, all of those conditions in the `drawDial3d` could be done once in the invoke.
But since all of these changes involve changing the customdata (`TransInfo`), which is already a bit confusing with so many members, I thought it best to make minimal changes.
This makes the bones transparent when the object or the viewport display
type is Wireframe. This is in order to make things consistent.
In object mode all bones are fully transparent to not create more visual
noise if the scene is complex.
Another small addition is that the Bounding Box draw mode now works as
expected on armatures.
After a previous commit to use unique identifiers for the GHash key, I had missed freeing the memory of the name key.
Thanks to Jacques Lucke for detecting the leak.
No need to have an iterator loop in the view layer evaluation;
this only makes it more difficult to have base flags covered
by the dependency graph.
Another good thing is that we don't need to worry about whether
a base has been removed from the evaluated view layer or not.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D4414
The Shear gizmo now uses a single orientation and sets a
different ortho axis instead of constructing a different matrix for
each handle - allowing the redo panel to select orientations.
ApplySnapResize did not take into account invalid distances. Added check for this.
Reviewed By: Campbell Barton
Differential Revision: https://developer.blender.org/D4417
Transform orientation was previously related to constraints,
recent changes meant it was used even when not constraining to an axis.
Now transform orientation is separate from axis constraints.
Avoids mixing these in with regular variables in code-completion.
Use char for pad members except for 'void *', to make the size clearer.
Removed/shrank a few redundant padding vars which were >= 8 bytes.
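As an illustration of the convention (a hypothetical struct, not one taken from DNA): padding is written as char arrays so the byte count is obvious at a glance, while pointer-sized padding would stay a 'void *'.

```
#include <assert.h> /* C11 static_assert */

typedef struct ExamplePaddedStruct {
  void *first; /* pointer member */
  int flag;
  short type;
  /* Explicit char padding: the 2 filler bytes are visible in the declaration.
   * A pointer-sized filler would instead be declared as 'void *_pad1;'. */
  char _pad0[2];
} ExamplePaddedStruct;

static_assert(sizeof(ExamplePaddedStruct) == sizeof(void *) + 8,
              "unexpected implicit padding");
```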
Note: Things were working fine if you were to pin the Grease Pencil
object, but not if you were pinning the GP data.
In too many poll functions context.object was being requested when
a simple context.gpencil_data would suffice.
Panels that are still not showing in pinning:
* DATA_PT_gpencil_display.
The panel needs to be split in sub-panels, leaving all object-dependent
properties in its own panel so we can poll it away, while showing the
rest.
* * *
This commit doesn't handle grease pencil material. In this case I
recommend we do as properties_material.py and have a generous poll(),
followed by different drawing logic depending on whether or not we have an object.
In Blender, 'face' is for tessellated faces; that kind of mis-naming is a
pretty good way to shoot yourself in the foot at some point or another
(see T61979)...
Broken logic in rB181356edba04, thanks most likely to stupid naming
('face' in Blender is for tessellated faces, use 'poly' for BMesh polygons).
Cleanup to follow in next commit...
The problem was not only for instances, but for particles too, and produced a segmentation fault.
For some reason, due to an internal modification of how duplicated objects are generated, the duplicated objects are not available when the draw manager tries to use runtime data.
Now, before drawing the particle or the instance, the pointers of the duplicated objects are reassigned to the original "real object" to get full access to runtime data.
- The pivot point and orientation of any transform are strongly related
- It matches the comma-key and period-key on the keyboard, which are neighbours
- We get slightly nicer grouping this way, with the two axis-related options on the left and the two toggles on the right
Reviewers: pablovazquez, campbellbarton
Differential Revision: https://developer.blender.org/D4413
New icons from Andrzej Ambroż / jendrzych:
- New trash icon for deleting IDs and other data (currently unused)
- New icon for the Grease Pencil select between strokes mode
- New icon for Proportional Editing Root Falloff curve
Also adjustments for Jump to Next / Prev. Keyframe, Camera ObData, Point Light ObData, Light Probe Object and ObData, Collection & Save icons.
This also includes fixed/slightly refactored drawing code for marker lines.
The old code used the wrong height.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4411
Now it's possible to use the different Wire modes (Single, Object & Random).
Also support for X-ray mode.
For random colors, the name of the object and the name of the layer are used.
Also some parameters cleanup.
New edit mode operator and post-processing brush option.
Trim works on a single GP stroke. It removes trailing points before and after the first intersection (or loop) nearest to the start of the stroke.
When there was no rotation the axis was zeroed; while not exactly a bug,
it means changing the angle does nothing.
All axis-angle values are initialized with Y=1,
so use this convention when resetting the axis too.
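A tiny sketch of that convention (a generic helper, not the actual transform code): when the extracted axis is degenerate, reset it to the Y axis so editing the angle has a visible effect.

```
/* Reset a near-zero rotation axis to the default Y axis. */
static void axis_angle_sanitize(float axis[3])
{
  const float len_sq = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2];
  if (len_sq < 1e-12f) {
    axis[0] = 0.0f;
    axis[1] = 1.0f; /* matches the Y=1 initialization convention */
    axis[2] = 0.0f;
  }
}
```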
Displacement and Background kernels are selectively used, but always compiled. This patch will not compile these kernels when they are not needed.
Displacement kernel is only used for true displacement.
Background kernel is only used when there is a (Cycles)Light of type `LIGHT_BACKGROUND`.
Reviewed By: brecht, #cycles
Tags: #cycles
Maniphest Tasks: T61971
Differential Revision: https://developer.blender.org/D4412
The goal of this patch is to limit the number of times kernels need to be
compiled and to reuse them where possible, as kernels with different compile
directives can lead to identical binaries.
The implementation does this by stripping the compile directives and
reshuffling kernels so the output is more likely to be the same.
We focused on the kernels where it was easy to detect and maintain
(bundle, bake, displace, do_volume and background). More optimizations
could be done but they are probably less obvious.
Merged the data_init and state_buffer_size kernels to split_bundle.
This patch will also remove empty kernels for do_volume and bake
when their features are not enabled.
When using the benchmark files there are less background, bake and
do_volume kernels compiled.
Fix: T61576, T61501, T61466
Reviewed By: brecht, #cycles
Differential Revision: https://developer.blender.org/D4390
- Add XYZ option.
- Orientation now works as expected.
Now a redo for rotation works logically,
setting the axis to Z & the orientation to view.
Resolves T57205
- Add a new panel to differentiate between viewport display and stroke options
- Use clearer naming for depth ordering and stroke thickness properties
Reviewers: antoniov
Differential Revision: https://developer.blender.org/D4405
When Wireframe mode is enabled in the shading type, all strokes are displayed as simple 1-pixel lines.
The color of the line is equal to the stroke color, or to the fill color if the fill is enabled and the stroke is disabled or has an invisible alpha value.
In wireframe mode, all FX are disabled because sometimes the effects can make the lines invisible.
The modifiers are not disabled.
It is still pending to decide if we must add support for Random colors, but not sure if this is useful in 2D.
BLI should always come first, before DNA, BKE etc. And
`BLI_utildefines.h` should come before any other BLI header (since it's some
sort of system include really, among other things...).
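For illustration, an include block following that ordering (header names are just common examples):

```
#include "BLI_utildefines.h" /* always first among the BLI headers */

#include "BLI_listbase.h"
#include "BLI_string.h"

#include "DNA_object_types.h"
#include "DNA_scene_types.h"

#include "BKE_context.h"
#include "BKE_object.h"
```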
This should help to reduce the noise in patches when adding stuff
like uint64_t members to DNA structs... ;)
While buffering output is useful for file writing and memfile
compression, it's redundant when the output is already buffered.
It doesn't make a significant difference for ZLIB,
however it makes a moderate improvement for LZ4, see T56162.
Glyph cache is cleared by UI_view2d_zoom_cache_reset when zooming V2D, but it is required to calculate text height in UI_view2d_text_cache_draw.
This caused text in strips to "jump around".
There was a comment in UI_view2d_zoom_cache_reset:
While scaling we can accumulate fonts at many sizes (~20 or so).
Not an issue with embedded font, but can use over 500Mb with i18n ones! See [#38244].
This fix sets the Base color in the Principled BSDF shader and in
the Material->r,g,b,a values, so the transparency and color are the
same when switching the "use nodes" option for the material.
- The Collada exporter did not take care of material transparency when nodes are turned off.
- The recent change to use ma->alpha_threshold seems to have been wrong; transparency is now taken from ma->a when nodes are turned off.
This caused text in strips to "jump around"
There was a comment in UI_view2d_zoom_cache_reset:
While scaling we can accumulate fonts at many sizes (~20 or so).
Not an issue with embedded font, but can use over 500Mb with i18n ones! See [#38244].
Reviewed by: Brecht
Differential revision: https://developer.blender.org/D4389
Add a new Draw Mode to the Display panel in order to define the z-depth order of the strokes using the real 3D position and not the 2D layer position.
This change makes it possible to use VR with grease pencil drawings because the depth of the strokes changes with the camera position. It also provides an alternative solution to tasks T57859 and T60325.
The parameter only works with 3D space depth ordering. The Back and Front depths are incompatible with 3D Space mode.
Options are:
- Back
- Front
- 3D Space->2D Layers (default)
- 3D Space->3D Location (new mode)
VS2019 is binary compatible with the existing vc14 libraries and no
new libraries are required in svn.
VS2019 support requires cmake 3.14.
VS2019 is still in pre-release state, you are required to explicitly
select the pre-release version by using:
make full 2019pre
On the CCG side it is done similarly to displacement, where we have
a dedicated functor which evaluates displacement. Might seem like
overkill, but it allows decoupling SubdivCCG from the mesh entirely,
and maybe even freeing up the coarse mesh in order to save some memory.
A somewhat weak-looking aspect is the call to update normals from the
draw manager. Ideally, the manager would only draw what is already
evaluated. But it's a bit tricky to find the best place for this since
we avoid dependency graph updates during sculpt as much as possible.
The new code mimics the old code, this is how it was in 2.7.
Fix shading part of T58307.
It is always dangerous to add more shortcuts these days. But this way it
is consistent with 2.79 to a point.
When in edit mode (mesh, greasepencil, ...) 1-3 to change submode still
has priority.
When in posemode or greasepencil draw mode however, 1-9 can still be
used to temporarily get some collections out of the way.
The option is separated from the solid mode color option.
Random color uses the same method as solid mode.
Selection state is indicated by a brighter color that is outside the
brightness range of the unselected state colors. The active state is
indicated by the outline that is now still drawn in wireframe mode.
Coloring of the selection / active outline is not optimal because it
can look ugly with some color combinations. But the outline color
is using index range coloring so it's not trivial to change the color of
the outline per object. For now we use the same outline color used in solid
mode for consistency and also still add an emphasis on the selected objects.
The Single color option uses the theme color. Maybe it would be nice to
change its name in a later commit to avoid confusion.
The change in outliner and viewport visibility (897e047374) was made
assuming the bases of the render and viewport depsgraph were
independent. Thus we were deliberately setting base visibility when
rendering:
```
/* When rendering, visibility is controlled by the enable/disable option. */
if (mode == DAG_EVAL_RENDER) {
base->flag |= BASE_VISIBLE;
}
```
However, we were syncing data back to the original depsgraph, causing
hidden viewport objects to re-appear.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4391
The goal is to make it easy to know which exact Blender version
and build was used for a job on a farm. This includes, but is not
limited to, render farms; the same is handy for simulation tasks
as well.
This problem existed in 2.79 as well. The rigid body setting is related
to the scene the object was created in.
We now clear all the rigid body properties of the appended objects to
prevent them from lingering in this state where they have settings yet
cannot be used in the simulation.
Reviewers: mont29, brecht
Differential Revision: https://developer.blender.org/D4380
There is still work to be done on the multires side to fully
support smooth shading, so we can't just always have smooth shading.
Rolled back to proper code on the GPU side; the rest will be handled
from the CCG side.
Delay loading all DATA sections of the blend file until they're needed.
Loading all data-blocks caused high peak memory usage especially with
libraries - since a lot of data may exist which isn't used directly.
In one test (spring project: 10_010_A.anim.blend),
memory peaked at ~12.5gig, dropping back to ~2.5gig once loaded.
With this change peak memory usage reaches ~2.7gig while loading.
Besides this there are some minor gains from not having to read data
from the file-system and we can skip an alloc + memcpy reading data
written with the same version of Blender.
Removes the flat shader variant since the attrib is specified for each vert
loop in flat shaded mode. It was something left over from the previous
implementation.
The multires sculpt drawing was not working in smooth mode.
Also hiding was not supported by the wireframe overlay and flat shaded
faces.
Codewise it is cleaner and index buffers are only updated if the
smoothing changes.
Before 1bfbfa2810 this wasn't essential because the constraints
prevented the axes from being applied.
Now redo ignores constraints - the input values must be constrained.
When the line width was larger than the UI scale, there was not enough
space for thicker widget outlines to draw properly. Now widgets are made
a little larger to accommodate the thicker outlines.
Differential Revision: https://developer.blender.org/D4368
* Two cursors for horizontal and vertical split.
* Four cursors for each join direction.
* One cursor to indicate when splitting is not possible.
Differential Revision: https://developer.blender.org/D4264
Made it so that the generated coordinate is always calculated.
Ideally, it would only be done depending on the current shading,
but the code is quite deep, and doing something smarter here would
end up in a way bigger refactor.
First, make things work, and then make them fast if they
pop up in a profile.
When using preview rendering through a camera or final rendering
the `scene.render.use_motion_blur` was not respected when building
the compile directives.
This patch checks, when building the compile directives, whether
motion blur is enabled at all. This should lead to more efficient
kernels when no motion blur is needed.
Tags: #cycles
Differential Revision: https://developer.blender.org/D4387
Transform init code called just after duplication (presumably before
the regular depsgraph update is executed) would erase the new objects'
transflags.
This is more like a hack than a real fix, but since that transform piece
of code is already a hack... The other solution would have been to force DEG
to run after object duplication; I think it's better to go with this
solution for now.
Not to mention the fact that dupli flags are put into transflag... ;)
Issue was a concurrent modification of an evaluated mesh by two
other meshes using it as source for custom normals data transfer.
Note that this fixes the crash (modifiers are strictly forbidden to modify
any data besides their own!), but we will now have to add a new CD type to
be able to specifically request the 'computed' clnors data layer, and not
only the 'encoded' one, for the source mesh...
This commit makes it so both the Subdivision Surface and Multiresolution
modifiers are caching OpenSubdiv topology. This cuts down evaluation
time quite a bit, especially for meshes which don't have many
extraordinary vertices.
Only working for animation. Other modifications like edit mode need
more work to make the topology cache preserved by copy-on-write.
It was copying the alpha from the foreground instead of background image,
which is not usually what is needed and inconsistent with the compositor.
Differential Revision: https://developer.blender.org/D4371
Constraint options had confusing behavior:
- When none were pressed, the orientation was ignored.
- When any were pressed, the orientation was used,
  but only unconstrained axes could be adjusted.
Now constraining is only used for modal execution
so there is no need to show these in the interface.
When an orientation is selected, the XYZ values always transform
using that space.
Note, transform system should be refactored to support different
orientations w/o having to use constraints.
Addresses T57204
The bake kernels are also used during mesh displacement and light
importance sampling. We disabled the implementation of these kernels
when baking was not enabled.
These are place-holders with only a few items in each, as with the rest
of the context menus they need to be populated & organized.
Weight Paint 'weight' shortcut has been changed from W to Ctrl-F,
to co-exist w/ the context menu shortcut.
Was happening when looking for all intersections for transparent shadow rays
in the case the ray is degenerate.
It is still questionable whether we should consider this a transparent or opaque
configuration. Ideally, we should prevent such rays from happening, but that
is another vector of debugging.
- Use factor for flame_vorticity, slice_depth, density & volume_density
- Use distance for surface_distance
- Use factor for mix factor in Data Transfer modifier
- Use prop_translation for pivot constraint offset
- Use distance for Shrink/Fatten Distance
- Use factor for Smooth Factor
- Use factor for Randomize Uniform and Normal values
- Use distance for Randomize distance amount
- Randomize Transform Scale was wrongly using distance
Default behavior is unchanged still, but can be changed in the keymap.
From testing I think this needs better visual feedback to indicate that
you are in local view; if the view does not move it's not as clear.
To keep running these tests relatively fast and practical to run often,
running it on all .blend files is a bit much. So now we only run it on
files from this directory.
Additionally this adds support for following symlinks, so that you can
easily symlink to other directories if you want to test extra files
which may have linked libraries.
Using the OpenCL MegaKernel has been slow and therefore not useful.
This patch removes the mega kernel from the OpenCL codebase
and the OpenCLDeviceBase class.
T61736: removal of mega kernel
T61703: baking does not work with mega kernel
Tags: #cycles
Differential Revision: https://developer.blender.org/D4383
Don't disable the save over popup through the keymap, just remove it entirely
from the code so that the file browser interprets the property correctly.
Was once again caused by an ambiguity of the entry/exit operations.
Only did it for objects since those are the only ones that need this.
The rest of the ID types need to be checked, with extra
operations only added if needed (adding operations and relations causes some
overhead for evaluation, so we need to be careful).
Add getter callback support for 'WM_HANDLER_TYPE_KEYMAP' type handlers
this is needed for key-maps which change based on the active tool.
Replaces 'sneaky_handler' hack which temporarily inserted a handler.
- Many factor properties were set to PROP_NONE,
even properties that had 'Factor' in the name!
- Some time properties were not set to PROP_TIME,
especially in Particles.
- Changed motion_blur_shutter to use a soft max value of 1 instead of 2.
Anything > 1 here is not physically correct
and makes no real logical sense.
- Changed display name of Dynamic Paint dissolve_speed to Dissolve Time,
since it's a time property, not speed.
We are still ditching the specular intensity of SSR (ssr_data.xyz).
But at least now there is some comment about it.
See T61704 for user reports on that matter.
Comments with the blessing of Clément Foucault.
The reported case was with the render output filename;
however, the same was happening for file open.
Bug introduced in c20c203b82.
I can't find in the original commit any reasoning for the change
that introduced this bug.
If all axis and grid options were turned off, the grid in the main ortho views would not be rendered.
Now we force rendering of the grid regardless of the settings when in one of the main ortho views.
Reviewed By: Brecht Van Lommel
Differential Revision: https://developer.blender.org/D4378
This is a regression from Blender 2.79, where the usage
of <triangles> was already implemented but unintentionally
removed in Blender 2.80.
Also renamed variables for better reading.
There is a generic function to retrieve float and float3 attributes:
`primitive_attribute_float` and `primitive_attribute_float3`. Inside
these functions a prioritised if-else construction checked where
the attribute is stored and then retrieved it from that location.
Actually the calling function most of the time already knows where
the data is stored, so we can simplify this by splitting these
functions and removing the check logic.
This patch splits the `primitive_attribute_float?` functions into
`primitive_surface_attribute_float?` and `primitive_volume_attribute_float?`,
which leads to less branching and more optimal kernels.
The original function is still being used by OSL and `svm_node_attr`.
This will reduce the compilation time and render time for kernels.
Especially in production scenes there is a lot of benefit.
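Schematically the split looks like this (hypothetical, heavily simplified signatures; the real kernel functions take different arguments): the storage branch moves out of the generic lookup into dedicated surface/volume variants, so callers that already know the storage pay no branch.

```
typedef struct AttributeDescriptor {
  int is_volume; /* where the attribute is stored */
  int offset;
} AttributeDescriptor;

/* Before: one generic lookup that branches on the storage location. */
static float primitive_attribute_float_generic(const float *surface_data,
                                               const float *volume_data,
                                               AttributeDescriptor desc)
{
  if (desc.is_volume) {
    return volume_data[desc.offset];
  }
  return surface_data[desc.offset];
}

/* After: direct variants for callers that already know where the data lives. */
static float primitive_surface_attribute_float(const float *surface_data,
                                               AttributeDescriptor desc)
{
  return surface_data[desc.offset];
}

static float primitive_volume_attribute_float(const float *volume_data,
                                              AttributeDescriptor desc)
{
  return volume_data[desc.offset];
}
```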
Impact in compilation times
job | scene_name | previous | new | percentage
-------+-----------------+----------+-------+------------
t61513 | empty | 10.63 | 10.66 | 0%
t61513 | bmw | 17.91 | 17.65 | 1%
t61513 | fishycat | 19.57 | 17.68 | 10%
t61513 | barbershop | 54.10 | 24.41 | 55%
t61513 | classroom | 17.55 | 16.29 | 7%
t61513 | koro | 18.92 | 18.05 | 5%
t61513 | pavillion | 17.43 | 16.52 | 5%
t61513 | splash279 | 16.48 | 14.91 | 10%
t61513 | volume_emission | 36.22 | 21.60 | 40%
Impact in render times
job | scene_name | previous | new | percentage
-------+-----------------+----------+--------+------------
61513 | empty | 21.06 | 20.35 | 3%
61513 | bmw | 198.44 | 190.05 | 4%
61513 | fishycat | 394.20 | 401.25 | -2%
61513 | barbershop | 1188.16 | 912.39 | 23%
61513 | classroom | 341.08 | 340.38 | 0%
61513 | koro | 472.43 | 471.80 | 0%
61513 | pavillion | 905.77 | 899.80 | 1%
61513 | splash279 | 55.26 | 54.86 | 1%
61513 | volume_emission | 62.59 | 61.70 | 1%
There is also a positive impact when using CPU and CUDA, but it is small.
I didn't split the hair logic from the surface logic due to:
* Hair and surface use the same attribute types. It was not clear if it could be
split when looking at the code only.
* Hair and surface are quick to compile and to read, so the benefit is quite
small.
Differential Revision: https://developer.blender.org/D4375
Previously, when hiding the curve handles/points, the control points would still be drawn (loose verts).
Now we hide everything related to the handle if it is hidden.
Reviewed By: Clément Foucault
Differential Revision: https://developer.blender.org/D4373
Now that we are looping over all image users that were previously ignored,
it shows some scene pointers are invalid. Always clear them on load, and
don't keep scene permanently in the image user except for the image editor.
Otherwise the pointer can go out of date.
This commit only contains some of the changes in the diff.
Some require more discussion/work.
Differential Revision: https://developer.blender.org/D4337
Do not instance linked objects immediately in the scene; this was never a
good idea and is doomed to fail nowadays, with complex relations between
objects, collections and scenes.
Instead, this commit refactors the linking code a bit to add loose objects
to the current scene *after* everything has been imported, and ID pointers
have been properly remapped to new ones - i.e. once new linked data is
supposed to be fully valid, just like we were already doing with
collections.
As a bonus, it means we do not have to pass around scene, view3d etc. to
`BLO_library_link_named_part_ex()` and co.
Currently it's effectively a boolean for file-select handlers.
Prepare for refactoring event handlers into their own types (keymap,
operator, gizmo, ui & dropbox) to help make logic easier to follow.
Committing this since it does fix broken logic (previously in that
condition obdata would always be set to NULL, since
`BKE_object_runtime_reset()` is called before).
However, this has presumably been broken that way since 05/2018, so
maybe that whole condition is not needed anymore? Or NULL pointer was
working as well here?
@sergey eyes are required here I guess ;)
Caused by rBae2b677dcb5a70f5; Object.runtime has a lot of weird specific
handling in the depsgraph...
For now modified `deg_backup_object_runtime()` and
`deg_restore_object_runtime()` to mimic previous behavior regarding
Object bbox (i.e. pass it around, instead of wiping it clean).
Reported in T61660.
The dependency graph now handles updating image users to point to the current
frame, and tags images to be refreshed on the GPU. The image editor user is
still updated outside of the dependency graph.
We still do not support multiple image users using a different current frame
in the same image, same as 2.7. This may require adding a GPU image texture
cache to keep memory usage under control. Things like rendering an animation
while the viewport stays fixed at the current frame works though.
The initial idea of using a char pointer was to save some
memory, since the dependency graph was kind of one
with the main database.
Nowadays the dependency graph should be separable from the
main database and be self-sustainable.
Another issue caused by this pointer is the
re-tagging of operations during relations update: it is
possible to have a node which was tagged for update but had
the owner of the name removed (i.e. a driver or bone was
removed).
Patch by @sergey.
Note that this is really a bad thing actually; ideally we should never
get that situation (IDs in Main referencing temp IDs outside of it).
That can lead to many possible similar cases...
Fixing that is not trivial though, so for now we'll have to live with
it, until we have migrated *all* of our temp datablocks generation
outside of Main's.
Previously, hair strands of zero length or with too few control
points would have been ignored. This is fine for a render
without motion blur. But once motion blur is enabled it
becomes trickier to match topology.
Even more, it was causing access (and possibly writes) past
the array boundaries in cases when time step 0 ignored some
strands and the steps around it did not.
If this is becoming problematic for BVH to do reliable
intersections this is to be solved on the BVH builder side.
The export from Blender to Cycles shouldn't really make
decisions there.
Was happening when the image buffer had a cryptomatte pass, which can easily
exceed the 530 bytes used by the buffer.
Now the default buffer is bumped to 1K, and it is also allowed to be heap-allocated
when a bigger buffer is really needed.
A possible optimization is to allocate the buffer once, but in practice those
re-allocations will not happen often, so keeping the code simpler is not an
issue. Just something for a rainy day.
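The pattern is roughly this generic sketch (not the actual buffer code): a 1K stack buffer handles the common case, with a heap allocation only when more space is needed.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void handle_pass_name(const char *name)
{
  char buf_static[1024];
  char *buf = buf_static;
  const size_t needed = strlen(name) + 1;

  /* Rare case: e.g. long cryptomatte pass names exceed the default buffer. */
  if (needed > sizeof(buf_static)) {
    buf = malloc(needed);
  }

  memcpy(buf, name, needed);
  printf("%s\n", buf);

  if (buf != buf_static) {
    free(buf);
  }
}
```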
* Added a TL;DR first paragraph summarizing that one shall not keep any
reference to Blender data when modifying its container.
* Added some info about the fact that adding items to some data containers
(like Collection) can also invalidate existing items (due to array
re-allocation).
* Added a Do/Don't example which shows a crash after adding some items
to a collection.
Related to T61297.
This introduces the wireframe batches. Creating the indices buffer does
not seem to slow down sculpting in my testing (but it is kind of hard to
test reliably).
This includes a bit of cleanup in gpu_buffers.c.
This will effectively make the AA passes thicker in some cases but it is
required for better AA on wireframes. The trick is to occlude the wire
passes so that they do not output fragments that could be behind actual
geometry.
- Makes it possible to show a vertical line for every marker in the graph editor.
- Makes the marker line visibility optional in the sequencer and graph editor.
Request from @hjalti.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4348
All the controls were just really thrown in there without any proper
organization.
This gives it more structure.
- Correct use of sub-panels to communicate hierarchy and sections.
- Use flow layout for toggles.
- Use consistent names for "Bake Type".
Smaller adjustments to the Light Probe properties layout.
- Correctly use alignment for multi-property values.
- Correctly use sub-panels.
- Correctly use PROP_FACTOR for visibility_bleed_bias and
visibility_blur.
Some properties were accidentally hidden for particle fluids.
- Made sure we show the Forces and Integration
sub-panels for particle fluids.
- Slightly re-ordered the sub-panels here, so that the same sub-panels
are at the top for Newtonian and Fluid particles.
- Separated the Fluid Interaction sub-panel so we can give it a unique
name.
- Removed lingering unnecessary 'Keys' label in the Keyed physics.
Some RNA errors are quite similar, use clog for consistent logging that
always includes the file, function and line number - making errors
quicker to troubleshoot.
This affects point, spot and area lights. Sun light strength remains without
a unit. This change does not affect .blend file compatibility in any way, as
with the rest of the unit system it's purely a display and editing feature.
Not used for Cycles yet, that will be done after unifying the settings with
Eevee.
The code assumed all datablocks were read from .blend files saved with the
same version. This restructures the Cycles versioning code to take into
account libraries.
For LLVM 6 the Visual Studio integration was 'not great' and we had
our own, which broke when LLVM 7.0.1 came out. LLVM now has properly
supported integration available on the VS marketplace, hence we can
retire our custom support.
When the modal map was introduced, it left out handling of what
happens when Bevel is made the active tool in the toolbar and the user
starts a bevel by clicking and dragging.
Fail to build on errors in new names - without this renamed values
would be written to DNA breaking backwards & forwards compatibility.
Note that errors in old names aren't detected.
This allows us to rename struct & struct members in the source code
without changing the file format.
This is useful because the code becomes increasingly confusing when
names such as oops, ipo & dupli aren't used anywhere except DNA headers.
dna_rename_defs.h is used to define renaming operations.
The renaming itself will be done separately.
In this case we simply create a new screen area that copies the currently
fullscreened area.
Note: At the moment there is no indication in the non-main window that we are in
fullscreen. That happens because this information is part of the top bar and we have
no top bar in this window.
The problem
===========
For armatures, if the active object was in pose mode and the newly
selected armature data (not the pose, but the edit armature) was clicked,
we would get a crash.
For mesh objects, the issue would happen with the active object in object mode.
Then the newly selected object would switch to edit mode; however, the overall
mode would still be object mode, leading to unsynced modes across the objects.
The solution
============
Using shift to extend the selection makes the currently selected (compatible)
objects go to edit mode as well. Otherwise only the newly selected
object will switch to edit mode.
This also works if you are in edit mode for a curve and click on a mesh icon.
This also changes the rules for multi-object editing (or rather, how we
put objects in and out of it). Now shift is also taken into
consideration there. So if you simply click on another mesh object's
data, only the newly selected object will be in edit mode.
To reproduce the old behaviour you need to use shift to include the
newly selected object in the multi-edit party.
Reviewers: campbellbarton
Subscribers: brecht
Differential Revision: https://developer.blender.org/D4344
Was caused by rBc6e3a20ab60b; the copied node was actually added to the
nodetree, resulting in an endless loop.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4360
At some point I unified the "move to collection" with the "remove from all collections"
functionality. That meant that even when we were still to keep the object in one
of the collections, we would clear its rigid body data.
Now why even remove the rigid body data when removing an object from all
collections? That mimics the 2.79 behaviour when we were to unlink an
object from a scene. I suspect it has to do with the rigid body data
being tied to the scene rigid body. Which is a strange design anyway
(add it to the list?) since an object can be in more than one scene.
Is mainly used by driver variables. The slow part was about
iterating over all pose channels to find the one which has a
given constraint.
Now we build a look-up table, so this operation is way cheaper.
Brings relations update time down from 0.7 sec to 0.4 sec with the Spring
production file.
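The shape of the optimization, as a generic sketch with made-up types (not the actual depsgraph builder code): build the constraint-to-channel map once, then every query becomes a cheap binary search instead of a scan over all pose channels.

```
#include <stdlib.h>

typedef struct Constraint Constraint;

typedef struct PoseChannel {
  Constraint **constraints;
  int constraints_num;
} PoseChannel;

typedef struct ConstraintToChannel {
  const Constraint *con;
  const PoseChannel *pchan;
} ConstraintToChannel;

static int cmp_entry(const void *a, const void *b)
{
  const ConstraintToChannel *ea = a, *eb = b;
  if (ea->con < eb->con) return -1;
  if (ea->con > eb->con) return 1;
  return 0;
}

/* Build once: one entry per (constraint, owning channel) pair, sorted by
 * constraint pointer so lookups can use bsearch(). */
static ConstraintToChannel *build_map(const PoseChannel *channels, int channels_num,
                                      int *r_entries_num)
{
  int total = 0;
  for (int i = 0; i < channels_num; i++) {
    total += channels[i].constraints_num;
  }
  ConstraintToChannel *map = malloc(sizeof(*map) * (size_t)total);
  int e = 0;
  for (int i = 0; i < channels_num; i++) {
    for (int j = 0; j < channels[i].constraints_num; j++) {
      map[e].con = channels[i].constraints[j];
      map[e].pchan = &channels[i];
      e++;
    }
  }
  qsort(map, (size_t)total, sizeof(*map), cmp_entry);
  *r_entries_num = total;
  return map;
}

/* Query many times: O(log n) instead of walking every pose channel. */
static const PoseChannel *channel_for_constraint(const ConstraintToChannel *map,
                                                 int entries_num, const Constraint *con)
{
  const ConstraintToChannel key = {con, NULL};
  const ConstraintToChannel *found =
      bsearch(&key, map, (size_t)entries_num, sizeof(*map), cmp_entry);
  return found ? found->pchan : NULL;
}
```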
This triggered an "almost parallel" case in setting the
offset meet points, which is OK, but the code needed improvement
to put the meet point in a more accurate place.
This is a fix for part of the report T61214.
This partially reverts bf2c5217 and makes it so animation is evaluated
for datablocks which were never evaluated within the dependency graph.
Not ideal, but safest way currently.
Animation for already evaluated datablocks will only be evaluated on
manual edits, so the initial bugfix is still valid.
This is unreliable for cases when multiple dependency graphs
are to be updated.
The only reason why it was attempted is to deal
with cases when an ID appears in the dependency graph for the
first time. But even then it should be smart enough to bring
itself to an up-to-date state without any extra tricks.
This patch implements a workaround to get the multithreaded compilation from D2231 working.
So far, it only works for Blender, not for Cycles Standalone. Also, I have only tested the Linux codepath in the helper function.
Depends on D2231.
Patch by lukasstockner97, jbakker, brecht
job | scene_name | compilation_time
----------+-----------------+------------------
Baseline | empty | 22.73
D2264 | empty | 13.94
Baseline | bmw | 56.44
D2264 | bmw | 41.32
Baseline | fishycat | 59.50
D2264 | fishycat | 45.19
Baseline | barbershop | 212.28
D2264 | barbershop | 169.81
Baseline | victor | 67.51
D2264 | victor | 53.60
Baseline | classroom | 51.46
D2264 | classroom | 39.02
Baseline | koro | 62.48
D2264 | koro | 49.03
Baseline | pavillion | 54.37
D2264 | pavillion | 38.82
Baseline | splash279 | 47.43
D2264 | splash279 | 37.94
Baseline | volume_emission | 145.22
D2264 | volume_emission | 121.10
This patch reduced compilation time as the split kernels and base
kernels are compiled in parallel. In Cycles debug mode (256) you can
unmark the OpenCL single program file, which reduces the compilation time
even further (bmw 17 seconds, barbershop 53 seconds).
Reviewers: brecht, dingto, sergey, juicyfruit, lukasstockner97
Reviewed By: brecht
Subscribers: Loner, jbakker, candreacchio, 3dLuver, LazyDodo, bliblubli
Differential Revision: https://developer.blender.org/D2264
Right-clicking on a menu item now closes its sub-menus and opens
the button's context menu.
This is needed for adding them to the quick favourites menu.
Resolves T58729, T61015.
Also capture event to avoid Move transform.
Note: Now it's using a report message. Maybe this can be removed, but without the message, the event is captured by move transform.
Previously, the curve self snapping would only snap to points that were
earlier in the curve structure. This was because of a simple coding
snafu of using break when meaning to use continue.
This was caused by curves pointing to each other,
creating a cyclic dependency.
While the dependency graph detects this, generating a mesh for render
recursively generates data, which crashes in this case.
Add in a check to detect cyclic links.
Note, this bug exists in 2.7x too - but it only crashes on render
since 2.7x didn't use 'for_render' when converting data.
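The check amounts to walking the chain of referenced curves and detecting when it loops (a generic sketch using Floyd's cycle detection, not the actual curve-to-mesh code; the 'linked' pointer is a stand-in for the curve referenced as bevel/taper object):

```
#include <stdbool.h>

typedef struct Curve {
  struct Curve *linked; /* stand-in for the referenced bevel/taper curve */
} Curve;

/* Returns true when the chain of references loops back on itself. */
static bool curve_has_cyclic_link(const Curve *cu)
{
  const Curve *slow = cu, *fast = cu;
  while (fast && fast->linked) {
    slow = slow->linked;
    fast = fast->linked->linked;
    if (slow == fast) {
      return true;
    }
  }
  return false;
}
```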
By default wire would z-fight against the surface.
Increase the bias, also don't adjust the 'w' component
since it causes bias that depends on the view direction.
This allows secondary keys on tap.
Currently Z-key to toggle wireframe and tilde for navigation.
This is currently experimental, if users like this the preference
can be kept and used where appropriate.
Add shortcut since this has been removed from the context menu,
now it's in the mesh normals menu which isn't so convenient to access.
Shift-N is already used to recalculate normals,
this fits the convention of Alt removing/reversing.
There is no reason not to duplicate Actions too here, especially when
Materials' Actions are pretty much impossible to edit from the current UI
(afaik, the DopeSheet editor does not have any way to change them?).
Values outside the 0..1 range produce negative colors, so now we clamp to that
range everywhere. Also fixes improper handling of hue > 2.0 in some places.
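Roughly, the clamping looks like this generic helper sketch (not the exact function used):

```
static float clamp_01(float v)
{
  if (v < 0.0f) return 0.0f;
  if (v > 1.0f) return 1.0f;
  return v;
}

/* Clamp HSV components before conversion so out-of-range inputs can no
 * longer produce negative RGB results. */
static void hsv_clamp(float *h, float *s, float *v)
{
  *h = clamp_01(*h);
  *s = clamp_01(*s);
  *v = clamp_01(*v);
}
```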
That one is an utterly ugly fix really, but unfortunately a proper one
would require some changes to our RNA (or more precisely, pyrna) code,
so that when we subscript a dynamically generated RNA collection, the
item is somehow duplicated (and probably 'assigned' to its py object?)
before the temp RNA array memory is freed...
Usual legacy/history crap in NodeTree code... A datablock's specific
freeing code should never, ever do refcounting management; this is
handled by higher-level code from the BKE_library area.
Nuke away old nodeCopyNode(), much better to use new BKE_node_copy_ex(),
which behaves as expected for the various optional flags that can be passed.
This also removes the need to handle ID refcounting in calling code
(ugly!) and allows us to remove an even uglier name from our codebase! :D
Note that this fixes three related issues actually, that bug was also
affecting copy/paste of nodes, and 'Separate with copy' operator (the
latter being actually fully wrong, since it was not refcounting
anything, not even node->id pointer...).
More or less the same code was being executed twice during ID copying.
It makes no sense to add yet another switch-by-ID-type to handle
specifically runtime data during ID copying; we already have
BKE_xxx_copy_data() functions for that.
For OIIO 2.x we must use unique_ptr. This also required updating the
guarded allocator for std::move to work. Since C++11, construct/destroy
have a default implementation that also works in this case, so we just
leave it out.
rB55c281415b67 removed 'BLENDER_RENDER' as a COMPAT_ENGINE but the
cycles addon checks for this in its get_panels() function.
Adding this back for now.
Reviewers: brecht, billreynish
Maniphest Tasks: T61499
Differential Revision: https://developer.blender.org/D4346
Was a use-after-free during relations update.
Now we do similar dependency graph tags, but without any
extra animation update logic, which was accessing various
pointers.
Was found when looking into a file from T56635.
Using ID_LIGHT or ID_ID for the "Lamp" meaning, "Light" without context
being for 'not heavy'.
That rename of the data-block was not really nice on that side of things :/
Related to T43295.
Tested on an `AMD Radeon HD 7570M`.
It seems that a VBO containing only `unsigned bytes` or `unsigned shorts` can't be read correctly in a shader.
Strangely, if the index buffer repeats the drawing of the vertices (as was done before rBa04dd15193e6) the problem disappears.
The disadvantage of this solution is that the memory size for a selection VBO increases by about 4 times,
but the loss in optimization is negligible.
Thanks to @fclem for pointing out the possible source of the problem and reviewing the fix.
It is supposed to be already evaluated. If for some reason it's not,
doing such direct evaluation will not be reliable anyway (indirect
dependencies, for example).
This fixes an assert part of T61431.
Ensures that object which is set for instance-vert or instance-face
is evaluated prior to metaball. This is because metaball will request
list of instances during evaluation.
This should fix the issue reported in T61431 in release builds. The assert is
still there and is to be addressed separately.
This keymap was used in the old grease pencil and now must be removed.
The keymaps for brush are:
F: Change Radius
Shift + F: Change strength
Ctrl+F -> Removed.
Make Difference the default value for the boolean modifier operation property.
Currently the operation property of the boolean modifier is set to Intersect, which is the least frequently used boolean operation of the three available. It also goes out of sync with the Intersect (Boolean) tool, where Difference is the default operation.
Reviewers: mont29, brecht, sergey
Reviewed By: mont29, brecht, sergey
Subscribers: mont29, brecht, campbellbarton, sergey, billreynish
Tags: #modifiers
Differential Revision: https://developer.blender.org/D4340
This makes it so modifiers are using object transform prior
to the rigid body simulation, and then result of modifier
stack is fed to the solver.
Solves dependency cycle which was happening when object's
modifier was dependent on the modifier transform.
While now it is not possible to change simulation, things
are somewhat more clear and reliable in other ways.
For example, previously the solver was using a derived mesh from
a previous step in time, which causes unfixable simulation
issues (with intersections and such).
Fixes T57589: 2.79 Rigid Body Sim. Does Not Behave The Same In 2.8
Fixes T61256: Compositing scenes causes crash, but rendering separately does not
Fixes T61262: Armature and rigid body crash
Fixes T61346: Rigid body with modifiers incorrect work
This is what modifiers are to use to indicate that they depend
on a transformation of the object itself.
Currently there should be no functional changes, but in the future
this will allow us to easily change the transform operation depending
on whether there is a simulation associated with the object.
There is an issue of hair being completely messed up when
switching to a simulation view layer for Autumn.
Restoring back the code which was re-setting particles on
file load. This will re-set unbaked particles on file load
but this appears to be happening in 2.7 as well.
Cannot reproduce the bugs which were fixed in this area recently,
so maybe it's finally tackled (fingers crossed!).
Currently only a single function was duplicated which isn't so bad,
this change is to allow DNA versioning code to be shared between
dna_genfile.c and makesdna.c.
Blender is typically used maximized or fullscreen,
load maximized instead of attempting to fill the screen bounds.
To load un-maximized use '--window-border' argument.
See D4332
`BKE_modifier_get_evaluated_mesh_from_evaluated_object()` used by
modifiers needing access to other objects' geometry probably slipped under
the radar when cage and final evaluated meshes were added to
BMEditMesh? In any case, we do not need to duplicate (and then free!) a
temp mesh from editdata anymore, and we can even add instead a parameter
to get cage instead of final. Also makes modifiers code a bit simpler.
The render layer name is now always included. Best to keep these consistent,
so that animation denoising and sample merging works the same for both and
tests can be the same. Ref D4311.
It is now possible to adjust the group node background alpha.
The defaults are the same as before, but you can now adjust the alpha
level via the theme preferences (and the alpha value is no longer hard
coded).
Fix T61406: Particles don't render
Consider initial dependency graph evaluation as a file load.
Is still resetting too much, but that we can solve later.
This adds a cycles.denoise_animation operator, which denoises an animation
sequence or individual file. Renders must be saved as multilayer EXR files
with denoising data passes.
By default file path and frame range come from the current scene, and EXR
files are denoised in-place. Alternatively, a different input and/or output
file path can be provided.
Denoising settings come from the current view layer. Renders can be denoised
again with different settings, as the original noisy image is preserved along
with other passes and metadata.
There is no user interface yet for this feature, that comes later.
Code by Lukas with modifications by Brecht. This feature was originally
developed for Tangent Animation, thanks for the support!
Differential Revision: https://developer.blender.org/D3889
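A hedged usage sketch of the operator described above (the keyword arguments are assumptions based on the description, not a verified signature):
```python
import bpy

# Denoise the current scene's frame range in-place
# (file path and frame range come from the scene).
bpy.ops.cycles.denoise_animation()

# Or denoise explicit multilayer EXR frames to a separate output
# (hypothetical paths, '#' padding as used for render output).
bpy.ops.cycles.denoise_animation(
    input_filepath="//render/frame_####.exr",
    output_filepath="//render_denoised/frame_####.exr",
)
```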
Also update relations when modifier texture changes.
Basically same as rB6e00415a85a9, rBca2680aaeb1 but this time for
VertexWeight modifiers
Reviewers: sergey
Maniphest Tasks: T61185
Differential Revision: https://developer.blender.org/D4305
We may want to use 'TEST' BCode in the future for including data
besides thumbnails. This allows negative values to be used w/o
attempting to load a thumbnail.
Currently the preferences have both tweak and drag threshold,
this is confusing because most actions users would consider
dragging use the 'tweak' setting.
Now one drag threshold is used for both, with a maximum limit of half
the button unit-size in case of dragging UI elements.
Also added keys for toggling harden normals,
and cycling through miter types.
Still to do: add some shortcuts for affecting the
spread value for arc miters.
Most artists agree that RGB by default is not as flexible as HSV.
It's just the first time it opens anyway, since it will remember whatever
was set last like it always does.
When one is indirectly linking collections, it is better to add the
collection to the scene than to instantiate its objects into the scene's
master collection. That is much cleaner.
Noted/related to T61141.
Issue is, ob->id.us is not relevant anymore here, since several
collections might be referencing it inside of the same scene, while that is
still only one usage from the user's perspective...
Note that for now we are just counting scenes instantiating an object,
time will tell whether we need a more refined/complete check (as a reminder,
most [all?] other Object usages are *not* refcounting ones).
Yes, we really can undo an ID deletion now.
However, this requires extra care in UI 'remapping' to new IDs step
(when undoing, we do not fully reload the UI from saved .blend).
Otherwise, new UI (i.e. one from saved .blend file) might reference
IDs that were freed in the old bmain (the one before the undo), we cannot
use those to get ID name then, that would be a nasty use-after-free!
To prevent this, we generate a GSet of all valid ID pointers at that
time (i.e. those found in both old and new Main's), and ensure any ID
we try to remap by its name is in that GSet. Otherwise, there is no
possible remapping, just return NULL.
While ideally we could have a complete detailed list of deleted IDs,
that would require more work UI wise, think for now we can live with
just a rough summary.
Related to T61209.
make_duplis_collection() depends on the collection object cache,
which was not freed upon object viewport disable change.
The best way to reproduce the bug was to instance the default
collection, disable the cube, save and re-open the file.
Now even if you set the original cube to be enabled, you wouldn't see
the instanced one until you forced the collection cache to be freed (e.g.,
by changing a collection's disable state).
Fix T61289: Emitting particles from instances not working properly
The first issue has been re-introduced by a code which was dealing
with missing hair after opening the file. That was re-setting all
particle systems all the time because modifier flags were not copied
back to the original. This made every modifier run seem like an
initial file open.
Now we copy flags back to the original modifier. But also we are
trying to not do any resets unless needed in that case. This way
we can preserve in-memory caches.
The other part of the change is related to re-setting the particle system
if the number of mesh elements changed. But we only do it if the
modifier has been already evaluated once.
Caused by an error in rBe65784a0519e.
And since we are going over loop triangles anyways, we can remove the
part checking for quads [a remainder of the tessface era] entirely.
Reviewers: campbellbarton
Maniphest Tasks: T61309
Differential Revision: https://developer.blender.org/D4324
Mainly 'X' icon for Delete, which was already on modifiers and constraints,
but not for objects. Select icon for object selection and refresh for library reload.
Show One Level, Hide One Level, Show Active, Show Hierarchy were taking
four items on the context menu when they are not accessed that often
from the context menu (they all have shortcuts).
The "View" name is used to match other editors View menus.
Mainly the first of their category or when they need to be highlighted:
* Delete
* Enable Viewport/Render (match icons to make a visual connection)
* ViewLayers (it's used pretty often so it makes it easier to find)
Also group Show/Show All, Hide/Hide All together.
Originally, when transferring all source data layers to destination
meshes, code would abort in case destination did not have all needed
layers, and creating them was not allowed.
Now, it will instead transfer data to layers that exists, merely
skipping source ones for which it cannot find a matching destination.
Some developers were using undo for their scripts, this allows for undo
pushes in background mode, however - as with 2.7x, undo isn't
initialized at startup in background mode.
See replies to T60934
As far as we know this wasn't widely used, and relied on storing data
in the temp dir, which may be cleared on reboot.
More generally, alternative behavior for a core area like file IO
is not something to keep if it has unresolved issues.
See D4310 for details.
That guy was still from the era where the only way to remove an ID was to
save & reload the .blend file. Use modern code instead, should also be
much, much more efficient in big production files.
And that’s another nice occasion to add/test new batch ID deletion code, too. ;)
Related to T61276 Make Single User unlinks original object.
This avoids edges covering a part of the vertices.
This comes at a (very minor) perf cost as vertices can cover some edge
pixels and early-discard them with the depth test. But this only happens
in artificially dense meshes and is not a real problem for common cases.
This makes sure only one line is drawn per edge.
It makes the function mesh_create_edit_loops_points_lines() non-thread-safe,
but this is fine as of now because nothing is multithreaded at this point.
Also this is the only function using this flag, so it might be OK.
The side effect is that we don't need to use depth test in edit mode
overlay so the masking artifact will not appear.
This makes it (theoretically) compatible with all supported hardware, with
consistent results.
Also we now draw the lines with analytic anti-aliasing instead of relying
on MSAA (which offers less benefits in our case).
The remaining aliasing comes from edges cut in half by the mesh which is
not rendered with MSAA. Hopefully this is not too distracting and only
happens if the face is almost parallel to the view.
This is backporting a change from 2.8, which may help solve crashes when
activating a window. Previously bringTabletContextToFront() would call
tablet API functions with NULL tablet, which may crash on some drivers.
Ref T60811.
Is available when doing "View -> Show Metadata". Will draw all the
fields which are not part of the stamp at the bottom of the image.
A couple of hand-picked fields are ignored, since those are not very
useful to be seen.
Aimed to ease review of rendered shots.
Reviewers: brecht
Reviewed By: brecht
Subscribers: fsiddi
Differential Revision: https://developer.blender.org/D4316
Is a missing piece of do-versioning code in e3d31b8dfb.
Unfortunately, at this point it is rather tricky to tell old and new
hair dynamics modifiers apart. Probably easier to accept possible
breackage of the files which were created in 2.7 and saved during
2.8 which had incomplete do-version code.
- studiolights were not installed to their proper subfolder (thus not
recognized on blender restart)
- they were actually loaded with a wrong path which could lead to
deletion of the original source file when uninstalled again
This is a follow-up to rBb44e6f2b3d32, for some reason that issue was
not detected back then: in some cases, DEG_iterator_objects_next() will
free the temp list of dupli objects once it does not need it anymore,
thereby freeing the dupli_object_current memory of the DEGObjectIterData
that we are storing in the RNA_Depsgraph_Instances_Iterator struct.
And yes, the ugliness of that hack is getting even better now...
Found while trying to export dupliobjects with FBX...
Note: We still change it to the collection we are directly isolating/making
visible and its parents (in the case of isolating). But no longer its children.
Feedback and discussion on D4011. The motivation is that if we don't keep those
locked the disable state becomes useless.
Was introduced by point cache reset on manual edits. Needed to
split evaluation and introduce an explicit init key, which allows
to hook up relations which are "monitoring" manual edits to the
channel.
Noticed while looking into T61190.
Was visible with certain configuration only, is a numeric
instability caused by degenerate ray direction.
Not sure the distribution is correct, just fixing crash
which was caused by usage of watertight intersection.
This is a request by the studio here to make it possible to see how
many samples were used to render a specific shot or a frame. It is a
bit more tricky than simply stamping number of samples from a scene
since rendering is happening in multiple ranges of samples.
This change makes it so Cycles saves configured number of samples for
the specific view layer, and also stores start sample and number of
samples when rendering only a subrange of all samples.
The format used is "cycles.<view_layer_name>.<field>", which allows
having information about all layers in a multi-layer EXR file.
Ideally we can store simplified "cycles.<field>" if we know that there
is only one render layer in the file, but detecting this is somewhat
tricky since Cycles operates on an evaluated scene, which always has a
single view layer.
The metadata is shown in the Metadata panels for clip, image and
sequencer spaces.
Example screenshot which shows the metadata:
{F6527727}
Reviewers: brecht
Reviewed By: brecht
Subscribers: fsiddi
Differential Revision: https://developer.blender.org/D4311
This is the internal implementation, not available from the API or
interface yet. The algorithm takes into account past and future frames,
both to get more coherent animation and reduce noise.
Ref D3889.
Prefiltering of feature passes will happen during rendering, which can
then be used for denoising immediately or written as a render pass for
later (animation) denoising.
The number of denoising data passes written is reduced because of this,
leaving out the feature variance passes. The passes are now Normal,
Albedo, Depth, Shadowing, Variance and Intensity.
Ref D3889.
Most of this code is deprecated for many years already and does not
work at all in Blender 2.8.
Reviewers: brecht, aligorith
Differential Revision: https://developer.blender.org/D4271
The old function was numerically very unstable for 2 reasons:
computing the square and then subtracting the results.
In the example in T60935 all precision was lost and it returned the distance 0
for all points.
I also removed the `depth` parameter since it wasn't used and computing
it would have made the code more complicated.
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D4308
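To illustrate the cancellation being described, a minimal sketch (generic math in Python, not the Blender function that was replaced; `direction` is assumed normalized):
```python
import math

def dist_unstable(p, origin, direction):
    # sqrt(|v|^2 - (v . d)^2): squaring and then subtracting two large,
    # nearly equal numbers cancels all significant digits for far points.
    v = [a - b for a, b in zip(p, origin)]
    vv = sum(c * c for c in v)
    t = sum(c * d for c, d in zip(v, direction))
    return math.sqrt(max(vv - t * t, 0.0))

def dist_stable(p, origin, direction):
    # Subtract the projection first, then take the length of the residual.
    v = [a - b for a, b in zip(p, origin)]
    t = sum(c * d for c, d in zip(v, direction))
    r = [c - t * d for c, d in zip(v, direction)]
    return math.sqrt(sum(c * c for c in r))
```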
Need to synchronize simulated frame back to original object.
Solves the lag during transformation, but the amount of floppiness is
lower for some reason. Final animated object behaves the same as
in older Blender though.
rBec3357e03ab1 introduced multi-object snapping.
Seems like this was done without mixed-mode selections in mind.
So code assumed that all selected objects are actually armatures [which
can fail].
In 2.7 this was not a problem, because the code only took the active object
into account, while 2.8 was iterating over all selected_editable_objects.
Now just iterate over objects in pose mode instead.
Reviewers: brecht, dfelinto
Maniphest Tasks: T61051
Differential Revision: https://developer.blender.org/D4287
This complicated handling of undo steps in a generic way,
especially when switching between undo systems that stored data and ones
that accumulated changes.
Now each undo system must treat its steps as check-points;
internally it can apply/rewind changes.
This commit also fixes projection paint where the object mode wasn't
following the undo steps.
Needed to fix T61196, supporting clipped back-buffer in the 3D view
which is done outside the draw module.
It was also inconvenient having DRW_shader_* versions of GPU_shader_*
API calls.
- Clipping distances are now supported as a shader configuration
for builtin shaders.
- Add shader config argument when accessing builtin shaders.
- Move GPU_shader_create_from_arrays() from DRW to GPU.
Now collection and objects can be either:
* Disabled for all the view layers.
* Hidden for a view layer but not necessarily for all others.
* Visible for a view layer but not necessarily for all others.
Regarding icons: Whatever we decide to use for the "Hidden for all view
layers" needs to be a toggle-like icon. Because when viewing "Scenes"
instead of "View Layer" in the outliner we should be able to edit the
collection "Hidden for all the view layers" as an on/off option.
The operators are accessible via a Visibility context menu or shortcuts:
* Ctrl + Click: Isolate collection (use shift to extend).
* Alt + Click: Disable collection.
* Shift + Click: Hide/Show collection and its children (objects and collections)
Things yet to be tackled:
* Object outliner context menu can also get a Visibility sub-menu.
* Get better icons for viewport enable/disable.
Note:
* When using emulate 3 button mouse alt+click is used for 2d panning.
In this case users have to use the operator from the menu.
See T57857 for discussion.
Patch: https://developer.blender.org/D4011
Reviewers: brecht and sergey
Thanks to the reviewers and William Reynish and Julien Kasper in
particular for the feedback.
[re-committing]
We still control this in the viewport collections visibility menu. But
now we are actually changing the visibility of the collections, not of
the objects.
If a collection is indirectly invisible (because one of its parents is
invisible) we gray it out.
Also if you click directly in the collection names, it "isolates" the
collection by hiding all collections, and showing the direct parents and
all the children of the selected collection.
Development Note:
Right now I'm excluding the hidden collections from the depsgraph.
Thus the need for tagging relations to update.
If this proves to be too slow, we can change.
This was deliberately disabled since I didn't get the drawing working
originally. It is fully working now.
Note: camera lens widget still needs to be fixed since it still draws it
wrongly.
This will help with upcoming outliner visibility icons with 3 states.
It is done by using the icon to identify the state. If that is not unique
there is no visible difference to users anyway.
When generating a mesh from a curve object, do not generate temp objects
and curves in main, but rather as 'localized' copies.
This is cleaner, and might add a marginal speed-up in some cases (like
rendering thousands of curve objects), since we save some processing.
Note that this is the function behind py API's `Object.to_mesh()` too.
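As a hedged sketch of that Python API (the exact `to_mesh()` signature changed during 2.8x development, so treat the calls as assumptions):
```python
import bpy

ob = bpy.context.object   # e.g. a curve object
me = ob.to_mesh()         # localized evaluated mesh, not added to bpy.data
print(len(me.vertices))
ob.to_mesh_clear()        # free the temporary mesh when done
```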
Use the first combined pass if possible. Not ideal, but better than
showing a completely empty image.
Also, this covers quite a lot of use cases where the movie clip editor is
used to review animation renders of single-layer renders but with
multiple passes.
This adds a new geometry shader (specific to edit mesh for now) that
reproduces the effect of glLineWidth > 1.0, since this is not supported on
all platforms.
This fix could be generalized to other shaders later.
- Add manual depth offset to vertices and edges.
- Revert to plain edge decoration.
- Fix active edge coloring.
- Remove active face display if not in face selection mode.
- Add wide line support.
This is work in progress. Look is not final.
This aligns the VBO data structure used for edit cage drawing to the one
used for normal drawing.
We no longer use barycentric coords to draw the lines and just rasterize
line primitives for edge drawing. This is a bit slower than using the
previous fast method but faster than the "correct" (edge artifact free)
method. This also makes the code way simpler.
This also makes it possible to reuse the position and normal VBOs used for
shading if the edit cage matches the final mesh.
This also touches the UV batch code to share as much render data as
possible. The code also prepares for "modified" edit cage drawing (with
modifiers applied), but this is not enabled since selection and operators do
not work with the modified cage yet.
Quite straightforward: first, convert metadata from file to
stamp data which is stored in the render result, and then
for every requested layer/pass use that as a metadata.
Noted those as missing in XXX comments some time ago; running again over
that code I still see no reason for this missing feature, so now when
doing a full scene copy, including duplication of Freestyle's linestyles
and the scene's greasepencil data, their potential Actions will also be
properly duplicated (as was already the case for the world, and the scene
itself).
This is a slightly more risky commit, as it is very difficult to fathom
all that may happen when localizing IDs. Would not expect any issue
though.
Note that a big TODO remains to fully refactor that ID localization
process (for 'shading IDs'); it's still doing pretty much the same thing as
regular out-of-main copies, but the infamous ntree topic makes it
delicate to handle...
Turns out most of our 'local working copy' cases can use same set of
flags.
Note that this commit adds LIB_ID_COPY_CACHES to all our local meshes
copying; however this is a no-op since that flag is unused during mesh
copying... We may want to add another set of flags without that one at
some point, but for now it would not be useful imho.
No local work copy is expected to need preview data, at least it should
not. Part of copy flags cleanup, done in separate commit in case
something goes wrong here...
Those two first sets of flags should represent some common use cases.
The goal here is to reduce verbosity of calls to BKE_id_copy_ex, and
help make it more obvious the 'common behaviours' of ID copying across
codebase.
Previously, if double-quotes appeared in the KeyingSet.bl_description field,
these would cause a syntax error in the resulting .py script export of the
KeyingSet. Since single quotes are even more likely to appear
(e.g. as apostrophes), we now use triple quotes here.
Unreported bug, noticed earlier when investigating T61010.
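A minimal illustration of the quoting issue (illustrative Python, not the actual exporter code):
```python
description = 'Keys the "important" channels'

# Old style breaks as soon as the description itself contains double quotes:
# broken = 'ks.bl_description = "%s"' % description
# New style: triple quotes survive both ' and " inside the text.
line = 'ks.bl_description = """%s"""' % description
print(line)  # ks.bl_description = """Keys the "important" channels"""
```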
This commit adds a datablock filtering option for cache files channels,
so that a shot with lots of these in addition to standard animation
(e.g. the Spring production files) doesn't become bogged down by them.
Furthermore, these channels now respect the "Only Selected" toggle too.
This feature is intended only for testing,
to automate simulating user input.
- Enabled by '--enable-event-simulate'.
- Disables handling all real input events.
- Access by calling `Window.event_simulate(..)`
- Disabling `bpy.app.use_event_simulate`
to allow handling real events (can only disable).
Currently only mouse & keyboard events work well,
NDOF, IME... etc could be added as needed.
See D4286 for example usage.
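A hedged sketch of driving this from a script started with `--enable-event-simulate` (argument names are assumptions based on the description above; see D4286 for the real example):
```python
import bpy

win = bpy.context.window

# Simulate pressing and releasing the 'A' key.
win.event_simulate(type='A', value='PRESS')
win.event_simulate(type='A', value='RELEASE')

# Hand control back to real input once scripted input is done (one-way switch).
bpy.app.use_event_simulate = False
```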
* Use simple default view transform for color pickers, as Filmic does not work
well for all types of colors. We better handle this with an option and tagging
of colors as emissive or albedo like.
* For solid/workbench we also no longer use Filmic, as there is not enough contrast
and it's not really needed since this is not physically based lighting.
* For lookdev always take into account the view transform and look. Other view
settings like exposure are only taken into account if scene lighting is used,
since these are often dependent on scene light intensity.
Fixes T61022, T57649, T59363.
Now, when removing points from a cyclic stroke, the last island is joined with the first island in order to fill the gap of the cycle.
This change affects not only the cutter, but any delete process in cyclic strokes.
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading
especially when splitting files.
It's more useful to git-blame to find out who has developed the code.
See P901 for script to perform these edits.
For years it was already how we evaluate modifiers. There is no
need to go more granular than is actually needed. And no need
to use some obscure prefix for the operation.
Was happening due to missing relation from geometry to
transform component. Did not happen in old dependency
graph because that one could never evaluate geometry
prior to transform.
Scopes were moved to properties area, so need to adjust
the optimization part of tagging.
Ideally, tagging will always happen (and happen for free)
and then drawing code will update scopes when they are
actually displayed. But this is outside of the scope of
this fix since requires some design changes.
The issue was caused by dependency graph resetting particles
when evaluating copy-on-write version of object. Solved by
only doing reset from dependency graph on user edits.
Other issue was caused by modifier itself trying to compare
topology and reset particles when number of vertices or faces
changed. This isn't reliable, since topology might change even
with same number of elements. But also, since copy-on-written
object initially always has those fields zeroed, the reset
was happening on every F12.
The latter issue is solved by moving reset from modifier stack
to places where we exit edit/paint modes which might be changing
topology.
There is still weird issue of particles generated at some
weird location after tapping tab twice, but this is not a new
issue in the 2.8 branch and is to be looked at separately.
Was missing synchronization of current frame to the original one,
which is one of the issues why point cache does not properly reset
on edits.
Also clear recalc flag on original particle system.
Ideally we need to get rid of recalc on a particle system, since
that is not really covered by tagging system of dependency graph.
Some summary of changes:
- Don't use DEG prefix for types and enumerator values:
the code is already inside DEG namespace.
- Put code where it locally belongs to: avoid having one
single header file with all sort of definitions in it.
- Take advantage of modern C++11 enabled by default.
Was happening when value of one shape key was driving property of
another shape key of same datablock.
Solved by making shape key blocks properties more granular.
Makes it more explicit whether RNA property is used as a source
dependency for something else, or whether some other dependency
is being hooked up to evaluate that property.
This is necessary when adding a new keyframe to a fcurve
that also has a driver.
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D4278
This removes a bunch of animation/driver evaluations and recalc flags that
should be redundant in the new depsgraph, and were incorrectly affecting
the evaluated scene in a permanent way.
Still two cases that could be removed if the depsgraph is improved, in
BKE_object_handle_data_update and BKE_cachefile_update_frame.
For physics subframe interpolation there are also still calls to
BKE_object_where_is_calc that should ideally be removed as well, though
they are not known to cause keyframing bugs.
Differential Revision: https://developer.blender.org/D4274
Some features are incompatible with multithreading and reliable evaluation
of dependencies. We are now removing them as part of a bigger cleanup to
fix bugs in keyframing and invalid animation evaluations.
* Dupliframes have been removed. This was a hack added before there were
more powerful features like the array modifier.
* Slow parent has been removed, never worked in 2.8. It was always
unreliable for use in production due to depending on whatever frame was
previously evaluated, which was not always the previous frame.
* Particle instanced objects used to have their transform evaluated at
the particle time. Now it always gets the current time transform.
* Boids can no longer do predictive avoidance of force field objects,
but still for other particles.
Differential Revision: https://developer.blender.org/D4274
While this is harmless, it did cause T55399 in the past.
Sculpt adds its own undo steps, so don't request the operator type
to do it too.
This is consistent with other sculpt operators.
Issue actually exists since ages, probably 2.7x update system forced all
armature proxies to be fully refreshed after an undo?
In any case, proxy_from should only be reset for 'local' proxies (i.e.
directly linked datablocks), not for linked proxies...
Only keep this function when drawing, to avoid COW overhead that reduces performance.
After some changes I did some time ago, the use of the original ID was not required and only added depsgraph overhead and problems.
This change solves the problems with updates in render mode.
Related to T57484 and the changes requested by Sergey.
Kind of funny to see that this has been missing presumably since the
first version of library linking in Blender, and only gets noticed now.
Then again, that was not really a critical issue, iirc write code
ensures all libraries directly used get properly written, even if flags
are incorrect.
When using `--cycles-resumable-num-chunks N` to render a subset of the
samples, having N close to the total number of samples causes rounding
issues.
For example, a file configured for 250 samples and 150 chunks should
have 1.6666 sample per chunk. The old code rounded this to 2 samples per
chunk, which would result in too many samples being rendered. When
rendering a single chunk this doesn't matter much, but when larger chunk
ranges are rendered with `--cycles-resumable-start-chunk` and
`--cycles-resumable-end-chunk` the rounding errors start to add up.
By multiplying with the number of chunks to render first, and only round
to integers after that, this issue is solved. In the above example,
rendering 3 chunks will correctly render 5 samples rather than 6.
When the requested number of chunks is larger than the number of samples
there will be duplicate samples (that is, sample N appearing both in
chunk M and M+1). In this case a warning is printed to stderr.
This is needed for T50977 Progressive render: use non-uniform sample
chunks.
Reviewed by: sergey
Differential Revision: https://developer.blender.org/D4282
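An illustrative sketch of the rounding fix (made-up names, not the actual Cycles code):
```python
def samples_in_chunk_range(total_samples, num_chunks, chunks_to_render):
    # Old: round the per-chunk sample count first, then multiply.
    per_chunk_rounded = round(total_samples / num_chunks)       # 250 / 150 -> 2
    old = chunks_to_render * per_chunk_rounded                  # 3 chunks  -> 6

    # New: multiply by the number of chunks first, round only at the end.
    new = round(chunks_to_render * total_samples / num_chunks)  # 3 chunks  -> 5
    return old, new

print(samples_in_chunk_range(250, 150, 3))  # (6, 5)
```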
Add the ability for undo steps to request memfile undo step added after
them, useful for mode switching, where we need the data to exist for
undo to enter the mode.
This is only partially working, because some bAnimListElem items do not
have any ID pointer set (for some mysterious reason...), notably the
'group' ones.
Will re-assign to @aligorith for that, this code is rather complicated
and hard to follow (with all those macros ;) ).
Support the alpha channel use of the object color in solid mode.
The Transparency effect is still using the Xray algorithm and not
true Alpha blending.
We've had many reported crashes on Windows where we suspect there is a
corrupted OpenCL driver. The purpose here is to keep Blender generally
usable in such cases.
Now it always shows None / CUDA / OpenCL in the preferences, and only when
selecting one will it reveal if there are any GPUs available. This should
avoid crashes when opening the preferences or on startup.
Differential Revision: https://developer.blender.org/D4265
This adds the possibility of having certain materials transparent in solid
mode. The option is (for now) per material only and thus only shows in
material color mode.
This uses the same rendering technique as Xray mode.
Note that objects are not considered transparent for selection with this.
- Add noise to remove undersampling artifact
- Create 2 mipmaps to the scene color buffer in order to have bigger blurs
- Replace blur2 with a 3x3 median filter that doesn't dilate the highlights
- Use temporal accumulation to remove noise
For some reason all of this exacerbates some bleeding issues happening on
far foreground elements from near foreground elements. The actual problem
was already happening before but was not really noticeable. It needs some
more work to be fixed.
The inset operator uses 0.01 as default for the inset.
When the face is very small, this default is very confusing (see T60226).
The simplest fix seems to be to just use 0 as default.
This is similar to the extrude operator which uses 0 as default as well.
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D4273
The subdivision method for getting corner shapes has a fullness
parameter which had been set by eye before. This change uses the fullness
found by an offline search process to best match the superellipsoid
octant in the cube corner case (except that the cube corner case is still
handled by other code). This somewhat improves the look of cube corners
with inner arc miters, however.
Currently names are used for edit-mode undo-steps,
any changes to Main ID names cause lookup failure (crashing).
This commit ensures any undo steps that use ID lookups have the same
mem-file undo state loaded that was used to encode the steps.
Renaming also has an undo push added (last commit).
Some more tests showed no issue, so now feeling reasonably confident.
Old, 'safer' one remains available through setting debug value to 666,
for a few more weeks.
This brings macOS on par with Windows and Linux. It uses the OpenMP library
added to our precompiled libraries.
Custom flags are set because FindOpenMP from CMake below 3.12 does not support
AppleClang, and more recent versions do not work with our custom directory
location either.
Differential Revision: https://developer.blender.org/D4257
This is a quick workaround to prevent the crashes with multi-view.
The ultimate solution can be plenty, and would turn around refactoring
Cycles to handle multi-view internally, so that depsgraph could be freed
before render with no problems.
Reviewers: brecht, sergey
For the complete discussion check: https://developer.blender.org/D4239
This was very simple to reproduce, just turn on Freestyle and press render.
Now to the truth of things. Most (if not all) of
~BlenderStrokeRenderer() can be removed. I believe this was done back
when freestyle was using G.main, and since we gave freestyle its own
main we can just leave the cleanup for later.
I will leave this for freestyle maintainers to think over though.
Note: There is a chance this was the issue reported on T57890. I will
wait for the reporter to confirm this as fixed though.
This commit adds another optional check (when `--debug-io` is set) on
write .blend process, to check and ensure all shape keys have their
'from' pointer properly set to their respective user ID.
This is intended to be used as a debugging tool mostly (to try to detect
when/why some of those pointers can become NULL).
For now, it also systematically performs the same checks/fixes when loading a
.blend file, to fix all broken ones laying around. Later we might move
that usage to a do_version instead, but for now I think it's safer to
always perform it (and it's a rather cheap process anyway).
Does not make sense to keep that with BLO_writefile.h, this can also be
used by read code, and some other parts of Blender (like ed_undo.c
currently)...
Render-type panels are only shown when the relevant type is active anyway.
Saves a click especially when using object or collection as render, since
you _have_ to set an object or collection to use it.
- Compute samples positions on CPU.
- Use 3x3 Box blur instead of 2x2.
- Implement bokeh parameters.
With this commit, DoF performance cost is almost negligible.
The quality is a bit lower than before but can be improved. Also, big
circles of confusion are now supported (up to 200px).
Cost is ~1.25ms on AMD Vega with a 2560p viewport (larger than full HD) and
a pretty shallow depth of field.
Coc downsampling and dilation is not used anymore for now (commented).
The algorithm used is borrowed from :
http://tuxedolabs.blogspot.com/2018/05/bokeh-depth-of-field-in-single-pass.html
This makes it possible to have a decent blur for foreground over defocused
background in one pass only.
The algorithm is using a gather approach that is much faster
than the scatter approach used in Eevee. This makes it possible to have
custom bokeh shapes (not implemented yet) which would be impossible with
a separable gaussian technique.
The blur is done in 2 steps. The first one defines the shape of the bokeh
and the second fills in the undersampling.
A downsampled max-CoC tile texture speeds up the gathering process.
Was generating INVALID_FRAMEBUFFER here instead of a failed texture alloc.
Add safety asserts in gpu_texture.c and clamp minimum size to 1 inside
GPU_offscreen_create.
Artists requested showing the stroke while drawing a new stroke using a material with fill color only, because otherwise it's very difficult to see the stroke.
Now the stroke is always shown, but using the fill color, not the stroke color, because the latter may not be set.
The issue was caused by the hair step checking whether
particle system needs to have path cache. This was done
in a way which was traversing an entire scene and was
checking every object for particle instance modifier.
Ideally, path cache should be its own operation in the
dependency graph. Or at least, this flag should be set
by dependency graph builder, similar to curve's path.
Since the code was broken already (it was only checking
first particle instance modifier), it is easier to
remove the buggy code, solve the crash and move on for
now.
If this causes an issue, simply set particle system to
be rendered as path.
Fixes crash with playback of Spring scenes.
This reverts commit 1d908bffdd.
Enough uses of repeat last expect skip-save properties to be set,
transform being the most obvious example T60777#605681.
I wanted to avoid operators having account for two kinds of 'skip-save'
but this may be unavoidable.
Originally I wanted to avoid adding draw manager specific ifdef's all
over generic shaders however this isn't needed in so many places.
Also there are shaders that are only used by the draw manager so
duplicating them only to have the original unused doesn't make sense.
The integrator maximum number of closures was not set properly for the CPU/mega
kernels to match the actual available memory. Before relatively recent code
refactoring we did not use this value in those kernels so it worked fine.
This can now be found in the sidebar View panel.
- uses existing 'lock_camera_and_layers' but renames the property to
'use_local_camera'
- uses RNA_def_property_boolean_negative_sdna to flip the value
- remove the local view code in
rna_SpaceView3D_lock_camera_and_layers_set
- update Python code
- update Addons code will be separate commit
Fixes T60756
Reviewers: billreynish, brecht
Maniphest Tasks: T60756
Differential Revision: https://developer.blender.org/D4247
Do not see why flags from loaded file should be skipped when we do not
load UI, this is not related to UI...
Think we can keep flags from file in both cases, should this raise some
other issue we'll just have to fine tune masked flags in each case
separately.
Even though it makes sense logically to have displacement actually displace
the mesh, this is causing a lot of confusion for existing users that are used
to the previous behavior. Further, Eevee does not support displacement
yet, and the discrepancy between the viewport and final render is problematic.
The operator relies on a 3D View and was failing from the Topbar and Properties
Editor. Now it tries to find the biggest 3D View and uses that.
Reviewers: brecht
Maniphest Tasks: T60133
Differential Revision: https://developer.blender.org/D4215
Use zoom steps lower than 1. This allows zooming out of a high-res
image. For example, before it was not possible to make a 4K image
fit on a FullHD monitor.
Also, don't force zoom to be above 1. Not sure why that was done,
but it disallows zooming out.
It is still not possible to zoom in higher than the window size
allows. In order to support this the player needs to be refactored
in a way that allows to decouple zoom from window size.
Fixes T59177: Animplayer extreme zooms in when playing rendered animation
There were two problems:
1. The scopes were only updated when the "Scopes" category is active,
but this category has been removed in Blender 2.8.
2. The scopes moved from the TOOLS to the UI region.
However the update-code still searched for the "Scopes" category
in the TOOLS region.
Both problems are fixed with this commit:
1. Scopes have their own category again.
2. The update code is in the correct draw function now.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4245
The problem was related to cache data that was removed from memory before the FX finished. This could affect any FX.
Now all the information is saved in the FX itself, in a runtime struct, to keep memory safe when the cache memory is released.
DRW_shader_get_builtin_shader can replace GPU_shader_get_builtin_shader
when we need to support clipping.
Use this for loose point & wire drawing in object mode,
clips edges in lattice edit mode.
PROP_SKIP_SAVE is often used as a way to detect the difference between
adjusting options from the redo panel and initial execution.
Repeat last operator was executing with skip-save properties set,
preventing operators from initializing them based on the context.
Fixes T60777.
This helps to generate cleaner topology and define sharp features for dynamic
topology. Best used on relatively low-poly meshes, it is not needed as much
for high detail areas and has a performance impact.
Differential Revision: https://developer.blender.org/D4189
It is possible that the context does have selected_sequences, but
set to None. In this case getattr() will return
None, breaking the intended logic.
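The defensive pattern this implies, as a small sketch (illustrative, not the exact UI code):
```python
def selected_sequences_safe(context):
    # getattr's default only applies when the attribute is missing entirely;
    # if it exists but is None we still need the fallback, hence `or []`.
    return getattr(context, "selected_sequences", None) or []
```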
Depending on area size, the scrollbar covered the bottom of the text,
with the extra it will only cover the padding at worst.
Differential Revision: https://developer.blender.org/D4207
* Add threshold for minimum amount of mouse movement for dragging to
get activated.
* Limit angles at which dragging is considered an action, do nothing if
mouse does not clearly move up/down/left/right.
* Increase action zone size vertically.
Differential Revision: https://developer.blender.org/D4227
There are probably many more cases in which the menu looks a little different.
However, I don't know them all and it's too easy to break something accidentally here.
Maybe a user could try the different combinations of object types and check if there are entries that should not be there.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4240
For now this is not part of copy-on-write, and needs extra animation
evaluation.
Reviewers: sergey, brecht
Maniphest Tasks: T59939
Differential Revision: https://developer.blender.org/D4140
Allows users to select a font for text strips in the video sequence editor.
Related: 3610f1fc43 Sequencer: refactor clipboard copy to no longer increase user count.
Reviewed by: Brecht
Differential Revision: https://developer.blender.org/D3621
The clipboard is not a real user and should not be counted. Only on paste
should the user count increase.
This is part of D3621, and was implemented by Richard Antalik and me.
Cycles shows first the render, and then the viewport settings.
One could argue that EEVEE's main setting is the viewport one.
But that is silly. If we need an extra setting for the lookdev mode so be it.
But EEVEE should be treated as an engine just as Cycles.
Also, removed the " Samples" bit from their labels since they are under
the Sampling panel.
This is more like a band-aid than a real fix actually, real fix would be
to understand why rendering smoke requires auto texspace to be ON
(afaict, this was not the case in 2.7x)...
But I've already spent way too much time on this issue, at least now we
get better situation than before (i.e. smoke with adaptive domain works
well even when orig domain mesh has autospace flag disabled).
Not sure why that was that way (can't remember any good reason at least,
so assuming this is a dummy mistake from own rB33cbcd73448f), this
should be done in any case.
Also includes some minor refactoring: use guard clauses instead of nested conditionals.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D4238
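For illustration, the guard-clause style mentioned above looks roughly like this (generic example, not the actual Blender code):
```python
def process(item):
    print("processing", item)

def handle_nested(item):
    # Nested conditionals: the interesting code ends up deeply indented.
    if item is not None:
        if item.is_valid:
            process(item)

def handle_guarded(item):
    # Guard clauses: bail out early and keep the main path flat.
    if item is None:
        return
    if not item.is_valid:
        return
    process(item)
```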
Free the BVH tree immediately along with the mesh, otherwise we might access
invalid mesh data.
Differential Revision: https://developer.blender.org/D4201
After rename is done we need to make sure all copies of
corresponding datablocks are updated in all dependency
graphs: otherwise bone will have a new name, but animation
will still be using an old one.
Index files used by emacs, vim and others, for autocompletion and
searching; generated by etags, universal-ctags and others.
Differential Revision: https://developer.blender.org/D4208
The issue was caused by intermediate DerivedMesh being created with
scene's Simplify settings taken into account. This is what happens
when one area makes implicit decisions based on whether the passed Scene
pointer is not NULL.
Made it so ignoring simplification settings is an explicit flag,
which makes it easier to follow what's going on.
This is actually a workaround for the crash in OpenSubdiv.
Topology refiner will have a crash when special conditions
are met:
- Refiner is configured to use infinitely sharp patches.
- Refinement happens for the level 1 (which we call Quality 1 on
Blender side).
- Mesh has non-quad faces.
The workaround is to force refinement to happen to level 2 (or
quality 2 on Blender side) when those conditions are met.
Later on with the next OpenSubdiv update we can remove this
workaround, since there was work done on OpenSubdiv side to
deal better with such configurations.
The modifier will now be somewhat slower, but this will be
compensated with upcoming topology cache enabled by default.
The workaround is done when initializing settings, so the
comparison of topology refiner settings is happening without
any extra workarounds there.
Add 'G_draw' for all draw manager globals,
avoids adding extern to each file.
Connection between `ts` and `globals_ubo` wasn't obvious,
now called `G_draw.block` & `G_draw.block_ubo`.
While verbose, this is a more flexible way to construct shaders.
Libs & defines can be optionally included for each shader type
which was previously done with inline string creation.
Not only were those often duplicating already existing
BLI_math stuff, but they were also used to hide implicit type
conversions...
As usual this adds some more exotic inlined vector functions (one of
the rare cases where I really miss C++ and its templates... ;) ).
Will document the new options in release notes, then in manual.
Still a bit of work to do on the bulging shape that appears
on cube corners if using arc inner miters, but will do that later.
Also need to do something smarter in clamp overlap.
Reported by @pepeland.
Adding missing events on the first point was breaking the guide behaviour.
Also, updated Ckey so it always defaults to Circular mode when guides are off.
Instead of doing manual ray-plane intersection we use normalized positions
of the grid mesh and apply scaling after interpolation so that we keep
good precision even at really far distances.
Precision is now two orders of magnitude better and does not produce the
same kind of artifact at lower clip start values.
This commit also cleans up the implementation.
Fixes T58918 Grid not appearing correctly at low clip start in 2.8
Memfile undo isn't compatible with sculpt or edit-mode.
This didn't work in 2.7x, so best disable memfile undo for now in
situations where it's going to lose data or crash.
Before that only normal component was averaged, which is not
really correct.
Unfortunately, the new code is somewhat slower due to more
involved math to deal properly with non-quad faces, but the
plan is to move averaging from runtime to edit time. This
means that mdisps will always be continuous around the edges
and no averaging on every frame change of animated character
will be needed.
Avoids nasty code all over where such math is required, and
compilers can easily deal with such a situation.
Don't prefer questionable micro-optimization which comes at
the cost of nasty actual logic code.
The idea is to run reshaping for every boundary vertex
of a grid rather than trying to copy boundary grid
elements.
While this is somewhat slower, this avoids all this
tangent flipping magic, which tends to be rather tricky
and fragile.
The boundary copy code was not dealing correctly with flipping
tangent vectors, hence causing discontinuity in the final
positions.
Now we only copy boundaries for quads, where we know how to
deal with tangent vectors and where we know that this is
needed.
A clearer solution could be to change the code in a way that
handles displacement grids of quads in the same way
as is done for non-quad faces.
Use normal_quad_v3 instead of normal_tri_v3 and compute the mean of all
corner distances during frustum plane extraction.
Fix T58243 Flickering of viewport when rotating and zooming
Having clipping limit selection and tools is confusing when not visible.
Disable on load until it's supported
(doing this via ifdef's isn't practical).
Fixes T59580
While changing RGBA or color wheel didn't add undo steps,
HSV and Hex values did.
Disable undo for these button types since an undo push happens when
exiting the picker.
The problem here was that when a render result is allocated, the standard render passes are added according to the
pass bitfield. Then, when the render engine sets the result, it adds the additional passes which are then merged
into the main render result.
However, when using Save Buffers, the EXR is created before the actual render starts, so it's missing all those
additional passes.
To fix that, we need to query the render engine for a list of additional passes so they can be added before the EXR
is created. Luckily, there already is a function to do that for the node editor.
The same needs to be done when the EXR is loaded back.
Due to how that is implemented though (Render API calls into engine, engine calls back for each pass), if we have
multiple places that call this function there needs to be a way to tell which one the call came from in the pass
registration callback. Therefore, the original caller now provides a callback that is called for each pass.
Some of Eevee's Bloom defaults are not very good for physically based rendering. This patch addresses this issue.
This picture shows one of the problems with current default. Bloom looks very foggy:
{F6280495}
Even worse, light emitters much dimmer than the Sun can make everything equally hazy if Clamp is set to 1.0 and intensity to 0.8 (the current defaults). Artists often forget to adjust the Clamp value and do not know what value to use for realistic intensity. Also, currently neither Clamp nor Intensity has a good UI range. This is why Eevee renders often end up very hazy and the bloom often does not look right.
The bloom effect plays an important role in helping to distinguish between bright and relatively dim light sources. With the current defaults this is broken because Clamp is set to 1.0. Also, it cannot be disabled by setting it to 0 as expected. This patch fixes this and sets it to 0 by default. If users need to clamp, they can do so easily with a UI range up to 1000. This range is good enough for most cases and provides enough precision to control lower values, and the highest value helps to limit bloom from the Sun if necessary while leaving most other light emitters untouched. If needed, much higher values for Clamp can be entered manually, up to 100000. 10000 still affects the Sun, but the 100000 upper limit allows clamping anything that is much brighter than the Sun if the user needs to limit bloom in such cases (for example, a bright explosion in the sky or anything else very bright).
I propose a new default for bloom Intensity of 0.05, and a UI range that suggests realistic values. Bloom Intensity > 0.1 is not realistic for a clean lens, but the user can manually enter much larger values if needed.
For comparison, here is my own photo with and without bloom caused by the Sun (in the second photo the Sun was occluded by an object).
{F6280500}
{F6280492}
In real life bloom is much more subtle and does not look hazy. With Clamp disabled, of the 0.1, 0.05 and 0.025 values I have tried, 0.05 looks most similar to the photo. Here is a test render with and without bloom, with the Sun in a position similar to the photo:
{F6280496}
{F6280494}
Using a 27x27 color probe I compared lightness below the horizon under the Sun. In the images rendered by Eevee the lightness difference was 17. In the photos the lightness difference in a similar place was 11. I then compared the leftmost spot (also below the horizon) and the lightness difference was approximately 2 between the two photos and 1 between the rendered images. In other words, with these settings the bloom effect is neither too strong nor too weak. Visually it may seem like decreasing bloom intensity may increase photorealism, but then the bloom effect would be too localized even for the Sun.
Besides this single test, I tested many other scenes as well, with and without the Sun, with different HDRIs, and as far as I can tell an intensity of 0.05 turned out to be a good default - it produces bloom strong enough to be noticeable and not too hazy.
In Cycles the shutter default value is 0.50, so for consistency set it to 0.5 by default in Eevee too. Besides, 0.5 is the typical standard for real cameras, and values higher than 0.5 are usually needed only if very strong motion blur is desired.
Here is summary of all changes:
Bloom Intensity: 0.8 > 0.05
Bloom Intensity UI range: 0-10 > 0-0.1
Bloom Clamp: 1.0 > 0.0 (disabled by default)
Bloom Clamp manual range: 0-1000 > 0-100000
Bloom Clamp UI range: 0-10 > 0-1000
Shutter: 1.0 > 0.5
This patch is related to the discussion in this thread, there are more examples of what bloom will look like with 0.05 intensity by me and others:
https://devtalk.blender.org/t/eevee-needs-to-have-physically-based-defaults/4700
Reviewers: fclem
Reviewed By: fclem
Subscribers: pablovazquez, billreynish, rboxman
Tags: #eevee
Differential Revision: https://developer.blender.org/D4212
This is in order to make the API more multithread friendly inside the
draw manager.
GPU_shader_get_uniform will only serve to query the shader interface and
not do any GL call, making it threadsafe.
For now it only prints a warning if the uniform was not queried before.
While from a strict consistency point of view it could make sense to
return identity matrix for non-instance items, it can be very handy to
get that info (common to both instances and regular objects) directly in
all cases.
This uses the same command as regular hierarchy delete, and is only
activated when debug value is set to 666 for now.
Here on file from T60419, it gives about 20% speed-up (from 5.5s to 4.4s).
This further changes the preferences organization, to avoid grouping unrelated
settings together. With more sections we can also expand more panels by default,
making it possible to quickly go through sections and see the settings of each.
Panels with less used settings are still collapsed by default, to keep all panel
headers visible without scrolling.
Differential Revision: https://developer.blender.org/D4216
This is still not fully correct, since the event loop is blocked by GHOST
and no timer events are happening for animation while the mouse is still.
But for the most part it looks ok.
When using the Alt mode to draw closed strokes, if the color had the stroke disabled, the stroke was not visible while drawing.
Now, it's visible while drawing, but it's hidden again when the stroke is finished. To display closed strokes, enable stroke mode in the material or enter edit mode.
There was no documentation at all, some very bad practices (like using
G.debug_value > 0 as some sort of global debug print switch), and even
an overlapping use of '1' value...
Also, python setter did not check for valid range (since this is a
short, not an int).
That one was used to allow specifying in system console a new path for
missing libraries, when loading a .blend file.
We now have a much more easy and user-friendly way of repairing missing
linked datablocks/libraries, so this is not needed anymore.
The main idea is to remove the IDs to be deleted from Main, to avoid looping
over them to remove usages of other deleted IDs (this is the most expensive
process in ID deletion, by far).
Speed improvements when deleting a large amount of IDs from a Main
containing a lot of them are quite significant; some examples for Objects:
* Removing 1k from 10k: 32% quicker (2.5s to 1.7s).
* Removing 10k from 20k: 60% quicker (59s to 23s).
* Removing 20k from 20k: 99.5% quicker (82s to 0.4s)!
Note however that this process is more risky/touchy, since we by-pass
some safety checks from regular ID removal here.
So we will only give access to that code from the Python API for now (in a
separate commit), so that it gets really tested. Also still need to
think about how to hook it up in the UI (probably mostly for the Outliner),
since we often do higher-level operations there...
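For illustration, a minimal sketch of driving the batch path from Python, assuming the API gets exposed as `bpy.data.batch_remove()` in that follow-up commit (the name and availability are assumptions here):
```
import bpy

# Remove many objects in one go: the batch path detaches all of them from
# Main first, so clearing usages only needs a single pass over remaining IDs.
doomed = [ob for ob in bpy.data.objects if ob.name.startswith("Debris")]
bpy.data.batch_remove(doomed)
```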
Seems to be caused by cae3750 which changed the free() function used
by bmain free to the one which does a dependency graph tag. We do
not want to do any tags here.
This commit makes it so OpenSubdiv's topology refiner is kept
in memory and reused until the topology changes. The
following modifications cause the topology refiner to become
invalid:
- Change in a mesh topology (for example, vertices, edges, and
faces connectivity).
- Change in UV islands (adding new islands, merging them and
so on).
- Change in UV smoothing options.
- Change in creases.
- Change in Catmull-Clark / Simple subdivisions.
The following limitations are known:
- CPU evaluator is not yet cached.
- UV islands topology is not checked.
The UV limitation is currently a stopper for making this cache
enabled by default.
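A hypothetical sketch of the reuse check described above (the names here are illustrative, not the actual Blender/OpenSubdiv API):
```
def topology_refiner_is_valid(cached, current):
    # The cached refiner is reused only while none of the listed inputs changed.
    return (cached.topology == current.topology          # verts/edges/faces connectivity
            and cached.uv_islands == current.uv_islands  # UV island layout
            and cached.uv_smoothing == current.uv_smoothing
            and cached.creases == current.creases
            and cached.scheme == current.scheme)         # Catmull-Clark vs Simple
```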
This fixes following errors:
- The code didn't work correctly for edges reconstructed by
the OpenSubdiv's topology refiner (due to indexing
difference).
- Sharpness of non-manifold and boundary edges was not
working correctly.
C++11 doesn't need the space between '> >' in a nested template
declaration, so instead of `std::vector<std::pair<a, b> >` we can now
write `std::vector<std::pair<a, b>>`.
It's now possible to export curves and NURBS as mesh data to Alembic.
This allows artists to do any crazy thing on curves and export the
visual result to Alembic for interoperability with other software (or
caching for later use, etc.). It's an often-requested feature.
This works around T60503 and fixes the export part of T51311.
Note that exporting zero-width curves is currently not supported, as
exporting a faceless mesh (e.g. just edges and vertices) is not
supported by the mesh writer at all.
To test, create a curve with thickness (for example extruded), export to
Alembic and check the 'Curves to Mesh' checkbox in the export options.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4213
I moved most of the `AbcMeshWriter` code to a new class
`AbcGenericMeshWriter`. The latter is an abstract class and does not
make any assumptions about the type of Blender object being written.
This makes it possible to write metaballs, curves, nurbs surfaces, etc.
as mesh to Alembic files.
The `AbcMeshWriter` class now is the concrete implementation of
`AbcGenericMeshWriter` for writing mesh objects.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4213
If the triangulated mesh was in itself a new mesh that should be freed this
should happen before the function returns (as it only returns a single mesh,
and thus the caller can only free one).
Add 'missing' bpy code from BKE_libblock_free_ex(), now both functions
do exactly the same thing, only the later is less flexible (fewer
'exotic' behaviors supported, like handling IDs outside of bmain etc.).
Next step: nuke usages of BKE_libblock_free functions, makes no sense to
have twice the same code around!
Xorg's XIWarpPointer doesn't support multi-head display while
XWarpPointer does.
Revert since this is a known TODO in Xorg and setting a custom
xinput matrix seems not to be used often.
Resolves T50383
The issue is that the edge fix geometry goes on top of the actual drawn
points.
This commit reduces the edge fix size to the strict minimum but does not
get rid of it.
Related to T60139
Before this Blender always needed the Wintab driver. This adds support for the
native pressure API in Windows 8+, making it possible to get pressure sensitivity
on e.g. Microsoft Surface hardware without any extra drivers.
By default Blender will automatically use Wintab if available, and if not use
Windows Ink instead. There is also a new user preference to explicitly specify
which API to use if automatic detection fails.
Fixes T57869: no pressure sensitivity with Surface pen or laptop.
Code by Christopher Peerman with some tweaks by Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D4165
The base outline is 2px wide (because of how we detect them).
Since inflating this outline will only produce outlines that are 2*x
pixels thick, we map the UI scaling and the outline width setting to the
closest match.
Do note that thicker outlines have a performance cost since they need more
texture fetches and passes.
This fixes T60252 3D View Outline Width not working
The existing Add and Multiply blending modes have limited usability,
because the appropriate operation for meaningfully combining values
depends on the channel. This adds a new mode that chooses the operation
automatically based on property settings:
- Axis+Angle channels are summed, effectively averaging the
axis, but adding up the angle. Default is forced to 0.
- Quaternion channels use quaternion multiplication:
result = prev * value ^ influence
- Scale-like multiplicative channels use multiplication:
result = prev * (value / default) ^ influence
- Other channels use addition:
result = prev + (value - default) * influence
Inclusion of default in the computation ensures that combining
keyframed default values of properties keeps the default state,
even if the default isn't 0 or 1.
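As a rough illustration of the per-channel rules above (a sketch only, not the actual implementation; `prev`, `value`, `default` and `influence` stand for the lower-stack result, the strip value, the property default and the strip influence):
```
def combine_channel(kind, prev, value, default, influence):
    """Sketch of the Combine blending rules listed above (scalar channels)."""
    if kind == 'AXIS_ANGLE':
        # Summed; the default is forced to 0 for these channels.
        return prev + value * influence
    if kind == 'MULTIPLY':
        # Scale-like channels: result = prev * (value / default) ^ influence
        return prev * (value / default) ** influence
    # Other channels: result = prev + (value - default) * influence
    return prev + (value - default) * influence

# Quaternion rotation channels are handled as a unit instead:
# result = prev * value ^ influence (quaternion multiplication and power).
```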
Strips with this mode can be keyframed normally in Tweak mode,
except that for quaternion rotation keyframing always inserts
all 4 channels, and the channel value sliders on the left side
of Graph/Action editors won't insert keys without Auto Key.
Quaternion keys are also automatically normalized.
Differential Revision: https://developer.blender.org/D4190
Supporting a strip blending type that treats quaternions as a unit
also means being able to adjust all sub-channels as a unit when
inserting keyframes. This requires refactoring keyframe insertion
code to retrieve array property values for all channels at once,
before iterating over the indices being inserted.
This disables touch gesture recognition in Blender, avoiding any initial delay
when drawing with grease pencil, texture paint, etc.
Differential Revision: https://developer.blender.org/D4203
Allows users to select a font for text strips in the video sequence editor.
Related: 3610f1fc43 Sequencer: refactor clipboard copy to no longer increase user count.
Reviewed by: Brecht
Differential Revision: https://developer.blender.org/D3621
When editing an action without a strip, or tweaking a strip without
time mapping enabled or supported, the extents of the virtual strip
can't be controlled and are purely derived from keys in the action.
Thus, cutting off evaluation of the action at these arbitrary points
gets in the way of observing the natural extrapolation of the F-Curves
and thus appears to be a mis-feature.
With this change non-mapped actions are evaluated with infinite
range, exactly like they are handled without NLA, unless extend
mode is set to Nothing.
The viewport stereoscopy support helpers are finally ported to 2.80.
We now can scale the camera and the "stereo cameras" will scale
in the viewport as well (unlike 2.7x).
At the moment I disabled the drawing of the camera frame when
stereo is selected and you are looking through the camera.
It is to be fixed later, but for now it draws the border wrong.
In 2.79 this was not a problem because the camera frame was drawn
afterwards as a hack.
Viewport > Stereoscopy:
* Cameras
* Convergence plane
* Convergence plane alpha
* Stereoscopy volume
* Stereoscopy volume alpha
shgroup_instance_alpha was getting a color[4] but would only use the
alpha defined upon creation of the shading group.
This was very limiting since it wouldn't allow for different instances
to have different alpha values.
Patch made with Clément Foucault (he wrote the code, while I fixed
all the parts of the code that were relying on shgroup_instance_alpha).
The original issue is that we were changing the camera shiftx
temporarily for the stereoscopic calculation. However we are using the
evaluated object when calculating the projection matrix.
Note: Camera framing drawing for stereo still seems to be broken.
But the viewport itself is now correct.
Not sure exactly why this happened for 'apply as shape' and not in other
cases (did not take the time to fully trace what happens there). But in any
case, `BKE_key_evaluate_object_ex()` can be called from a fair amount of
places, including during depsgraph evaluation, so setting back the key's
owner here is plain wrong in the CoW era.
More of a band-aid than anything else really, that code is horribly
weak and needs to be fully re-written at some point (putting all those
temp data-blocks fully outside of bmain...). But for now this should do.
Fix T60194: Sequencer cut loses animation data for the right strip.
Fixing the first also fixes the second. The first attempt was delaying the
unique-name check to a later step of the cut process, after everything had
been duplicated. While this fixed the first issue, the second one became even
more prominent (it became active for all strips, and not only
video/audio movie strips in metas).
So instead, pass along the list of (new) sequences, so that duplicated
seqs can be put there immediately, before checking for unique names,
thus ensuring even strips inside metas get properly handled.
This partially reverts commit bb98e83b99.
It fixed 'strips having same name' issue, but broke handling of
animation then. Need to find a better way to handle this.
This commit groups a set of new tools that were tested in the grease pencil object branch before moving to master. We decided to do all the development in a separate branch because it could break master for days or weeks before the new tools were ready to deploy.
The commit includes:
- New Cutter tool to trim strokes and help cleaning up drawings.
- New set of constraints and guides to draw different types of shapes. All the credits for this development goes to Charlie Jolly (@charlie), thanks for your help!
- Segment selection mode to select strokes between intersections.
- New operator to change strokes cap mode.
- New option to display only keyframed frames. This option is very important when filling strokes with color.
- Multiple small fixes and tweaks.
Thanks to @pepeland and @mendio for their ideas, tests, reviews and support.
Note: Still pending the final icons for Cutter in Toolbar and Segment Selection in Topbar. @billreynish could help us here?
This change brings back the old original logic which was checking
whether worker threads fit into an active CPU group. But
it does it a bit smarter now and also checks affinity
within that group. This way Cycles will use all threads on a
Threadripper2 CPU if it's set to an automatic number of threads,
but on the other hand will not change affinity if the user requested
16 threads and changed Blender's affinity.
There was a problem counting the number of points for edit points and lines. Now the total size is used when allocating the VBO, not the stroke size.
This removes code duplication and puts an end to the old "create at request"
batch creation.
Also it uses the same vbo as the uv layer used for shading, reducing VRAM
usage.
Also fixes the modified uv display in uv edit mode.
This is in order to allow more spaces to have their batches created at the
same time and to share the batches.
This is part of the effort of making the drawing code more optimized. This
commit however should not introduce any difference.
This commit bypasses the aspect ratio correction for angle stretch display,
but this should be fixed in the next commit.
This is in order to use the batch cache directly without using tricks like
batch presets resetting the VAOs.
Note: For now it also creates a depth buffer for this area which is not
needed. We could get rid of this to lower VRAM usage.
The overlay should now use the texture interpolation setting in material
mode.
In image mode, there is now a new button to let the user choose the texture
filter. The option is located in the Texture Slots popover and only shows
in Image mode.
Debugging the edit mode selection I realized the vertices are often
occluded by edges with the same depth. Sometimes it can be the center
pixel of a vertex point, and that can lead to some selection issues.
So I increased the offset a bit for the vertices and it seems to fix it.
This makes it more future proof and removes the baked id offset inside the vbos.
Instead we add the offset as a uniform. This makes it possible to reuse
the vbos instead of discarding them all the time.
Also using batch request may reduce batches creation time.
Tweaked scheduling so it survives this situation by scattering
"extra" threads uniformly over all the NUMA nodes.
There are still tweaks possible to make some specific hardware
configurations work better.
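A minimal sketch of the scattering idea (illustrative only; the real scheduler is C and also accounts for CPU group affinity):
```
def numa_node_for_thread(thread_index, threads_per_node, num_nodes):
    # Threads that fit on a node are packed onto it; any "extra" threads
    # beyond the machine's capacity are spread round-robin over all nodes.
    node = thread_index // threads_per_node
    return node if node < num_nodes else thread_index % num_nodes
```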
TH_BACK was being used when drawing the 3D view even though
there was no way to set the color in the preferences.
The color was zero'd when moving to the new 2.8x theme.
Having both gradient and background colors was confusing,
especially having to use 'TH_HIGH_GRAD' for the 3D view, 'TH_BACK' for
other views.
Move the background color back to 'TH_BACK', 'TH_BACK_GRAD' is used
when gradients are enabled.
RNA is unchanged so presets don't need updating.
This was only used by collada export metadata.
If metadata like this is needed, we can use per-filetype preferences,
to make it clear when user identifiable information is being used.
We should *never* prevent copying basic mesh CDLayers (vertices etc.),
that does not make sense.
I guess the issue was not in the old DM because geometry was duplicated anyway,
and in 'normal' modifier stack eval probably because the bare mesh was
always requested? But we should not have to be explicit about it here.
This is a part of another change, which needs to tag the owner
of the f-curve for an update. But since this touches too many lines
it is committed separately.
Basically, for the f-curve AnimElement we are now storing the ID
which owns the f-curve.
When holding down the key for a while, the pie menu will disappear when
releasing the key. This is under the assumption that in this case the user
decided to cancel the action.
Differential Revision: https://developer.blender.org/D4180
There was a problem in the caps start, and in some situations the pixel was wrong.
Also changed the way the caps are detected, because using the alpha in negative was a hack that may not work with all drivers.
Exiting modes shouldn't be needed since loading the new memfile
will free the old data.
Sculpt mode dynamic topology was adding undo data on exiting the mode
which isn't logical in this case and can be avoided altogether.
Too many things done wrong in original rBd12b3767f81d to list them all
here, hopefully nothing bad sneaked in again this time :|
Also cleaned up the 'sort by name' a little; even though we can abuse it
as a binary option for now (since we only have two options by default,
sort by index and by name), this is not a bitflag...
That was indeed not working properly, not at all. Except for
the basic case, but as soon as you used another object to define the
mirror plane, it would be utterly broken, in several different ways!
Exiting modes shouldn't be needed since loading the new memfile
will free the old data.
Sculpt mode dynamic topology was adding undo data on exiting the mode
which isn't logical in this case and can be avoided altogether.
- When toggling a mode that doesn't support multi editing,
only do this once for the active object.
- For sculpt mode create sculpt data since this is needed
for activating other sculpt objects on reload.
This reverts commit 4d8ed937f2.
An alternative fix will come soon as a patch, since this introduced an issue.
Rolling back since the original fix (sculpt cursor on load) is less important
than the issue it introduced (crash on weight paint undo/redo).
Fix T60322.
We now perform COW -> original data flushing for all the debug values + error
status flags on Drivers/DriverVariables/DriverTargets, as these are only set
when errors are encountered when evaluating drivers.
When BPY_python_end() is not called, there can be buffered data still in
`sys.stdout` or `sys.stderr`. This generally isn't an issue when those are
connected to a terminal, but when they are read by another process (in the case
of rendering with Flamenco, for example) we could miss the actual error message
that's causing the exit in the first place.
The following script demonstrates the issue; before this commit neither the
writes to STDERR and STDOUT nor the traceback of the NameError were shown.
#!/bin/bash
cat > file-with-errors.py <<EOT
import sys
print('THIS IS STDERR', file=sys.stderr)
print('THIS IS STDOUT', file=sys.stdout)
nonexisting.monkey = 3
EOT
blender --enable-autoexec -noaudio --background \
any-existing-blendfile.blend \
--python-exit-code 42 \
--python file-with-errors.py 2>&1 | cat
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: fsiddi
Differential Revision: https://developer.blender.org/D4168
This switches evaluation of vertices which are on the boundaries
of PTex faces to a single threaded one. While this introduces
some slowdown it fixes ambiguity of PTex index used to evaluate
particular vertex.
Possible alternative solutions would be:
- Do some pre-calculation of index, then do evaluation in threads.
- Try using Gregory patches and see if that has any effect.
Fix T60235: Flickering of object instances
The units scaling was inappropriate when the bevel value was
to be interpreted as a percent, so added a separate rna property
for "Width Percent" and made UI show the width appropriate for
current offset_type.
Harden normals causes normal splitting, which will not give the
appearance expected due to autosmooth unless some edges are sharpened,
so this change fixes that. Also bevel tool will turn on autosmooth
if not already on if hardening normals.
Makes the entire Preferences UI nicely width responsive. Also, move
use_tabs_as_spaces option back to file path options, it was too lonely
in its own panel ;)
* Expand more sub-panels by default.
* Move release confirms and numeric input settings to Input.
* Move 3D cursor settings to Editing.
* Move region overlap to Interface.
- Move author to save&load
(was incorrectly under text editor).
- Rename Memory -> Memory/Limits
(some of the settings aren't obviously to do with memory).
The Collada exporter suppresses the export of flat animation curves
to optimize the animation (in fact to make the exported file smaller).
But sometimes it is important to also have the flat curves exported
because they may be needed to define an initial transformation to
a fixed location - like translating the weapon from the ground floor
to the back of the model in the report.
I added a new option "all keyed curves" which is disabled by default
but when enabled it also exports flat curves.
Feedback is very welcome.
In a77b63c569, the Preferences navigation region background was
made brighter. Recently stored userpref.blends (since b00963afc1,
so beta release included) would still use the slightly darker
background for the Preferences navigation region.
Now the version patch added for a77b63c569 also sets the new color
for those recent configs.
The problem was that removing entries from a vector while iterating
the vector was implemented badly. This caused the failure when only
one element was in the vector.
Partially revert rB1b8c3774a86ebc04fceb9cd; there is no good reason to
make object.dimensions read-only, it works perfectly well from the Python
API! The only breaking case was that weird multi-editing UI feature, due to
how it sets things. But the RNA setter itself works fine, and it's a handy
shortcut/helper for scripts.
Also, when breaking the API it is good practice to at least check official
add-ons...
Implementing a new intelligent mixing mode that combines quaternions
via multiplication requires rewriting the NLA code to recombine array
properties from separate scalar channels during evaluation.
In addition, stable evaluation of NLA stack requires that any channel
that is touched by any of the actions in the stack should always be
set to a definite value by evaluation, even if no strip affects it
at this point of the timeline. The obvious choice for the fallback
is the default value of the property.
To make scanning all actions reasonably efficient, mapping paths to
channels should be done using hash tables.
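A toy sketch of that mapping (purely illustrative - the real tables are built in C during NLA evaluation, and the `actions`/`property_defaults` inputs here are hypothetical stand-ins):
```
# Map each animated RNA path to a full array of per-index values, seeded with
# the property defaults so indices no strip touches still get a definite value.
def build_channel_table(actions, property_defaults):
    channels = {}
    for action in actions:
        for rna_path, array_index, value in action:  # hypothetical flat channel list
            values = channels.setdefault(rna_path, list(property_defaults[rna_path]))
            values[array_index] = value
    return channels
```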
Differential Revision: https://developer.blender.org/D4120
(Part 1 was 00963afc14978b)
Does the following changes visible to users:
* Use panels and sub-panels for more structured & logical grouping
* Re-organized options more logically than before (see images in D4148)
* Use flow layout (single column by default).
* New layout uses horizontal margin if there's enough space.
* Change size of Preferences window to suit new layout.
* Move keymap related options from "Input" into own section.
* Own, left-bottom aligned region for Save Preferences button.
* Adjustments of names, tooltips & icons.
* Move buttons from header into the main region (except editor switch).
* Hide Preferences header when opened in temporary window.
* Use full area width for header.
* Don't use slider but regular number widget for UI scale.
* Gray out animation player path option if player isn't "Custom"
Internal changes:
* Rearrange RNA properties to match changed UI structure.
* Introduces new "EXECUTE" region type, see reasoning in D3982.
* Changes to panel layout and AZone code for dynamic panel region.
* Bumps subversion and does versioning for new regions.
RNA changes are documented in the release notes:
https://wiki.blender.org/wiki/Reference/Release_Notes/2.80/Python_API/Preferences_API
Design & implementation mostly done by @billreynish and myself.
I recommend checking out the screenshots posted by William:
https://developer.blender.org/D4148#93787
Reviewed By: brecht
Maniphest Tasks: T54115
Differential Revision: https://developer.blender.org/D4148
When filling a stroke, if the fill layer has no keyframe, the fill is wrong because the previous fill shape hides the area to fill.
Now, if the keyframe is missing in the active layer for the current frame, a new frame is added.
- clips/masks were not showing an icon [both don't have a dedicated icon,
took the ones used elsewhere]
- masks hit an assert in outliner_add_element()
- missing outliner update when adding a mask
spotted while looking into T59939
Reviewers: mont29, brecht
Differential Revision: https://developer.blender.org/D4142
It was intended to be a quarter-circle, however it was oriented wrong.
Since the triangle is no longer visible and does not overlap with the
button anymore, this just makes it a square.
Differential Revision: https://developer.blender.org/D4139
Needed to port operator to use evaluated particle system.
But also changed interface to always show Convert button when
draw type is set to Path (Hair particle system is forced to
be drawn as path). This avoids a rather expensive lookup on every
redraw, but will show the Convert button for an un-baked particle
emitter.
Probably an acceptable compromise.
Fixed by setting the limit to the original limit I used for Cycles.
Rendering still goes extremely slow when bokeh is lower than 1.0.
But at least now it is "waitable". With lower numbers than 0.01 I don't
think we would ever get a render to finish.
@fclem feel free to address the real root of the problem, but I'm afraid
it may be a limitation of the algorithm you are using.
Even though the fragment shader was already discarding all members of
dof_bokeh_sides when blades was zero, the C code was still trying to
use this for a few divisions leading to runtime asserts.
Those are harmless yet can lead some to waste time while pursuing
other bugs (namely a near freeze when the blades aspect ratio is too low).
The original issue is that wm->paintcursors is empty until we go in and
out of the sculpt mode. To fix this we need to toggle inside the sculpt
mode.
This is usually tackled by ED_editors_init(), however the sculpt mode
toggling was never called because the object technically had "mode data".
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D4153
The intention was to flip normals when extruding in the opposite
direction, however the sign of the angle isn't meaningful unless
the geometry center and region normal are taken into account.
Disable, may add back in a way that works more predictably.
Word wrap and alignment layout args only used by UI_fontstyle_draw
were vars in uiFontStyle.
These were written to before drawing, so better pass as an argument.
Pass uiFontStyle & uiWidgetColors as const args.
Move the bevel hardening code all into bmesh_bevel.c.
Based on user feedback, rewrote the bevel hardening algorithm
to be more what users want.
Based on user feedback, changed the UI, removing some
not-useful options. Now hardening normals while beveling
is enabled by a simple checkbox.
Now setting face strength gives options for which faces
get their face strength set.
Am not totally convinced that generating meshes without fully valid
material info is a good thing, but this seems to be rather common in our
code base (in both mesh editing and convert-to-mesh cases).
So for now, duplicated code in mesh eval finalization to main displist
creation/eval function, synchronizing mat data at the end of modifiers
stack eval, if needed.
We are core profile now, no need to link against GLU.
This change makes it so the Blender binary is not dependent on libGLU.so.
It was a weird thing that Blender was dependent on it while not
using any functions from it.
Caused by rB36ca072375deea4803df4681716c1d3224095e07
[one instance of `DEG_get_original_object` was necessary, the other one
breaks getting the parent in `BLI_ghash_lookup`]
Reviewed by: brecht
Differential Revision: https://developer.blender.org/D4154
As the z-depth is calculated using the internal drawing, if we use the front mode the z-depth is wrong. The Front or Back mode must be used only for display, but not for calculation.
- Remove pathlib use
(was converting to/from string with no real advantage).
- Use user_resource(..., create=True) to ensure the path exists.
- Pass full path to BKE_studiolight_create, don't add extension after.
- Fix 'sl' filtering glob and move from ui code to operator.
- Fix string copy length.
Mixing file rename with other changes should be avoided.
Using 'module_py_api' convention here
is in keeping with imbuf, idprop, blf & bmesh.
No reason for gpu to have a different convention.
Instead of crashing, an error message is displayed if a function of the gpu module is called without a GPU context.
Reviewers: brecht, campbellbarton, JacquesLucke, mont29
Subscribers: abdelmatinboulbayam, amir.shehata
Differential Revision: https://developer.blender.org/D4143
Note this is also broken in 2.7x.
This is not a big deal since the operator is exposed in the correct
menus. But some users were accessing it via the search menu which would
lead to issues.
The original comment in the file did not acknowledge that pose bones could be tacked
here as well (my fault, since I should not have trusted the comments and should have
read the code instead).
Problem introduced on aeb8e81f27.
- Fixes missing check for unified brush in sculpt mode.
- Re-orders material first in gpencil paint mode
(matching color-first for other paint modes).
- Avoid minor differences (missing tablet pressure options from topbar).
- Don't repeat properties already displayed in the topbar
when opening the brush popover.
* 2D Animation: lots of changes from the grease pencil team. Properties
editor layouts, brush and material settings, and more.
* 3D Viewport: wireframes set to 1.0.
* World: use nodes by default.
* Node Editor: use narrow toolbar.
Add city, courtyard and interior HDRs. Replace grass field and night
HDR with different images.
Command used for compression:
oiiotool %s --resize 1024x512 --ch R,G,B -d float --compression dwab:300 -o output/%s
This operator allows creating a new stroke by joining several selected points of different strokes.
The new stroke will use the current material.
To use it, first select the points to be merged. Optionally, the old points and strokes can be removed.
The operator is available in Edit mode in the Specials menu and Stroke menu.
Steps to reproduce were:
* Open Preferences
* Choose "Input" category
* Scroll to the bottom
* Choose "Interface" category
The newly activated category should now use the scrolling set previously
in the other category, causing the contents to be out of view. You
would have to scroll to bring it back.
Now scrolling is stored per category.
The old code assumed that if the number of curves was the same, the
entire set of curves would have the same topology (in other words, it
assumed 'same number of curves => same number of vertices for each
curve').
I've added a more thorough check that also considers the number of
vertices in each curve. This still keeps certain assumptions in place
(for example that if the topology is the same, the weights won't change,
which is not necessarily true). However, when the assumption doesn't
hold, at least now Blender doesn't crash any more.
This is similar to what physics baking is doing: invoking the operator
runs a background job, whereas executing blocks. This makes Python
scripts calling the Alembic import/export operators more predictable.
For backward compatibility with existing Python code the
`as_background_job` parameter still exists, which overrides the
behaviour chosen by INVOKE/EXECUTE.
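As an illustration, from Python the blocking vs. background behaviour can now be selected via the operator execution context (a sketch; the file path is a placeholder, and `as_background_job` still overrides this if passed):
```
import bpy

# Blocking export: the script waits until the Alembic file is fully written.
bpy.ops.wm.alembic_export('EXEC_DEFAULT', filepath="/tmp/scene.abc")

# Background-job export, matching the interactive (UI) behaviour.
bpy.ops.wm.alembic_export('INVOKE_DEFAULT', filepath="/tmp/scene.abc")
```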
Reviewers: brecht
Reviewed by: brecht
Differential revision: https://developer.blender.org/D4137/new/
The clipboard is not a real user and should not be counted. Only on paste
should the user count increase.
This is part of D3621, and was implemented by Richard Antalik and me.
- own error in rB2c196de56bbb163048b08f321983234a5e72e804
- now introduce RE_PASSNAME_DEPRECATED placeholder for old passes
- also don't allocate NodeImageLayers for these
Reviewers: brecht
Maniphest Tasks: T59922
Differential Revision: https://developer.blender.org/D4132
The GP_STROKE_RECALC_CACHE identifier was changed to GP_STROKE_RECALC_GEOMETRY because the previous name was confusing and could be mistaken for the recalculation of the Draw Manager cache.
Please always build tests when messing with build system/libs, am tired
of fixing that kind of issues...
Also, that fix is probably not working for standalone, no idea where's
the numaapi lib then, but committing since I need a building blender
here (with the tests, yes).
The issue was introduced by a Threadripper2 commit back in
ce927e15e0. This boils down to threads inheriting affinity
from the parent thread. It is a question how this slipped
through the review (we definitely ran a benchmark round).
Quick fix could have been to always set CPU group affinity
in Cycles, and it would work for Windows. On other platforms
we did not have CPU groups API finished.
Ended up making Cycles aware of NUMA topology, so now we
bound threads to a specific NUMA node. This required adding
an external dependency to Cycles, but made some code there
shorter.
Would free evaluated mesh even when it was the one cached in runtime
data by depsgraph evaluation!
Also fixes the asserts about using non-eval object in some cases.
Previously we would try to guess what the main tablet device is, but this is
error prone. Now we keep a list of X11 devices and try to match events to
them. On the Blender side there are still some limitations in regards to using
multiple devices at the same time, but this should improve things already.
Fixes T59645.
Curve modifier eval code was actually doing nothing to ensure we passed a
mesh with valid normals when required by the modifier.
This is a bit basic, rough code, but I think it should cover all cases;
time will tell...
The issue was caused by a NaN value of the average spring length being
stored in the file. This caused the accumulation in the springs builder
to also deliver NaNs, which then caused the solver itself to not do
anything.
Not sure why these values were never initialized prior to the
accumulation. Or even why this runtime data is stored in DNA.
Some sanitizing is possible here, but it needs to be done with care
to not disrupt spring production.
In fact, we can get a valid depsgraph 99% of the time from the current context.
Still added an extra optional depsgraph parameter just in case (and also
for the future, when we might be handling many more temp depsgraphs).
Now, things are becoming REALLY confusing. The script does build
pugi, but never tells OIIO to use an external one, which makes
it use the bundled one.
Trying to link OSL to a different version of pugi causes a lot of
linking errors.
Interestingly enough, it was me who made OSL use an external pugi
to solve a configuration problem. But now I can not reproduce that
anymore.
Ideally we would either link everything against our pugi, or not
compile it at all.
ARegion.sizex/y should never have DPI factor applied. For regular panel
regions, DPI will be applied in region_rect_recursive already, causing
it to be applied twice when region size is set dynamically (= based on
content dimensions).
The root of the problem is that KnifeTool_OpData->colors was not initialized in
some cases. But the reason is unknown, as it seems to be random and the
init function was always called.
So instead of initializing the colors only once, we query the colors each time
we draw the knife points.
The overhead of this approach is negligible.
Allow more flexible use of drivers on B-Bone properties by
connecting the dependencies to the actual operation node that
uses the values, instead of the whole component.
Modifiers
Since user menu entries from SPACE_BUTS/SPACE_TOPBAR are also shown in
other Editors (SPACE_VIEW3D), also allow these entries to be removed
from Quick Favorites from these Editors.
Match and deduplicate logic from screen_user_menu_draw() and
ui_popup_context_menu_for_button().
Reviewers: campbellbarton, brecht
Maniphest Tasks: T58327
Differential Revision: https://developer.blender.org/D4112
In Blender 2.8, when you zoom in, the adaptive subdivisions appear earlier than in previous versions.
The grid still appears a little before the snap, but since it is very small I see no advantage in snapping for this case.
Object visibility is now handled by the depsgraph iterator, but this API
was incomplete as it made no distinction for visibility of the object itself,
particles and generated instances.
The depsgraph iterator API now includes information about which part of the
object is visible, and this is used by Cycles to replace the old custom logic.
Cycles and EEVEE visibility should now be consistent, which unfortunately does
mean some subtle compatibility breakage for both.
Fixes T58956, T58202, T59284.
Differential Revision: https://developer.blender.org/D4109
Was only visible after going in and out of hair edit mode (with some
strokes in between). The edit structure was never freed during the Blender
session for some reason. Now we free those when leaving particle
edit mode.
The issue was caused by shape keys datablock from evaluated mesh
being added to the main database.
This commit makes it so shape keys are not copied for the mesh
used as cage.
We were never removing the parent collection from a collection upon removal
of the parent.
Reviewers: mont29
Differential Revision: https://developer.blender.org/D4099
Has some advantages over existing options.
- Using material links color to rendering with no way to vary colors
if objects share a material.
- Random gives no control, objects may randomly have the same color,
and duplicating an object often changes its color.
The '+' widget to show a hidden region came too close to overlapping
the viewport navigation gizmo and text editor text.
Reduce size and use an arrow icon.
D4110 by @gnastacast
Now, the internal data is recalculated when adding or removing a point.
The change in the API affects stroke.points.add(), which now requires a datablock parameter. This parameter is required to identify the affected datablock.
For example: stroke.points.add(gpencil, 1) instead of stroke.points.add(1)
This is the second try to fix T59600
The key indices were wrong: need to offset curve key index
by first curve key index. Also corrected calculation of the
interpolation step.
Annoyingly, I can not reproduce this on a simple file, a
production rig is needed. For a possible future look, the following
file from Spring was used: 03_005_A.lighting.debug.blend
For strokes:
myframe.strokes.update(mystroke)
For datablock:
gpencil = bpy.data.grease_pencil['gpencil']
gpencil.update()
Still need a manual refresh of viewport.
Also remove special case when no items are selected,
since this only has one or two menu items, one being the add menu
which can be better accessed from the header or add shortcut.
If the no-selection case is to have its own alternate menu - it should
be more complete before enabling.
The fix for T59219 was using a low-alpha light grey for the text background
so editing text would always be slightly brighter than the existing
background.
This causes outliner rename to have low alpha making text overlap
icons.
Use solid color to avoid issues with overlapping UI
elements in the future.
The perspective effect deformed the stroke. Now, when you are in camera view and the lock axis is not enabled, the stroke is reprojected flat over the view to remove any deformation.
Also fixed the reproject operator to use the origin set in the topbar and not always the 3D cursor.
Another case where editstr from search button would be used, when we
actually have desired pointer itself already available in button.
Am growing tired of doing band-aid fixes on that search menu stuff,
the whole thing would require some real re-coding imho, to get rid of that
tentacular dependency on the string 'identifier' only (when we should also
have access to, at the very least, the active index, and also probably the
active data pointer itself...).
And/or clearly separate the string identifier from the 'UI' string shown to the user.
- Silence harmless error print about relation.
Object with particle system which doesn't use physics will
not have point cache component.
- Tag relations for update when particle system physics type
change.
This ensures correct state of point cache component.
This is all part of T59258.
There is no point having operations that iterate over the whole
bit array as macros, so convert BLI_BITMAP_SET_ALL to a function.
Also, add more utilities for copying and manipulating masks.
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D4101
The original issue was that different platforms would use different
hash lengths, just because the defaults of the Git client were different.
Now we use an explicit length for the hash, and the length is the same as
is used for short hashes in Linux -- apparently they started to have
collisions with a length of 11.
Instead of link toggle with enum, use a single popover that contains
both settings. The code for this isn't nice - needing 3x panels for now.
See D4075
rBa520e7c85c83 defined T_OVERRIDE_CENTER (1 << 25),
which was already in use by T_PROP_PROJECTED (1 << 25),
thus skipping the center calculation.
Fixes T58882, T59518
Reviewers: campbellbarton, brecht
Maniphest Tasks: T58882, T59518
Differential Revision: https://developer.blender.org/D4100
NLA requires a usable default value for all properties that
are to be animated via it, without any exceptions. This is
the real cause of T36496: using the default of 0 for a scale
related custom property obviously doesn't work.
Thus, to really fix this it is necessary to support configurable
default values for custom properties, which are very frequently
used in rigs for auxiliary settings. For common use it is enough
to support this for scalar float and integer properties.
The default can be set via the custom property configuration
popup, or a right click menu option. In addition, to help in
updating old rigs, an operator that saves current values as
defaults for all object and bone properties is added.
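For example, a rough sketch of setting such a default from Python, assuming the 2.8x-era convention of storing per-property UI data (min, max, default, ...) in the special '_RNA_UI' dictionary (the 'default' key being honored is the assumption here):
```
import bpy

obj = bpy.context.object
obj["stretch"] = 1.0  # scalar float custom property used by a rig

# Record a default so NLA evaluation has a usable fallback value.
rna_ui = obj.get("_RNA_UI")
if rna_ui is None:
    obj["_RNA_UI"] = {}
    rna_ui = obj["_RNA_UI"]
rna_ui["stretch"] = {"default": 1.0, "min": 0.0, "max": 2.0}
```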
Reviewers: campbellbarton, brecht
Differential Revision: https://developer.blender.org/D4084
This commit makes it so curve path parent solving accepts explicit
arguments for both time and curve speed flag, making it so we don't
have to mess around with the scene's frame.
One unfortunate issue still is that if the instancing object is used
for something else, we might be running into a threading conflict.
Possible solution would be to create a temp copy of an object, but
then it will be an issue of preventing drivers from modifying other
datablocks.
At least the original issue is fixed now, and things behave same as
in older Blender version. Additionally, the global variable which
was defining curve speed flag behavior is gone now!
This aims to resolve a conflict where some users want to keep keyboard
axis setting global, even when the orientation is set to something else.
Move/rotate/scale can optionally each have a separate orientation.
Some UI changes will be made next.
- Use the user orientation when pressing XYZ keys,
second press switches to global.
- Pressing again switches to global, or local
if you have the global orientation set.
The option for gizmos to have their own orientations will be added,
see: D4075
Show backface culling option even with rendered shading since it doesn't
yet support meshes two-sided option (noted as TODO).
Also correct bad string comparison.
This has been unbelievably painful to understand... And the solution is only
partially good actually, we may even want a single axis for all the
islands in that case? But for now this is giving much better results
already, compared to the random craziness it used to produce.
This was working for object/collection display/render, but lattice was
not taken into account for non object/collection display/render types
(halo, axis, cross, circle, ...).
Reviewers: sergey, brecht
Maniphest Tasks: T59484
Differential Revision: https://developer.blender.org/D4096
Display statistics from CCG structure.
This makes the values different from what is shown in object
mode, since CCG is operating on individual grids, and object
mode will stitch those grids. But on the other hand, those values from
CCG are what sculpt mode actually "sees" or "uses".
The number of faces should be the same in both sculpt and object
modes.
While the operator needs a depth to work as intended,
it feels buggy if the initial drag does nothing until a depth is found.
If the cursor isn't over any geometry calculate an initial depth.
This resolves the issue where users would enable a snapping mode
besides incremental (vertex for example), then notice strange behavior with
rotate and scale.
While this ability can be useful, it's quite an obscure use case.
Now changing snap-modes keeps rotate and scale using incremental snap,
with the option for these modes to be affected by other snapping modes.
D4022 by @kioku w/ own minor edits.
- This extends context menus, checking the selection in some cases
to conditionally show operators.
- When nothing is selected, add, paste .. etc are added to the menu.
- Use columns when mixed mesh modes are used (vert/edge/face).
- Move armature naming operators into sub-menu.
See D4043
This now only uploads per-loop data to the GPU, making use of an index buffer
to draw polygons. This makes use of the vertex cache, speeds up renders
and saves a lot of VRAM.
Update performance is also slightly faster and can even be improved further
by updating only uvs or vcol independently.
This commit breaks texture paint batches. They will be added back in another
commit.
This is a second attempt to get the crash fixed. The original fix
worked, but it was reverted by d3e0d7f082.
Now the logic goes as:
- All pointers which we can not have shared (the ones which are
owned by the runtime) are cleared.
- The rest of runtime stays untouched.
This seems to be enough to keep particles happy.
This changes the bent normal effect to be a bit more subtle.
I also tuned down the bent normal blending factor so mesh faceted look may
appear more in occluded regions. this is to increase the fidelity of the
indirect lighting. This blending might be a parameter in the future.
Based the calculation on "Bent Normals and Cones in Screen-space"
by O. Klehm, T. Ritschel, E. Eisemann, H.-P. Seidel
This commit adds support for a new curve tool and adds more functionality to the existing primitives, including new handles, editing, stroke thickness curve, noise, preview of the real stroke, etc.
Thanks to @charlie for his great contribution to this improvement.
This replaces the test of consistency and capacity made with `GL_PROXY_TEXTURE_..` on AMD GPUs with one that checks only if the texture fits the limits of size and layer.
Differential Revision: https://developer.blender.org/D4081
Pinning doesn't really change the geometry, so we don't really have a
tag for this. Using '0' for now (same we use for Mark Seam).
I will check with Sergey Sharybin which flag to use instead.
We could use ID_RECALC_GEOMETRY to match the notifier, but even the
simpler ID_RECALC_SELECT would do it.
NLA strips support using the keyframe values in a variety of ways:
adding, subtracting, multiplying, linearly mixing with the result
of strips located below in the stack. This is intended for layering
tweaks on top of a base animation.
However, when inserting keyframes into such strips, it simply inserts
the final value of the property, irrespective of these settings. This
in fact makes the feature nearly useless.
To fix this it is necessary to evaluate the NLA stack below the
edited strip and correctly compute the raw key that would produce
the intended final value, according to the mode and influence.
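Conceptually the fix just inverts the strip's blend equation before writing the key; a sketch for the simple Replace and Add cases (assuming the usual blend formulas; the real code also handles the other modes):
```
def invert_blend(mode, target, lower, influence):
    """Return the raw value to key so the strip blends to `target` over `lower`."""
    if influence == 0.0:
        return target  # the strip cannot affect the result; keep the value as-is
    if mode == 'REPLACE':
        # target = lower * (1 - influence) + raw * influence
        return (target - lower * (1.0 - influence)) / influence
    if mode == 'ADD':
        # target = lower + raw * influence
        return (target - lower) / influence
    raise NotImplementedError(mode)
```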
Differential Revision: https://developer.blender.org/D3927
Basically uiRNACollectionSearch->but_changed was always NULL for the
search templates. So skip_filter would always be true, and we would
never filter.
An alternative fix would be to add the following to the begin of
`template_search_add_button_searchmenu`:
```
static bool always_true = true;
template_search->search_data.but_changed = &always_true;
```
This changes a bit the batches data structure. Instead of using one
vbo per material we use one for all material and use index buffers for
selecting the correct triangles.
This is less optimized than before but has potential to become more
optimized by merging the wireframe data vbo into the shading one.
Also the index buffers are not strictly necessary and could be just
ranges inside the buffer. But this needs more adding things inside
GPUIndexBuf.
Shaded triangles are not yet implemented (request from gpumaterials).
This also changes the mechanism to draw curve normals to make it not
dependent on the normal size display. This way different viewports can
reuse the same batch.
The issue was caused by a special code in node tree freeing function
which will free extra fields in the case when tree is not in bmain.
This is how old code was dealing with "nested" trees, but is now
making behavior different from other datablocks. This is exactly
what was confusing copy-on-write logic.
Ideally, ntreeFreeTree() needs to behave the same as all other datablocks,
and freeing the data of nested trees should be up to the owner of the
tree (this way it's all explicit and does not depend on a check of
some special flag).
Prefer legacy OpenGL library, for the compatibility and portability
reasons.
Also use proper OpenGL libraries to be linked against, so we can
change preference to GLVND.
this can be different though (e.g. vertex parenting) and correct location
is already stored in ob->orig
spotted while looking into T59332
Reviewers: fclem, brecht
Differential Revision: https://developer.blender.org/D4076
This restores the object->data to a non-modifier evaluated state.
This allows us to change the evaluated object's modifier stack directly and
get BKE_mesh_new_from_object() for the evaluated object.
By default the text button background color was a similar brightness
to the cursor, making it hard to see at times.
Button types number/slider/text background brightness when editing
varied quite a lot too.
- Change the background while editing to match the number button.
- Darken the selection for greater contrast.
Resolves T59219
Hex color values are now always in sRGB space, as would be expected by
most other applications. Previously they were in display space and using
the view transform.
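For reference, a small sketch of the convention: a hex value is now decoded with the plain sRGB transfer function (per the sRGB spec), independent of the view transform:
```
def hex_to_scene_linear(hex_str):
    """Convert an 'RRGGBB' hex string (sRGB encoded) to scene-linear RGB."""
    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rgb = [int(hex_str[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    return [srgb_to_linear(c) for c in rgb]

# e.g. hex_to_scene_linear("808080") -> approximately [0.2159, 0.2159, 0.2159]
```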
In 2d655d3 the color picker was changed to use display space HSV values.
This works ok for a simple sRGB EOTF, but fails with view transforms like
Filmic where display space V 1.0 maps to RGB 16.292.
Instead we now use the color_picking role from the OCIO config when
converting from RGB to HSV in the color picker. This role is set to sRGB
in the default OCIO config.
This color space fits the following requirements:
* It is approximately perceptually linear, so that the HSV numbers and
the HSV cube/circle have an intuitive distribution.
* It has the same gamut as the scene linear color space.
* Color picking values 0..1 map to scene linear values in the 0..1 range,
so that picked albedo values are energy conserving.
In production files that use a lot of linking I measured loading speedups between 5% and 18%. In files that use less linking the speedup might not be noticeable at all, but it should not be slower.
Reviewer: brecht
Differential Revision: https://developer.blender.org/D4038
The new data structure uses open addressing instead of chaining to resolve collisions in the hash table.
This new structure was never slower than the old implementation in my tests. Code that first inserts all edges and then iterates through all edges (e.g. to remove duplicates) benefits the most, because the `EdgeHashIterator` becomes a simple for loop over a continuous array.
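A toy sketch of the open-addressing idea (illustrative Python; the real `EdgeHash` is C, but the key point is the same - entries live in one flat array, so iteration is a plain loop):
```
class EdgeSet:
    """Toy open-addressed set of undirected edges, keyed by vertex index pairs."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # flat array; None marks an empty slot
        self.count = 0

    def _probe(self, slots, key):
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i] != key:
            i = (i + 1) % len(slots)  # linear probing instead of chained buckets
        return i

    def add(self, v0, v1):
        if (self.count + 1) * 2 > len(self.slots):  # keep load factor below 0.5
            old = [k for k in self.slots if k is not None]
            self.slots = [None] * (len(self.slots) * 2)
            for k in old:
                self.slots[self._probe(self.slots, k)] = k
        key = (min(v0, v1), max(v0, v1))  # undirected edge: order the pair
        i = self._probe(self.slots, key)
        if self.slots[i] is None:
            self.slots[i] = key
            self.count += 1

    def __iter__(self):
        # Iteration is just a linear scan of the flat array.
        return (k for k in self.slots if k is not None)
```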
Reviewer: campbellbarton
Differential Revision: D4050
Mostly rewrite logic which now matches (de)select picking,
share between both operators.
- Support all selection operations (eSelectOp), fixes T59255.
- Add function that selects using 'BONESEL_*' flags & eSelectOp.
This avoids lasso & box select having to handle selection flushing.
- Fix strange behavior with lasso where selecting a bone in a chain
would only select the tip (from 2.7x).
There are some minor differences in center calculation but I don't think
they're important, if so we can update the code.
- Edit-bone uses head (instead of the middle when both are selected).
- Edit-bone flag for restricting components uses the selection instead
of the active bone.
It should be possible to use this when the active element is unselected, but
there still needs to be something else selected. Otherwise it is not possible
to deselect all as a way to get the gizmo out of the way.
- Key-map items properties now override tool-options
so modifier keys can have different behavior to the default action.
- Box & circle select now have `wait_for_input` properties
instead of detecting this based on selection options being set or not.
This relied on the key-map setting properties which may need to be
initialize from the tool settings.
Basically what we address here is to make sure the active object and the cage
are not interfering with the baking result (e.g., when baking Combined).
To do so, we take advantage of the fact that we create our own depsgraph
for baking. So now we can change the CoW objects, instead of the
original ones.
Note: There is still a way to get a crash. If you try to bake from
selected to active when is_cage, but with no cage object, we get an
assert:
```
BLI_assert failed: //source/blender/blenkernel/intern/DerivedMesh.c
mesh_calc_modifiers(), at
(((Mesh *)ob->data)->id.tag & LIB_TAG_COPIED_ON_WRITE_EVAL_RESULT) == 0
```
We can bypass this by passing ob_low instead of ob_low_eval to
bake_mesh_new_from_object on object_bake_api.c:847 . But then the edge
split modifier change will take no effect.
The alignment makes it so the button edges overlap, now one pixel is removed
to account for this.
Differential Revision: https://developer.blender.org/D4063
This enables ffmpeg to encode each frame in its own thread. However in most
cases Blender does not pass frames to ffmpeg fast enough to actually use
more than two threads. In some tests the speedup was measured to be about 20%.
If other parts of the video sequencer get optimized, this should improve.
Differential Revision: https://developer.blender.org/D4031
Applying the effect of bone parent is much more complicated than
simple matrix multiplication because of the various flags like
Inherit Scale. Thus it is reasonable to provide access to this
math from Python for complicated rest pose related manipulations.
The simple case of this is handled by Object.convert_space, so
the new method is only needed for complex tasks.
Differential Revision: https://developer.blender.org/D4053
There are some changes in API of OpenImageIO, but those are quite
simple to keep working with older and newer library versions.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D4064
It used to be used for some sort of ignoring of automatically
generated bump nodes. But nowadays it causes one of the shaders
in the Classroom demo file to be compiled wrong.
as opposed to the 'real' Dopesheet, e.g. keyframes were not merged when
placed on the same frame.
Reviewers: brecht, aligorith, angavrilov
Maniphest Tasks: T59005
Differential Revision: https://developer.blender.org/D4061
Also this display is optimized. It does not use blending and pixel discard.
Working with scanned data should be more pleasant with this.
A better option would be to use gl_FragDepth to get a better sense of
volume, but this disables the early depth test.
This makes it possible for engines to ask for batches and only fill their
data after all engine populate functions have run.
This means that, when creating the batches data we already know all the
batches that are needed for this redraw and the needed data.
This allows for less redundant data preparation and better attrib masking.
Ideally, we should run all viewports populate function before executing
the batch construction but this is not the scope of this patch.
Conversion from the old request method will be progressive and both can
coexist (see uses of mesh_create_pos_and_nor()).
This is a small change. We delay all gl calls at the first use of the
GPUIndexBuf / GPUVertBuf in order to be able to create multiple buffers
from different threads without having many gl contexts.
Basically, armature update is not supposed to be run in edit mode.
This worked in master and new dependency graph because nobody was
tagging armature for an update.
But with all those copy-on-write and other things we can't ensure
tag doesn't happen (and we shouldn't). So now we ensure unwanted
code is not run from the code itself.
P.S. The deeper reason for this goes back to the optimization of not updating
pose channels when in edit mode. Since the pose doesn't define anything
there, we don't want to be bothered with a pose update after every
operation which changes it.
Textures are now hooked up to the RESET operation of particle
settings, which ensures particles being re-distributed when
texture is changed.
This is limited to direct user modifications, which matches the
old behavior in 2.79.
Seriously... There is no point in having those subversions if one does
not take advantage of them to reduce do_version work on file load! Now we
have to raise the subversion again just for that. :(
We cannot let those data be generated on-the-fly in RBW evaluation
anymore, since those would be added to CoW eval object and never ported
back to orig objects.
We *could* get orig objects in eval code, of course, but as with
constraints, this is not really threadsafe and future proof; depsgraph
evaluation should really write back to orig data as little as possible.
So instead, add code to ensure required data is generated to objects
when their collection is added to rigidbody world.
Note that we *may* want to clean that up once the collection is no longer used
by RB? On the other hand, people might want to keep that data around to
be able to switch between different setups easily... So I think it's OK to
keep it at least for now.
There is no guarantee that objects in the rigidbody collection already have
valid rigidbody data when rebuilding deg relations; that is often
generated on-the-fly by the actual rigid body simulation.
Note that this can be an issue when generating deg relations I guess...
But at least it won't crash anymore.
The issue here is that in the new dependency graph drivers are
individual nodes which depends on what they are driving. This
means that changes to RNA path or property index should ensure
those nodes are updated. Easiest way to do so is to tag relations
for update.
One issue that especially newer users often run into is that they accidentally reset changes to the scene by switching frame without creating a keyframe first.
Therefore, this commit adds a new color that is used to draw properties if their current value differs from the one that would be set when switching to this frame.
This works both for existing keyframes as well as for currently interpolated frames.
Unfortunately the flags in but->flag are full, so I had to move the new flag to but->drawflag and pass that to all relevant functions.
I went with orange for the color since afaics it fits with the green and yellow that are currently used for keyframe states and since it's somewhat reddish to signify that there might be something to look out for here.
Reviewers: campbellbarton, #user_interface, brecht
Reviewed By: campbellbarton
Subscribers: brecht, predoe
Differential Revision: https://developer.blender.org/D3949
This is for 2.80 (though the bug I mistakenly merged into was for 2.78).
Duplicate bugs T58127, T58411, T58440, and T58789 all fixed.
Bevel weights and crease are not real Mesh layers so get lost
on conversion of mesh to bmesh unless the mesh's cd_flag member
tells the converter to create layers for them.
Most code that copies or partially copies meshes uses
mesh_new_nomain_from_template_ex, so the flag is copied there.
The hit normal originates from tessellated triangles and isn't the
actual normal used for shading of flat faces. Thus, it is better
to use the actual polygon normals when available.
This makes the `#include <Windows.h>` use more localised and out of the
`image.c` file.
Reviewers: LazyDodo
Reviewed by: LazyDodo
Differential revision: https://developer.blender.org/D4049
Fix T58237: Exporters: Curve Modifier not applied when "apply modifiers" is selected.
Fix T58856: Python: "to_mesh" broken in 2.8.
...And many other cases... ;)
Thing is, we need target IDs to always be evaluated ones (at least I
cannot see any case where having orig ones is the desired effect here).
The depsgraph/CoW system ensures this when modifiers are evaluated by it,
but they can also be called outside of this context, e.g. when doing
binding, or object conversion...
So we need to ensure in modifiers code that we are actually always
working with eval data for those targets.
Note that I did not touch the physics modifiers; those are a bit touchy
and I'd rather not 'fix' something there until it is proven broken!
Since we started using looptris we no longer need a triangulation
modifier in the highpoly object. In fact, having one was causing a bug
where baking would be utterly broken.
This fixes normal baking. The Combined pass still needs a fix to hide the
objects during baking.
Having the hostname allows us to identify which machine rendered which
frame in our render farm.
This simply uses the host's name, and doesn't do any DNS lookup of any
IP address of the machine. As such, it's only usable for identification
purposes, and not for reachability over a network.
Reviewers: sergey, brecht
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D4047
better have this vertex color layer cover the whole 0-1 range
thx @sergey for checking
Maniphest Tasks: T57994
Differential Revision: https://developer.blender.org/D3976
This is unfortunate, but the number of bugs in this configuration
keeps growing, and almost all of them are caused by bugs in the OpenCL
compiler.
The compiler is not likely to be fixed, since Apple declared OpenCL
deprecated.
This evil commit is aimed to keep officially supported features
of Blender in a good working and stable state.
The fix itself simply is to store the cage object as a pointer instead
of a string/name.
That said, baking with or without a cage yields very different results
than in 2.7.
The issue was caused by the mpoly array being required by the cache filling,
but the pointer was never set when preparing render data.
Seems this change is safe enough, in terms of not causing a slowdown,
since the assignment of mpoly is cheap, but it is
hard to tell if anything else is affected underneath.
This fix aims to address a crash/assert failure related to wrong
evaluation order which happens when there is a cyclic
dependency involved.
The rationale of this change is that we can allow use
of an uninitialized scalar value, but memory had better be
allocated.
This might not be ideal still, but worth a try.
There were at least three copies of those:
- OB_RECALC* family of flags, which is a rudiment of the old
dependency graph system.
- PSYS_RECALC*, which were used by the old dependency graph system
as a separate set since the graph itself did not handle
particle systems.
- DEG_TAG_*, which was used to tag IDs.
Now there is a single set, which defines what can be tagged
and queried for an update. It also has some aggregate flags
to make queries simpler.
Let's once and for all solve the madness of those flags: stick
to a single set, which will not overlap with anything or require
any extra conversion.
Technically, there shouldn't be a measurable user difference, but some
of the aggregate flags for a few dependency graph components did
change.
Fixes T58632: Particle don't update rotation settings
We had two different ways of doing it, SurfaceDeform and LaplacianDeform
would do it through a special modifier stack evaluation triggered from
binding operator, while MeshDeform would do it through a regular
depsgraph update/eval (also triggered from its binding op).
The latter forces searching back for orig modifier data inside
modifier code (to apply the binding on that one, and not on the useless CoW
one).
Besides the question of safety about modifying orig data from the threaded
depsgraph (that was *probably* OK, but I think it's a bad idea in general),
it's much better to have a common way of doing that kind of thing.
For now it remains rather dodgy, but at least it's reasonably consistent
and safe now.
This commit also fixes a potential memleak in the binding process of
MeshDeform, and does a bit of general cleanup.
The shader is way simpler and runs way faster on lower-end hardware
(2x faster on Intel HD5000), but I did not notice any improvement on AMD Vega.
This also adds a few changes to the way the wireframes are drawn:
- the slider is more linearly progressive.
- optimize display shows all wires and progressively decrease "inner" wires
intensity. This is subject to change in the future.
- text/surface/metaballs support is pretty rough. More work needs to be done.
This removes the optimization introduced in f1975a4639.
This also removes the GPU side "sharpness" calculation which means that
animated meshes with wireframe display will update slower.
The CPU sharpness calculation still has room for optimization. Also
it is not excluded that the GPU calculation can be added back as a
separate preprocessing pass (saving the computation result [compute or
feedback]).
The goal here was to have more speed for static objects and remove
the dependency of having buffer textures with triangle count. This is
preparation work for multithreading the whole DRW manager.
Right now this does not add padding at the end of the buffer.
This seems unnecessary but may cause problems on some platforms. If needed
we will add this padding (only 2 more vertices).
Using a listener here, although I suspect we should be using a message
subscriber only. That said, this mimics the behaviour of the buttons
main region.
As for the original bug report what was happening was that when
switching between viewlayers (or when creating one) we may not get the
same active object. So the context breadcrumbs are different.
And the bug itself was that we were missing a redraw on view layer
change.
Aka all the thousands of reports duplicated here.
I should have seen this coming, since I had to add a hack in the first
place because things were "not working".
I should have figured out earlier that COW handles base in a really
special way, with its own special object_runtime_backup hack.
Other software uses this to define UV islands, so we can't just merge
any UVs with the same coordinate. They have to share a vertex too.
Contributed by Maxime Robinot, with changes by me.
Differential Revision: https://developer.blender.org/D4006
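A tiny illustrative sketch of the rule described above (the helper name is hypothetical): two UV corners are only merged into the same island when they agree on both the UV coordinate and the underlying mesh vertex.
```
# Illustrative only: loops are welded for island purposes only when they
# share the mesh vertex *and* (within precision) the UV coordinate.
def uv_weld_key(loop_vertex_index, uv, precision=6):
    return (loop_vertex_index, round(uv[0], precision), round(uv[1], precision))
```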
This fixes our workaround for until proper solution is accepted
in upstream.
Now that the default view behaves the same as it was supposed to (and
as it behaves in OCIO-1.0.9), it is obvious that our configuration
violates its own design -- the default view is used for cases when
images are not meant to be displayed using the "render" settings.
LaplacianDeform binding handling is a catastrophe in the CoW context,
because half of the binding (the laplacian solver cache thingy) is not
saved, and can be re-generated on the fly from the stored vcos.
This means that binding is not only done when hitting 'bind' button, but
also at file load, and when some things change.
And this utterly breaks with CoW design, not sure how to fix, will add a
task about that.
But this also means that a NULL laplacian solver cache pointer is not a
good check to know whether it is bound or not; only the stored vcos are
relevant for that (and the binding flag, of course).
This fixes/clarifies Surface Deform evaluation code that does the
binding, since that part should only be called outside of depsgraph
evaluation, with orig data-blocks and not CoW ones.
Now we have a decent amount of asserts and checks to ensure everything
works as expected.
Also had to add a special case to get the target's mesh in the binding case,
since often the target's evaluated mesh is not available; in that case (and
in that case only), we can actually compute that mesh (because we are
out of depsgraph evaluation).
Binding and unbinding *has* to happen outside of 'normal' depsgraph
evaluation of modifiers now that we have CoW, otherwise persistent data
stored in modifier data are always lost!
Note that this is only the first step of the fix; modifiers code also needs
some work. The SurfaceDeform one is in the next commit, the Laplacian case is
much, much more complicated to handle, given how it uses its cached data. :(
This would overlap with buttons in the header. It's smaller to hit, but
still wider than the outlines for resizing, so hopefully it's fine.
Differential Revision: https://developer.blender.org/D4033
Was initially reported when painting on a mesh with an armature,
which was failing due to a missing bbone cache. The issue was
deeper, and was related to which object was used to
calculate crazyspace.
The layout changed when the radius property was added to shape
keys in 2.8, but the RNA code wasn't updated.
Also, even before that, the code didn't do anything to correctly
handle mixing sub-curves of different type (nurbs vs bezier) in
the same Curve object. Now that case is handled correctly but not
very efficiently by allocating a mapping table when necessary. To
recover some performance, a custom index lookup function is added.
When enabling annotations with a grease pencil object, the GP object must be set to Object mode because annotation draw and GP draw are incompatible.
This is something what is caused by OCIO library. The patch
has been submitted there:
https://github.com/imageworks/OpenColorIO/pull/638
For until it is refined and checked we do workaround from
our side.
Solves weird situation when default display name is queried
from OCIO, but Default view being assumed to be set for it.
Now view is initialized to a default view of that display.
That one was flagged as useless since 2.77, and only kept 'to be sure'
everything was OK. This was years ago now, and never got any report on
this, so 2.8 sounds like a good time to nuke it.
Nice side-effect of using the new __annotations__ thingy to store
dynamically-generated fields in a class: the __annotations__ dict is not
ensured to exist for a given class, so we may end up modifying one of the
parents' one!
Passing depsgraph instead of scene, since a scene does not fully define the
state of object you want to use for the BVH.
Also, mesh_create_eval_final_view and mesh_create_eval_final_render are pretty
much the same, so mesh_create_eval_no_deform and
mesh_create_eval_no_deform_render are as well.
Issue reported on: T58734
Reviewers: sergey
https://developer.blender.org/D4032
No functional changes, but it makes all the coordinates more consistent
(going from small to larger values). It helps debugging in the future
to be able to rule out vertex order as a culprit.
The top-left and bottom-right corners were creating the new area in the
wrong place.
Blender 2.7x only had action zone corners in the top-right, and
bottom-left corners. So it had some hardcoded assumptions based on that.
This commit feels a bit like a hack, but I think it may be fine.
Bug reported via IRC, how to reproduce:
* Change shading to Rendered.
* Split viewport from the top-left corner.
Since it seems that CD_ORIGINDEX is not available for loops,
the only choice is to simply use the loop normals already
computed by depsgraph after evaluating modifiers.
This revealed a bug where the Auto Smooth settings would be lost
from the mesh after complex modifiers, or after edit mesh to mesh
conversion, so restoring them is needed to get correct results.
This only happens after a certain wireframe threshold.
We sort triangles into 2 bins (start and end of the buffer) based on a
threshold and just draw the first bin if the wireframe slider is low enough.
This optimization is disabled for deformed meshes when playback is active.
This optimization is only implemented for mesh objects for now.
This should help resolve (to some extent) T58188.
This only happens after a certain threshold.
We sort triangles into 2 bins (start and end of the buffer) based on a
threshold and just draw the start bin if the wireframe slider is low enough.
This optimization is disabled for deformed meshes.
This should help resolve (to some extent) T58188.
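A rough sketch of the two-bin idea described above (illustrative only, not the actual draw-cache code):
```
# Triangles whose wire "sharpness" passes the threshold go to the start of
# the buffer, the rest to the end, so drawing only the first bin is enough
# when the wireframe slider is below the threshold.
def sort_into_bins(triangles, sharpness, threshold):
    first_bin = [t for t, s in zip(triangles, sharpness) if s >= threshold]
    second_bin = [t for t, s in zip(triangles, sharpness) if s < threshold]
    return first_bin + second_bin, len(first_bin)  # reordered buffer + first bin size
```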
The issue was caused by the transflag set in geometry evaluation
never being copied back to the original object.
Now we have a dedicated operation which does all such copying
back to the original object, so we don't have to worry about
atomic assignments or what gets set where.
Still need to move boundbox to the same function, but it
needs some careful doublechecking first.
This seems to be a bug in OpenSubdiv. For now simply use the Catmark
subdivision scheme with infinitely sharp edges.
Later on it either gets fixed in OpenSubdiv or we do bilinear
subdivision on our side.
Localized datablocks are not supposed to have animation
data associated with them.
There was an easy way to reproduce assert failure: toggle
animation decorator for Viewport Display -> Color.
COW nodes in the graph are mostly connected via a relation type
that doesn't propagate the update flags. Unfortunately, due to
the scheduling implementation that means the relations don't
actually guarantee execution order for indirect dependencies.
Relations also don't guarantee order in case of cycles.
As mentioned in IRC, the simplest way to fix possible problems
is to execute all COW nodes as a separate execution stage. This
seems to fix crashes with Data Transfer modifier in a cycle.
Staging works by simply delaying actual scheduling of tasks for
non-COW nodes until the second run of schedule_graph.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4027
To make the pool more usable for running multiple stages of tasks,
fix local queue handling in BLI_task_pool_work_and_wait.
Specifically, after the wait loop the local queue should be empty,
or the wait part of the function contract isn't fulfilled. Instead,
check and run any tasks in queue before the wait loop.
Also, add a new function that resets the suspended state of the pool.
Add special handling for both edge cases (:p):
* 180° is same as no splitting by angle;
* 0° is same as splitting on all edges unconditionally.
In both cases we can also avoid computing poly normals.
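For reference, the two extremes as they appear from the Python API (a hedged example which assumes the angle comes from the mesh Auto Smooth setting and that the active object is a mesh):
```
import math
import bpy

mesh = bpy.context.object.data            # assumes the active object is a mesh
mesh.use_auto_smooth = True

mesh.auto_smooth_angle = math.radians(180.0)  # 180 degrees: same as no splitting by angle
# mesh.auto_smooth_angle = 0.0                # 0 degrees: split on every edge
```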
This adds an elliptical arc primitive.
Press CKEY to toggle between a closed/open arc.
Press FKEY to flip the arc.
Additional changes to gpencil primitives.
Increases default edges of circle to 64.
Keymap changes to allow primitives to be drawn with Shift or Alt key.
Allow Plus/Minus key to adjust number of edges.
Missing: Toolbar icon
Differential Revision: https://developer.blender.org/D4024
After update of the mesh some of that data is so broken that using
it would crash. To reduce the risk of crashes in case of dependency
cycles, clean it up immediately.
Presets were not updated when parameters were changed in rBe3d31b8dfbdc.
Note that I will also look into generating more resilient py code for that
kind of preset, since this will also affect any custom presets made by
users...
The hang was due to the nodes being "evaluated" for every incoming link.
Solution: only evaluate once per nodetree.
Also merge the tagging of SSS and SSR into one traversal only.
We separate the background and foreground shading passes to be able to make
the object id pass optional if we don't need it.
This saves a bit more memory. Also, not clearing all rendertargets saves
some GPU time too.
some GPU time too.
We exploit the fact that we are using the metallic workflow for material
and pass the metallic parameter instead of the specular color.
Pack the front facing bit in the color buffer only for matcap display.
Change buffer formats to use as few bytes as possible.
Also don't request buffers that we won't use.
Saved 40MB on 2K screen on StudioLight + Shadows + Specular Lighting.
Includes several cleanups.
Move all mask-related fields from Object and OperationDepsNode
to Object_Runtime and IDDepsNode. Auto-apply DEG_TAG_GEOMETRY
if the mask changes after DEG rebuild. Update DEG API and all
code that uses it.
This fixes "source mesh data is not ready" errors from Data
Transfer modifier when parameters are changed in the UI after
the recent mesh_get_eval_final fix.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4025
Our mesh validation was so far only checking the CD layout, not the actual
data. While this might only be needed for a few types, this is a
required addition for things like imported UVs, else we have no way to
avoid nasty things like NaNs & co.
Note that more layer types may need that callback, time will tell. For
now it is added to some obvious missing cases...
Previously the shift key for line primitives only allowed diagonals.
This change allows the line to constrain to vertical and horizontal lines.
Differential Revision: https://developer.blender.org/D4012
Ensure we use lists for keymap items and item properties.
This means scripts can access keymap definitions from other layouts,
manipulating them without sometimes encountering a tuple that needs
to be converted into a list.
Instead of doing a lot of alpha blended drawing with jittering, use the
fragment shader to do the masking using a circle mask.
This is much simpler and requires much less resources.
Hopefully this may solve the issue we have with Intel's UHD Graphics 620
on Linux.
The cause is that FOREACH_OBJECT_IN_MODE_BEGIN assumed that the active
object is in the correct mode, which is wrong in this case. It also
only considered objects of the same type as active, which had to be
replaced with an explicit type parameter.
Fix the old code that propagates selection changes to the
evaluated mesh directly without rebuilding, and avoid tagging
DEG_TAG_COPY_ON_WRITE if it succeeds.
It's a very bad idea to call this on non-COW instances - see T58150.
Also, when rebuilding mesh it's better to accumulate mask flags to
avoid possible repeated rebuilds from different users.
There was a bug due to a non-aligned struct in the DNA that prevented us
from increasing the size of the userdef light array.
Since the studio lights are now presets and stored in external files,
there is no need to keep backward compatibility with these lights.
Remove the old array and create a new one.
Add blue tint light for specular.
Shadow focus lets the user choose how hard the shadow transition is.
A harder shadow transition can be used for stylistic effects or more uniform
shading.
Make shadow orientation respect the same orientation as the studio light
(view from +Y direction aka. front view). Make the default shadow direction
more similar to the default light position (the default light object, not
the default studio lighting).
Texture paint code was retrieving the evaluated mesh from the
original object, which isn't supposed to happen, so the cached
mesh isn't properly cleaned up by Edit Mode toggle.
Note that I am not sure this is actually needed, since switching to that
mode does not actually use any eval data; it's only needed during init
of the first stroke... But when in doubt, it won't hurt to have it here anyway.
This reverts commit 3f31c28a02.
Gives issues zooming, could be resolved but it mostly worked OK before,
and it's not a priority to spend time on, so leave as is for now.
Now the original dist is used instead, since using the distance between
the camera and the view's offset may seem random from the user's POV.
This addresses strange behavior noticed in T56934.
* Move the curvature computation to the cavity pass: One can argue it's not
the best performance-wise (it gets a tiny perf penalty if it is done
alone without the SSAO), but it makes the code cleaner and considerably
reduces the number of possible shader variations.
* Lower shader variations to 2^8 instead of 2^12
Now, when trying to add an annotation, if a grease pencil object is selected, the object is first deselected. This solution is not perfect, but it's better than cancelling the annotation.
Thanks Dalai for his help.
This option is per viewport.
Having view-space shading makes sense when working on isolated objects, as
if you were holding them in your hands. But for whole-scene work, it is
better to have the lighting fixed, to get a better spatial representation.
This changes a bit how the userprefs solid lights works. They are not
visible until enabling the "Edit Solid Light" checkbox. Once enabled the
current studiolight used for solid mode will be overwritten.
Once the lighting settings are tweaked, the user can click the
"Save as Studio light" button to save the current settings.
This makes it easy to create new lighting without messing the other
presets.
The studio lights are stored as ASCII files on the disk using a dead
simple custom format.
The UI/UX is not perfect and will be improved in other commits.
Also includes:
* Separate LookDev HDRI selection from Solid Lights
* Hide LookDev HDRIs from the Solid Lights selection list
For most brushes, texture painting uses a special mask accumulation
table in order to ensure that the amount of added color only increases
when the same pixel is touched multiple times by the stroke.
Unfortunately, only the mask texture was added to the mask before
this check, while normal, stencil, texture alpha masks were applied
after this check. This means that the check can pass if e.g. the
pressure is increased, but the final mask value is actually lower.
One might think that the mask values are fixed per pixel, but with
symmetry that isn't true. The result is a nasty stripe artifact due
to the discrete cutoff nature of the accumulation test.
In order to fix this, apply all masks before accumulation.
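A minimal sketch of the accumulation rule described above (illustrative, not the actual paint code): all masks are combined first and only then clamped against the stored per-pixel value, so the paint amount can never decrease within a stroke.
```
# Illustrative only: combine *all* masks before the accumulation test.
# If only part of the mask were applied before the max(), a higher pressure
# could pass the test while the final mask value is actually lower,
# producing the stripe artifacts mentioned above.
def accumulate_mask(stored, brush_mask, texture_mask, stencil_mask, alpha_mask, pressure):
    combined = brush_mask * texture_mask * stencil_mask * alpha_mask * pressure
    return max(stored, combined)
```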
This allows primitives to be drawn from the center using the ALT key.
Also fixes SHIFT constraint not working correctly in all directions.
Both options can be used together.
Differential Revision: https://developer.blender.org/D4009
Not sure what those #ifdef's were supposed to do exactly... But one
thing is for sure: clearing that flag in particle settings after the first
encounter would prevent transferring it properly to other objects that
use the same particle settings.
The original offset was wrong because it applied a constant to
homogeneous coordinates (the actual depth is z/w), which broke
totally if the near clip distance was reduced.
A correct depth offset has to take slope into account like
glPolygonOffset in order to avoid dotted lines caused by
interpolation precision variations. When drawing wire lines
however only the slope of the line itself is accessible, so
also generally increase the offset when the object is close.
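For reference, a small sketch of the glPolygonOffset-style formula this follows (the function and variable names are mine, not the shader's): the offset scales with the local depth slope instead of being a constant added to the homogeneous coordinate.
```
# Illustrative only: glPolygonOffset-style bias, o = m * factor + r * units,
# where m is the maximum depth slope of the primitive and r is the smallest
# resolvable depth difference. For wire lines only the line's own slope is
# available, so the offset is additionally increased for close objects.
def depth_with_offset(depth, slope, factor, units, r):
    return depth + slope * factor + r * units
```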
Sculpt (and paint) modes rely on valid evaluated data at their initialization.
Added code to ensure that in `ED_object_mode_toggle()`, when the relevant
toggle operator requires it (looks like sculpt/paint should be the only
ones affected, although particle edit may be too...).
The atomics library was trying to use LIKELY() and UNLIKELY() macros
defined in "user-space" code, but it is not guaranteed that user
code provides those macros, since they come from an unrelated
area.
Some space types are exposed as multiple space types,
previously the key binding to set the space type would use the last
used space-type.
Now pressing the key again cycles to the next space sub-type.
Without this, shortcut display is confusing since some space types share
a key. Keymap display will need to be updated to support this.
This commit adds a sample-based profiler that runs during CPU rendering and collects statistics on time spent in different parts of the kernel (ray intersection, shader evaluation etc.) as well as time spent per material and object.
The results are currently not exposed in the user interface or via Python yet; to see the stats on the console, pass the "--cycles-print-stats" argument to Cycles (e.g. "./blender -- --cycles-print-stats").
Unfortunately, there is no clear way to extend this functionality to CUDA or OpenCL, so it is CPU-only for now.
Reviewers: brecht, sergey, swerner
Reviewed By: brecht, swerner
Differential Revision: https://developer.blender.org/D3892
To make this consistent with left-click select, clicking outside any point now deselects all points.
Reduced the selection circle to get more precision; the radius used before was too wide.
Note: There is a minimum distance to consider a click outside the selection area.
The old onion skinning used in 2.7x has been ported and converted to 2.8. Only basic features have been included. For more advanced onion skin features, use grease pencil objects.
Onion Skin is supported in View 3D and Sequencer.
Not exactly sure why we did not have a cached displist for the bevel object
here... But anyway, that conversion operation should really happen
outside of the depsgraph evaluation area, so it makes sense to do it when
generating geometry for rendering, imho. Also solves issues like losing
hidden parts of the curve/surface, etc. Still using viewport resolution
for curves, though.
Meshes from evaluated objects may already have modifiers applied, but
that's not the case for curves, we need to do that when converting them
to meshes.
This is in order to have more flexible lighting presets in the future.
The diffuse lighting from hdris was nice but lacked the corresponding
specular information. This is an attempt to make it possible to customize
the lighting and have a cheap/easy/nice-looking pseudo-PBR workflow.
* Add cheap PBR to Workbench with fresnel and better roughness support.
This improves the look of the metallic surfaces and is easier to control.
* Add ambient light to studio lights settings: just a constant color added
to the shading.
* Add Smooth option to studio lights settings: This option fakes the
effect of making the light bigger, which makes the lighting smoother for this
light. Smoother lights get reflected like a background HDRI.
* Change default light settings to include the smooth params.
* Remove specular highlights from flat shading. (could be added back but
how do we make it good looking?)
* If specular lighting is disabled, use the base color without using metallic.
* Includes a lot of code simplification/cleanup/confusion fixes.
The idea is to make the main thread and job threads be scheduled
on CPU dies which have direct access to memory (those are NUMA
nodes 0 and 2).
We also do this for new EPYC CPUs since their NUMA nodes 1 and 3
do have access, but only to a higher range of DDR slots. By preferring
nodes 0 and 2 on EPYC we make it so users with partially filled
DDR slots have fast memory access.
One thing which is not really solved yet is localization of
memory allocation: we do not guarantee that memory is allocated
on the DDR slot closest to the NUMA node, and hope that the memory
manager of the OS acts in our favor.
See T57857 for discussion. This reverts:
"Outliner: Do not gray out empty collections"
4521d3e707.
"Remove eye column from the outliner"
fd16b35997.
"Fix/workaround issues in pose and edit mode"
6d2e2e30d5.
"Per view-layer collection visibility"
4de6a210c6.
Tools can define a function that generates the tooltip; it takes the
tool's keymap as an argument, which can be used
to extract keys to include in the tip.
A few reasons motivating this change:
* It works well for all devices: mouse, trackpad, and tablet pens.
* For beginners or users coming from other software, it's easier to get
started and avoids an initial stumbling block.
* Many users in 2.7 (about half?) were already using left click select, so
combined with the above advantages it makes for a practical default.
Note that we continue to support right click select, as many experienced
Blender users (and developers) see efficiency advantages in this approach.
The option to switch is in the first time setup splash screen, and in the
user preferences.
Curve/surface/text final data may be an evaluated mesh instead of a
displist, when some modifiers (constructive ones in particular) are
applied to it.
Note that this is just getting feet wet, whole draw code suffers from
the same issue! :P
We still control this in the viewport collections visibility menu. But
now we are actually changing the visibility of the collections, not of
the objects.
If a collection is indirectly invisible (because one of its parents is
invisible) we gray it out.
Also if you click directly in the collection names, it "isolates" the
collection by hiding all collections, and showing the direct parents and
all the children of the selected collection.
Development Note:
Right now I'm excluding the hidden collections from the depsgraph.
Thus the need for tagging relations to update.
If this proves to be too slow, we can change.
To control per-viewlayer, per-object visibility users should either use
the menu or the H shortcuts (H, Alt+H, Shift+H).
Following next will be to have proper per-viewlayer collection
visibility. This will replace the functionality currently at the
viewport collections visibility menu.
Previously we tried this but reverted (see 64d40c82c3)
because there wasn't a predictable set of keys to use global-space.
Now the keys are swapped:
- 'GX' always transforms in the user defined orientation.
- 'GXX' always transforms in global space.
As before 'GXXX' cycles back to disabling constraints.
This does have a down side that GXX won't be used for local-space
when the user has global space set.
Also, when global is the user-orientation, pressing GX and GXX
does the same thing.
Note: examples here use GX but could be any transform-mode/axis.
This has been a contentious topic: Artists at the Blender-Studio prefer
this behavior, while the user community overwhelmingly prefers 2.7x
operator search. Previously this defaulted to accessing tools
(eg: Space-T activates transform.. Space-R rotate etc) which I still
believe is a better long term default - otherwise we don't have
efficient tool switching for a system we intend to make more use of,
nevertheless as far as I can tell users haven't been keen on adopting
this so far. Show the preference on the setup screen since many users
don't animate and may want to quickly search or switch tools.
When a modifier depends on some other object's position, then it also
depends on its own position; this also has to be told to the depsgraph.
Fixes several modifiers where moving the target would update the modifier,
while moving the modified object itself would not.
Also fixes a few issues (like meshdeform's EM variant not using editmesh
data), and adds a few optimizations (like only generating that source
mesh when we do have a vgroup defined in parameters, for modifiers only
using it to access vgroup)...
These changes are necessary. Need to mark vertices of edges passed
in geom; also the normals.out slot has a custom element type, not
ELEM, so need to prevent attempt by python code to convert it to
an elem. But this leaves a memory leak. I will rework code to not
use normals.out slot at all, but that's a bigger fix.
Now there is a crash in a different place (GPU code). Think that if
using the Op on its own (instead of from edbm_bevel_calc), there needs to
be a dependency graph update and maybe more?
Fixes T57931: Particle weight edit mode is not supported.
There is a bug that prevents refresh of the toolsettings on which
the weight / non-weight display selection is based (see T58086).
* The exporter now gets the view layer from the context
instead of the depsgraph.
* The depsgraph is now fetched only on demand, since the graph
is not always needed for exporting (currently only for armature exports).
Was caused by the NC_SCENE notifier being skipped with a non-scene reference;
showed up e.g. in the timeline not updating keyframes/cachelines.
Maniphest Tasks: T57929
Differential Revision: https://developer.blender.org/D4000
Accessing ob->bb directly is not a good idea anyway.
Still, would like to know why/where this bbox is freed, since it is
allocated at least once by depsgraph eval, as part of
`BKE_object_handle_data_update()` function...
Not sharing caused duplication in the keymap and
required a factory class generator.
Simplify tool & keymap definitions by sharing them.
It's highly unlikely we will ever want these to use different keys
once they're set as the active tool.
B-Bone shape is a non-trivial computation, so access to
the results would be useful for Python scripts working with
B-Bones, e.g. rig generation.
This exposes both final segment matrices, and the tangent
vectors computed from the custom handle bones.
Since the handle tangents use the axis+roll orientation math
of edit bones, add matrix conversion static methods to Bone.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3983
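A hedged sketch of what such access could look like from Python; the accessor names below are assumptions based on the description above, not verified API (check the Python API reference for the exact names).
```
# Hedged sketch; method names are assumptions, object/bone names placeholders.
import bpy

ob = bpy.data.objects["Armature"]
pbone = ob.pose.bones["BBone"]

# Iterate over the final per-segment matrices of the B-Bone shape.
for i in range(pbone.bone.bbone_segments):
    segment_matrix = pbone.bbone_segment_matrix(i)   # assumed accessor name
    print(i, segment_matrix.translation)

# The commit also mentions static axis+roll <-> matrix conversion helpers on
# Bone; their exact names are not shown here.
```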
Sometimes the text doesn't fit. What to do in this case?
* Overflow: The default behaviour is still to overflow the text.
* Truncate: If any text box is defined, we can also simply not draw the text
that goes outside the text boxes.
* Scale to Fit: For single-text-box texts we can scale down the text until
it fits (see the sketch after this commit's notes).
To support textboxes we are bisecting the scale until we find a good
match. Right now the hardcoded iteration limit is 20, and the threshold 0.0001f.
An alternative in the future would be to tackle this by integrating existing
layout engines such as HarfBuzz.
Note: Scale to fit won't work for multiple text-boxes if any of them has
either width or height as zero.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3874
Feature development sponsored by Viddyoze.
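A small sketch of the scale-to-fit bisection mentioned above (illustrative; `fits` is a hypothetical callback standing in for the actual text layout test).
```
# Bisect the font scale until the laid-out text fits the text box, mirroring
# the iteration limit (20) and threshold (0.0001) mentioned above.
def fit_text_scale(fits, lo=0.0, hi=1.0, max_iterations=20, threshold=1e-4):
    for _ in range(max_iterations):
        if hi - lo <= threshold:
            break
        mid = 0.5 * (lo + hi)
        if fits(mid):
            lo = mid   # text fits at this scale, try a larger one
        else:
            hi = mid   # text overflows, try a smaller one
    return lo          # largest scale known to fit
```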
Implement strand selection visualisation, but without any shading.
I think it is not the overlay's job to draw the strands shaded.
We can already view the children strands shaded for now, but we might add
an option to draw the shaded strand instead of (or in addition to) the
guide strand.
This modifier only uses mesh to get vgroup, which is only needed in case
modified object is indeed a mesh! Building a mesh from curve here is not
only useless and time-consuming, it will also easily fail the assert
about same number of vertices!
Note that surface_project and subsurf option also need more work at some
point, but this is probably not that urgent for now.
Also, use MOD_get_vgroup() helper in modifier code itself and pass
resulting MDeformVert & index to BKE_shrinkwrap's `shrinkwrapModifier_deform()`,
this is simpler and avoids duplicating vgroup handling code.
Related to T57972.
This modifier only uses mesh to get vgroup, which is only needed in case
modified object is indeed a mesh! Building a mesh from curve here is not
only useless and time-consuming, it will also easily fail the assert
about same number of vertices!
Also, use MOD_get_vgroup() helper in modifier code itself and pass
resulting MDeformVert & index to BKE_curve's curve_deform_verts(),
this is simpler and avoids duplicating vgroup handling code.
Also fixes crash when used on lattice.
Related to T57972.
This modifier only uses mesh to get vgroup, which is only needed in case
modified object is indeed a mesh! Building a mesh from curve here is not
only useless and time-consuming, it will also easily fail the assert
about same number of vertices!
Also fixes crash when used on lattice.
Related to T57972.
There is a new `bpy.app.timers` api.
For more details, look in the Python API documentation.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3994
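For example, a minimal use of the new API (a small sketch; the callback name is arbitrary):
```
import bpy

def print_every_two_seconds():
    print("tick")
    return 2.0  # delay in seconds until the next call; return None to stop

# Run the function for the first time after one second.
bpy.app.timers.register(print_every_two_seconds, first_interval=1.0)
```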
In 2.79 hiding works in paint modes with selection enabled,
so it is a missing feature. This implements it in texture
paint overlays and in workbench base shading.
Reviewers: fclem
Differential Revision: https://developer.blender.org/D3989
Now it's possible to define the blend mode between layers, including the option to clamp the layer using the underlying layers.
Also, a new Simplify option has been added to disable blend layers.
The approach is fairly simple, just apply an edge detection filter to the view normal and scale the brightness based on that.
The overlay is disabled at object boundaries to avoid dark lines around objects.
Generally, this implementation follows the proposal of @monio at https://blender.community/c/rightclickselect/J9bbbc.
The changes are:
- Dynamic filter radius (on high-DPI displays, a radius of two is used)
- Options to reduce the strength of both ridges and valleys
- Tweaked function for the strength reduction (the original method actually had a local maximum, resulting in a brighter line inside valleys)
- Multiplication for blending instead of overlay, which doesn't work reliably with scene-referred intensities
- Renamed to point out the distinction between it and the SSAO-based cavity overlay
Reviewers: jbakker
Reviewed By: jbakker
Subscribers: billreynish, manitwo, linko, monio
Differential Revision: https://developer.blender.org/D3617
This fixes conflicts where the tool and editor keymap use different event
types. Tools need to be able to use mouse buttons on PRESS without triggering
CLICK events in the editor keymap.
This commit makes it so that subsurf/multires modifiers will respect
the WITH_OPENSUBDIV option. The WITH_OPENSUBDIV_MODIFIER option is
now gone.
For artists it means that the subsurf modifier will behave the same as
planned for 2.80. Multires will now support sculpting, but it has some
known limitations. Those will be worked on before the final release.
If OpenSubdiv is disabled, no subsurf/multires functionality will
be present.
For the details see:
https://wiki.blender.org/wiki/Reference/Release_Notes/2.80/Modeling#Subsurf.2FMultires
The old behavior with two clicks evolved out of a gesture system, and it can
have some advantages if you want to press more keys to constrain for example. But
this seems a better default.
This is a local fix.
The problem with duplicate looptris still remains.
That is, it can still be released in one place but not upgraded in the other.
(note: setting the looptris to NULL in the evaluated mesh and assert whether it is still NULL when the mesh is freed could indicate where those cases are).
Go for the simple solution for now (disable auto-texspace in evaluated mesh).
Proper fix would be part of known TODO redesign of bbox handling.
Solution suggested by @sergey, thanks!
For users that want the 2.7 LMB keymap behavior, this provides a way of
working without tools interfering. For RMB select this operator is quite
redundant with the Cursor tool, we may have to find a solution for that.
Note that we also might later add transform tweak to the transform tools,
when nothing is selected. But this is important for existing users who
preferred the existing workflow.
mesh_finalize_eval() may set ob->data to the evaluated mesh; this needs to be
done *after* the call to BKE_mesh_texspace_copy_from_object(), else that one
is meaningless.
Related to investigations on T57985, but does not solve it at all. :(
Think the whole bbox code needs a complete rewrite; one can see a lot of
old history in it, it has way too many functions doing
nearly-the-same-thing(c), it spreads in very inconsistent ways across a
lot of files, ... But I have no time for this right now, and it would not be
a good idea with Beta coming up close anyway.
So for now going the simple and (hopefully) sane & safe way: forbid
object-level functions to affect data-level bbox. Mesh and curve ones
would generate bbox in obdata instead of object, for some reason (all
other obdata types only use object's bbox ever). That may have been
working in old ages, but with CoW and threaded depsgraph this is just
calling for piles of issues.
Implements the first changes for T54115:
* Rename "User Preferences" window to "Settings" in the UI.
We'll likely put workspace settings in there, separate from the global
user settings. System settings should become separate from user
settings in future to allow settings for specific hardware.
* Add sidebar region for navigation (scrolls independently).
Addresses space problems, so we can add more categories as needed now.
* Increase size of Settings window to compensate new navigation bar.
* Group sections into User Preferences and System.
Icons for section groups by Andrzej Ambroz. Thanks!
* Bumps subversion for file compatibility.
Screenshot: https://developer.blender.org/F5715337
I also added categories for future work, but commented them out.
We may also want to redesign contents of each section now.
Reviewers: brecht, campbellbarton
Differential Revision: https://developer.blender.org/D3088
Design Task: https://developer.blender.org/T54115
If backface culling is off, the user obviously wants to paint on
back faces, so the normal angle cutoff designed to prevent painting
at glancing angles shouldn't do the culling as a side effect.
Bring back per-viewport localview. This is based on Blender 2.79.
We have a limit of 16 different local view viewports.
We are using both the numpad /, as well as the regular /.
Missing features:
* Hack to make sure lights are always visible.
* Make rendered mode with external engines support this as well
(probably just need to support this in the RNA iterators).
* Support over 16 viewports by taking existing viewports out of local view.
The code can use a cleanup pass in the future to unify the test to see
if an object is visible (or we can use TESTBASE in more places).
Logic to update child windows on workspace changes should simply ignore
temporary child windows. Users opened those for a specific purpose (i.e. edit
user preferences or show render result). Blender should not come in and
repurpose it.
- Class constructors without body (only attribute initialisations)
can safely be kept in the class header files
- Constructor variables should be initialized in the order of their
definition in the header files
This change is also aimed at removing a couple of
build warnings from the linux builds.
Computing the shape of a B-Bone is a quite expensive operation, and
there are multiple constraints that can access this information in
a variety of useful ways. This means computing the shape once per
bone and saving it is good for performance.
Since the shape may depend on the position of up to two other bones,
often in a "cyclic" manner, this computation has to be a separate
node with its own dependencies.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D3975
Most important changes are in the Animation exporter and Animation Importer.
There is still some cleaning up to be done. But the Exporter/Importer basically
work within Blender 2.8
Some details:
User Interface:
The interface has been reorganized to look more like the FBX interface.
New options in user interface:
* keep_keyframes:
When sampling, the distance between 2 keyframes is defined by
the sampling rate. Furthermore, the keyframes defined in the
FCurves are not exported. However, when this option is enabled,
the defined keyframes will also be added to the exported fcurves.
* keep_smooth_curves:
When sampling we do not use FCurves, so we also have no curve handles
for smooth exporting. However, when this option is enabled, Blender
does its best to recreate the handles for export. This is a very
experimental feature and it is known to break when:
- the exported animated objects have parent inverse matrices
different from the unit matrix
- The exported objects have negative scaling
There may be many other situations when this feature breaks.
This needs to be further tested. It may be removed later or replaced
by something less wonky.
BlenderContext:
is a new class that contains the bridge to Blender. It contains
pointers to the current export/import context plus derived values
of Depsgraph, Scene, Main
Reporting:
I reorganized the output on the Blender Console to become more
informative and more readable
Preservation of Item names:
name attributes are now encoded with XML entities. This makes
sure that I can export/import names exactly as defined in the tool.
This affects material names, bone names and object names.
Hierarchy export:
* Object and Bone Hierarchies are now exported correctly
by taking the Blender parent/child hierarchy into account
* Also export intermediate objects which are not selected
Problem:
When we export an Object Hierarchy, then we must export
all elements of the hierarchy to maintain the transforms. This
is especially important when exporting animated objects, because the
animation curves are exported as relative curves based on the
parent-child hierarchy. If an intermediate animated object is missing
then the exported animation breaks.
Solution:
If the "Selected" Option is enabled, then take care
to also export all objects which are not selected and hidden,
but which are parents of selected objects.
Node Based Material Importer (wip):
Added basic support for Materials with diffuse color and
diffuse textures. More properties (opacity, emission) need
changes in the used shader.
Note: Materials are all constructed by using the principled BSDF shader.
Animation Exporter:
* Massive optimization of the Animation Bake tool (Animation Sampler).
Instead of sampling each fcurve separately, I now sample all
exported fcurves simultaneously. So I avoid many (many!)
scene updates during animation export.
* Add support for Continuous Acceleration (Fcurve handles)
This allows us to create smoother FCurves when importing Collada
animation curves. Possibly this should become an option instead of
a fixed import feature.
* Add support for sampling curves (to bake animations)
* The animation sampler can now be used for any animation curve.
Before, the sampler only looked at curves which are supported by
the standard Collada 1.4. However the Collada exporter currently
ignores all animation curves which are not covered by the 1.4.1
Collada Standards. There is still some room for improvements
here (work in progress)
Known issues:
* Some exports currently do not work reliably; among those
are the camera animations, material animations and light animations.
Those animations will be added back next (work in progress)
* Exporting animation curves with keyframes (and tangents)
sometimes results in odd curves (when a parent inverse matrix is involved).
This needs to be checked in more depth (probably it can not be solved).
* Export of "all animations in scene" is disabled because the
Collada Importer can not handle this reliably at the
moment (work in progress).
* Support for Animation Clip export
Added one extra level to the exported animations
such that now all scene animations are enclosed:
<Animation name="id_name(ob)_Action">
<Animation>...</Animation>
...
</Animation>
Animation Importer:
* Import of animations for objects with multiple materials
When importing multiple materials for one object,
the imported material animation curves have all been
assigned to the first material in the object.
Error handling (wip):
The Importer was a bit confused as it sometimes ignored fatal
parsing errors and continued to import. I did my best to
unconfuse it, but I believe that this needs to be tested more.
Refactoring:
update : move generation of effect id names into own function
update : adjust importer/exporter for no longer supported HEMI lights
cleanup: Removed no longer existing attribute from the exporter presets
cleanup: Removed not needed Context attribute from DocumentExporter
fix : Avoid duplicate deletion of temporary items
cleanup: fixed indentation and white space issues
update : Make BCAnimation class more self contained
cleanup: Renamed classes, updated comments for better reading
cleanup: Moved static class functions to collada_utils
cleanup: Moved typedefs to more intuitive locations
cleanup: indentation and class method declarations
cleanup: Removed no longer needed methods
update : Moved Classes into separate files
cleanup: Added comments
cleanup: take care of name conventions
... : many more small changes, not helpful to list them all
In some instances, the number of control vertices of a hair could change mid-frame.
Cycles would then be unable to calculate proper motion blur for those hairs. This adds
interpolated CVs to fill in for the missing data. While this will not necessarily result in
a fully accurate reconstruction of the guide hair, it preserves motion blur instead of disabling it.
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: sergey, brecht, #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D3695
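A rough sketch of the interpolation idea (illustrative only, not Cycles' actual code): resample a motion step's control vertices to the count used at render time so every step lines up.
```
# Linearly resample a hair's control vertices so a motion step with a
# different CV count lines up with the render-time count, preserving motion
# blur instead of discarding it. Each CV is a tuple of coordinates.
def resample_control_vertices(cvs, target_count):
    if target_count <= 1 or len(cvs) == 1:
        return [cvs[0]] * max(target_count, 1)
    out = []
    for i in range(target_count):
        t = i * (len(cvs) - 1) / (target_count - 1)
        lo = int(t)
        hi = min(lo + 1, len(cvs) - 1)
        f = t - lo
        out.append(tuple((1.0 - f) * a + f * b for a, b in zip(cvs[lo], cvs[hi])))
    return out
```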
Explicitly tag copy-on-write from library remap. Previously, this
tag was used implicitly via geometry/transform tagging, which worked
OK for objects. For non-objects we do need to ensure all copies have
the correct pointer, and the only way to do so is to pass the tag explicitly.
There is probably more places in the library remap where this is
needed, but not being familiar with the code makes it difficult to
spot where possible tags are missing.
When dragging in a vertical or horizontal region,
there is no need to detect the drag axis.
Gives minor usability improvement for dragging over vertical tabs.
- Z shows pie menu (removed wire/xray toggles).
- Alt-Z toggles x-ray.
- Shift-Z toggles wireframe.
- Shift-Alt-Z toggles overlays.
Note that toggle overlays had no binding for 2.7x,
this is likely not a heavily used option and could even be left out.
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER% %BUILD_VS_YEAR%%WINDOWS_ARCH%" %TESTS_CMAKE_ARGS% %CLANG_CMAKE_ARGS% %ASAN_CMAKE_ARGS% %PYDEBUG_CMAKE_ARGS%
if "%BUILD_VS_YEAR%"=="2019" (
    set BUILD_PLATFORM_SELECT=-A %MSBUILD_PLATFORM%
) else (
    set BUILD_GENERATOR_POST=%WINDOWS_ARCH%
)
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER% %BUILD_VS_YEAR%%BUILD_GENERATOR_POST%" %BUILD_PLATFORM_SELECT% %TESTS_CMAKE_ARGS% %CLANG_CMAKE_ARGS% %ASAN_CMAKE_ARGS% %PYDEBUG_CMAKE_ARGS%
Man page description changes (old vs. new):
Old: blender \- a 3D modelling and rendering package
New: blender \- a full-featured 3D application
Old: is a 3D modelling and rendering package. Originating as the in-house software of a high quality animation studio, Blender has proven to be an extremely fast and versatile design instrument. The software has a personal touch, offering a unique approach to the world of Three Dimensions.
New: is a full-featured 3D application. It supports the entirety of the 3D pipeline - modeling, rigging, animation, simulation, rendering, compositing, motion tracking, and video editing.
Old: Use Blender to create TV commercials, to make technical visualizations, business graphics, to create content for games, or design user interfaces. You can easy build and manage complex environments. The renderer is versatile and extremely fast. All basic animation principles (curves & keys) are well implemented.
New: Use Blender to create 3D images and animations, films and commercials, content for games, architectural and industrial visualizatons, and scientific visualizations.