With the clear separation between data and state we need to make sure
operators and other areas are reading state from evaluated datablocks.
Code-wise this means that all evaluated values are to be read from the
datablock owned by the dependency graph, using DEG_get_evaluated_id() or
a similar helper.
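For example, a minimal sketch of the intended pattern inside an operator
(the exact context accessor is an assumption and may differ per branch):

    /* Sketch only: read evaluated state via the dependency graph,
     * never from the original datablock. */
    Depsgraph *depsgraph = CTX_data_depsgraph(C);  /* assumed accessor */
    Object *ob_orig = CTX_data_active_object(C);

    /* Ask the depsgraph for the evaluated copy that it owns. */
    Object *ob_eval = (Object *)DEG_get_evaluated_id(depsgraph, &ob_orig->id);

    /* Read evaluated values (e.g. the final transform) from ob_eval;
     * edits and writes still go to ob_orig. */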
Reviewers: brecht, mont29, campbellbarton, dfelinto
Differential Revision: https://developer.blender.org/D3036
Applied a similar fix to T54233 to get the "Record with NLA" feature working
with active action blending + influence settings. Extrapolation is explicitly
ignored though, as it shouldn't be used with this feature (i.e. it is already
disabled with the new strips and also on the animdata by default).
ViewRender was removed, which means we can't get the render engine for files
saved in 2.8. We assume that any files saved in 2.8 were intended to use Eevee
and set the engine to that.
A fix included with this is that .blend thumbnails now draw with Clay mode,
and never Eevee or Cycles. These were drawn with Solid mode in 2.7, and should
be very fast and not e.g. load heavy image textures.
Differential Revision: https://developer.blender.org/D3156
The depsgraph was always created within a fixed evaluation context. Passing
both risks the depsgraph and evaluation context not matching, and it
complicates the Python API, where we'd have to expose both, which is not
easy to understand.
This also removes the global evaluation context in main, which assumed there
to be a single active scene and view layer.
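As a hypothetical illustration of the resulting pattern (the function name
below is made up for the example; only the parameter change matters):

    /* Before: callers had to pass both and keep them consistent. */
    void example_eval_func(const EvaluationContext *eval_ctx,
                           Depsgraph *depsgraph, Object *ob);

    /* After: the depsgraph alone carries the evaluation state
     * (scene, view layer and evaluation mode). */
    void example_eval_func(Depsgraph *depsgraph, Object *ob);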
Differential Revision: https://developer.blender.org/D3152
This adds initial multi-object editing support.
- Selected objects are used when entering edit & pose modes.
- Selection & tools work on all objects, however many tools still need porting.
See T54641 for remaining tasks.
Indentation will be done separately.
See patch: D3101
- Wasn't clear which functions handle edit-bones.
- Mixed both ebone and edit_bone in names.
- Didn't use ED_armature_* prefix for public API.
See P655 to apply to branches.
This was only used for viewport rendering, where we can just pass the engine
type directly. There is no technical reason why we can't draw the same
depsgraph with different render engines.
It also led to some weird things like requiring a render engine for snapping
and raycast API functions.
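A hypothetical sketch of the direction (the draw function name is made up;
RenderEngineType is the existing engine-type struct):

    /* Before: the engine type was implied by the depsgraph/evaluation data. */
    void example_draw_view(Depsgraph *depsgraph, ARegion *ar);

    /* After: the same depsgraph can be drawn with any engine, so the
     * caller passes the engine type explicitly. */
    void example_draw_view(Depsgraph *depsgraph,
                           RenderEngineType *engine_type, ARegion *ar);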
Differential Revision: https://developer.blender.org/D3145
- Undo that changes modes currently asserts,
since undo is now screen data.
Most likely we will change how object mode and workspaces work
since it's not practical/maintainable at the moment.
- Removed view_layer from particle settings
(wasn't needed and complicated undo).
Regression caused by earlier commits to improve the automerge behaviour.
In this case, the problems only occurred when moving a selected keyframe
forwards in time to overlap an unselected keyframe.
The other approach was causing too much error in some cases (e.g. favouring
the lower-valued keyframes). This fix should make the resulting curves less
bumpy/jagged.
This commit removes an earlier attempt at optimising the lookups
for duplicates of a particular tRetainedKeyframe once we'd already
deleted all the selected copies. The problem was that, with that
optimisation in place, instead of getting rid of the unselected keys
(i.e. the basic purpose of this function), we were only getting rid of
the selected duplicates.
With this fix, unselected keyframes will now get removed (as expected)
again. However, we currently don't take their values into account
when merging keyframes, since it is assumed that we don't care so much
about their values when overriding.
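As a self-contained illustration of the intended behaviour (the struct and
function below are made up for the example, not the actual tRetainedKeyframe
code):

    #include <stdbool.h>

    typedef struct DemoKey {
        float frame;
        float value;
        bool selected;
    } DemoKey;

    /* Remove unselected keys that sit on the same frame as a selected key.
     * Their values are simply discarded, not merged into the survivor.
     * O(n^2), which is fine for a sketch. */
    static int remove_unselected_duplicates(DemoKey *keys, int len)
    {
        int dst = 0;
        for (int i = 0; i < len; i++) {
            bool drop = false;
            if (!keys[i].selected) {
                for (int j = 0; j < len; j++) {
                    if (keys[j].selected && keys[j].frame == keys[i].frame) {
                        drop = true;
                        break;
                    }
                }
            }
            if (!drop) {
                keys[dst++] = keys[i];
            }
        }
        return dst; /* new number of keys */
    }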
Merge selected keyframes that end up on the same frame
Currently, when scaling keyframes in the Dopesheet, if multiple
selected keyframes end up on the same frame post-scaling, they
would not get removed by the "Automerge" setting that normally
removes duplicates on the same frame.
This commit changes the behaviour so that when multiple selected
keyframes end up on the same frame, instead of keeping all of them
around on that frame (e.g. resulting in a column of keyframes with
different values), we merge them into a single keyframe (by averaging
their values). This should result in a smoother F-Curve with fewer
"stair-steps" that need to be carefully cleaned out afterwards.
Requested by @hjalti
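A self-contained sketch of the merge-by-averaging behaviour described above
(the struct and function are illustrative only, mirroring the earlier sketch,
and not the actual editor code; keys are assumed to be sorted by frame):

    #include <stdbool.h>

    typedef struct DemoKey {
        float frame;
        float value;
        bool selected;
    } DemoKey;

    /* Collapse all selected keys that share a frame into one key whose
     * value is the average of the group; unselected keys pass through. */
    static int average_selected_duplicates(DemoKey *keys, int len)
    {
        int dst = 0;
        int i = 0;
        while (i < len) {
            const float frame = keys[i].frame;

            /* [i, group_end) covers every key on this frame. */
            int group_end = i + 1;
            while (group_end < len && keys[group_end].frame == frame) {
                group_end++;
            }

            float sum = 0.0f;
            int num_sel = 0;
            for (int j = i; j < group_end; j++) {
                if (keys[j].selected) {
                    sum += keys[j].value;
                    num_sel++;
                }
                else {
                    keys[dst++] = keys[j];
                }
            }

            if (num_sel > 0) {
                /* Emit a single selected key carrying the averaged value. */
                keys[dst].frame = frame;
                keys[dst].value = sum / (float)num_sel;
                keys[dst].selected = true;
                dst++;
            }

            i = group_end;
        }
        return dst; /* new number of keys */
    }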