We usually don't silence might-be-uninitialized warnings (which is the only
thing that could explain setting the matrix to all zeroes), so that we can catch
such errors when using tools like Valgrind.
I don't get a warning here and the initializer was wrong, so removing it.
If it's _REALLY_ needed, please do a proper initialization.
This was because the poll callback was checking for the presence of an active layer.
If you just created an empty datablock and tried to paste, nothing would happen.
However, this check was kind of redundant anyway, as the operator would add a layer for
you if it didn't find one.
(Later this calculation should be moved into the iteration macro instead, since
it only needs to be applied once per layer along with the diff_mat calculation)
A common problem encountered by artists was that they would accidentally move
the 3D cursor while drawing, causing their strokes to end up in weird places in
3D space when viewing the drawing again from other perspectives.
This operator helps fix up this mess by taking the selected strokes, projecting them
to screen space, and then back to 3D space again. As a result, it should be as if
you had directly drawn the whole thing again, but from the current viewpoint instead.
Unfortunately, if there was originally some depth information present (i.e. you already
started reshaping the sketch in 3D), then that will get lost during this process.
But so far, my tests indicate that this seems to work well enough.
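Roughly, the operator boils down to a loop like the one below. This is only a
simplified sketch; project_point_to_screen() and unproject_screen_to_3d() are
hypothetical stand-ins for the actual view-space utilities.

    /* Simplified sketch of the reprojection idea, not the actual code. */
    for (int i = 0; i < gps->totpoints; i++) {
        float screen_co[2], world_co[3];

        /* Flatten the point onto the current view... */
        project_point_to_screen(view_data, &gps->points[i], screen_co);

        /* ...then push it back into 3D at the current drawing-plane depth,
         * discarding whatever depth the point had before. */
        unproject_screen_to_3d(view_data, screen_co, world_co);

        copy_v3_v3(&gps->points[i].x, world_co);
    }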
transparency.
The issue is that we are rendering to a 0..1 clamped sRGB buffer with
unpremultiplied alpha, where the correct thing to do would be to render
to an unclamped linear premultiplied alpha buffer. Then we would just
make fire purely emissive without affecting the alpha channel at all,
but that doesn't work here.
So for now, draw fire and smoke separately using different shaders and
blend modes, like it used to before the smoke programs were rewritten
(see rB0372b642).
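At the GL state level, the split amounts to roughly the following (a sketch of
the idea only; the exact blend factors and the draw_*_pass() helpers are
illustrative, not the actual code):

    /* Smoke: regular (unpremultiplied) alpha blending. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    draw_smoke_pass();

    /* Fire: purely emissive, so blend additively on top of the smoke. */
    glBlendFunc(GL_ONE, GL_ONE);
    draw_fire_pass();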
Application code can pass ubytes; Gawain converts them to the float vec4 expected by the shader.
For now the conversion is simple linear. We can add sRGB support later if needed.
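For reference, the simple linear conversion amounts to something like this
(illustrative only, not the exact Gawain code):

    /* Illustrative only: linear ubyte -> float normalization, no sRGB decode. */
    static void convert_ubyte4_to_vec4(const unsigned char in[4], float out[4])
    {
        for (int i = 0; i < 4; i++) {
            out[i] = (float)in[i] * (1.0f / 255.0f);
        }
    }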
General reshuffling of defines and spacing/brace usage for consistency.
In particular:
* When defining types, don't mix pointers and non-pointer types on the same line,
to avoid confusion (see the example below)
* As much as possible, have all defines at the top of each block instead of
scattered haphazardly throughout the code
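For the first point, the classic pitfall looks like this:

    /* Confusing: only 'a' is a pointer here, 'b' is a plain int. */
    int *a, b;

    /* Preferred: keep pointer and non-pointer declarations separate. */
    int *a;
    int b;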
Assigning NULL to scopes' data pointer in the 'non-UI readfile' context was terribly wrong for sure;
if we really wanted to reset them here, we should free them first.
But we can rather just ignore them here: those are purely runtime data managed by the image editor,
so there is no need to touch them.
Significant rewrite with some improvements.
Maintain visual hierarchy of the grid:
- emphasized lines draw atop normal lines
- axes draw atop all lines (same as before)
Draw axes only once, not twice.
Return early if nothing to draw.
Single draw call for the default case (grid floor with X and Y axes).
Z axis needs a second draw call because it uses 3D coordinates.
Part of T49043
* Stroke editing functions should be in gpencil_edit.c not gpencil_data.c
(the latter is only for handling "CRUD" operations on things like
layers, brushes, and palettes)
* Deduplicate the GP_STROKE_BUFFER_MAX define
Added a way to select all the currently visible strokes that use the same
color as the selected stroke. This can be accessed via the Select Grouped (Shift-G)
operator as an alternative to selecting by layer.
This commit adds optional "pressure" and "strength" arguments to the
stroke.points.add() method. These are given default values of 1.0,
so that old scripts can be ported over to the new API with less effort
while reducing confusion about why auto-generated strokes won't appear.
Also reduced number of matrix ops by generating final positions directly.
Also removed a display list (deprecated in modern GL).
Tried to reuse sinval & cosval tables but those values are skewed (last value repeats first value, middle values are squished to compensate). Went with sinf & cosf instead.
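The vertex generation then becomes something like the following (a sketch with
made-up names for the segment count, radius, and vertex array):

    /* Sketch only: generate circle vertices directly with sinf/cosf. */
    for (int i = 0; i < segments; i++) {
        const float angle = 2.0f * (float)M_PI * (float)i / (float)segments;
        verts[i][0] = radius * cosf(angle);
        verts[i][1] = radius * sinf(angle);
    }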
Part of T49042
The idea of the change is to avoid the queue growing too long
and to handle all the operations as quickly as possible.
Gives about 3% speedup on one of the barber shots here.
Now we pass streams to Alembic instead of passing the filename string.
That way we can open the stream ourselves with the proper unicode
encoding.
Note that this only applies to Ogawa archives, as HDF5 does not support
streams.
Differential Revision: https://developer.blender.org/D2160
Auto-scale is expected to work just fine now.
The only thing changed now is the pivot point for the scale: it is now
the same as the rotation pivot, so scaling happens around the weighted median
of the translation tracks. This seems to be what is actually required
for the VFX workflow.
Previously, this extension used the translation compensated image centre
as reference point for rotation measurement and compensation. During
user tests, it turned out that this setup tends to give poor results
with very simple track configurations.
This can be improved by using the weighted average of the location
tracks for each frame as the pivot point. But there is a technical problem:
the existing public API functions do not allow passing the pivot point
for each frame alongside the stabilisation data. Thus this
change implements a trick to package a compensation shift into
the translation offset, so the rotation can be performed around
a fixed point (center of frame). The compensation shift will then shift
the image as if it had been rotated around the desired pivot point.
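Schematically (the notation here is just for illustration): rotating a point x
by R around a pivot p can be rewritten as a rotation around the frame centre
plus an extra translation,

    x' = R * (x - p) + p
       = R * x + (p - R * p)

so the term (p - R * p) is the compensation shift that gets packaged into the
per-frame translation offset.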
It is common in Blender to use 1-based counting for
frame sequences (while 0-based is allowed). Thus
initializing to use frame 1 as the reference for stabilization
is likely to produce smooth start values in most cases.
Values > 1 will zoom in and values < 1 will zoom out.
Rationale: the changed orientation is more natural
from a user POV and doing it this way is also more
consistent with the calculation of the other
target_* parameters.
Compatibility: This will break *.blend files saved
with the previous version of this patch from the
last few days (test period). It will *not* break any
old/migrated files: previously, the DNA field "scale"
was only used to cache autoscale. Only with the
Stabilisator rework does "scale" become a first-class
persistent DNA field. There is migration code to
init this field to 1.0.
We should treat all three "target" ("expected") parameters in a similar way:
The "influence" control should only work on the measurement part of stabilisation,
i.e. it should only control the automatic part of stabilisation, while
the target parameters are deliberately set by the user and thus should
even be in effect when the automatic stabilisation is turned down.
It used to be so for location and rotation, but for the scale part,
I re-used the existing code for autoscale, which also had the scale influence
work on the autoscale factor. This was sensible in the old version,
since scale_influence was the only way to control the result. But now
the user always has total control through the "target_*" parameters,
and thus we should prefer to treat them all in a similar way.
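Schematically, the intended behaviour is something like the line below (a
sketch only; the linear blend and the variable names are assumptions, not the
actual implementation):

    /* Sketch: influence only blends the automatic part towards neutral (1.0),
     * while the user-set target scale always applies in full. */
    float scale = target_scale * (1.0f + influence * (scale_auto - 1.0f));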