Compare commits

...

374 Commits

Author SHA1 Message Date
1fbdf19ee2 Remove "Last Data Type" hack 2021-06-15 22:12:50 -05:00
5355e9e603 Move "Selection Only" to the editor header 2021-06-15 21:56:56 -05:00
f1ebcfeef9 Gray out row filters if the flag is turned off 2021-06-15 21:41:55 -05:00
6fe76f602f Merge branch 'master' into temp-spreadsheet-row-filter 2021-06-15 21:41:16 -05:00
0b0c7ca017 Cleanup: Fix inconsistent parameter name warning 2021-06-15 21:11:18 -05:00
f5dee9b7a5 Split selection filter and row filters (filter flag no longer applies to selection filter) 2021-06-15 21:10:30 -05:00
c8e331f450 UI - LOCAL View3D overlay stats
This patch improves the 3DView statistics overlay to show LOCAL stats
while in local view. This means the stats can vary between 3DViews and
the statusbar when views are in local view, but this gives a much more
accurate count of the objects, and their components, that you are
directly working with rather than just scene values.

Differential Revision: https://developer.blender.org/D8883

Reviewed by Campbell Barton
2021-06-15 19:01:53 -07:00
4891da8ae2 Fix T88263: Incorrect image offset from old file
Versioning code for converting the strip offset property didn't work when the
property was animated and disabled, or when crop was used.

When the offset property is animated and offset is enabled, the animation is
converted to work with the new transform design. When offset is disabled, the
animation is left untouched. The new transform design has no option to disable
the offset, so the old, unconverted animation was used instead of the converted
static value.

Remove the animation from the property if it was unused.

Another issue was that both the X and Y offset animation were being corrected
by the factor calculated for the X channel.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D11370
2021-06-16 00:44:37 +02:00
1a5fa2b319 VSE: Improve animation evaluation performance
Use a lookup-string callback function, `rna_SequenceEditor_sequences_all_lookup_string`,
for the `sequences_all` RNA property, backed by a GHash for faster lookups.

When names are changed or strips are added/removed, the lookup is tagged
invalid and rebuilt the next time it is used.

Reviewed By: sergey, jbakker

Differential Revision: https://developer.blender.org/D11544
2021-06-16 00:29:17 +02:00
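To make the mechanism concrete, here is a small self-contained sketch of the lazily rebuilt name lookup described above; the std containers and the `Strip`/`StripLookup` names are illustrative stand-ins, while the actual implementation uses Blender's GHash and Sequence data.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

/* Illustrative stand-ins; the real code works on Sequence strips and a GHash. */
struct Strip {
  std::string name;
};

struct StripLookup {
  std::vector<Strip> strips;
  std::unordered_map<std::string, Strip *> by_name;
  bool lookup_valid = false;

  /* Called whenever strips are renamed, added or removed. */
  void tag_lookup_invalid()
  {
    lookup_valid = false;
  }

  /* The map is rebuilt lazily on first use after an invalidation,
   * instead of scanning the whole strip list for every lookup. */
  Strip *find_by_name(const std::string &name)
  {
    if (!lookup_valid) {
      by_name.clear();
      for (Strip &strip : strips) {
        by_name.emplace(strip.name, &strip);
      }
      lookup_valid = true;
    }
    auto it = by_name.find(name);
    return it == by_name.end() ? nullptr : it->second;
  }
};
```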
f5f58882ef Merge branch 'master' into temp-spreadsheet-row-filter 2021-06-15 16:43:08 -05:00
143a81ccce Geometry Nodes: Allow int attribute input fields with single value
Just like the way we often have a choice between an attribute input and
a single float, this adds the ability to choose between attribute and
integer input sockets, useful for D11421.
2021-06-15 16:38:28 -05:00
c29afa5156 Cleanup: Expose function publicly, rename
There is no particular reason these two functions shouldn't be used
outside of the bezier spline implementation since they don't do anything
particularly controversial.
2021-06-15 16:31:08 -05:00
732e8c723e Splines: Add resize method to CurveEval
This helps when adding splines to a new curve in parallel.
2021-06-15 16:24:11 -05:00
Germano Cavalcante
6bb980b0f4 DRW: sanitize 'DRW_mesh_batch_cache_dirty_tag'
Create maps that specify which batches reference each VBO or IBO, and use
these maps to discard batches along with their buffers.

Differential Revision: https://developer.blender.org/D11588
2021-06-15 17:03:19 -03:00
8e84938dd0 Cleanup: Add files for version independent versioning helpers
Adds `source/blender/blendloader/intern/versioning_common.cc` and
`versioning_common.h` for version independent versioning functions.

I only placed `do_versions_add_region_if_not_found()` in there for now.
`blo_do_version_old_trackto_to_constraints()` could also be added, but
that's so old, I prefer keeping that in `versioning_legacy.c`.
2021-06-15 19:33:00 +02:00
a4f840e15b UI: Support right aligned non-shortcut hints in widgets
Widget drawing code already supported drawing right-aligned, grayed out
shortcut strings. This patch generalizes things a bit so this can also
be used to draw other hints in the same way. There have been a few
instances in the past where this would've been useful, D11046 being the
latest one.

Note that besides some manual regression testing, I didn't check if this
works yet, as there is no code actually using it (other than the
shortcuts). Can be checked as part of further development for D11046.

A possible further improvement would be providing a way to define how
clipping should be done. E.g. sometimes the right-aligned text should be
clipped first (because it's just a hint), in other cases it should be
left untouched (like current code explicitly does it for shortcuts).

Removes the `UI_BUT_HAS_SHORTCUT` flag, which isn't needed anymore.
2021-06-15 19:13:09 +02:00
fcc844f8fb BLI: use explicit task isolation, no longer part of parallel operations
After looking into task isolation issues with Sergey, we couldn't find the
reason behind the deadlocks that we are getting in T87938 and a Sprite Fright
file involving motion blur renders.

There is no apparent place where we are adding or waiting on tasks in a task
group from different isolation regions, which is what is known to cause
problems. Yet it still hangs. Either we do not understand some limitation of
TBB isolation, or there is a bug in TBB, but we could not figure it out.

Instead the idea is to use isolation only where we know we need it: when
holding a mutex lock and then doing some multithreaded operation within that
locked region. Three places where we do this now:
* Generated images
* Cached BVH tree building
* OpenVDB lazy grid loading

Compared to the more automatic approach previously used, there is the downside
that it is easy to miss places where we need isolation. Yet doing it more
automatically also caused unexpected issues and bugs that we found no solution
for, so this seems better.

Patch implemented by Sergey and me.

Differential Revision: https://developer.blender.org/D11603
2021-06-15 17:28:44 +02:00
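A self-contained sketch of the pattern this commit describes, using TBB directly (Blender goes through its own BLI_task wrappers; the cache and the names below are illustrative): isolation is applied only around the multithreaded work that runs while a mutex is held.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>
#include <tbb/parallel_for.h>
#include <tbb/task_arena.h>

static std::mutex cache_mutex;
static std::vector<float> cached_values;

/* Build a cached result once, under a lock, using multiple threads. */
const std::vector<float> &ensure_cached_values(const std::vector<float> &points)
{
  std::lock_guard<std::mutex> lock(cache_mutex);
  if (cached_values.empty() && !points.empty()) {
    cached_values.resize(points.size());
    /* Isolate so workers that help with this parallel_for cannot steal
     * unrelated outer tasks that might try to take cache_mutex again,
     * which is the deadlock scenario isolation guards against. */
    tbb::this_task_arena::isolate([&] {
      tbb::parallel_for(size_t(0), points.size(), [&](size_t i) {
        cached_values[i] = points[i] * 2.0f; /* Stand-in for the real work. */
      });
    });
  }
  return cached_values;
}
```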
Germano Cavalcante
b3f0dc2907 Draw Cache: avoid recalculating 'poly_normals'
Call `BKE_mesh_ensure_normals_for_display` to avoid recalculating
poly_normals.

**Benchmark**
| | master | PATCH |
|---|---|---|
| looptris_test | Average: 3.995076 FPS | Average: 4.047470 FPS |
| | rdata 11ms iter 91ms (frame 235ms) | rdata 11ms iter 86ms (frame 233ms) |
| subdiv_mesh_cage_and_final | Average: 1.884492 FPS | Average: 1.900114 FPS |
| | rdata 7ms iter 42ms (frame 268ms) | rdata 7ms iter 39ms (frame 265ms) |
| | rdata 7ms iter 44ms (frame 259ms) | rdata 7ms iter 42ms (frame 257ms) |
| subdiv_mesh_final_only | Average: 6.245944 FPS | Average: 6.289000 FPS |
| | rdata 3ms iter 23ms (frame 153ms) | rdata 3ms iter 21ms (frame 154ms) |
| subdiv_mesh_final_only_ledge | Average: 6.263482 FPS | Average: 6.187218 FPS |
| | rdata 3ms iter 23ms (frame 156ms) | rdata 3ms iter 22ms (frame 154ms) |

Reviewed By: jbakker

Differential Revision: https://developer.blender.org/D11527
2021-06-15 11:45:39 -03:00
71997921c4 Cleanup: use back-slash for doxygen comments 2021-06-16 00:08:34 +10:00
0c75a98561 CMake: disable TBB when not found 2021-06-16 00:08:34 +10:00
Jeroen Bakker
174ed69c1b DrawManager: Cache material offsets.
When using multiple materials in a single mesh, most of the time is spent
counting the offsets of each material for the sorting.

This patch moves the counting of the offsets to render mesh data and
caches it as long as the geometry doesn't change.

This patch doesn't include multithreading of this code.

Reviewed By: mano-wii

Differential Revision: https://developer.blender.org/D11612
2021-06-15 15:31:34 +02:00
7c8b9c7a9a Fix warning treated as an error
"'void' function returning a value".
2021-06-15 09:57:57 -03:00
62906cdbea Cleanup: Added hierarchy in MeshBufferExtractionCache. 2021-06-15 12:34:13 +02:00
7f570a7174 Cleanup: Split mesh_render_data_loose_geom into multiple functions. 2021-06-15 11:39:36 +02:00
9cd2e80d5d Fix: wrong size check
This fixes a bad mistake by myself. Thanks Lukas Tönne for telling me.
2021-06-15 10:26:31 +02:00
462bd81399 Fix: Sequencer backdrop not updated during playback.
Caused by recent optimization in {7b76a160a4}.
2021-06-15 09:31:03 +02:00
819152527f BMesh: assert that face normals have been updated for tessellation
This catches missing normal updates that may cause invalid tessellation.
2021-06-15 14:54:34 +10:00
89e2b441ed Cleanup: remove return value from face normal calculation
This value is always 'sides - 2', no need to return this value.
2021-06-15 14:38:10 +10:00
ae34808114 Fix missing normal update in edit-mesh blend-from shape operator 2021-06-15 14:35:43 +10:00
fdad77d73d BMesh: use faster normal update method for edit-mesh coordinates
This wasn't included in the previous fix so that fix would remain compatible with 2.93.
2021-06-15 13:52:00 +10:00
0a361eb5ec Fix outdated face tessellation use when editing edit-mesh coordinates 2021-06-15 13:31:33 +10:00
2ba804d7b7 Screen: clear runtime structures on file-read & data-copy
Clear the runtime data structs instead of individual members,
this simplifies adding new runtime members as there are at least
two places they would need to be cleared.

Resolves error in D8883.
2021-06-15 12:52:59 +10:00
5dc0fd08a7 Cleanup: correct incomplete comment 2021-06-15 10:59:57 +10:00
2053e1f533 Fix image space missing mask display panel 2021-06-15 10:50:48 +10:00
4ae06b6123 Cleanup: remove "_" prefix for used arguments 2021-06-15 10:50:43 +10:00
013fc69ea8 Cleanup: unused argument & variable warnings 2021-06-15 10:50:43 +10:00
3bf98d1cec Cleanup: use private methods for internal operator utilities
Also remove unused argument.
2021-06-15 10:08:57 +10:00
9ce49af32e Cleanup: use doxygen comments for DNA_color_types
Also use enum instead of defines for Scopes.wavefrm_mode
2021-06-15 09:34:58 +10:00
1f251b7a27 Win8 cleanup, remove dead function pointer and macro. 2021-06-14 14:41:58 -07:00
72198d508a Merge branch 'master' into temp-spreadsheet-row-filter 2021-06-14 15:14:47 -05:00
61fdc45034 Geometry Nodes: Join dynamic curve attributes in the join geometry node
This commit lets the join geometry node transfer dynamic attributes
to the result, the same way that point cloud and mesh attributes are
joined. The implementation is different though, because of an
optimization implemented for curves to avoid copying splines.

The result attribute is added with the highest priority domain (points
over splines), and the highest complexity data type. If one curve had
the attribute on the spline domain but not others, the point domain
values will be used.

Generally this is a bit lower level than I would have liked this code
to be, but should be efficient, and it's really not too complicated.

Differential Revision: https://developer.blender.org/D11491
2021-06-14 15:13:43 -05:00
bf7f918a0e Geometry Nodes: Parallelize curve reverse node
Each spline can be handled separately here. This gives approximately a
2x speedup on my 8-core processor on an input of 80,000 two-point splines.
2021-06-14 14:28:57 -05:00
fe0fa7cec6 Cleanup: Refactor join geometry node attribute gathering
Instead of building a set and then determining the final domain and
type for every attribute separately in the loop, construct a map with
the necessary data in the first place. This is simpler and should be
slightly more efficient.

Split from D11491
2021-06-14 13:58:11 -05:00
Johnny Matthews
4a540b9b48 Geometry Nodes: Curve Reverse Node
This is an implementation of T88722. It accepts a curve object and
for each spline, reverses the order of the points and all attributes.
This is more of a foundational node to support other nodes in the
future (like curve deform)

Selection takes spline domain attributes to determine which splines
are selected. If no selection is present all splines are reversed.

Differential Revision: https://developer.blender.org/D11538
2021-06-14 13:28:38 -05:00
fcbb20286a Geometry Nodes: Curve to Points Node for Evaluated Data
This node implements the second option of T87429, creating points
along the input splines with the necessary evaluated information
for instancing: `tangent`, `normal`, and `rotation` attributes.
All generic curve point and spline attributes are copied to the
result points as well.

The "Count" and "Length" methods are just like the current options
in the resample node, but the output is points instead of a curve.
The "Evaluated" method uses the points you see on the curve directly,
and therefore should be the fastest.

The rotation data is retrieved from a transform matrix built with the
same method that the curve to mesh node uses. The radius attribute is
divided by 10 so the points don't look absurdly huge in the viewport.
In the future that could be an option.

For the implementation, one thing that could use an improvement
is the amount of temporary allocations while resampling to evaluated
points before the final points. I expect that reusing a buffer for
each thread would give a nice improvement.

Differential Revision: https://developer.blender.org/D11539
2021-06-14 12:52:08 -05:00
d08e925ef1 Fix Build Warning
Marking unused function argument.

Introduced in bcff0ef9ca

Differential Revision: https://developer.blender.org/D10887
2021-06-14 10:42:28 -07:00
bcff0ef9ca UI: Windows Blend File Association
This patch allows Windows users to specify that their current blender
installation should be used to create thumbnails and be associated
with ".blend" files. This is done from Preferences / System. The only
way to do this currently is from the command-line and this is sometimes
inconvenient.

Differential Revision: https://developer.blender.org/D10887

Reviewed by Brecht Van Lommel
2021-06-14 10:22:03 -07:00
2e5671a959 Fix: VSE seeking with proxy strips would fail on certain frames
If the last decoded frame had the same timestamp as the GOP current
packet, then we would skip over this frame when fast forwarding and we
would seek until the end of the file.

This could only be triggered reliably in single-threaded mode.

Reviewed By: Richard Antalik

Differential Revision: http://developer.blender.org/D11601
2021-06-14 19:08:51 +02:00
aadd355028 Fix possible C-linkage warning on Clang
The warning would appear when using the `ENUM_OPERATORS()` macro inside
of an `extern "C"` block.
This doesn't cause a warning in master currently, but it does in the
`asset-browser-poselib` branch.

After macro expansion, there would be C++ code in code with C linkage
(`extern "C"`). So make sure the expanded C++ code always uses C++
linkage.
The syntax used is totally C++ compliant: the C++ standard requires that
in such nested linkage specifications, the innermost one determines the
linking language (e.g. see
https://timsong-cpp.github.io/cppwp/n4659/dcl.link#4).
2021-06-14 19:00:13 +02:00
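The pattern reads roughly like the following self-contained sketch (a simplified macro, not the actual `ENUM_OPERATORS()` definition): wrapping the expanded operator in `extern "C++"` means the innermost linkage specification wins, so the macro is safe to use inside an `extern "C"` block.

```cpp
/* Simplified illustration of the fix; the real macro lives in Blender's
 * headers and defines more operators. */
#define MY_ENUM_OPERATORS(_type) \
  extern "C++" { \
    inline _type operator|(_type a, _type b) \
    { \
      return static_cast<_type>(static_cast<int>(a) | static_cast<int>(b)); \
    } \
  }

extern "C" {

enum eMyFlag {
  MY_FLAG_A = (1 << 0),
  MY_FLAG_B = (1 << 1),
};
/* Expands to C++ code with explicit C++ linkage: no C-linkage warning. */
MY_ENUM_OPERATORS(eMyFlag)

} /* extern "C" */

int main()
{
  const eMyFlag f = MY_FLAG_A | MY_FLAG_B;
  return (static_cast<int>(f) == 3) ? 0 : 1;
}
```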
3de6fe0b3e Error Messages Creating Thumbnail Folders
On the Windows platform, errors are printed to the console if the user's
thumbnail cache folder doesn't already exist. While creating those folders,
the code attempts to do so multiple times, so we get errors when trying to
create folders that already exist. This is caused by paths that have
hard-coded forward slashes, which causes our path processing routines to not
work correctly. This patch defines those paths using platform-varying
separator characters.

Differential Revision: https://developer.blender.org/D11505

Reviewed by Brecht Van Lommel
2021-06-14 09:17:27 -07:00
e9b4de43d8 Cleanup: rename id to owner_id for python id properties
This is consistent with the naming in `PointerRNA`.
2021-06-14 18:13:27 +02:00
90b0fb135f PyAPI: remove deprecated bpy.app.binary_path_python 2021-06-14 23:52:08 +10:00
8a1860bd9a BMesh: support face-normal calculation in normal & looptri functions
Support calculating face normals when tessellating. When this is done
before updating vertex normals it gives ~20% performance improvement.

Now vertex normal calculation only needs to perform a single pass on the
mesh vertices when called after tessellation.

Extended versions of normal & looptri update functions have been added:

- BM_mesh_calc_tessellation_ex
- BM_mesh_normals_update_ex

Most callers don't need to be aware of this detail by using:

- BKE_editmesh_looptri_and_normals_calc
- BKE_editmesh_looptri_and_normals_calc_with_partial

- EDBM_update also takes advantage of this,
  where calling EDBM_update with calc_looptri & calc_normals
  enabled uses the faster normal updating logic.
2021-06-14 23:01:19 +10:00
6bef255904 BMesh: remove unit-length edge-vector cache from normal calculation
Bypass stored edge-vectors for ~16% performance gains.

While this increases unit-length edge-vector calculations by around ~4x
the overhead of a parallel loop over all edges makes it worthwhile.

Note that caching edge-vectors per-vertex performs better and may be
worth investigating further, although in my tests this increases code
complexity with barely measurable benefits over not using a cache at all.

Details about performance and possible optimizations are noted in
bm_vert_calc_normals_impl.
2021-06-14 22:56:02 +10:00
8083527f90 Edit Mesh: use params arg for update function, add calc_normals arg
Rename function EDBM_update_generic to EDBM_update, use a parameters
argument for better readability.

Also add calc_normals argument, which will have benefits when
calculating normals and tessellation together is optimized.
2021-06-14 22:56:01 +10:00
1d2eb461b5 Cleanup: clang-format 2021-06-14 22:56:00 +10:00
Thomas Lachmann
b84707df17 Python API: option for render engines to disable image file saving
For some custom rendering engines it's advantageous not to write the image files to disk.
An example would be a network rendering engine which does its own image writing.

This feature is only supported when bl_use_postprocess is also disabled, since render
engines can't influence the saving behavior of the sequencer or compositor.

Differential Revision: https://developer.blender.org/D11512
2021-06-14 14:20:04 +02:00
ada47c4772 Fix build error in release builds after recent changes 2021-06-14 13:22:11 +02:00
748475b943 BLI_math: Cleanup: Use mul_/madd_ functions.
Better to avoid direct manipulation of individual vector components when a
generic operation for the whole vector exists; if nothing else, it avoids
potential mistakes with indices.
2021-06-14 12:32:38 +02:00
b21db5e698 BLI_math: Fix several division-by-zero cases.
Those were caused by various tools used on degenerate geometry, see
T79775.

Note that fixes are as low-level as possible, to ensure they cover as
much as possible of unreported issues too.

We still probably have many more of those hidden in BLI_math though.
2021-06-14 12:32:38 +02:00
Leon Zandman
5add6f2ed9 Fix T87867: file open dialog triggers OneDrive file downloads on macOS
Until OneDrive supports macOS's native placeholder file implementation, detect
the com.microsoft.OneDrive.RecallOnOpen extended file attribute.

Differential Revision: https://developer.blender.org/D11466
2021-06-14 12:26:07 +02:00
c9dc55301c Fix T88494: add missing depsgraph relation update tags
Adding e.g. a Collection Info node creates a new depsgraph relation.
Therefore the relations should be updated.
2021-06-14 11:35:36 +02:00
a19c9e9351 Fix T88947: invalid normals when converting point cloud to mesh 2021-06-14 11:07:42 +02:00
Campbell Barton
2db09f67a4 Nodes: remove redundant increment node tree current socket index
`ntree->cur_index` was being incremented twice in make_socket_interface.

Reviewed By: JacquesLucke

Ref D11590
2021-06-14 18:46:39 +10:00
03544ed54f Fix T88807: crash when there are multiple links between the same sockets
This commit does two things:

* Disallows creating more than one link from one socket to a multi socket input.
* Properly count links if there happen to be more than one link between the same sockets.

The new link counting should also be more efficient asymptotically.

Differential Revision: https://developer.blender.org/D11570
2021-06-14 10:04:52 +02:00
54a03d4247 Cleanup: Reduce indentation from redundant check 2021-06-13 18:56:52 -05:00
84adc23941 Cleanup: Order return argument last 2021-06-13 18:22:31 -05:00
0f68e5c30a Fix libmv new[]/delete[] mismatch 2021-06-13 15:13:08 +10:00
5181bc46b3 Cleanup: allocation size mismatch warning
While harmless, use fixed size int type for pixel data.
2021-06-13 14:54:54 +10:00
452590571c Cleanup: rename 'unsigned int' -> 'uint' 2021-06-13 14:54:54 +10:00
ab38223047 Cleanup: redundant initialization
These were limited to obvious cases. Some less obvious cases
were kept as refactoring might make them necessary in future.
2021-06-13 14:54:54 +10:00
f731bce6cd Cleanup: use ATTR_RETURNS_NONNULL function attribute 2021-06-13 14:47:19 +10:00
4c3bb60d0f Cleanup: use return arg prefix for ED_object_add_generic_get_opts 2021-06-13 14:47:18 +10:00
9ff4e0068f Cleanup: avoid the possibility of 'enter_editmode' being left unset
While in practice this isn't an issue currently, always set
'enter_editmode' in ED_object_add_generic_get_opts
to avoid problems in the future.
2021-06-13 14:47:07 +10:00
84e98ba182 Cleanup: misleading return argument for hair_create_input_mesh
- The argument was named with an `r_` prefix when it was in fact
  also read from.
- The argument passed in had to be 'psys->clmd->hairdata';
  if it was not, the function would not work.
2021-06-13 14:47:04 +10:00
8e58f93215 Cleanup: remove redundant NULL check, reduce scope 2021-06-13 14:47:03 +10:00
aab4794512 Cleanup: missing include 2021-06-13 14:47:01 +10:00
952d6663e0 Fix modifier deform by armature check ignoring virtual modifiers
Regression in f00cb93dbe (fix for T63125)
2021-06-13 14:46:59 +10:00
7b0c8097a7 Fix missing directory in CMakeLists.txt 2021-06-12 00:00:29 -03:00
b313525c1b Fix T88515: Cycles does not update light transform from linked collections
When moving a linked collection, we seem to only receive a depsgraph update
for an empty object so the Blender synchronization cannot discriminate it
and tag the object(s) (light or geometry) for an update through
id_map.set_recalc.

This missing transform update only affects lights since we do not check
manually if the transformations were modified like we do for objects.

To fix this, add a check to see if the transformation is different provided
that a light was already created.

Reviewed By: brecht

Maniphest Tasks: T88515

Differential Revision: https://developer.blender.org/D11574
2021-06-12 04:13:43 +02:00
d75e45d10c Fix T88812: Child Windows on Vertical Monitors
This patch improves the positioning of child windows on monitors that are
arranged vertically (one above another). When calculating a window position
in Ghost coordinates from GL coordinates we were using monitor height, which
can give incorrect values when the desktop is taller than any single monitor.
So use desktop height instead.

See D10637 for more details and examples.

Differential Revision: https://developer.blender.org/D10637

Reviewed by Brecht Van Lommel
2021-06-11 14:39:19 -07:00
bd87ba90e6 Render Window as Non-Child on Win32 platform
This patch makes the "Render" window a top-level window, not a child of
the main window, which was the case in Blender versions prior to 2.93.
This means it is no longer "on top", nor is the icon grouped on the
taskbar in the same way, but you can Alt-Tab between it and the main
window. This change only affects the Windows platform, as the other
platforms already behave this way.

See D11576 for links to negative feedback that prompted this change.

Differential Revision: https://developer.blender.org/D11576

Reviewed by Brecht Van Lommel
2021-06-11 13:07:30 -07:00
Pablo Dobarro
7bc5246156 Overlays: Make flash on mode transfer an operator property
This moves the flash on mode transfer effect option from the overlays to
an operator property of the mode transfer operator.

- This effect is intended to show the target object when no overlays or
a minimal set of overlays is enabled. Making it part of the whole set of
overlays invalidates this use case.

- The effect is not intended to be configurable per viewport, it should
be a global option.

The effect is still implemented using the overlay engine (instead of a
draw modal callback) due to performance and drawing artifacts. Having it
implemented as an overlay with runtime timer data in the objects also
makes it possible to run multiple animations at the same time without any
visual glitches.

Reviewed By: campbellbarton, JulienKaspar

Differential Revision: https://developer.blender.org/D11519
2021-06-11 21:48:59 +02:00
f6c5af3d47 Add option to link assets on drag & drop
Note: Linking in this case means link vs. append. Easily confused with linking
a data-block to multiple usages (e.g. a single material used by multiple
objects).

Adds a drop-down to the Asset Browser header to choose between Link and Append.
This is probably gonna be a temporary place, T54642 shows where this could be
placed eventually.

Linking support is crucial for usage of the asset browser in production
environments. It just wasn't enabled yet because a) the asset project currently
focuses on single-user, not production assets, and b) there were still many
unknowns in the workflow that have a big impact on production use as well.
With the recently held asset workshop I'm more confident about enabling linking,
as design ideas relevant to production use were confirmed.

Differential Revision: https://developer.blender.org/D11536

Reviewed by: Bastien Montagne
2021-06-11 16:46:20 +02:00
c8fcea0c33 Fix object assets getting duplicated after dropping
The operator run when dropping objects would duplicate the dropped object and
place that in the scene, even though that was just appended. Addressed by
making the duplication optional for the operator. If the duplication is not
requested, the object is just added to the scene (if needed), repositioned
based on the drop location and selected (deselecting other objects).
This makes the operator work as expected when using it to drop assets.

Reviewed as part of https://developer.blender.org/D11536.

Reviewed by: Bastien Montagne
2021-06-11 16:46:20 +02:00
20ece8736f Fix T89001: node search not working anymore 2021-06-11 16:27:56 +02:00
Maxime Casas
b4adb85933 Fix T89033: segfault reordering animation channels
Fix segmentation fault that can occur when reordering animation
channels.

Under some specific conditions, the list "act->curves" is empty in the
"join_groups_action_temp" function. In particular, this happens when a
scene contains an action that has not been pushed down, and with no
keyframe in it.

Reviewed By: sybren

Differential Revision: https://developer.blender.org/D11569
2021-06-11 16:27:04 +02:00
605ce623be Nodes: cache socket identifier to index mapping
While this preprocessing does take some time upfront,
it avoids longer lookups later on, especially as nodes get
more sockets.

It's probably possible to make this more efficient in some cases
but this is good enough for now.
2021-06-11 16:21:23 +02:00
Jeroen Bakker
7b30a3e98d Performance: Use parallel range for ImBuf scanline processor.
The scanline processor used its own heuristic, which didn't scale well with
multiple cores. Instead of using our own code, this patch leaves it to TBB
to determine how to split the scanlines over the available threads.

Performance of the IMB_transform before this change was 0.002123s, with
this change 0.001601s. This change increases performance in other areas
as well including color management conversions.

Reviewed By: zeddb

Differential Revision: https://developer.blender.org/D11578
2021-06-11 15:55:22 +02:00
Jeroen Bakker
7b76a160a4 Sequencer: Do not redraw during playback.
When using large sequences including audio, drawing the audio on top of
the strip takes a lot of time. This affects playback performance heavily.

During animation playback there was already a solution for this: only
drawing the playhead overlay. This was reverted for the sequence editor
as it didn't update the color strips when they were animated.

This patch checks whether there are animated color strips; if so, the
full screen is redrawn, otherwise only the playhead is redrawn.

Reviewed By: ISS

Differential Revision: https://developer.blender.org/D11580
2021-06-11 15:51:26 +02:00
0eb9351296 Refactor: use 'BLI_task_parallel_range' in Draw Cache
One drawback to trying to predict the number of threads that will be
used in the `task_graph` is that we are only sure of the number when the
threads are running.

Using `BLI_task_parallel_range` allows the driver to
choose the best thread distribution through `parallel_reduce`.

The benefit is most evident on hardware with fewer cores.

This is the result on a 4-core laptop:
| | before | after |
|---|---|---|
| large_mesh_editing | Average: 5.203638 FPS | Average: 5.398925 FPS |
| | rdata 15ms iter 43ms (frame 193ms) | rdata 14ms iter 36ms (frame 187ms) |

Differential Revision: https://developer.blender.org/D11558
2021-06-11 10:49:50 -03:00
Germano Cavalcante
2330cec2c6 Refactor: Draw Cache: use 'BLI_task_parallel_range'
This is an adaptation of {D11488}.

A disadvantage of manually setting the iter ranges per thread is that
we don't know how many threads are running in the background and so we
don't know how to best distribute the ranges.

To solve this limitation we can use `parallel_reduce` and thus let the
driver choose the best distribution of ranges among the threads.

This proved to be especially beneficial for computers with few cores.

**Benchmarking:**
Here's the result on a 4-core laptop:
| | master | PATCH |
|---|---|---|
| large_mesh_editing | Average: 5.203638 FPS | Average: 5.398925 FPS |
| | rdata 15ms iter 43ms (frame 193ms) | rdata 14ms iter 36ms (frame 187ms) |

Here's the result on an 8-core PC:

| | master | PATCH |
|---|---|---|
| large_mesh_editing | Average: 15.267482 FPS | Average: 15.906881 FPS |
| | rdata 9ms iter 28ms (frame 65ms) | rdata 9ms iter 25ms (frame 63ms) |
| large_mesh_editing_ledge | Average: 15.145966 FPS | Average: 15.520474 FPS |
| | rdata 9ms iter 29ms (frame 65ms) | rdata 9ms iter 25ms (frame 64ms) |
| looptris_test | Average: 4.001917 FPS | Average: 4.061105 FPS |
| | rdata 12ms iter 90ms (frame 236ms) | rdata 12ms iter 87ms (frame 230ms) |
| subdiv_mesh_cage_and_final | Average: 1.917769 FPS | Average: 1.971790 FPS |
| | rdata 7ms iter 37ms (frame 261ms) | rdata 7ms iter 31ms (frame 258ms) |
| | rdata 7ms iter 38ms (frame 252ms) | rdata 7ms iter 33ms (frame 249ms) |
| subdiv_mesh_final_only | Average: 6.387240 FPS | Average: 6.591251 FPS |
| | rdata 3ms iter 25ms (frame 151ms) | rdata 3ms iter 16ms (frame 145ms) |
| subdiv_mesh_final_only_ledge | Average: 6.247393 FPS | Average: 6.596024 FPS |
| | rdata 3ms iter 26ms (frame 158ms) | rdata 3ms iter 16ms (frame 148ms) |

**Notes:**
- The improvement can only be noticed if all extracts are multithreaded.
- This patch touches different areas of the code, so it can be split into another patch if the idea is accepted.

These screenshots show how threads behave in a quadcore:
Master:
{F10164664}
Patch:
{F10164666}

Differential Revision: https://developer.blender.org/D11558
2021-06-11 10:45:12 -03:00
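As a self-contained illustration of the approach (using TBB's `parallel_reduce` directly, which the commit names as what the range API maps to; the "visible face count" below is just a stand-in for real draw-cache extraction): the whole range is handed to the scheduler instead of being pre-sliced into one chunk per thread.

```cpp
#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

/* Count visible faces; each worker accumulates a local partial result over
 * whatever sub-range the scheduler decides to give it, and the partials are
 * joined at the end. */
size_t count_visible_faces(const std::vector<bool> &face_hidden)
{
  return tbb::parallel_reduce(
      tbb::blocked_range<size_t>(0, face_hidden.size()),
      size_t(0),
      [&](const tbb::blocked_range<size_t> &range, size_t local) {
        for (size_t i = range.begin(); i != range.end(); i++) {
          local += face_hidden[i] ? 0 : 1;
        }
        return local;
      },
      [](size_t a, size_t b) { return a + b; }); /* Join partial results. */
}
```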
fe22635bf6 Nodes: add utilities to check if there are undefined nodes/sockets 2021-06-11 14:55:10 +02:00
c0367b19e2 Fix: VSE search in mpegts files would fail
ffmpeg_generic_seek_workaround didn't work properly and our start pts
calculation was wrong.

Reviewed By: Richard Antalik

Differential Revision: http://developer.blender.org/D11562
2021-06-11 14:05:07 +02:00
4adbe31e2f Fix: VSE indexer seeking not working correctly
Because of the added sanity checks in rB14508ef100c9 (D11492), seeking
in proxies would not work correctly any more. This is because it wasn't
working as intended before, but in most cases this wouldn't be
noticeable. However, now when the sanity checks are tripped it is very
noticeable that something is wrong.

The indexer tried to use dts values for time stamps when we used pts in
our decode functions to get the time positions. This would make it
start in the wrong GOP frames when searching. Now that we enforce no
crossing of GOP frames when decoding after seek, this would lead to
issues.

Now we correctly use pts (or dts if pts is not available) and thus we
don't have any seeking issues caused by a time stamp format mismatch.

Reviewed By: Richard Antalik

Differential Revision: http://developer.blender.org/D11561
2021-06-11 14:04:48 +02:00
1fb2eaf1c5 Fix: VSE timecodes being used even when turned off.
Reviewed By: Richard Antalik

Differential Revision: http://developer.blender.org/D11567
2021-06-11 14:04:35 +02:00
2f280d4b92 LineArt: Fix crash due to empty duplicollection. 2021-06-11 17:55:33 +08:00
e9c8ae767a Performance: Split ImBuf sampling.
When sampling, an ImBuf can be a char or a float buffer. The current sampling
functions added overhead by checking which kind of buffer was passed for
every pixel that was sampled. When performing image processing, this check
can be moved outside the inner loop, giving a 5% performance increase in
the `IMB_transform` operator.
2021-06-11 11:37:39 +02:00
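The idea can be sketched as follows (illustrative types, not the real ImBuf API): the byte/float decision is taken once, outside the inner loop, so each loop body is free of per-pixel type branching.

```cpp
#include <cstdint>
#include <vector>

/* Stand-in for an image that stores either bytes or floats. */
struct ImageBuffer {
  std::vector<uint8_t> byte_rect; /* Used when float_rect is empty. */
  std::vector<float> float_rect;
};

/* One tight loop per storage type; no per-pixel type check. */
template<typename T> static double sum_pixels(const std::vector<T> &rect)
{
  double sum = 0.0;
  for (const T value : rect) {
    sum += static_cast<double>(value);
  }
  return sum;
}

double process(const ImageBuffer &ibuf)
{
  /* The char/float decision happens exactly once. */
  return ibuf.float_rect.empty() ? sum_pixels(ibuf.byte_rect) :
                                   sum_pixels(ibuf.float_rect);
}
```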
Xing Liu
7fc220517f Fix T88068: Alt+I now respects keying set
Remap {key Alt I} from `anim.delete_key_v3d` to `anim.delete_key`. This
makes keyframe removal symmetrical with keyframe insertion ({key I}).

Both the default keymap {key Alt I} and the Industry Compatible keymap
{key Alt S} have been updated.

Reviewed By: sybren, #animation_rigging

Maniphest Tasks: T88068

Differential Revision: https://developer.blender.org/D11528
2021-06-11 11:13:20 +02:00
Jeroen Bakker
28617bb167 Sequencer: Transform ImBuf Processor.
Inside the sequencer the cropping and transform of images/buffers were
implemented locally. This reduced the optimizations that a compiler
could do and added confusing code styles. This patch adds
`IMB_transform` to reduce the confusion and increases compiler
optimizations as more code can be inlined and we can keep track of
indices inside the inner loop.

This increases end-user performance by 30% when playing back a video
in the VSE.

Reviewed By: ISS, zeddb

Differential Revision: https://developer.blender.org/D11549
2021-06-11 09:34:44 +02:00
84f025c6fd Cleanup: use sentences for pose slide comments 2021-06-11 16:27:55 +10:00
Christoph Lendenfeld
066f5a4469 Cleanup: pose slider rename region to region_header
Reviewed By: sybren, campbellbarton

Ref D11365
2021-06-11 15:53:34 +10:00
Christoph Lendenfeld
d07cc5e680 Cleanup: pose slider use enum types
use enum types in `tPoseSlideOp` instead of `short`

Reviewed By: sybren, campbellbarton

Ref D11364
2021-06-11 15:40:07 +10:00
Christoph Lendenfeld
162cf8e81d Cleanup: pose slider use strncpy
use `STRNCPY` instead of `BLI_strncpy`

Reviewed By: sybren, campbellbarton

Ref D11363
2021-06-11 15:38:27 +10:00
Christoph Lendenfeld
fd5c94c48a Cleanup: pose slider data types
- change vec2f to float[2]
- pass rctf as pointer
- change `const struct rctf` to `const rctf`
2021-06-11 15:36:41 +10:00
Christoph Lendenfeld
2724d08cf5 Cleanup: pose slider rename "percentage" to "factor"
This patch changes occurrences of percentage to factor.

There are some usages of percentage left in there on purpose.
They are distinguished as follows:

- factor is 0-1 float
- percentage is 0-100 int

Ref D11361

Reviewed by: sybren, campbellbarton
2021-06-11 15:29:52 +10:00
Charlie Jolly
461ba4438f Nodes: move mix rgb node to C++
Prepare node for conversion to Geometry Nodes.

There should be no functional changes.

Reviewed By: HooglyBoogly

Differential Revision: https://developer.blender.org/D11506
2021-06-11 00:01:13 +01:00
d581c1b304 UI: Correct label naming mistake for VSE text strip box background
Seems to be a copy/paste error from 
rB235c309e5f86e84fb08e1ff2c5c11eb0b775c388
2021-06-10 17:45:16 -04:00
e8a4bddef4 deps/windows: add missing tbbmalloc_debug.lib
This file is being linked by Blender without
it existing, causing issues for debug builds.
2021-06-10 12:03:12 -06:00
fbd889ec28 Fix T86544: better cleanup of path given as command line argument.
When using non-default system separator in filename path, code would end up
with an absolute path mixing regular and alternative separator,
confusing the rest of the path manipulations later on.

So this commit add proper replacements of alternative separators, and
path normalization.
2021-06-10 18:37:46 +02:00
509e0c5b76 Cleanup: BLI_path_slash_native: use separator defines instead of literal values.
Even though this function is already using system-specific code, it's
still cleaner to use `SEP` and `ALTSEP` here.
2021-06-10 18:35:35 +02:00
7b62a54230 Fix T88578: crash when loading value from group output node
It remembered the wrong origin socket and couldn't find the value
anymore later on.
2021-06-10 17:24:53 +02:00
Eitan
53c98e45cf Geometry Nodes: Add Texture and Material options to switch node
These new socket types can be supported in the switch node
along with the others.

Differential Revision: https://developer.blender.org/D11560
2021-06-10 10:16:37 -05:00
bcefce33f2 BLI_mempool: split thread-safe iteration into the private API
Splitting out thread safe iteration logic means regular iteration
isn't checking for the thread-safe pointer each step.

This gives a small but measurable overall performance gain of 2-3%
when redrawing a high-poly mesh.

Ref D11564

Reviewed By: mont29
2021-06-11 00:31:16 +10:00
9df1e0cad5 Fix: Build error with MSVC
rB4f81b4b4ce29 mistakenly left out the changes
to platform_win32.cmake causing a linker error
when WITH_GMP and WITH_TBB_MALLOC_PROXY were on.
2021-06-10 06:50:05 -06:00
aa0bd29546 Docs: remove deprecated parameter from bmesh docs
The parameter itself was removed but the documentation wasn't updated.

Resolves T89013
2021-06-10 21:32:30 +10:00
1a72ee4cbe Cleanup: move endian values from BKE_global into BLI_endian_defines
This change was prompted by D6408, which moves thumbnail extraction into
a shared function that happens to use these endian defines but only links
blenlib.

There is no need for these defines to be associated with globals
so move into their own header.
2021-06-10 21:10:28 +10:00
e4ef8cbf7e Cleanup: add comment 2021-06-10 13:05:57 +02:00
5fa6cdb77a Add unit for time stored in seconds
Allows defining properties which will have proper units displayed
in the interface. The internal storage is expected to be seconds
(which matches how other times are stored in Blender).

Is not immediately used in Blender, but is required for the upcoming
feature in Cycles X (D11526)

The naming does not sound very exciting, but can't think of anything
better either.

For testing, it is probably easiest to define a FloatProperty with a subtype
of TIME_ABSOLUTE.

Differential Revision: https://developer.blender.org/D11532
2021-06-10 12:15:59 +02:00
5304c6ed7d DataTransfer: Fix vertices being wrongly added to vgroup.
Previously, a vertex from the destination mesh would always be added to all
transferred vgroups (with a 0.0 weight), even if none of its matching
sources in the source mesh belonged to the matching source vgroups.

Now a destination vertex is only added to a given destination vgroup if
at least one of its source vertices belongs to the matching source
vgroup.

Issue found and initial investigation by @pls in D11524, thanks!
2021-06-10 11:33:53 +02:00
b669fd376a Cleanup: spelling in comments 2021-06-10 17:04:25 +10:00
4d4608363c Cleanup: quiet array-parameter warning from GCC11 2021-06-10 16:51:09 +10:00
7141eb75ef Docs: update doxygen configuration to v1.9.1 2021-06-10 16:34:58 +10:00
b282a065f1 Docs: Add preprocessor define for doxygen
Doxygen by default leaves out any functions inside
#ifdef blocks that it thinks are disabled.

This change adds a DOXYGEN symbol, so you can
still get the functions included in the
documentation even if the #ifdef would
have normally excluded them.

before

#if defined(_WIN32)

after

#if defined(_WIN32) || defined(DOXYGEN)

Patch provided by Campbell Barton on chat.
2021-06-09 18:44:39 -06:00
Erik Abrahamsson
4f81b4b4ce Windows: Use TBBMalloc for GMP
TBBmalloc_proxy already takes care of any allocations
done from MSVC-compiled code. Some of the dependencies,
like GMP, cannot be built with MSVC, so we have to use
mingw to build them. mingw however links against the older
msvcrt.dll for its allocation needs, which TBBMallocProxy
does not hook.

GMP has an option to supply your own allocation functions
so we can still manually redirect them to TBBMalloc.

In a test-file with a boolean geometry node, this patch
uses 32s effective CPU time compared to 52s before.

Differential Revision: https://developer.blender.org/D11435

Reviewed by Campbell Barton, Ray Molenkamp
2021-06-09 18:34:17 -06:00
a3226bdf3e Fix: Point translate and point scale don't execute on curve data 2021-06-09 16:51:07 -05:00
Christoph Lendenfeld
d96e9de9de Fix T88546: Pose slider typed input not working
Remove an unnecessary call to pose_slide_mouse_update_percentage
That call was overriding the typed value

Reviewed By: #animation_rigging, Sybren A. Stüvel

Differential Revision:  https://developer.blender.org/D11395

Ref D11395
2021-06-09 22:31:10 +01:00
93fd07e19c Geometry Nodes: Copy spline attributes in the curve resample node
Previously only point domain attributes were copied to the result curve.
2021-06-09 15:54:26 -05:00
5f19646d7e Fix T88799: 3DViewport Stats Column Measurements
To draw the overlay stats in columns the strings must be measured to
find the longest one. In some circumstances this measurement can be
incorrect. We draw the text with a specific size yet do not explicitly
set the size before calling BLF_size. This patch properly sets the size
so that the measurement will match what will be used for output.

See T88799 for examples of measurement failure.

Differential Revision: https://developer.blender.org/D11504

Reviewed by Hans Goudey
2021-06-09 13:39:08 -07:00
675677ec67 Splines: Add API functions for interpolating data
First, expand on the interpolation to evaluated points with a templated
helper function, and a function that takes a GSPan. Next, add a set of
functions to `Spline` for interpolating at arbitrary intervals between
the evaluated points. The code for doing that isn't that complicated
anyway, but it's nice to avoid repeating, and it might make it easier
to unroll the special cases for the first and last points if we require
the index factors to be sorted.
2021-06-09 14:53:39 -05:00
Henrik Dick
df2a19eac7 Geometry Nodes: Add Convex Hull Node
This commit adds a node to output the convex hull of any input geometry
as a mesh, which is an enclosing geometry around a set of points.
All geometry types are supported, besides volumes.

The code supports operating on instances to avoid copying all input
geometry before the operation. The implementation uses the same backend
as the operation in edit mode, but uses Mesh directly instead of BMesh.

Attribute transfer is not supported currently, but would be a point of
improvement for the future if it can work in a predictable way on
different geometry input types.

Differential Revision: https://developer.blender.org/D10925
2021-06-09 11:58:08 -05:00
2856f3b583 Fix T88974: Add missing liboverrides to GP modifiers and shaderfx.
Proper RNA code was simply never added for those...
2021-06-09 18:48:55 +02:00
965bd53e02 Cleanup: use doxy sections for task_iterator.c 2021-06-10 02:22:46 +10:00
05f15645ec Cleanup: missing NULL check in assert 2021-06-10 02:22:46 +10:00
cb0cab48ef Cleanup: redundant/unused assignments 2021-06-10 02:22:46 +10:00
029fb002dd Cleanup: replace 'else if' with 'else' 2021-06-10 02:22:46 +10:00
4443c4082e Cleanup: remove redundant checks which have already been tested
Note that these changes are limited simple cases as these kinds of
changes could allow for errors when refactoring code when the known
state is not so obvious.
2021-06-10 02:22:45 +10:00
bda8887e0c Cleanup: simplify grease pencil preset set logic 2021-06-10 02:22:45 +10:00
5575aba025 Cleanup: simplify grease pencil type checks 2021-06-10 02:22:45 +10:00
25d8ce16b5 Cleanup: spelling in comments 2021-06-10 02:22:45 +10:00
59553d47c0 Cleanup: quiet array-parameter warning 2021-06-10 02:22:45 +10:00
84af1eaa92 Fix invalid return value assignment in getEdgeVertexIndices 2021-06-10 02:22:45 +10:00
059f19d821 Fix uninitialized variable in task.MempoolIterTLS test
Error in 14f3b2cdad.
2021-06-10 02:21:59 +10:00
0f156a2436 LineArt: Camera marker update fix.
The original fix was probably flushed by some newer
line art commits. Fixed.

See https://developer.blender.org/T88464
2021-06-10 00:14:34 +08:00
f42a501c61 Revert "GPencil: Add custom normal entry to bGPDspoint."
This reverts commit f546b0800b.
2021-06-10 00:11:14 +08:00
d8b8b4d7e2 BMesh: multi-thread face tessellation
Use BM_iter_parallel for face tessellation, this gives around
6.5x speedup for BM_mesh_calc_tessellation on high poly meshes in my
tests, although exact speedup depends on available cores.
2021-06-10 01:19:58 +10:00
14f3b2cdad BLI_task: add TLS support to BLI_task_parallel_mempool
Support thread local storage for BLI_task_parallel_mempool,
as well as support for the reduce and free callbacks.

mempool_iter_threadsafe_* functions have been moved into a private
header thats only shared between task_iterator.c and BLI_mempool.c
so the TLS can be made part of the iterator array without having to
rely on passing in struct offsets.

Add test task.MempoolIterTLS that ensures reduce and free
are working as expected.

Reviewed By: mont29

Ref D11548
2021-06-10 00:55:04 +10:00
f546b0800b GPencil: Add custom normal entry to bGPDspoint.
Also modified existing utility functions to take
care of the new surface normal interpolation and so on.

Reviewed By: Antonio Vazquez (antoniov)

Differential Revision: https://developer.blender.org/D11543
2021-06-09 22:46:08 +08:00
92ae3ff84c Fix assigning material to linked object being forbidden in BKE.
While this should not be allowed in general, there are some cases
(library overrides at least) where supporting this is mandatory.

Furthermore, the comment stating that this could crash is from 2011; could
not reproduce any issue with current code. The commit comment was referring
to undo/redo, but in the use cases here those should not affect things.

Note that in general, such relatively high-level checks should be
handled by high-level, close to user code (like in ED area e.g.), not in
low-level BKE code anyway.
2021-06-09 16:33:34 +02:00
5954b351f0 Cleanup: Removed unused definition. 2021-06-09 16:31:28 +02:00
33c4eefabb Cleanup: Comment formatting 2021-06-09 09:28:23 -05:00
Jeroen Bakker
6e999e08ab T88352: Use threaded ibo.tris extraction for single material meshes.
This patch adds a specific extraction method when the mesh has only
one material. This method is multi-threaded.

There is a trade-off in this patch, as the ibo isn't compressed (it adds
restart indexes for hidden faces). So it depends on whether threading is
faster than the additional GPU buffer upload.

# Subdivided cube
I used a cube subdivided 7 times, modifiers applied. That gives around 400,000 faces.

The test is selecting some vertices and moving them. During this test the following buffers are updated on each frame:
* vbo.pos_nor
* vbo.lnor
* vbo.edit_data
* ibo.tris
* ibo.points

System info:
| platform | Linux-5.11.0-7614-generic-x86_64-with-glibc2.33 |
|---|---|
| renderer | AMD SIENNA_CICHLID (DRM 3.40.0, 5.11.0-7614-generic, LLVM 11.0.1) |
| vendor | AMD |
| version | 4.6 (Core Profile) Mesa 21.0.1 |
| cpu | Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz |
| compiler | gcc version 10.3.0 |

Timings have been measured using DEBUG_TIME in `draw_cache_extract_mesh`.

master: `rdata 8ms iter 45ms (frame 153ms)`
this patch `rdata 6ms iter 36ms (frame 132ms)`

Reviewed By: mano-wii

Maniphest Tasks: T88352

Differential Revision: https://developer.blender.org/D11290
2021-06-09 16:20:53 +02:00
ec98bb318b UI: Add the option to not display some socket labels
This commit adds a flag to disable displaying some socket labels which
are just redundant eyesores. We still want to have a label, because that
information can potentially be accessed elsewhere in the UI.

The flag is used in a few geometry nodes.

Differential Revision: https://developer.blender.org/D11540
2021-06-09 09:13:09 -05:00
3a7ab62eac Fix T88885: Circle select deselects first selections after moving cursor
`is_active_prev` is always set after the first operation.
But this was not the case with `wait_for_input` set to `false`.
2021-06-09 10:59:33 -03:00
1f55e12206 Tests: temporarily increase threshold for OpenImageDenoise test
Until all platforms have the same version, this helps the tests pass.
2021-06-09 15:28:19 +02:00
ea3895558d Fix T88998: GPencil not projecting to the most front surface
It was projecting from the stroke position.

The behavior has changed in {rB5400be9ffee2}.
2021-06-09 09:32:50 -03:00
Germano Cavalcante
e4c6da29b2 Draw Cache: use threading for Mesh extract lines
This is an optimization, but the difference is still not that
significant as some extractions are still done in single thread.

**Benchmarking**
| | before | after |
|---|---|---|
| large_mesh_editing | Average: 14.246502 FPS | Average: 15.438118 FPS |
| | rdata 9ms iter 31ms (frame 69ms) | rdata 9ms iter 27ms (frame 65ms) |
| large_mesh_editing_ledge | Average: 14.913622 FPS | Average: 15.856538 FPS |
| | rdata 9ms iter 30ms (frame 67ms) | rdata 9ms iter 26ms (frame 63ms) |
| looptris_test | Average: 3.970774 FPS | Average: 4.095200 FPS |
| | rdata 11ms iter 90ms (frame 235ms) | rdata 12ms iter 87ms (frame 229ms) |

Reviewed By: jbakker

Differential Revision: https://developer.blender.org/D11467
2021-06-09 08:58:08 -03:00
4ced8900f5 Fix T88983: GPencil toggle caps error in python enum
The value for default option was wrong in the python definition.
2021-06-09 12:52:44 +02:00
f087a225dc GPencil: Hide Brush panels for some tools
Some tools don't use brush and the panel must be hidden.

Reviewed By: mendio

Differential Revision: https://developer.blender.org/D11518
2021-06-09 12:52:44 +02:00
dd98f6b55c Fix missing free calls for task iterator
A single threaded task with thread data over 8192 bytes would leak.
While this didn't happen in practice, it could cause issues in the
future.

The free call for `task_parallel_iterator_do` wasn't running
if callbacks weren't set, also not an issue in practice but avoids
potential problems in the future too.
2021-06-09 20:33:40 +10:00
b18a214ecb Fix: Compositor test desintegrate failing on arm64
Changes introduced in commit rBe9f2f17e8518
can create different render results when there is
a Math or Mix operation after TextureOperation
on tiled execution model.
This is due to WriteBufferOperation forcing a single pixel
resolution when these operations use a preferred
resolution of 0 to check if their inputs have resolution.
Fixing this behaviour creates different renders too.

This patch keeps previous tiled implementation and
adds the new implementation only for full frame execution.

Reviewed By: Jeroen Bakker (jbakker)

Differential Revision: https://developer.blender.org/D11546
2021-06-09 11:21:23 +02:00
d7c812f15b Compositor: Refactor recursive methods to iterative
In order to reduce stack size this patch converts full frame 
recursive methods into iterative.
- No functional changes.
- No performance changes.
- Memory peak may slightly vary depending on the tree because
 now breadth-first traversal is used instead of depth-first.

Tests in D11113 have same results except for test1 memory peak:
360MBs instead of 329.50MBs.

Reviewed By: Jeroen Bakker (jbakker)

Differential Revision: https://developer.blender.org/D11515
2021-06-09 11:02:40 +02:00
3ba16afa1e Fix failing test case sequence_transform. 2021-06-09 08:12:04 +02:00
8c3f4f7edf Fix: Incorrect node bezier spline tangent calculation for end points
The code was using the useless dangling handle at each end of the spline
rather than the handle pointing inwards.
2021-06-08 23:52:29 -05:00
307f8c8e76 Fix: Prevent small memory leak in VSE indexer
We need to unref the packet to tell ffmpeg it is ok to free it after
use.
2021-06-08 23:18:31 +02:00
8bd09b1d77 Build: upgrade OpenImageDenoise to 1.4.0
CMake builder and install deps changes, precompiled libraries are still to be
committed.

Ref T88438, T88434

Differential Revision: https://developer.blender.org/D11486
2021-06-08 19:22:59 +02:00
f29a738e23 PyAPI: use keyword only arguments
Use keyword only arguments for the following functions.

- addon_utils.module_bl_info 2nd arg `info_basis`.
- addon_utils.modules 1st `module_cache`, 2nd arg `refresh`.
- addon_utils.modules_refresh 1st arg `module_cache`.
- bl_app_template_utils.activate 1st arg `template_id`.
- bl_app_template_utils.import_from_id 2nd arg `ignore_not_found`.
- bl_app_template_utils.import_from_path 2nd arg `ignore_not_found`.
- bl_keymap_utils.keymap_from_toolbar.generate 2nd & 3rd args `use_fallback_keys` & `use_reset`.
- bl_keymap_utils.platform_helpers.keyconfig_data_oskey_from_ctrl 2nd arg `filter_fn`.
- bl_ui_utils.bug_report_url.url_prefill_from_blender 1st arg `addon_info`.
- bmesh.types.BMFace.copy 1st & 2nd args `verts`, `edges`.
- bmesh.types.BMesh.calc_volume 1st arg `signed`.
- bmesh.types.BMesh.from_mesh 2nd..4th args `face_normals`, `use_shape_key`, `shape_key_index`.
- bmesh.types.BMesh.from_object 3rd & 4th args `cage`, `face_normals`.
- bmesh.types.BMesh.transform 2nd arg `filter`.
- bmesh.types.BMesh.update_edit_mesh 2nd & 3rd args `loop_triangles`, `destructive`.
- bmesh.types.{BMVertSeq,BMEdgeSeq,BMFaceSeq}.sort 1st & 2nd arg `key`, `reverse`.
- bmesh.utils.face_split 4th..6th args `coords`, `use_exist`, `example`.
- bpy.data.libraries.load 2nd..4th args `link`, `relative`, `assets_only`.
- bpy.data.user_map 1st..3rd args `subset`, `key_types, `value_types`.
- bpy.msgbus.subscribe_rna 5th arg `options`.
- bpy.path.abspath 2nd & 3rd args `start` & `library`.
- bpy.path.clean_name 2nd arg `replace`.
- bpy.path.ensure_ext 3rd arg `case_sensitive`.
- bpy.path.module_names 2nd arg `recursive`.
- bpy.path.relpath 2nd arg `start`.
- bpy.types.EditBone.transform 2nd & 3rd arg `scale`, `roll`.
- bpy.types.Operator.as_keywords 1st arg `ignore`.
- bpy.types.Struct.{keyframe_insert,keyframe_delete} 2nd..5th args `index`, `frame`, `group`, `options`.
- bpy.types.WindowManager.popup_menu 2nd & 3rd arg `title`, `icon`.
- bpy.types.WindowManager.popup_menu_pie 3rd & 4th arg `title`, `icon`.
- bpy.utils.app_template_paths 1st arg `subdir`.
- bpy.utils.app_template_paths 1st arg `subdir`.
- bpy.utils.blend_paths 1st..3rd args `absolute`, `packed`, `local`.
- bpy.utils.execfile 2nd arg `mod`.
- bpy.utils.keyconfig_set 2nd arg `report`.
- bpy.utils.load_scripts 1st & 2nd `reload_scripts` & `refresh_scripts`.
- bpy.utils.preset_find 3rd & 4th args `display_name`, `ext`.
- bpy.utils.resource_path 2nd & 3rd arg `major`, `minor`.
- bpy.utils.script_paths 1st..4th args `subdir`, `user_pref`, `check_all`, `use_user`.
- bpy.utils.smpte_from_frame 2nd & 3rd args `fps`, `fps_base`.
- bpy.utils.smpte_from_seconds 2nd & 3rd args `fps`, `fps_base`.
- bpy.utils.system_resource 2nd arg `subdir`.
- bpy.utils.time_from_frame 2nd & 3rd args `fps`, `fps_base`.
- bpy.utils.time_to_frame 2nd & 3rd args `fps`, `fps_base`.
- bpy.utils.units.to_string 4th..6th `precision`, `split_unit`, `compatible_unit`.
- bpy.utils.units.to_value 4th arg `str_ref_unit`.
- bpy.utils.user_resource 2nd & 3rd args `subdir`, `create`
- bpy_extras.view3d_utils.location_3d_to_region_2d 4th arg `default`.
- bpy_extras.view3d_utils.region_2d_to_origin_3d 4th arg `clamp`.
- gpu.offscreen.unbind 1st arg `restore`.
- gpu_extras.batch.batch_for_shader 4th arg `indices`.
- gpu_extras.batch.presets.draw_circle_2d 4th arg `segments`.
- gpu_extras.presets.draw_circle_2d 4th arg `segments`.
- imbuf.types.ImBuf.resize 2nd arg `resize`.
- imbuf.write 2nd arg `filepath`.
- mathutils.kdtree.KDTree.find 2nd arg `filter`.
- nodeitems_utils.NodeCategory 3rd & 4th arg `descriptions`, `items`.
- nodeitems_utils.NodeItem 2nd..4th args `label`, `settings`, `poll`.
- nodeitems_utils.NodeItemCustom 1st & 2nd arg `poll`, `draw`.
- rna_prop_ui.draw 5th arg `use_edit`.
- rna_prop_ui.rna_idprop_ui_get 2nd arg `create`.
- rna_prop_ui.rna_idprop_ui_prop_clear 3rd arg `remove`.
- rna_prop_ui.rna_idprop_ui_prop_get 3rd arg `create`.
- rna_xml.xml2rna 2nd arg `root_rna`.
- rna_xml.xml_file_write 4th arg `skip_typemap`.
2021-06-09 03:05:44 +10:00
c18ff180e3 Cleanup: Apply clang-format (make format) 2021-06-08 18:53:39 +02:00
22ee056c3a Geometry Nodes: Rename bounding box mesh output to "Bounding Box"
This was decided by the geometry nodes team, because the
important part of this output is not that it's a mesh.
2021-06-08 11:11:49 -05:00
f5a2d93224 Geometry Nodes: Support curve instances in the bounding box node
Currently curve instances are misleading, since `CurveEval` is created
from scratch from the original `Curve`, but this won't always be true.
2021-06-08 10:51:52 -05:00
a31bd2609f Cleanup: Remove duplicate call to function
Missed in rBa2ebbeb836ae765
2021-06-08 09:55:14 -05:00
Jeroen Bakker
2e19649bb9 Sequencer: Improve image crop transform performance.
While transforming an image, a matrix multiplication was done per pixel.
The matrix itself is always linear, so the multiplication could be replaced by two additions per pixel.

During testing in debug builds, playing back a movie went from 20 fps to
300 fps.

Reviewed By: zeddb

Differential Revision: https://developer.blender.org/D11533
2021-06-08 16:46:57 +02:00
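The idea can be sketched in a few lines of Python (illustrative only, not the committed code): for a linear/affine mapping, the source position of neighbouring pixels differs by a constant delta, so one matrix multiplication per row plus two additions per pixel is enough.

```
def transform_row(M, t, y, width):
    # One full transform for the first pixel of the row...
    sx = M[0][0] * 0 + M[0][1] * y + t[0]
    sy = M[1][0] * 0 + M[1][1] * y + t[1]
    # ...then stepping x by 1 only adds the constant first matrix column.
    dx, dy = M[0][0], M[1][0]
    out = []
    for _ in range(width):
        out.append((sx, sy))
        sx += dx
        sy += dy
    return out

print(transform_row(((1.0, 0.0), (0.0, 1.0)), (10.0, 5.0), y=2, width=4))
```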
7124c66340 Fix T87703: Failed assert when dragging object data-block into 3D View
Talked with Bastien and we ended up looking into this. The issue is that
duplication through drag & drop should also be considered a
"sub-process", like Shift+D duplication is. Added a comment explaining
why this is needed.
2021-06-08 16:45:34 +02:00
Jeroen Bakker
259b9c73d0 GPU: Thread safe index buffer builders.
Current index builder is designed to be used in a single thread.
This makes all index buffer extractions single threaded.
This patch adds a thread safe solution enabling multithreaded
building of index buffers.

To reduce locking, the solution provides a task/thread-local
index buffer builder (called a sub builder).
When a thread is finished, this thread-local index buffer builder
can be joined with the initial index buffer builder.

`GPU_indexbuf_subbuilder_init`: Initializes a sub builder. The
index list is shared between the parent and the sub builder, but the
counters are localized, ensuring that updating counters does
not need any locking.

`GPU_indexbuf_subbuilder_finish`: Merges the information of the
sub builder back into the parent builder. Needs to be invoked outside
the worker thread, or only once all worker threads have
finished. Internally, the function is not thread safe.

For testing purposes the extract_points extractor has been migrated to
the new API. For this, changes to the mesh extractor were needed.

* When creating tasks, the number of the current task is stored in
  ExtractTaskData, along with the total number of tasks.
* Two functions were added in `MeshExtract`:
** `task_init` initializes the task-specific userdata.
** `task_finish` merges the task-specific userdata back.
* A task_id parameter was added to the iteration functions so they can
  access the correct task data without any need for locking.

There is no noticeable change in end user performance.

Reviewed By: mano-wii

Differential Revision: https://developer.blender.org/D11499
2021-06-08 16:36:06 +02:00
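A rough Python analogy of the sub-builder pattern (assumed names, not the `GPU_indexbuf_*` API): workers fill disjoint parts of one shared index list and keep purely local counters, so nothing needs a lock until the single-threaded merge at the end.

```
from concurrent.futures import ThreadPoolExecutor

def build_indices(elements, num_tasks=4):
    shared = [None] * len(elements)     # index list shared by all sub builders
    counters = [0] * num_tasks          # one local counter per sub builder

    def sub_build(task_id):
        chunk = len(elements) // num_tasks + 1
        start = task_id * chunk
        for i, elem in enumerate(elements[start:start + chunk], start):
            shared[i] = elem * 3        # pretend "extraction" work
            counters[task_id] += 1      # local counter: no locking required

    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        for task_id in range(num_tasks):
            pool.submit(sub_build, task_id)

    # "finish" step: merge the local counters outside the worker threads.
    return shared, sum(counters)

print(build_indices(list(range(10))))
```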
08b0de45f3 Geometry Nodes: new Select by Material node
This node creates a boolean face attribute that is "true" for
every face that has the given material.

Differential Revision: https://developer.blender.org/D11324
2021-06-08 16:01:54 +02:00
Maxime Casas
2246d456aa Animation: Allow selection of FCurve + its keys
Selection of an FCurve with box/circle select now selects the entire
curve and all its keys:

- Box selecting a curve selects all the keyframes of the curve.
- Ctrl + box selecting of the curve deselects all the keyframes of the
  curve.
- Shift + box selecting of the curve extends the keyframe selection,
  adding all the keyframes of the curves that were just selected to the
  selection.
- In all cases, if the selection area contains a key, nothing is
  performed on the curves themselves (the action only impacts the
  selected keys).

Reviewed By: sybren, #animation_rigging

Differential Revision: https://developer.blender.org/D11181
2021-06-08 15:45:53 +02:00
9553ba1373 Fix compile error with 'WITH_CXX_GUARDEDALLOC'
Seen with msvc
2021-06-08 10:35:04 -03:00
a2ebbeb836 Fix T88934: Crash with line node count input < 0
Some of the primitive nodes can return null in an error condition.
This is confusing when mixed with adding a material slot in the calling
functions. This is the second crash caused by that confusion. It's
simpler to add the slot right when allocating the mesh, and it will
lend itself better to copy & paste coding in the future.

Differential Revision: https://developer.blender.org/D11530
2021-06-08 08:27:13 -05:00
5b014911a5 Revert "Cleanup: use cpp new/delete."
This reverts commit 43464c94f4.
2021-06-08 15:08:09 +02:00
23fd576cf8 Cleanup: replace NULL with nullptr. 2021-06-08 13:14:18 +02:00
43464c94f4 Cleanup: use cpp new/delete. 2021-06-08 13:12:49 +02:00
322a614497 Cleanup: replace typedef structs with structs. 2021-06-08 12:03:06 +02:00
340c535dbf Cleanup: Separate compile unit edituv. 2021-06-08 12:03:06 +02:00
088ea59b7e Cleanup: Separate compile unit lines_adjacency. 2021-06-08 12:03:06 +02:00
cac9828ae3 Cleanup: Separate compile unit lines_paint_mask. 2021-06-08 12:03:06 +02:00
9e9d45ae16 Cleanup: Separate fdots extraction in own compile unit. 2021-06-08 12:03:06 +02:00
89d0cc3a0c Fix T88719: Attribute Remove node input field does nothing
An unlinked multi-input socket was not handled correctly.
2021-06-08 11:51:45 +02:00
e54a4b355e CMake: Fix FindClang not finding system clang on linux in some cases.
In Debian e.g. Clang is part of LLVM, so we need to also check its root
directory sometimes to find Clang files.
2021-06-08 11:16:45 +02:00
933c2cffd6 Geometry Nodes: enable multi-threading in evaluator again
This reverts rB223c6e1ead2940a89465ff66765d16ac14a992b7
because T88598 is resolved now.
2021-06-08 10:43:57 +02:00
ed1fc9d96b BLI: support disabling task isolation in task pool
Under some circumstances using task isolation can cause deadlocks.
Previously, our task pool implementation would run all tasks in an
isolated region. Now using task isolation is optional and can be
turned on/off for individual task pools.

Task pools that spawn new tasks recursively should never enable
task isolation. There is a new check that finds these cases at runtime.
Right now this check is disabled, so that this commit is a pure refactor.
It will be enabled in an upcoming commit.

This fixes T88598.

Differential Revision: https://developer.blender.org/D11415
2021-06-08 10:39:33 +02:00
496045fc30 BMesh: simplify normal calculation, resolve partial update error
Simplify vertex normal calculation by moving the main normal
accumulation function to operate on vertices instead of faces.

Using faces had the downside that it needed to zero, accumulate and
normalize the vertex normals in 3 separate passes. Accumulating also
needed a spin-lock for thread safety, since each face would write its
normal to all of its vertices, which could be shared with other faces.

Now a single loop over vertices is performed without locking.
This gives 5-6% speedup calculating all normals.

This also simplifies partial updates, fixing a problem where
all connected faces were being read from when calculating normals.
While this could have been resolved separately,
it's simpler to operate on vertices directly.
2021-06-08 17:13:15 +10:00
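A simplified sketch of the change (Python for illustration, not the BMesh code, and without the angle weighting the real code uses): instead of looping over faces and pushing each face normal to its shared vertices under a lock, loop over vertices and pull from the connected faces in a single pass.

```
def vertex_normals(verts_to_faces, face_normals):
    normals = []
    for face_indices in verts_to_faces:      # one independent loop per vertex
        nx = ny = nz = 0.0
        for fi in face_indices:              # pull: read-only access to face data
            fx, fy, fz = face_normals[fi]
            nx, ny, nz = nx + fx, ny + fy, nz + fz
        length = (nx * nx + ny * ny + nz * nz) ** 0.5 or 1.0
        normals.append((nx / length, ny / length, nz / length))
    return normals

# Vertex 0 is shared by two faces; vertices 1 and 2 belong to one face each.
print(vertex_normals([[0, 1], [0], [1]], [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]))
```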
f651cc6c4e Cleanup: Silence compile warning in interface_widgets.c. 2021-06-08 08:25:39 +02:00
0efb627bbd Cleanup: Silence compile warning in curve_bevel.c. 2021-06-08 07:45:53 +02:00
1b07b7a068 LineArt: Threaded Object Loading.
Move BMesh conversion and all loading code into worker.

Reviewed By: Sebastian Parborg (zeddb)

Differential Revision: https://developer.blender.org/D11288
2021-06-08 13:03:37 +08:00
0abce91940 Fix test failure caused by earlier cleanup commit
rB8cbff7093d65 neglected to move the "pre-tessellation" modifier to the
next one before calculating the second part of the curve modifier stack.
2021-06-07 22:33:05 -05:00
2e46a8c864 Remove noop code from WM_MOUSEWHEEL processing.
ChildWindowFromPoint retrieves the child of the provided window at a
point. In this case it always returns 0 because HWND_DESKTOP is a flag
defined as 0, which is never a valid window handle and is not intended
for use in place of a window handle.

Forwarding of mousewheel events was added in adb08def61, and later
modified to the current non-working state in e9645806f5. Sending mouse
wheel events to the window under the cursor is a system preference and
should not be overridden by Blender, so the noop code
has been removed.
2021-06-07 20:15:28 -07:00
ef5a362a5b Improve multires performance.
Added a new API function to stitch multires grids
on specific faces in a mesh,
subdiv_ccg_average_faces_boundaries_and_corners,
and changed multires normal calc to use it.

VTune profiling showed that this was a major
performance hit once you get above 10,000 or so
base mesh faces and/or have a high number of
subdivision levels.

Here's a video comparing the difference. Note the
bpy.app_debug switch is not in the final commit.

{F10145323}

And the .blend file:

{F10145346}

Reviewed By: Sergey Sharybin (sergey)

Differential Revision:
https://developer.blender.org/D11334
2021-06-07 15:19:19 -07:00
a6715213c3 Revert "Improve multires performance."
...because I accidentally committed
submodule references.

This reverts commit 482465e18a.
2021-06-07 15:16:03 -07:00
482465e18a Improve multires performance.
Added a new API function to stitch multires grids
on specific faces in a mesh,
subdiv_ccg_average_faces_boundaries_and_corners,
and changed multires normal calc to use it.

VTune profiling showed that this was a major
performance hit once you get above 10,000 or so
base mesh faces and/or have a high number of
subdivision levels.

Here's a video comparing the difference. Note the
bpy.app_debug switch is not in the final commit.

{F10145323}

And the .blend file:

{F10145346}

Reviewed By: Sergey Sharybin (sergey)

Differential Revision:
https://developer.blender.org/D11334
2021-06-07 15:11:31 -07:00
b0ec1d2747 Cleanup: Order return argument last 2021-06-07 17:04:31 -05:00
1ef33be2d4 UI: Remove property descriptions exactly the same as names
These two descriptions are exactly the same as the property names,
which only wastes people's time when reading tooltips
2021-06-07 16:47:13 -05:00
d2aee304e8 Cleanup: Use const arguments, return by value
Also use Curve as an argument instead of Object, since the object was
only used to retrieve the curve, and the calling code is already working
with curve data.
2021-06-07 13:58:47 -05:00
6e56b42faa Fix T77651: Black screen on Blender startup on ChromeOS
Apparently `textureSize` doesn't work with
`sampler1DArray` on this OS.

Thanks to @dave1853 for finding the source of the
problem.
2021-06-07 15:51:08 -03:00
1c6e338d59 Cleanup: Make function static
This was not used in any other file, and it's not likely to be used
elsewhere in the future anyway.
2021-06-07 13:42:41 -05:00
7313b243f2 Cleanup: Remove outdated/useless comments
Some of the comments referenced code that was no longer there, or even
defines that were removed. Other comments were more confusing and
vague than helpful. Also adjust formatting in a few cases.
2021-06-07 13:29:37 -05:00
1182c26978 Cleanup: Remove unused function, make function private 2021-06-07 13:12:06 -05:00
8cbff7093d Cleanup: Decrease variable scope
Also use const and bool in a few places.
2021-06-07 13:08:03 -05:00
0fcc063fd9 Fix T88801: Positioning of Menu Underlines
This patch improves the positioning of the little mnemonic underlines
shown under some hotkey letters in menus, especially when using custom
fonts.

see D11521 for details and examples.

Differential Revision: https://developer.blender.org/D11521

Reviewed by Campbell Barton
2021-06-07 10:56:20 -07:00
1949643ee5 Fix: Wrong logic for checking if we can reuse decoded frame
We should only check if the new pts value lies inside the duration of
the current frame.
2021-06-07 18:16:33 +02:00
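The corrected check can be summarized in one hedged line of Python (field names are illustrative, not the decoder's actual variables):

```
def can_reuse_frame(frame_pts, frame_duration, new_pts):
    # Reuse only if the requested pts falls inside this frame's own duration.
    return frame_pts <= new_pts < frame_pts + frame_duration

print(can_reuse_frame(1000, 40, 1020))  # True
print(can_reuse_frame(1000, 40, 1041))  # False: belongs to the next frame
```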
7bf9d2c580 Cleanup: use doxy groups for bmesh_mesh_normals.c 2021-06-08 01:19:18 +10:00
4a9c5c60b7 Cleanup: Move extract lines to compile unit. 2021-06-07 16:57:21 +02:00
0e285fa23c Cleanup: Move extract tris in own compile unit. 2021-06-07 16:55:09 +02:00
214a78a46f Revert most of rB93a865dde775e.
This revert went too far, only one line (the minimal version of FFMPEG
for the `install_deps.sh` script) actually needed to be reverted...

Sorry for the noise.
2021-06-07 16:54:02 +02:00
f87f8532c3 Cleanup: split bmesh normal calculation into separate files 2021-06-08 00:50:25 +10:00
3da0b52c97 Cleanup: compiler warnings signed/unsigned mismatch 2021-06-08 00:50:25 +10:00
785a518ebe Cleanup: silence warnings 2021-06-07 11:30:25 -03:00
2bf56f7fbb Fix: do not use threading for 'extract_points'
`extract_points` doesn't support multithreading yet.
2021-06-07 11:30:17 -03:00
93a865dde7 Revert "Bump FFmpeg version from 4.2.3 to 4.4"
This reverts commit 95690dd362.

Such high version restriction is no more needed after rB9225fe933ae990.

Missing bit in https://developer.blender.org/D11417.
2021-06-07 16:25:34 +02:00
72d2355af5 Cleanup: remove redundant cast, use const casts 2021-06-08 00:23:09 +10:00
dfac5a63bd Cleanup: remove unused value 2021-06-08 00:09:07 +10:00
c87327ddeb Cleanup: use keyword only argument in bpy.props argument parsing
No functional changes as logic elsewhere already ensured this.

This just makes it obvious to anyone reading over the code that
these arguments are keyword only.
2021-06-08 00:07:19 +10:00
7ca5ba14b5 Cleanup: use keywords for unit tests
Prepare for function calls to be keyword only.
2021-06-08 00:07:19 +10:00
51bf1680bd Cleanup: quiet warning accessing deprecated attribute in bl_test 2021-06-08 00:07:19 +10:00
91d3a54869 Fix build error, remove duplicate include. 2021-06-07 19:11:36 +05:30
ee0000b8bb LineArt: Shifting fix for different camera fitting.
The FOV was expanded to cover the shifting range
rather than cutting precisely at the image border. Now fixed.

Reviewed By: Sebastian Parborg (zeddb)

Differential Revision: https://developer.blender.org/D11523
2021-06-07 21:07:17 +08:00
7f1d1b03ad Cleanup: Fix uninitialized variable warning 2021-06-07 07:54:49 -05:00
abee9a85d4 Cleanup: renamed function to extract_run_single_threaded. 2021-06-07 13:51:47 +02:00
Germano Cavalcante
223016a408 GPUIndexBuf: Find the minimum and maximum index through the builder
Moving the bounds code to the builder can be useful
for future optimizations like building multithreaded.

Reviewed By: fclem, jbakker

Differential Revision: https://developer.blender.org/D11455
2021-06-07 08:41:38 -03:00
6e6a1838ea Cleanup: Added Guardedalloc deallocators to C++ structs. 2021-06-07 13:34:30 +02:00
1d3ffc93ec Added TODO comment for putting parameters into a struct. 2021-06-07 13:31:09 +02:00
e517aaa136 Cleanup: move extract points into own compile unit. 2021-06-07 13:27:38 +02:00
8b8c3c34dd Fix T88900: Crash when setting Edge Weight/Crease
The `recalcData` of "convert_mesh_edge" did more
than it was supposed to.
2021-06-07 07:52:34 -03:00
4f6cab176a Cleanup: remove trailing space from pipeline_config.json
Remove trailing spaces from `pipeline_config.json`

No functional changes.
2021-06-07 11:36:26 +02:00
98876d46ef Fix T88899: __file__ not set for text.as_module() 2021-06-07 14:04:26 +10:00
7ef2b760dc Event Simulate: add a --keep-open command line argument
It can be useful to investigate the state of the file
after event simulation runs.
2021-06-06 23:05:46 +10:00
c27b7df563 Cleanup: unused argument 2021-06-06 23:03:34 +10:00
a496af8680 LineArt: Fix edge clipping index error.
Small bug that caused the edge count to be incorrect in the
final culled list, being offset by exactly 1 entry.

Reviewed By: Sebastian Parborg (zeddb)

Differential Revision: https://developer.blender.org/D11513
2021-06-06 11:18:18 +08:00
54ce344bc7 VSE: Remove seq->tmp usage from transform code
This field was used by the extend feature to get the handle position of
metastrip children. Since D9972 the extend feature works only on the meta
strip itself, not its children.
So the second argument of `SEQ_transform_get_left_handle_frame()` is always
false and can be removed.

Another instance of `seq->tmp` usage is a hack to distinguish strips to be
shuffled, which is not covered by this patch.

Reviewed By: campbellbarton

Differential Revision: https://developer.blender.org/D10321
2021-06-06 03:05:45 +02:00
dbfde0fe70 Exact Boolean: fix last commit: pass an arg by reference instead of value. 2021-06-05 14:17:15 -07:00
8e43ef5f31 Exact Boolean: speed up when there are many separate components.
Use bounding box tests to quickly tell that two components cannot
have a containment relation with each other. This change
cut about 0.6s off a test with 25 big icospheres.
2021-06-05 11:31:08 -07:00
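A minimal sketch of such a prefilter (Python for illustration, not the mesh code): component A can only be contained in component B if A's bounding box lies inside B's, so a cheap box test can rule out most pairs before the expensive exact containment test.

```
def bbox_inside(a_min, a_max, b_min, b_max):
    # True when box A is fully inside box B on every axis.
    return all(bl <= al and ah <= bh
               for al, ah, bl, bh in zip(a_min, a_max, b_min, b_max))

print(bbox_inside((1, 1, 1), (2, 2, 2), (0, 0, 0), (5, 5, 5)))     # True: test further
print(bbox_inside((9, 9, 9), (10, 10, 10), (0, 0, 0), (5, 5, 5)))  # False: skip pair
```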
edaaa2afdd Limit Rotation: explicitly orthogonalize the matrix before processing.
Add a call to orthogonalize the matrix before processing for the
same reasons as D8915, and an early exit in case no limits are
enabled for a bit of extra efficiency.

Since the constraint goes through Euler decomposition, it would
in fact remove shear even before this change, but the resulting
rotation won't make much sense.

This change allows using the constraint without any enabled limits
purely for the purpose of efficiently removing shear.

Differential Revision: https://developer.blender.org/D9626
2021-06-05 16:29:46 +03:00
d2dc452333 Limit Rotation: add an Euler Order option.
Since Limit Rotation is based on Euler decomposition, it should allow
specifying the order to use for the same reasons as Copy Rotation does,
namely, if the bone uses Quaternion rotation for its animation channels,
there is no way to choose the order for the constraint.

Ref D9626
2021-06-05 16:29:46 +03:00
2cd1bc3aa7 Fix T88859: Assert when changing view modes
The `loose_lines`' ibo was not being initialized.
2021-06-05 09:49:48 -03:00
1a912462f4 BMesh: avoid extra faces-of-edges loop building partial update data 2021-06-05 21:33:15 +10:00
bcf9c73cbc Fix assert check in BLI_polyfill_beautify 2021-06-05 17:16:37 +10:00
3c9c557580 Fix assert in gpencil_batches_ensure 2021-06-05 17:13:12 +10:00
c7fee64dea Cleanup: use ternary operator for icon argument 2021-06-05 17:05:11 +10:00
ca3891e83b Fix T88828: View/Navigation(Walk/Fly) disappeared from keymap
This was unintentionally removed in
f92f5d1ac6.
2021-06-05 15:06:48 +10:00
a5114bfb85 Cleanup: indentation 2021-06-05 15:06:47 +10:00
022f8b552d Cleanup: spelling in comments
Also remove reference to function that never existed for adding `bNode`.
2021-06-05 15:03:59 +10:00
14508ef100 FFmpeg: Fix seeking not returning the correct frame when not using TC index
Fixed the logic for seeking in ffmpeg video files.
The main fix is that we now apply a small offset in ffmpeg_get_seek_pos
to make sure we don't get the frame in front of the seek position when
seeking backward.

The rest of the changes are general cleanup and untangling of the code.

Reviewed By: Richard Antalik

Differential Revision: http://developer.blender.org/D11492
2021-06-05 02:48:09 +02:00
bfaf09b5bc Fix T88813: Scalable allocator not used on win10
Due to the way we ship the CRT on Windows, TBB's
malloc proxy was unable to attach itself to
the memory management functions on Windows 10.

This change moves ucrtbase.dll out of the blender.crt
folder and back into the main blender folder to sidestep
some undesirable behaviour on Win10, making TBB
once more able to attach itself.

Having this work again should give a speed
boost in memory-allocation-heavy workloads
such as Mantaflow.

For details on how this only failed on Win10
see T88813
2021-06-04 17:22:31 -06:00
c2fa36999f Edit Mesh: partial updates for normal and face tessellation
This patch exposes functionality for performing partial mesh updates
for normal calculation and face tessellation while transforming a mesh.

The partial update data only needs to be generated once,
afterwards the cached connectivity information can be reused
(with the exception of changing proportional editing radius).

Currently this is only used for transform, in the future it could be
used for other operators as well as the transform panel.

The best-case overall speedup while transforming geometry is about
1.45x since the time to update a small number of normals and faces is
negligible.

For an additional speedup partial face tessellation is multi-threaded,
this gives ~15x speedup on my system (timing tessellation alone).
Exact results depend on the number of CPU cores available.

Ref D11494

Reviewed By: mano-wii
2021-06-05 08:35:31 +10:00
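Conceptually (assumed names, a Python sketch rather than the actual API): the connectivity of the edited region is gathered once, and every transform step then only re-tessellates and re-normalizes that cached subset.

```
def build_partial_update(moved_verts, vert_to_faces):
    faces = set()
    for v in moved_verts:
        faces.update(vert_to_faces[v])
    return {"verts": set(moved_verts), "faces": faces}   # computed once, then reused

def apply_transform_step(partial, recalc_face, recalc_vertex_normal):
    for f in partial["faces"]:
        recalc_face(f)                  # tessellation only for the touched faces
    for v in partial["verts"]:
        recalc_vertex_normal(v)         # normals only for the moved vertices

partial = build_partial_update([2], {0: [0], 1: [0], 2: [0, 1], 3: [1]})
apply_transform_step(partial,
                     lambda f: print("tessellate face", f),
                     lambda v: print("update normal of vertex", v))
```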
Charlie Jolly
00073651d4 Nodes: Add Multiply Add to Vector Math nodes
Cycles, Eevee, OSL, Geo, Attribute

This operator provides consistency with the standard math node. Allows users to use a single node instead of two nodes for this common operation.

Reviewed By: HooglyBoogly, brecht

Differential Revision: https://developer.blender.org/D10808
2021-06-04 16:59:28 +01:00
eb030204f1 windows/deps: Fix TBB build issues.
rB847579b42250 updated the TBB build script
which had some unintended consequences for
windows as the directory layout slightly
changed.

This change adjusts the builder to the new
structure, there are no version/functional
changes.
2021-06-04 09:16:03 -06:00
Philipp Oeser
56005ef499 Texture Paint: changing paint slots and viewport could go out of sync
When changing to another texture paint slot, the texture displayed in
the viewport should change accordingly (as well as the image displayed
in the Image Editor).

The procedure to find the texture to display in the viewport
(BKE_texpaint_slot_material_find_node) could fail
though because it assumed iterating nodes would always happen in the
same order (it was index based). This is not the case though, nodes can
get sorted differently based on selection (see ED_node_sort).

Now check the actual image being referenced in the paint slot for
comparison.

ref T88788 (probably enough to call this a fix, the other issue(s)
mentioned in the report are more likely a feature request)

Reviewed By: mano-wii

Maniphest Tasks: T88788

Differential Revision: https://developer.blender.org/D11496
2021-06-04 16:02:02 +02:00
b6d6d8a1aa LibOverride: Fix early break in some of the resync code.
This `break` moved out of its braces at some point in the previous
fixes/refactors... :(
2021-06-04 15:54:07 +02:00
3f4f28082f Fix build error after merging master 2021-06-04 08:17:41 -04:00
92ce1c8367 Merge branch 'master' into temp-spreadsheet-row-filter 2021-06-04 08:12:20 -04:00
0037e08b06 GPencil: Change Fill Boundary icon
The icon has been changed to `eye` because is more consistent with other areas.
2021-06-04 12:44:34 +02:00
b6b20c4d92 GPencil: Change Fill extend lines icon
The icon has been changed to `eye` because it is more consistent with other areas.
2021-06-04 12:44:34 +02:00
Fynn Grotehans
d486ee2dbd Update Camera presets
The (tracking) camera presets have not been updated in the last 7 or
more years, so they are very outdated. I found it pointless to have a
few specific camera models in the list and instead added the most commonly
used sensor sizes/film sizes. This way the list is shorter, easier to
maintain, less likely to become outdated, and more user friendly for most people
who don't own any of the specific cameras. I added the crop factor to the
beginning of the name, so it gets sorted in the correct order and presets
are easier to find based on the size.

Reviewed By: #render_cycles, #motion_tracking, brecht, sergey

Differential Revision: https://developer.blender.org/D10739
2021-06-04 12:19:58 +02:00
9ba6b64efa Greasepencil: show pressure curve widgets in the sidebar
These were only showing in the Properties Editor, but there is no reason
to have the panels be different in the sidebar (they should not show in
the top bar though).

agreed upon by both @anoniov and @mendio

ref T88787
2021-06-04 12:17:33 +02:00
e4ca6b93ad BlenLoad: Ensure linked IDs are properly sorted.
So far, linked IDs were not properly sorted at all; only the ones
explicitly linked from WM code would be, but any indirectly linked
data-blocks would end up in some random order in their lists.

While not ideal, this is not a huge issue in itself, but it had bad
side-effects, e.g. causing (recursive) resync of overrides to happen in
random order, leading to mismatches between name indices of newly-generated
override IDs and the existing ones, e.g.

And in general, it is much better to be consistent here.

Note that the file sub-version is bumped for this commit, since some
sorting (of the directly linked IDs which we keep a reference to) should
never need to be re-done after the relevant doversion process.
2021-06-04 12:09:05 +02:00
77d7cae266 GPencil: Cleanup unneeded variable assign
The variable is assigned below again and the initial value is not used.
2021-06-04 10:44:01 +02:00
f4e0a19d4f Fix T88803: GPencil Thickness modifier produces thicker lines
There was a double apply of the thickness due to a bug in the new fading parameter.

Differential Revision: https://developer.blender.org/D11483
2021-06-04 10:34:05 +02:00
053082e9d8 Math: Added max_uu/min_uu variations. 2021-06-04 09:14:15 +02:00
Johnny Matthews
ddd4b2b785 Geometry Nodes: Curve Length Node
This commit adds a node that outputs the total length of all
evaluated curve splines in a geometry set as a float value.

Differential Revision: https://developer.blender.org/D11459
2021-06-03 19:12:38 -04:00
e5a1cadb2f Geometry Nodes: Support curve data in the geometry delete node
This commit implements support for deleting curve data in the geometry
delete node. Spline domain and point domain attributes are supported.

Differential Revision: https://developer.blender.org/D11464
2021-06-03 18:33:37 -04:00
c18675b12c Cleanup: Add comment explaining assert
This triggers fairly often during development, so it might save some
frustration at some point to have a comment here.
2021-06-03 18:06:31 -04:00
dba3fb9e09 Overlay: Flash on Mode Transfer overlay
This implements T87633

This overlay renders a flash animation on the target object when
transferring the mode to it using the mode transfer operator.
This provides visual feedback when switching between objects without
extra overlays that affect the general color and lighting in the scene.

Differences with the design task:

- This uses just a fade out animation instead of a fade in/out animation.
The code is ready for fade in/out, but as the rest of the overlays
(face sets, masks...) change instantly without animation, having a fade
in/out effect gives the impression that the object flashes twice (once
for the face sets, twice for the peak alpha of the flash animation).

- The rendering uses a flat color without fresnel for now, but this can
be improved in the future to make it look more like the shader in the
prototype.

- Not enabled by default (can be enabled in the overlays panel), maybe
the defaults can change for 3.0 to disable fade inactive and enable this
instead.

Reviewed By: jbakker, JulienKaspar

Differential Revision: https://developer.blender.org/D11055
2021-06-03 20:17:17 +02:00
3e695a27cd VSE: Add refresh_all operator to all sequencer regions
This operator is needed in some cases to update image preview.
In workspaces with smaller timelines this is limiting, because users
need to first check that the mouse cursor is in the correct place, then press
the Ctrl+R shortcut.
2021-06-03 19:58:36 +02:00
a1063fc6c2 VSE: Remove JPEG reference from proxy panel
Proxies don't use the MJPEG codec anymore, but the text still referenced it.
2021-06-03 19:44:00 +02:00
b414322f26 GHOST/wayland: fix restoring hidden cursor 2021-06-03 18:18:27 +01:00
899eefd1bb GHOST/wayland: fix non-moving normal cursor 2021-06-03 18:18:27 +01:00
72607feb91 GHOST/wayland: add 'xdg-decoration' support 2021-06-03 18:18:27 +01:00
870bcf6e1a GHOST/wayland: adapt window and cursor surface scale to support HiDPI screens 2021-06-03 18:18:27 +01:00
8b78510fc4 GHOST/wayland: handle return values for data sources 2021-06-03 18:18:27 +01:00
e0bc5c4087 GHOST/wayland: set parent relation only for dialogs to mimic X11 behaviour 2021-06-03 18:18:27 +01:00
1fd653dd82 GHOST/wayland: handle missing relative-pointer and pointer-constraints support 2021-06-03 18:18:27 +01:00
d9aae38bc8 GHOST/wayland: get cursor settings via D-Bus 2021-06-03 18:18:27 +01:00
efad9bcdda GHOST/wayland: inhibit warning 2021-06-03 18:18:27 +01:00
b5d7fb813f cmake: use absolute path for Wayland libraries 2021-06-03 18:18:27 +01:00
bb16f96973 GHOST/wayland: set swap interval to 0
The Wayland server will not update hidden surfaces. This will block the
main event loop and thus also block updates to visible windows in a multi-
window setup.
2021-06-03 18:18:27 +01:00
7197017ea9 WM: only use the tablet drag threshold for mouse button events
Keyboard click-drag events now use the "Drag Threshold".
This resolves a problem where keyboard click drag events
used a much smaller threshold when using a tablet.
2021-06-04 01:18:23 +10:00
ed6fd01ba9 LibOverride: fix previous commit (rB826bed4349fa). 2021-06-03 16:44:20 +02:00
826bed4349 LibOverride: Fix some fail cases with auto-resync.
In some cases, e.g. when only objects actually needed a resync,
collections on the override character would not be resynced, and if some
objects were sharing relationships with others, those could be
lost/destroyed.
2021-06-03 16:12:26 +02:00
797f6e1483 Fix missing updates in RNA override create functions. 2021-06-03 15:44:23 +02:00
e011e4ce76 LibOverride: Add override_hierarchy_createto ID's RNA API. 2021-06-03 15:00:50 +02:00
92f8a6ac21 Fix T88762: UI using tab to enter next button could clamp the hard min/
max unnecessarily

Since rB298d5eb66916 [which was needed to update buttons with custom
property range functions correctly], using tab would always clamp
(hardmin/hardmax) properties which were using FLT_MAX / INT_MAX as range
in their property definitions.

The clamping of rB298d5eb66916 was copied over from rB9b7f44ceb56c
[where it was used for the softmin/softmax], and while the re-evaluation
of hardmin/hardmax is needed for custom property range functions, the
clamping should actually not take place.

There are many properties using FLT_MAX / INT_MAX etc. and while it
probably would be good to update these with ranges that make more sense
-- not using FLT_MAX / INT_MAX would not have done the clamping here --
there should not be an arbitrary limit to these and they should stay as
they are.

Maniphest Tasks: T88762

Differential Revision: https://developer.blender.org/D11473
2021-06-03 14:16:53 +02:00
7b8d812277 Cleanup: make format 2021-06-03 13:33:09 +02:00
2ae4e860f6 LibOverride: ensure proper indirect tag for 'virtual' linked IDs.
Ensure 'virtual' linked override IDs generated by the recursive resync
process are tagged as indirectly linked data.

This is needed to avoid the 'missing data' messages on those virtual
data-blocks after saving and reloading.
2021-06-03 10:27:05 +02:00
2ef192a55b IDManagement: Collection: Fix several issues in relationships building code.
`BKE_main_collections_parent_relations_rebuild`,
`BKE_collection_parent_relations_rebuild` and their internal
dependencies had two issues fixed by this commit:

* The main one was that the same collection could be processed several times,
  sometimes even in an infinite loop (in some rare corner cases), by
  `collection_parents_rebuild_recursive`.
* More exotic: the code here would not ensure that the collections it was
  processing were actually in Main (or a master one from a scene in
  Main), which became an issue with some advanced ID management
  processes involving partially out-of-main remapping, like liboverride
  resync.
2021-06-03 10:27:05 +02:00
a51f8f94d5 Cleanup: use ascii characters instead of unicode where possible
Follow own code style docs.
2021-06-03 11:32:53 +10:00
17f72be3cb Cleanup: spelling in comments, correct outdated comments 2021-06-03 10:47:02 +10:00
2a868d277e Cleanup: use doxy sections for node_relationships.cc 2021-06-03 10:35:46 +10:00
b60a72eaab Fix invalid return values from file_execute
Error in 6c8c30d865
2021-06-03 10:23:40 +10:00
2dcb6782e0 Draw Mesh Extractor: Fix used thread count
Some threads were always idle because of this.
2021-06-02 18:05:58 -03:00
4d64de2853 Cleanup: Remove unused 'ExtractTaskData's members 2021-06-02 17:55:25 -03:00
5af7225816 Cleanup: Fix build warnings 2021-06-02 21:52:36 +02:00
925df8ef26 VSE: Add strip-time intersection test function
Use SEQ_time_strip_intersects_frame function to test if strip intersects with frame.

Note: There are cases where this function should not be used. For example, splitting
strips requires at least 1 frame "inside" the strip. Another example is drawing, where
the playhead technically doesn't intersect the strip, but it is still rendered, because the current
frame has a "duration" or "thickness" of 1 frame.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D11320
2021-06-02 21:41:38 +02:00
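For illustration only (a hedged sketch, not the exact `SEQ_time_strip_intersects_frame` implementation), the test amounts to a half-open range check:

```
def strip_intersects_frame(strip_start, strip_end, frame):
    # The strip occupies [strip_start, strip_end): the end frame is exclusive.
    return strip_start <= frame < strip_end

print(strip_intersects_frame(10, 20, 10))  # True
print(strip_intersects_frame(10, 20, 20))  # False
```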
2ee575fc1f Cleanup: Strip duplication code
Remove unused flag `SEQ_DUPE_ANIM` and code used by this flag.
Remove flag `SEQ_DUPE_CONTEXT` and refactor code, to split operator
logic from duplication code.
Reduce indentation level in for loop.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D11318
2021-06-02 21:41:17 +02:00
1f55786791 Fix T57397: Movies are blurred after sws_scale
Images with 4:2:2 and 4:4:4 chroma subsampling were blurred when
`SWS_FAST_BILINEAR` interpolation is set for `anim->img_convert_ctx`.

Use `SWS_BILINEAR` interpolation for all movies, as performance is
not impacted by this change.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D11457
2021-06-02 21:29:38 +02:00
a9dfde7b49 FFmpeg: Update proxy settings
Changes in rBce649c73446e affected the established proxy codec preset.
Presets were not working and all presets behaved like `veryfast`.
Tunes are now working too, so the `fastdecode` tune can be used. I have
measured little improvement, but I tested this only on 2 machines and
I have been informed that the `fastdecode` tune does influence decoding
performance for some users.

Change the preset from `slow` to `veryfast` and add the `fastdecode` tune.

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D11454
2021-06-02 21:25:37 +02:00
0ea0ccc4ff FFmpeg: Fix H264 lossless render not lossless
While encoder parameters for lossless encoding are set correctly,
output is not lossless due to pixel format being set to
`AV_PIX_FMT_YUV420P` which is inherently lossy due to chroma subsampling.

This was reported in T61569 and was merged to T57397, but there were
2 bugs - one for encoding and one for decoding.

Set pixel format to `AV_PIX_FMT_YUV444P` when rendering lossless H264
files. This format isn't available in `codec->pix_fmts[0]` and it looks
like it has to be hard-coded.

Reviewed By: sergey

Differential Revision: D11458
2021-06-02 21:24:09 +02:00
81366b7d2c Boolean exact: speedup by parallelizing a plane calculation.
This patch is from erik85, who says:
This patch makes populate_plane inside polymesh_from_trimesh_with_dissolve run in parallel.
On a test file with a boolean between two subdivided cubes (~6 million verts) this gives a 10% speed increase (49.5s to 45s) on my 6 core CPU.

Also there is an optimization of other_tri_if_manifold to skip the contains-call and get the pointer directly.
This reduces CPU time for find_patches from 5s to 2.2s on the same test file.
2021-06-02 12:08:46 -07:00
dc960a81d1 Boolean exact: speedup when there are many components.
When there are many components (separate pieces of connected mesh),
a part of the algorithm to determine component containment was slow.
Using a float version of finding the nearest point on a triangle
as a prefilter sped this up enormously. A case of 25 icospheres
subdivided twice goes 11 seconds faster on my Macbook pro with this
change.
2021-06-02 11:18:00 -07:00
4f8edc8e7f Nodes: move some files to C++
This just moves a couple of files in `space_node` to C++ and fixes
related errors.

The goal is to be able to use C++ data structures to simplify the code.

Differential Revision: https://developer.blender.org/D11451
2021-06-02 17:20:46 +02:00
b8ae30e9e3 Cleanup: unused variable 2021-06-03 01:12:00 +10:00
ae28ceb9d8 Fix swapped x/y in event simulation script
Incorrect area center calculation, also correct comments.
2021-06-03 01:11:47 +10:00
05b685989b Fix T88732: Curve to mesh node crash with empty input curve
The mesh to curve node generated an empty curve because no edges were
selected. This commit changes that node to not add a curve in that case.

This also changes the curve to mesh node to not add a material when
no mesh was created. Even though we don't expect null curves or meshes
in this case, the change is harmless.
2021-06-02 11:11:34 -04:00
7654203cc8 EEVEE: AOVs not same as cycles.
EEVEE uses hashing to sync AOV names and types with the GPU.
For the type, a bit of the hashed value was overridden, making `decalA`
and `decalB` choose the same hash. This patch fixes this
by removing the most significant bit.
2021-06-02 16:58:36 +02:00
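An illustration of the described fix (the constants and helper are made up, not EEVEE's code): the type is stored in the top bit of the key, so the name hash must not use that bit, otherwise two names whose hashes differ only in that bit collide once the type bit overwrites it.

```
VALUE_TYPE_BIT = 1 << 31          # hypothetical flag bit for "value" AOVs

def aov_key(name, is_value):
    name_hash = hash(name) & 0x7FFFFFFF          # drop the most significant bit
    return name_hash | (VALUE_TYPE_BIT if is_value else 0)

print(hex(aov_key("decalA", is_value=False)))
print(hex(aov_key("decalA", is_value=True)))     # same name, different type: distinct keys
```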
2489f72d79 Revert "EEVEE: AOVs not same as cycles."
This reverts commit 730a46e87d.
2021-06-02 16:56:10 +02:00
730a46e87d EEVEE: AOVs not same as cycles.
EEVEE uses hashing to sync AOV names and types with the GPU. For the type, a bit of the hashed value was overridden, making `decalA` and `decalB` choose the same hash. This patch fixes this by removing the most significant bit.
2021-06-02 16:54:01 +02:00
f944121700 GPencil: New operator to Normalize strokes
Sometimes it is required to reset the thickness or the opacity of the strokes. Until now this was done using a modifier, but this operator solves it directly.

Reviewed By: mendio, filedescriptor

Maniphest Tasks: T87427

Differential Revision: https://developer.blender.org/D11453
2021-06-02 16:43:37 +02:00
21de669141 Splines: Function to copy spline settings without data
Often you need to copy a spline to do an operation, but don't want
to manually copy over all of the settings. I've already forgotten to
do it once anyway. These functions copy a spline's "settings" into a
new spline, but not the data. I tried to avoid duplicating any copying
so this is easier to extend in the future.

Differential Revision: https://developer.blender.org/D11463
2021-06-02 09:11:35 -04:00
5b176b66da LineArt: Tolerance for faces perpendicular to view
This is due to a cam->obmat precision issue,
which affects view vector precision.

Reviewed by Sebastian Parborg (zeddb)

Differential Revision: https://developer.blender.org/D11461
2021-06-02 20:55:54 +08:00
a55b73417f Cleanup: Correct comments
This corrects an outdated comment in the vector header and a typo
in the index mask header.
2021-06-02 08:28:46 -04:00
ea6d099082 Cleanup: Clang format 2021-06-02 08:27:53 -04:00
34f99bc6be Geometry Nodes: Allow reading converted attribute directly from spline
Often it would be beneficial to avoid the virtual array implementation
in `geometry_component_curve.cc` that flattens an attribute for every
spline and instead read an attribute separately for every input spline.
This commit implements functions to do that.

The downside is some code duplication: we now have two places handling
this conversion. However, we can head in this general direction for the
attribute API anyway and support accessing attributes in smaller
contiguous chunks where necessary.

No functional changes in this commit.

Differential Revision: https://developer.blender.org/D11456
2021-06-02 08:24:42 -04:00
6cd64f8caa Add workaround for gcc 11 compiler bug
Differential Revision: https://developer.blender.org/D11462
2021-06-02 14:13:55 +02:00
f92f5d1ac6 Keymap: use D-Key for view-pie menu
Use the D-key to access the view menu instead of the Tilde key,
which isn't accessible on some international layouts.

To resolve the conflicts the following changes have been made.

- `D` (motion) opens the view menu.
- `D` (mouse-button) uses grease-pencil (as it does currently).
- `Tilde` is used for "Object Mode Transfer" instead of the View menu.

See T88092 for details.

Reviewed By: Severin

Ref D11189
2021-06-02 21:34:30 +10:00
69e15eb1e4 Object: support running transfer mode in any object mode
There is no need to limit this to sculpt mode;
this prepares for keyboard shortcut changes, see T88092.
2021-06-02 21:29:15 +10:00
facc62d0d5 Docs: 2.93 release description for Linux appdata 2021-06-02 13:22:16 +02:00
46447594de Fix T88567: Cryptomatte only works for the first View Layer.
The view layer was always set to 0. This patch increments it.
2021-06-02 10:05:39 +02:00
507c19c0f7 Docs: formalize naming for generic callbacks in BKE_callbacks.h
Add a doc-string explaining the purpose of each callback and
how they should be used.

Also add a currently unused callback 'POST_FAIL' that is to be used in
cases where the action fails, giving script authors a guarantee that a
call to `pre` will always have a matching `post`/`post_fail` call.

- D11422: adds a callback that can use 'post_fail'.
- T88696: proposed these conventions.
2021-06-02 17:35:24 +10:00
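A hedged sketch of the convention in Python (names are illustrative, not the BKE_callbacks API): every `pre` notification is guaranteed a matching `post` or `post_fail`, whichever way the action ends.

```
def run_with_callbacks(action, callbacks):
    callbacks.get("pre", lambda: None)()
    try:
        result = action()
    except Exception:
        callbacks.get("post_fail", lambda: None)()   # the action failed
        raise
    callbacks.get("post", lambda: None)()            # the action succeeded
    return result

run_with_callbacks(lambda: print("saving..."),
                   {"pre": lambda: print("save_pre"),
                    "post": lambda: print("save_post"),
                    "post_fail": lambda: print("save_post_fail")})
```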
3b3742c75f Cleanup: trailing commas to avoid right shift
This matches most declarations already in this file.
2021-06-02 17:06:12 +10:00
b5a883fef9 Cleanup: spelling in comments 2021-06-02 17:04:57 +10:00
711ddea60e Fix assert with geometry node output
The previous commit (my own) returned early without providing a value
for the node's output geometry set, which is required.
2021-06-01 23:50:23 -04:00
6b5bbd22d9 Cleanup: Avoid duplicating node input retrieval
Pass the selection name and the invert argument to each component
instead of retrieving them every time.
2021-06-01 17:51:21 -04:00
Wannes Malfait
464797078d Geometry Nodes: Add Delete Geometry Node
This node is similar to the mask modifier, but it deletes the elements
of the geometry corresponding to the selection, which is retrieved as
a boolean attribute. The node currently supports both mesh and point
cloud data. For meshes, which elements are deleted depends on the
domain of the input selection attribute, just like how behavior depends
on the selection mode in mesh edit mode.

In the future this node will support curve data, and ideally volume
data in some way.

Differential Revision: https://developer.blender.org/D10748
2021-06-01 17:32:03 -04:00
3400ba329e VSE: Use own category for metadata panel
The Metadata panel was visible in each category. In other editors, this
panel is usually placed in a category with other source media properties.
In the sequencer, metadata is transferred over while compositing and the relation
to a particular strip is lost, therefore a separate category for metadata
seems to be the best option. Since the Metadata panel is alone in this category,
it will be open by default.
2021-06-01 23:06:31 +02:00
17b09b509c Geometry Nodes: Skip calculating normals in transform node
This commit skips the eager recalculation of mesh normals in the
transform node. Often another deformation or topology-altering
operation will happen after the transform node, which means the
recalculation was redundant anyway.

In one of my test cases this made the node more than 14x faster.
Though depending on the situation the cost of updating the normals
may just be shifted elsewhere.
2021-06-01 12:03:59 -04:00
5b6e0bad1b Fix T88715: particle size influence texture not working for 'keyed' or 'none' physics types
This was reported for the special case of mapping with "Strand /
Particle" coords, but was not working with other coordinates either.

Don't see a reason for not supporting Size influence textures for these
kinds of particles (and since these types of particles have an "age"
like all others, even the "Strand / Particle" coords are
supported here as well).

Maniphest Tasks: T88715

Differential Revision: https://developer.blender.org/D11449
2021-06-01 17:54:20 +02:00
404b946ac0 LibOverride: Fix again infinite loop in resync in some complex/degenerated cases.
Broken in a recent refactor of (recursive) resync; reported by the studio,
thanks.
2021-06-01 17:37:22 +02:00
6583fb67c6 Geometry Nodes: add empty material slot to new meshes
This fixes T88455 by adding an empty material slot to newly
generated meshes. This allows the object to overwrite the
"default" material without any extra nodes. Technically,
all polygons reference the material index 0 already, so it
makes sense to add a material slot for this material index.

Differential Revision: https://developer.blender.org/D11439
2021-06-01 15:25:01 +02:00
James Monteath
8a63466ca3 BuildBot: Cleanup
Removing scripts that were placed in the source tree that would drive
the old buildbot. With the new buildbot in place these scripts aren't
being used anymore.

The buildbot is currently driven by
`build_files/config/pipeline_config.json`. In the near future, make
update will also use this config.

Overview of the new buildbot: https://wiki.blender.org/wiki/Infrastructure/BuildBot

Work done by James Monteath.
2021-06-01 15:18:11 +02:00
James Monteath
ee54a8ace7 Update all READMEs to clarify intention or usage
Add snap configuration file used by Buildbot snap store steps
2021-05-31 18:58:42 +02:00
James Monteath
ffe7a41540 Fix typos 2021-05-28 20:50:52 +02:00
James Monteath
74c7e21f6c Add and update README.md files for CI script removal 2021-05-28 19:46:53 +02:00
James Monteath
fc2b56e68c Add branch based pipeline config file used by buildbot and make_update.py script. 2021-05-28 17:53:02 +02:00
James Monteath
51bbdfbef3 Moved to new git repo 2021-05-28 12:16:45 +02:00
James Monteath
5baf1dddd5 Buildbot related files have been moved to own repository 2021-05-28 10:16:39 +02:00
7680d4fc6a Add support for float2, float3, color, and instance columns 2021-05-18 13:14:08 -04:00
8d7866f4b3 Merge branch 'master' into temp-spreadsheet-row-filter 2021-05-18 11:20:48 -04:00
c9c88b9626 Use "Spreadsheet" instead of "SpreadSheet" 2021-04-22 08:56:22 -05:00
deff5f1cd0 Remove unnecessary comment 2021-04-22 08:56:10 -05:00
bdcdc973a7 Fix memory leak 2021-04-22 08:56:01 -05:00
ec96577057 Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-22 08:54:35 -05:00
a79288785c Remove commented code / always display "Only Selected" option 2021-04-19 16:49:31 -05:00
7612308189 Sort RNA definition automatically 2021-04-19 16:47:02 -05:00
f80bd477ac Don't mark unreachable code incorrectly 2021-04-19 16:46:53 -05:00
582928f081 Give operators more specific names 2021-04-19 16:46:26 -05:00
1d8983c549 Use StringRefNull 2021-04-19 16:39:23 -05:00
6c1ef12e80 Move assigning column runtime data to the main draw function 2021-04-19 16:38:21 -05:00
94cc56b450 Remove "T" keymap item 2021-04-19 16:36:11 -05:00
5877465c20 Don't gray out filter panels when their string is empty 2021-04-19 16:33:40 -05:00
9094e3cd77 Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-19 16:28:15 -05:00
c62d1b5476 Remember the last data type when a column is no longer visible 2021-04-16 12:49:37 -05:00
8919c68adb Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-16 11:57:39 -05:00
5e5242dbc9 Compare display names instead of the column ID 2021-04-14 12:50:18 -05:00
1b31d7bf47 Use nullptr instead of NULL 2021-04-14 11:32:26 -05:00
9f2b978617 Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-14 11:25:52 -05:00
b770c9ed40 Cleanup: Whitespace and unused includes 2021-04-13 21:41:22 -05:00
d0fd407985 New implementation of the UI in a sidebar
Includes panels and drag and drop
2021-04-13 21:33:18 -05:00
f3b85c2e6c Address small review comments from Jacques 2021-04-13 12:00:06 -05:00
7fe14a43b9 Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-13 11:52:15 -05:00
5da0baff4e Reorder enum definitions 2021-04-12 15:21:08 -05:00
95740e1153 Small cleanups 2021-04-12 15:19:10 -05:00
92aa3e75a6 Spreadsheet Editor: Row filters
{F9930989 size=full}

This patch adds support for filtering out rows based on rules and values.
Filters will work for any attribute data source, they are a property of
the spreadsheet rather than of the attribute system.

**Further Questions**
* The popover is a test, it would be easy to move this to a sidebar, but I wanted to see
  what people thought about the popover.
* `SpreadSheetColumn` does not know about the " X" suffixes that are added to vector columns.
  This means the row filter cannot determine the correct data type for vector columns.
  Luckily the default of float works, but I'm not sure how to handle this properly.

Differential Revision: https://developer.blender.org/D10959
2021-04-12 15:12:28 -05:00
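A conceptual sketch of the row-filter idea (plain Python, not the spreadsheet editor code): each rule hides the rows whose column value fails its test, producing a visibility mask.

```
import operator

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def filter_rows(columns, rules):
    num_rows = len(next(iter(columns.values())))
    visible = [True] * num_rows
    for column_name, op, value in rules:
        col = columns[column_name]
        visible = [v and OPS[op](col[i], value) for i, v in enumerate(visible)]
    return visible

columns = {"radius": [0.2, 0.8, 1.5], "id": [0, 1, 2]}
print(filter_rows(columns, [("radius", ">", 0.5)]))   # [False, True, True]
```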
4a6c00a9eb Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-12 09:07:05 -05:00
47ed4da9e9 Geometry Nodes: Working spreadsheet row filtering
Now there is only some UI work left
2021-04-09 18:03:19 -05:00
c4b1dd175d Merge branch 'master' into temp-spreadsheet-row-filter 2021-04-09 11:41:00 -05:00
d09deb3147 Implement basic UI and structures for row filters 2021-03-30 16:55:02 -05:00
699 changed files with 18622 additions and 13743 deletions


@@ -43,6 +43,12 @@ endif()
if(WIN32)
set(EMBREE_BUILD_DIR ${BUILD_MODE}/)
if(BUILD_MODE STREQUAL Debug)
list(APPEND EMBREE_EXTRA_ARGS
-DEMBREE_TBBMALLOC_LIBRARY_NAME=tbbmalloc_debug
-DEMBREE_TBB_LIBRARY_NAME=tbb_debug
)
endif()
else()
set(EMBREE_BUILD_DIR)
endif()


@@ -22,6 +22,7 @@ if(WIN32)
-DTBB_BUILD_TBBMALLOC_PROXY=On
-DTBB_BUILD_STATIC=Off
-DTBB_BUILD_TESTS=Off
-DCMAKE_DEBUG_POSTFIX=_debug
)
set(TBB_LIBRARY tbb)
set(TBB_STATIC_LIBRARY Off)
@@ -55,17 +56,17 @@ if(WIN32)
ExternalProject_Add_Step(external_tbb after_install
# findtbb.cmake in some deps *NEEDS* to find tbb_debug.lib even if they are not going to use it
# to make that test pass, we place a copy with the right name in the lib folder.
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb.lib ${HARVEST_TARGET}/tbb/lib/tbb_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb.dll ${HARVEST_TARGET}/tbb/lib/tbb_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.dll ${HARVEST_TARGET}/tbb/lib/tbbmalloc_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb.lib ${LIBDIR}/tbb/lib/tbb_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.lib ${LIBDIR}/tbb/lib/tbbmalloc_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbb.dll ${LIBDIR}/tbb/bin/tbb_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc.dll ${LIBDIR}/tbb/bin/tbbmalloc_debug.dll
# Normal collection of build artifacts
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb.lib ${HARVEST_TARGET}/tbb/lib/tbb.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb.dll ${HARVEST_TARGET}/tbb/lib/tbb.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbb.dll ${HARVEST_TARGET}/tbb/bin/tbb.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.dll ${HARVEST_TARGET}/tbb/lib/tbbmalloc.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_proxy.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy.dll ${HARVEST_TARGET}/tbb/lib/tbbmalloc_proxy.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc_proxy.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc_proxy.dll
COMMAND ${CMAKE_COMMAND} -E copy_directory ${LIBDIR}/tbb/include/ ${HARVEST_TARGET}/tbb/include/
DEPENDEES install
)
@@ -76,11 +77,12 @@ if(WIN32)
# to make that test pass, we place a copy with the right name in the lib folder.
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb_debug.lib ${LIBDIR}/tbb/lib/tbb.lib
# Normal collection of build artifacts
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb_debug.lib ${HARVEST_TARGET}/tbb/lib/debug/tbb_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb_debug.dll ${HARVEST_TARGET}/tbb/lib/debug/tbb_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_proxy_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc.dll ${HARVEST_TARGET}/tbb/lib/debug/tbbmalloc.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy.dll ${HARVEST_TARGET}/tbb/lib/debug/tbbmalloc_proxy.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbb_debug.lib ${HARVEST_TARGET}/tbb/lib/tbb_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbb_debug.dll ${HARVEST_TARGET}/tbb/bin/tbb_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_debug.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/lib/tbbmalloc_proxy_debug.lib ${HARVEST_TARGET}/tbb/lib/tbbmalloc_proxy_debug.lib
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc_debug.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc_debug.dll
COMMAND ${CMAKE_COMMAND} -E copy ${LIBDIR}/tbb/bin/tbbmalloc_proxy_debug.dll ${HARVEST_TARGET}/tbb/bin/tbbmalloc_proxy_debug.dll
DEPENDEES install
)
endif()


@@ -432,9 +432,9 @@ set(USD_HASH 1dd1e2092d085ed393c1f7c450a4155a)
set(USD_HASH_TYPE MD5)
set(USD_FILE usd-v${USD_VERSION}.tar.gz)
set(OIDN_VERSION 1.3.0)
set(OIDN_VERSION 1.4.0)
set(OIDN_URI https://github.com/OpenImageDenoise/oidn/releases/download/v${OIDN_VERSION}/oidn-${OIDN_VERSION}.src.tar.gz)
set(OIDN_HASH 301a5a0958d375a942014df0679b9270)
set(OIDN_HASH 421824019becc5b664a22a2b98332bc5)
set(OIDN_HASH_TYPE MD5)
set(OIDN_FILE oidn-${OIDN_VERSION}.src.tar.gz)


@@ -553,10 +553,10 @@ EMBREE_FORCE_BUILD=false
EMBREE_FORCE_REBUILD=false
EMBREE_SKIP=false
OIDN_VERSION="1.3.0"
OIDN_VERSION_SHORT="1.3"
OIDN_VERSION_MIN="1.3.0"
OIDN_VERSION_MAX="1.4"
OIDN_VERSION="1.4.0"
OIDN_VERSION_SHORT="1.4"
OIDN_VERSION_MIN="1.4.0"
OIDN_VERSION_MAX="1.5"
OIDN_FORCE_BUILD=false
OIDN_FORCE_REBUILD=false
OIDN_SKIP=false
@@ -565,7 +565,7 @@ ISPC_VERSION="1.14.1"
FFMPEG_VERSION="4.4"
FFMPEG_VERSION_SHORT="4.4"
FFMPEG_VERSION_MIN="4.4"
FFMPEG_VERSION_MIN="3.0"
FFMPEG_VERSION_MAX="5.0"
FFMPEG_FORCE_BUILD=false
FFMPEG_FORCE_REBUILD=false


@@ -1,33 +1,3 @@
diff -Naur oidn-1.3.0/cmake/FindTBB.cmake external_openimagedenoise/cmake/FindTBB.cmake
--- oidn-1.3.0/cmake/FindTBB.cmake 2021-02-04 16:20:26 -0700
+++ external_openimagedenoise/cmake/FindTBB.cmake 2021-02-12 09:35:53 -0700
@@ -332,20 +332,22 @@
${TBB_ROOT}/lib/${TBB_ARCH}/${TBB_VCVER}
${TBB_ROOT}/lib
)
-
# On Windows, also search the DLL so that the client may install it.
file(GLOB DLL_NAMES
${TBB_ROOT}/bin/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/bin/${LIB_NAME}.dll
+ ${TBB_ROOT}/lib/${LIB_NAME}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME_GLOB1}.dll
${TBB_ROOT}/redist/${TBB_ARCH}/${TBB_VCVER}/${LIB_NAME_GLOB2}.dll
${TBB_ROOT}/../redist/${TBB_ARCH}/tbb/${TBB_VCVER}/${LIB_NAME}.dll
${TBB_ROOT}/../redist/${TBB_ARCH}_win/tbb/${TBB_VCVER}/${LIB_NAME}.dll
)
- list(GET DLL_NAMES 0 DLL_NAME)
- get_filename_component(${BIN_DIR_VAR} "${DLL_NAME}" DIRECTORY)
- set(${DLL_VAR} "${DLL_NAME}" CACHE PATH "${COMPONENT_NAME} ${BUILD_CONFIG} dll path")
+ if (DLL_NAMES)
+ list(GET DLL_NAMES 0 DLL_NAME)
+ get_filename_component(${BIN_DIR_VAR} "${DLL_NAME}" DIRECTORY)
+ set(${DLL_VAR} "${DLL_NAME}" CACHE PATH "${COMPONENT_NAME} ${BUILD_CONFIG} dll path")
+ endif()
elseif(APPLE)
set(LIB_PATHS ${TBB_ROOT}/lib)
else()
--- external_openimagedenoise/cmake/oidn_ispc.cmake 2021-02-15 17:29:34.000000000 +0100
+++ external_openimagedenoise/cmake/oidn_ispc.cmake2 2021-02-15 17:29:28.000000000 +0100
@@ -98,7 +98,7 @@


@@ -1,70 +1,4 @@
Blender Buildbot
================
Buildbot Configuration
=====================
Code signing
------------
Code signing is done as part of INSTALL target, which makes it possible to sign
files which are aimed into a bundle and coming from a non-signed source (such as
libraries SVN).
This is achieved by specifying `worker_codesign.cmake` as a post-install script
run by CMake. This CMake script simply involves an utility script written in
Python which takes care of an actual signing.
### Configuration
Client configuration doesn't need anything special, other than variable
`SHARED_STORAGE_DIR` pointing to a location which is watched by a server.
This is done in `config_builder.py` file and is stored in Git (which makes it
possible to have almost zero-configuration buildbot machines).
Server configuration requires copying `config_server_template.py` under the
name of `config_server.py` and tweaking values, which are platform-specific.
#### Windows configuration
There are two things which are needed on Windows in order to have code signing
to work:
- `TIMESTAMP_AUTHORITY_URL` which is most likely set http://timestamp.digicert.com
- `CERTIFICATE_FILEPATH` which is a full file path to a PKCS #12 key (.pfx).
## Tips
### Self-signed certificate on Windows
It is easiest to test configuration using self-signed certificate.
The certificate manipulation utilities are coming with Windows SDK.
Unfortunately, they are not added to PATH. Here is an example of how to make
sure they are easily available:
```
set PATH=C:\Program Files (x86)\Windows Kits\10\App Certification Kit;%PATH%
set PATH=C:\Program Files (x86)\Windows Kits\10\bin\10.0.18362.0\x64;%PATH%
```
Generate CA:
```
makecert -r -pe -n "CN=Blender Test CA" -ss CA -sr CurrentUser -a sha256 ^
-cy authority -sky signature -sv BlenderTestCA.pvk BlenderTestCA.cer
```
Import the generated CA:
```
certutil -user -addstore Root BlenderTestCA.cer
```
Create self-signed certificate and pack it into PKCS #12:
```
makecert -pe -n "CN=Blender Test SPC" -a sha256 -cy end ^
-sky signature ^
-ic BlenderTestCA.cer -iv BlenderTestCA.pvk ^
-sv BlenderTestSPC.pvk BlenderTestSPC.cer
pvk2pfx -pvk BlenderTestSPC.pvk -spc BlenderTestSPC.cer -pfx BlenderTestSPC.pfx
```
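For illustration only, a trimmed-down, hypothetical `config_server.py` for exercising this setup with the self-signed certificate above might look as follows. The variable names follow `config_server_template.py` (shown later in this diff), the paths are placeholders, and a real configuration should start from the template, which also carries the logging setup:
```
# Hypothetical test-only values; keep real certificates and paths out of Git.
from pathlib import Path

from codesign.config_common import *

# Directory shared between the buildbot worker and the signing server.
SHARED_STORAGE_DIR = Path('Z:\\codesign')

# Windows-specific code signing configuration.
WIN_TIMESTAMP_AUTHORITY_URL = 'http://timestamp.digicert.com'
WIN_CERTIFICATE_FILEPATH = Path('C:\\Secret\\BlenderTestSPC.pfx')
```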
Files used by Buildbot's `compile-code` step.


@@ -1,127 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import argparse
import os
import re
import subprocess
import sys
def is_tool(name):
"""Check whether `name` is on PATH and marked as executable."""
# from whichcraft import which
from shutil import which
return which(name) is not None
class Builder:
def __init__(self, name, branch, codesign):
self.name = name
self.branch = branch
self.is_release_branch = re.match("^blender-v(.*)-release$", branch) is not None
self.codesign = codesign
# Buildbot runs from build/ directory
self.blender_dir = os.path.abspath(os.path.join('..', 'blender.git'))
self.build_dir = os.path.abspath(os.path.join('..', 'build'))
self.install_dir = os.path.abspath(os.path.join('..', 'install'))
self.upload_dir = os.path.abspath(os.path.join('..', 'install'))
# Detect platform
if name.startswith('mac'):
self.platform = 'mac'
self.command_prefix = []
elif name.startswith('linux'):
self.platform = 'linux'
if is_tool('scl'):
self.command_prefix = ['scl', 'enable', 'devtoolset-9', '--']
else:
self.command_prefix = []
elif name.startswith('win'):
self.platform = 'win'
self.command_prefix = []
else:
raise ValueError('Unknown platform for builder ' + name)
# Always 64 bit now
self.bits = 64
def create_builder_from_arguments():
parser = argparse.ArgumentParser()
parser.add_argument('builder_name')
parser.add_argument('branch', default='master', nargs='?')
parser.add_argument("--codesign", action="store_true")
args = parser.parse_args()
return Builder(args.builder_name, args.branch, args.codesign)
class VersionInfo:
def __init__(self, builder):
# Get version information
buildinfo_h = os.path.join(builder.build_dir, "source", "creator", "buildinfo.h")
blender_h = os.path.join(builder.blender_dir, "source", "blender", "blenkernel", "BKE_blender_version.h")
version_number = int(self._parse_header_file(blender_h, 'BLENDER_VERSION'))
version_number_patch = int(self._parse_header_file(blender_h, 'BLENDER_VERSION_PATCH'))
version_numbers = (version_number // 100, version_number % 100, version_number_patch)
self.short_version = "%d.%d" % (version_numbers[0], version_numbers[1])
self.version = "%d.%d.%d" % version_numbers
self.version_cycle = self._parse_header_file(blender_h, 'BLENDER_VERSION_CYCLE')
self.hash = self._parse_header_file(buildinfo_h, 'BUILD_HASH')[1:-1]
if self.version_cycle == "release":
# Final release
self.full_version = self.version
self.is_development_build = False
elif self.version_cycle == "rc":
# Release candidate
self.full_version = self.version + self.version_cycle
self.is_development_build = False
else:
# Development build
self.full_version = self.version + '-' + self.hash
self.is_development_build = True
def _parse_header_file(self, filename, define):
import re
regex = re.compile(r"^#\s*define\s+%s\s+(.*)" % define)
with open(filename, "r") as file:
for l in file:
match = regex.match(l)
if match:
return match.group(1)
return None
def call(cmd, env=None, exit_on_error=True):
print(' '.join(cmd))
# Flush to ensure correct order output on Windows.
sys.stdout.flush()
sys.stderr.flush()
retcode = subprocess.call(cmd, env=env)
if exit_on_error and retcode != 0:
sys.exit(retcode)
return retcode
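To illustrate how these helpers fit together, here is a hypothetical sketch of a worker step; the CMake invocation is a placeholder, not the real buildbot configuration:
```
# Hypothetical sketch only; the real worker steps live in the buildbot config.
from buildbot_utils import VersionInfo, call, create_builder_from_arguments

builder = create_builder_from_arguments()

# Configure and build, honoring the platform-specific command prefix.
call(builder.command_prefix + ['cmake', builder.blender_dir])
call(builder.command_prefix + ['cmake', '--build', builder.build_dir])

# Report the version that was just built.
info = VersionInfo(builder)
print('Built Blender', info.full_version)
```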


@@ -1,81 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
from dataclasses import dataclass
from pathlib import Path
from typing import List
@dataclass
class AbsoluteAndRelativeFileName:
"""
Helper class which keeps track of absolute file path for a direct access and
corresponding relative path against given base.
The relative part is used to construct a file name within an archive which
contains files which are to be signed or which have been signed already
(depending on whether the archive is addressed to the signing server or back
to the buildbot worker).
"""
# Base directory which is where relative_filepath is relative to.
base_dir: Path
# Full absolute path of the corresponding file.
absolute_filepath: Path
# Derived from full file path, contains part of the path which is relative
# to a desired base path.
relative_filepath: Path
def __init__(self, base_dir: Path, filepath: Path):
self.base_dir = base_dir
self.absolute_filepath = filepath.resolve()
self.relative_filepath = self.absolute_filepath.relative_to(
self.base_dir)
@classmethod
def from_path(cls, path: Path) -> 'AbsoluteAndRelativeFileName':
assert path.is_absolute()
assert path.is_file()
base_dir = path.parent
return AbsoluteAndRelativeFileName(base_dir, path)
@classmethod
def recursively_from_directory(cls, base_dir: Path) \
-> List['AbsoluteAndRelativeFileName']:
"""
Create list of AbsoluteAndRelativeFileName for all the files in the
given directory.
NOTE: The result will point to resolved paths.
"""
assert base_dir.is_absolute()
assert base_dir.is_dir()
base_dir = base_dir.resolve()
result = []
for filename in base_dir.glob('**/*'):
if not filename.is_file():
continue
result.append(AbsoluteAndRelativeFileName(base_dir, filename))
return result
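As a brief, hypothetical example (the directory is a placeholder), the descriptor is typically built for every file under some base directory, and the relative part later becomes the archive member name:
```
# Hypothetical usage sketch; the directory is a placeholder.
from pathlib import Path

from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName

base_dir = Path('/tmp/blender-install').resolve()
for entry in AbsoluteAndRelativeFileName.recursively_from_directory(base_dir):
    # relative_filepath is what ends up inside the archive.
    print(entry.relative_filepath, '->', entry.absolute_filepath)
```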


@@ -1,245 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import dataclasses
import json
import os
from pathlib import Path
from typing import Optional
import codesign.util as util
class ArchiveStateError(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)
@dataclasses.dataclass
class ArchiveState:
"""
Additional information (state) of the archive
Includes information like the expected file size of the archive file in the
case the archive file is expected to be successfully created.
If the archive can not be created, this state will contain an error message
indicating the details of the error.
"""
# Size in bytes of the corresponding archive.
file_size: Optional[int] = None
# Non-empty value indicates that an error has happened.
error_message: str = ''
def has_error(self) -> bool:
"""
Check whether the archive is in an error state
"""
return bool(self.error_message)
def serialize_to_string(self) -> str:
payload = dataclasses.asdict(self)
return json.dumps(payload, sort_keys=True, indent=4)
def serialize_to_file(self, filepath: Path) -> None:
string = self.serialize_to_string()
filepath.write_text(string)
@classmethod
def deserialize_from_string(cls, string: str) -> 'ArchiveState':
try:
object_as_dict = json.loads(string)
except json.decoder.JSONDecodeError:
raise ArchiveStateError('Error parsing JSON')
return cls(**object_as_dict)
@classmethod
def deserialize_from_file(cls, filepath: Path):
string = filepath.read_text()
return cls.deserialize_from_string(string)
class ArchiveWithIndicator:
"""
The idea of this class is to wrap around logic which takes care of keeping
track of a name of an archive and synchronization routines between buildbot
worker and signing server.
The synchronization is done based on creating a special file after the
archive file is knowingly ready for access.
"""
# Base directory where the archive is stored (basically, a dirname() of
# the absolute archive file name).
#
# For example, 'X:\\TEMP\\'.
base_dir: Path
# Absolute file name of the archive.
#
# For example, 'X:\\TEMP\\FOO.ZIP'.
archive_filepath: Path
# Absolute name of a file which acts as an indication of the fact that the
# archive is ready and is available for access.
#
# This is how synchronization between buildbot worker and signing server is
# done:
# - First, the archive is created under archive_filepath name.
# - Second, the indication file is created under ready_indicator_filepath
# name.
# - Third, the colleague of whoever created the indicator name watches for
# the indication file to appear, and once it's there it accesses the
# archive.
ready_indicator_filepath: Path
def __init__(
self, base_dir: Path, archive_name: str, ready_indicator_name: str):
"""
Construct the object from given base directory and name of the archive
file:
ArchiveWithIndicator(Path('X:\\TEMP'), 'FOO.ZIP', 'INPUT_READY')
"""
self.base_dir = base_dir
self.archive_filepath = self.base_dir / archive_name
self.ready_indicator_filepath = self.base_dir / ready_indicator_name
def is_ready_unsafe(self) -> bool:
"""
Check whether the archive is ready for access.
No guarding against possible network failures is done here.
"""
if not self.ready_indicator_filepath.exists():
return False
try:
archive_state = ArchiveState.deserialize_from_file(
self.ready_indicator_filepath)
except ArchiveStateError as error:
print(f'Error deserializing archive state: {error.message}')
return False
if archive_state.has_error():
# If the error did happen during codesign procedure there will be no
# corresponding archive file.
# The caller code will deal with the error check further.
return True
# Sometimes on macOS indicator file appears prior to the actual archive
# despite the order of creation and os.sync() used in tag_ready().
# So consider archive not ready if there is an indicator without an
# actual archive.
if not self.archive_filepath.exists():
print('Found indicator without actual archive, waiting for archive '
f'({self.archive_filepath}) to appear.')
return False
# Wait until the archive is fully stored.
actual_archive_size = self.archive_filepath.stat().st_size
if actual_archive_size != archive_state.file_size:
print('Partial/invalid archive size (expected '
f'{archive_state.file_size} got {actual_archive_size})')
return False
return True
def is_ready(self) -> bool:
"""
Check whether the archive is ready for access.
Will tolerate possible network failures: if there is a network failure
or if there is still no proper permission on a file False is returned.
"""
# There is an intermittent problem happening at random which
# translates to "OSError : [WinError 59] An unexpected network error occurred".
# Some reports suggest it might be due to lack of permissions to the file,
# which might be applicable in our case since it's possible that file is
# initially created with non-accessible permissions and gets chmod-ed
# after initial creation.
try:
return self.is_ready_unsafe()
except OSError as e:
print(f'Exception checking archive: {e}')
return False
def tag_ready(self, error_message='') -> None:
"""
Tag the archive as ready by creating the corresponding indication file.
NOTE: It is expected that the archive was never tagged as ready before
and that there are no subsequent tags of the same archive.
If this is violated, an assert will fail.
"""
assert not self.is_ready()
# Try the best to make sure everything is synced to the file system,
# to avoid any possibility of stamp appearing on a network share prior to
# an actual file.
if util.get_current_platform() != util.Platform.WINDOWS:
os.sync()
archive_size = -1
if self.archive_filepath.exists():
archive_size = self.archive_filepath.stat().st_size
archive_info = ArchiveState(
file_size=archive_size, error_message=error_message)
self.ready_indicator_filepath.write_text(
archive_info.serialize_to_string())
def get_state(self) -> ArchiveState:
"""
Get state object for this archive
The state is read from the corresponding state file.
"""
try:
return ArchiveState.deserialize_from_file(self.ready_indicator_filepath)
except ArchiveStateError as error:
return ArchiveState(error_message=f'Error in information format: {error}')
def clean(self) -> None:
"""
Remove both archive and the ready indication file.
"""
util.ensure_file_does_not_exist_or_die(self.ready_indicator_filepath)
util.ensure_file_does_not_exist_or_die(self.archive_filepath)
def is_fully_absent(self) -> bool:
"""
Check whether both archive and its ready indicator are absent.
Is used for a sanity check during code signing process by both
buildbot worker and signing server.
"""
return (not self.archive_filepath.exists() and
not self.ready_indicator_filepath.exists())
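A hypothetical sketch of the handshake described above (directory and file names are placeholders): one side packs the archive and tags it ready, the other polls until it may access it:
```
# Hypothetical sketch of the READY-file handshake; paths are placeholders.
import time
from pathlib import Path

from codesign.archive_with_indicator import ArchiveWithIndicator

archive = ArchiveWithIndicator(
    Path('/data/codesign/unsigned'), 'request.tar', 'request.ready')

# Producer side: after writing request.tar, announce that it is ready.
archive.tag_ready()

# Consumer side: wait for the READY indicator, then inspect the state.
while not archive.is_ready():
    time.sleep(1)
state = archive.get_state()
if state.has_error():
    print('Request failed:', state.error_message)
archive.clean()
```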


@@ -1,501 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# Signing process overview.
#
# From buildbot worker side:
# - Files which need to be signed are collected either from a directory (to
# sign all signable files in there) or by the filename of a single file to sign.
# - Those files get packed into an archive and stored in a location
# which is watched by the signing server.
# - A marker READY file is created which indicates the archive is ready for
# access.
# - Wait for the server to provide an archive with signed files.
# This is done by watching for the READY file which corresponds to an archive
# coming from the signing server.
# - Unpack the signed files from the archive and replace the original ones.
#
# From code sign server:
# - Watch a special location for a READY file which indicates that there is an
# archive with files which are to be signed.
# - Unpack the archive to a temporary location.
# - Run codesign tool and make sure all the files are signed.
# - Pack the signed files and store them in a location which is watched by
# the buildbot worker.
# - Create a READY file which indicates that the archive with signed files is
# ready.
import abc
import logging
import shutil
import subprocess
import time
import tarfile
import uuid
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Iterable, List
import codesign.util as util
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.archive_with_indicator import ArchiveWithIndicator
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_builder = logger.getChild('builder')
logger_server = logger.getChild('server')
def pack_files(files: Iterable[AbsoluteAndRelativeFileName],
archive_filepath: Path) -> None:
"""
Create tar archive from given files for the signing pipeline.
Is used by buildbot worker to create an archive of files which are to be
signed, and by signing server to send signed files back to the worker.
"""
with tarfile.TarFile.open(archive_filepath, 'w') as tar_file_handle:
for file_info in files:
tar_file_handle.add(file_info.absolute_filepath,
arcname=file_info.relative_filepath)
def extract_files(archive_filepath: Path,
extraction_dir: Path) -> None:
"""
Extract all files from the given archive into the given directory.
"""
# TODO(sergey): Verify files in the archive have relative path.
with tarfile.TarFile.open(archive_filepath, mode='r') as tar_file_handle:
tar_file_handle.extractall(path=extraction_dir)
class BaseCodeSigner(metaclass=abc.ABCMeta):
"""
Base class for a platform-specific signer of binaries.
Contains all the logic shared across platform-specific implementations, such
as synchronization and notification logic.
Platform specific bits (such as actual command for signing the binary) are
to be implemented as a subclass.
Provides utilities for code signing as a whole, including functionality needed
by a signing server and a buildbot worker.
The signer and builder may run on separate machines; the only requirement is
that they have access to a directory which is shared between them. For
security reasons this is to be a separate machine (or a Shared Folder
configuration in VirtualBox). This directory might be
mounted under different base paths, but its underlying storage is to be
the same.
The code signer is short-lived on a buildbot worker side, and is living
forever on a code signing server side.
"""
# TODO(sergey): Find a neat way to have config annotated.
# config: Config
# Storage directory where builder puts files which are requested to be
# signed.
# Consider this an input of the code signing server.
unsigned_storage_dir: Path
# Storage where signed files are stored.
# Consider this an output of the code signer server.
signed_storage_dir: Path
# Platform the code is currently executing on.
platform: util.Platform
def __init__(self, config):
self.config = config
absolute_shared_storage_dir = config.SHARED_STORAGE_DIR.resolve()
# Unsigned (signing server input) configuration.
self.unsigned_storage_dir = absolute_shared_storage_dir / 'unsigned'
# Signed (signing server output) configuration.
self.signed_storage_dir = absolute_shared_storage_dir / 'signed'
self.platform = util.get_current_platform()
def cleanup_environment_for_builder(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky.
pass
def cleanup_environment_for_signing_server(self) -> None:
# TODO(sergey): Revisit need of cleaning up the existing files.
# In practice it wasn't so helpful, and with multiple clients
# talking to the same server it becomes even more tricky.
pass
def generate_request_id(self) -> str:
"""
Generate a unique identifier for a code signing request.
"""
return str(uuid.uuid4())
def archive_info_for_request_id(
self, path: Path, request_id: str) -> ArchiveWithIndicator:
return ArchiveWithIndicator(
path, f'{request_id}.tar', f'{request_id}.ready')
def signed_archive_info_for_request_id(
self, request_id: str) -> ArchiveWithIndicator:
return self.archive_info_for_request_id(
self.signed_storage_dir, request_id)
def unsigned_archive_info_for_request_id(
self, request_id: str) -> ArchiveWithIndicator:
return self.archive_info_for_request_id(
self.unsigned_storage_dir, request_id)
############################################################################
# Buildbot worker side helpers.
@abc.abstractmethod
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
"""
Check whether the file is to be signed.
Is used by both the single file signing pipeline and the recursive directory
signing pipeline.
This is where the code signer is to check whether the file is to be signed
or not. This check might be based on a simple extension test or on an actual
test of whether the file already has a digital signature or not.
"""
def collect_files_to_sign(self, path: Path) \
-> List[AbsoluteAndRelativeFileName]:
"""
Get all files which need to be signed from the given path.
NOTE: The path might either be a file or directory.
This function is run from the buildbot worker side.
"""
# If there is a single file provided trust the buildbot worker that it
# is eligible for signing.
if path.is_file():
file = AbsoluteAndRelativeFileName.from_path(path)
if not self.check_file_is_to_be_signed(file):
return []
return [file]
all_files = AbsoluteAndRelativeFileName.recursively_from_directory(
path)
files_to_be_signed = [file for file in all_files
if self.check_file_is_to_be_signed(file)]
return files_to_be_signed
def wait_for_signed_archive_or_die(self, request_id) -> None:
"""
Wait until archive with signed files is available.
Will only return if the archive with signed files is available. If there
was an error during code sign procedure the SystemExit exception is
raised, with the message set to the error reported by the codesign
server.
Will only wait for the configured time. If that time is exceeded and there
is still no response from the signing server the application will exit
with a non-zero exit code.
"""
signed_archive_info = self.signed_archive_info_for_request_id(
request_id)
unsigned_archive_info = self.unsigned_archive_info_for_request_id(
request_id)
timeout_in_seconds = self.config.TIMEOUT_IN_SECONDS
time_start = time.monotonic()
while not signed_archive_info.is_ready():
time.sleep(1)
time_slept_in_seconds = time.monotonic() - time_start
if time_slept_in_seconds > timeout_in_seconds:
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit("Signing server didn't finish signing in "
f'{timeout_in_seconds} seconds, dying :(')
archive_state = signed_archive_info.get_state()
if archive_state.has_error():
signed_archive_info.clean()
unsigned_archive_info.clean()
raise SystemExit(
f'Error happened during codesign procedure: {archive_state.error_message}')
def copy_signed_files_to_directory(
self, signed_dir: Path, destination_dir: Path) -> None:
"""
Copy all files from signed_dir to destination_dir.
This function will overwrite any existing file. Permissions are copied
from the source files, but other metadata, such as timestamps, are not.
"""
for signed_filepath in signed_dir.glob('**/*'):
if not signed_filepath.is_file():
continue
relative_filepath = signed_filepath.relative_to(signed_dir)
destination_filepath = destination_dir / relative_filepath
destination_filepath.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(signed_filepath, destination_filepath)
def run_buildbot_path_sign_pipeline(self, path: Path) -> None:
"""
Run all steps needed to make given path signed.
Path points to an unsigned file or a directory which contains unsigned
files.
If the path points to a single file then this file will be signed.
This is used to sign a final bundle such as .msi on Windows or .dmg on
macOS.
NOTE: The code signer implementation might actually refuse to sign the
file, in which case the file will be left unsigned. This isn't anything
to be considered a failure situation; it just might happen when the buildbot
worker can not detect whether signing is really required in a specific
case or not.
If the path points to a directory then code signer will sign all
signable files from it (finding them recursively).
"""
self.cleanup_environment_for_builder()
# Make sure storage directory exists.
self.unsigned_storage_dir.mkdir(parents=True, exist_ok=True)
# Collect all files which need to be signed and pack them into a single
# archive which will be sent to the signing server.
logger_builder.info('Collecting files which are to be signed...')
files = self.collect_files_to_sign(path)
if not files:
logger_builder.info('No files to be signed, ignoring.')
return
logger_builder.info('Found %d files to sign.', len(files))
request_id = self.generate_request_id()
signed_archive_info = self.signed_archive_info_for_request_id(
request_id)
unsigned_archive_info = self.unsigned_archive_info_for_request_id(
request_id)
pack_files(files=files,
archive_filepath=unsigned_archive_info.archive_filepath)
unsigned_archive_info.tag_ready()
# Wait for the signing server to finish signing.
logger_builder.info('Waiting for the signing server to sign the files...')
self.wait_for_signed_archive_or_die(request_id)
# Extract signed files from archive and move files to final location.
with TemporaryDirectory(prefix='blender-buildbot-') as temp_dir_str:
unpacked_signed_files_dir = Path(temp_dir_str)
logger_builder.info('Extracting signed files from archive...')
extract_files(
archive_filepath=signed_archive_info.archive_filepath,
extraction_dir=unpacked_signed_files_dir)
destination_dir = path
if destination_dir.is_file():
destination_dir = destination_dir.parent
self.copy_signed_files_to_directory(
unpacked_signed_files_dir, destination_dir)
logger_builder.info('Removing archive with signed files...')
signed_archive_info.clean()
############################################################################
# Signing server side helpers.
def wait_for_sign_request(self) -> str:
"""
Wait for the buildbot to request signing of an archive.
Returns an identifier of signing request.
"""
# TODO(sergey): Support graceful shutdown on Ctrl-C.
logger_server.info(
f'Waiting for a request directory {self.unsigned_storage_dir} to appear.')
while not self.unsigned_storage_dir.exists():
time.sleep(1)
logger_server.info(
'Waiting for a READY indicator of any signing request.')
request_id = None
while request_id is None:
for file in self.unsigned_storage_dir.iterdir():
if file.suffix != '.ready':
continue
request_id = file.stem
logger_server.info(f'Found READY for request ID {request_id}.')
if request_id is None:
time.sleep(1)
unsigned_archive_info = self.unsigned_archive_info_for_request_id(
request_id)
while not unsigned_archive_info.is_ready():
time.sleep(1)
return request_id
@abc.abstractmethod
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
Sign all files in the given list.
NOTE: Signing should happen in-place.
"""
def run_signing_pipeline(self, request_id: str):
"""
Run the full signing pipeline starting from the point when the buildbot
worker has requested signing.
"""
# Make sure storage directory exists.
self.signed_storage_dir.mkdir(parents=True, exist_ok=True)
with TemporaryDirectory(prefix='blender-codesign-') as temp_dir_str:
temp_dir = Path(temp_dir_str)
signed_archive_info = self.signed_archive_info_for_request_id(
request_id)
unsigned_archive_info = self.unsigned_archive_info_for_request_id(
request_id)
logger_server.info('Extracting unsigned files from archive...')
extract_files(
archive_filepath=unsigned_archive_info.archive_filepath,
extraction_dir=temp_dir)
logger_server.info('Collecting all files which need signing...')
files = AbsoluteAndRelativeFileName.recursively_from_directory(
temp_dir)
logger_server.info('Signing all requested files...')
try:
self.sign_all_files(files)
except CodeSignException as error:
signed_archive_info.tag_ready(error_message=error.message)
unsigned_archive_info.clean()
logger_server.info('Signing is complete with errors.')
return
logger_server.info('Packing signed files...')
pack_files(files=files,
archive_filepath=signed_archive_info.archive_filepath)
signed_archive_info.tag_ready()
logger_server.info('Removing signing request...')
unsigned_archive_info.clean()
logger_server.info('Signing is complete.')
def run_signing_server(self):
logger_server.info('Starting new code signing server...')
self.cleanup_environment_for_signing_server()
logger_server.info('Code signing server is ready')
while True:
logger_server.info('Waiting for the signing request in %s...',
self.unsigned_storage_dir)
request_id = self.wait_for_sign_request()
logger_server.info(
f'Begin signing procedure for request ID {request_id}.')
self.run_signing_pipeline(request_id)
############################################################################
# Command executing.
#
# Abstracted to a degree that allows running commands from a foreign
# platform.
# The goal with this is to allow performing dry-run tests of code signer
# server from other platforms (for example, to test that macOS code signer
# does what it is supposed to after doing a refactor on Linux).
# TODO(sergey): What is the type annotation for the command?
def run_command_or_mock(self, command, platform: util.Platform) -> None:
"""
Run the given command if the current platform matches the given one.
If the platform is different then the command will only be printed, allowing
the logic of the code signing process to be verified.
"""
if platform != self.platform:
logger_server.info(
f'Will run command for {platform}: {command}')
return
logger_server.info(f'Running command: {command}')
subprocess.run(command)
# TODO(sergey): What is the type annotation for the command?
def check_output_or_mock(self, command,
platform: util.Platform,
allow_nonzero_exit_code=False) -> str:
"""
Run the given command if the current platform matches the given one.
If the platform is different then the command will only be printed, allowing
the logic of the code signing process to be verified.
If allow_nonzero_exit_code is true then the output will be returned even
if the application quit with a non-zero exit code. Otherwise a
subprocess.CalledProcessError exception will be raised in such a case.
"""
if platform != self.platform:
logger_server.info(
f'Will run command for {platform}: {command}')
return
if allow_nonzero_exit_code:
process = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
output = process.communicate()[0]
return output.decode()
logger_server.info(f'Running command: {command}')
return subprocess.check_output(
command, stderr=subprocess.STDOUT).decode()
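To make the abstract interface concrete, a hypothetical minimal subclass only has to provide `check_file_is_to_be_signed()` and `sign_all_files()`, much like the no-op Linux signer further below:
```
# Hypothetical minimal signer: signs nothing, but exercises the shared pipeline.
from typing import List

from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner


class NoopCodeSigner(BaseCodeSigner):
    def check_file_is_to_be_signed(
            self, file: AbsoluteAndRelativeFileName) -> bool:
        # Pretend that only shared libraries are signable.
        return file.relative_filepath.suffix == '.so'

    def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
        for file in files:
            print('Would sign', file.relative_filepath)
```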


@@ -1,62 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# Configuration of a code signer which is specific to the code running from
# buildbot's worker.
import sys
from pathlib import Path
import codesign.util as util
from codesign.config_common import *
platform = util.get_current_platform()
if platform == util.Platform.LINUX:
SHARED_STORAGE_DIR = Path('/data/codesign')
elif platform == util.Platform.WINDOWS:
SHARED_STORAGE_DIR = Path('Z:\\codesign')
elif platform == util.Platform.MACOS:
SHARED_STORAGE_DIR = Path('/Volumes/codesign_macos/codesign')
# https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
LOGGING = {
'version': 1,
'formatters': {
'default': {'format': '%(asctime)-15s %(levelname)8s %(name)s %(message)s'}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'default',
'stream': 'ext://sys.stderr',
}
},
'loggers': {
'codesign': {'level': 'INFO'},
},
'root': {
'level': 'WARNING',
'handlers': [
'console',
],
}
}


@@ -1,36 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
from pathlib import Path
# Timeout in seconds for the signing process.
#
# This is how long the buildbot packing step will wait for the signing server to
# perform signing.
#
# NOTE: Notarization could take a long time, hence the rather high value
# here. Might consider using different timeout for different platforms.
TIMEOUT_IN_SECONDS = 45 * 60 * 60
# Directory which is shared across buildbot worker and signing server.
#
# This is where worker puts files requested for signing as well as where
# server puts signed files.
SHARED_STORAGE_DIR: Path


@@ -1,101 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# Configuration of a code signer which is specific to the code signing server.
#
# NOTE: DO NOT put any sensitive information here, put it in an actual
# configuration on the signing machine.
from pathlib import Path
from codesign.config_common import *
CODESIGN_DIRECTORY = Path(__file__).absolute().parent
BLENDER_GIT_ROOT_DIRECTORY = CODESIGN_DIRECTORY.parent.parent.parent
################################################################################
# Common configuration.
# Directory where folders for codesign requests and signed result are stored.
# For example, /data/codesign
SHARED_STORAGE_DIR: Path
################################################################################
# macOS-specific configuration.
MACOS_ENTITLEMENTS_FILE = \
BLENDER_GIT_ROOT_DIRECTORY / 'release' / 'darwin' / 'entitlements.plist'
# Identity of the Developer ID Application certificate which is to be used for
# codesign tool.
# Use `security find-identity -v -p codesigning` to find the identity.
#
# NOTE: This identity is just an example from release/darwin/README.txt.
MACOS_CODESIGN_IDENTITY = 'AE825E26F12D08B692F360133210AF46F4CF7B97'
# User name (Apple ID) which will be used to request notarization.
MACOS_XCRUN_USERNAME = 'me@example.com'
# One-time application password which will be used to request notarization.
MACOS_XCRUN_PASSWORD = '@keychain:altool-password'
# Timeout in seconds within which the notarial office is supposed to reply.
MACOS_NOTARIZE_TIMEOUT_IN_SECONDS = 60 * 60
################################################################################
# Windows-specific configuration.
# URL to the timestamping authority.
WIN_TIMESTAMP_AUTHORITY_URL = 'http://timestamp.digicert.com'
# Full path to the certificate used for signing.
#
# The path and expected file format might vary depending on a platform.
#
# On Windows it is usually a PKCS #12 key (.pfx), so the path will look
# like Path('C:\\Secret\\Blender.pfx').
WIN_CERTIFICATE_FILEPATH: Path
################################################################################
# Logging configuration, common for all platforms.
# https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema
LOGGING = {
'version': 1,
'formatters': {
'default': {'format': '%(asctime)-15s %(levelname)8s %(name)s %(message)s'}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'default',
'stream': 'ext://sys.stderr',
}
},
'loggers': {
'codesign': {'level': 'INFO'},
},
'root': {
'level': 'WARNING',
'handlers': [
'console',
],
}
}


@@ -1,26 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
class CodeSignException(Exception):
message: str
def __init__(self, message):
self.message = message
super().__init__(self.message)


@@ -1,72 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# NOTE: This is a no-op signer (since there isn't really a procedure to sign
# Linux binaries yet). Used to debug and verify the code signing routines on
# a Linux environment.
import logging
from pathlib import Path
from typing import List
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
class LinuxCodeSigner(BaseCodeSigner):
def is_active(self) -> bool:
"""
Check whether this signer is active.
If it is inactive, no files will be signed.
Is used to be able to debug code signing pipeline on Linux, where there
is no code signing happening in the actual buildbot and release
environment.
"""
return False
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
if file.relative_filepath == Path('blender'):
return True
if (file.relative_filepath.parts[-3:-1] == ('python', 'bin') and
file.relative_filepath.name.startswith('python')):
return True
if file.relative_filepath.suffix == '.so':
return True
return False
def collect_files_to_sign(self, path: Path) \
-> List[AbsoluteAndRelativeFileName]:
if not self.is_active():
return []
return super().collect_files_to_sign(path)
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
num_files = len(files)
for file_index, file in enumerate(files):
logger.info('Server: Signed file [%d/%d] %s',
file_index + 1, num_files, file.relative_filepath)


@@ -1,456 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import logging
import re
import stat
import subprocess
import time
from pathlib import Path
from typing import List
import codesign.util as util
from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
# NOTE: Check is done as filename.endswith(), so keep the dot
EXTENSIONS_TO_BE_SIGNED = {'.dylib', '.so', '.dmg'}
# Prefixes of a file (not directory) name which are to be signed.
# Used to sign extra executable files in Contents/Resources.
NAME_PREFIXES_TO_BE_SIGNED = {'python'}
class NotarizationException(CodeSignException):
pass
def is_file_from_bundle(file: AbsoluteAndRelativeFileName) -> bool:
"""
Check whether file is coming from an .app bundle
"""
parts = file.relative_filepath.parts
if not parts:
return False
if not parts[0].endswith('.app'):
return False
return True
def get_bundle_from_file(
file: AbsoluteAndRelativeFileName) -> AbsoluteAndRelativeFileName:
"""
Get AbsoluteAndRelativeFileName descriptor of bundle
"""
assert(is_file_from_bundle(file))
parts = file.relative_filepath.parts
bundle_name = parts[0]
base_dir = file.base_dir
bundle_filepath = file.base_dir / bundle_name
return AbsoluteAndRelativeFileName(base_dir, bundle_filepath)
def is_bundle_executable_file(file: AbsoluteAndRelativeFileName) -> bool:
"""
Check whether given file is an executable within an app bundle
"""
if not is_file_from_bundle(file):
return False
parts = file.relative_filepath.parts
num_parts = len(parts)
if num_parts < 3:
return False
if parts[1:3] != ('Contents', 'MacOS'):
return False
return True
def xcrun_field_value_from_output(field: str, output: str) -> str:
"""
Get value of a given field from xcrun output.
If the field is not found, an empty string is returned.
"""
field_prefix = field + ': '
for line in output.splitlines():
line = line.strip()
if line.startswith(field_prefix):
return line[len(field_prefix):]
return ''
class MacOSCodeSigner(BaseCodeSigner):
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
if file.relative_filepath.name.startswith('.'):
return False
if is_bundle_executable_file(file):
return True
base_name = file.relative_filepath.name
if any(base_name.startswith(prefix)
for prefix in NAME_PREFIXES_TO_BE_SIGNED):
return True
mode = file.absolute_filepath.lstat().st_mode
if mode & stat.S_IXUSR != 0:
file_output = subprocess.check_output(
("file", file.absolute_filepath)).decode()
if "64-bit executable" in file_output:
return True
return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED
def collect_files_to_sign(self, path: Path) \
-> List[AbsoluteAndRelativeFileName]:
# Include all files when signing app or dmg bundle: all the files are
# needed to do valid signature of bundle.
if path.name.endswith('.app'):
return AbsoluteAndRelativeFileName.recursively_from_directory(path)
if path.is_dir():
files = []
for child in path.iterdir():
if child.name.endswith('.app'):
current_files = AbsoluteAndRelativeFileName.recursively_from_directory(
child)
else:
current_files = super().collect_files_to_sign(child)
for current_file in current_files:
files.append(AbsoluteAndRelativeFileName(
path, current_file.absolute_filepath))
return files
return super().collect_files_to_sign(path)
############################################################################
# Codesign.
def codesign_remove_signature(
self, file: AbsoluteAndRelativeFileName) -> None:
"""
Make sure given file does not have codesign signature
This is needed because codesigning is not possible for a file which already
has a signature.
"""
logger_server.info(
'Removing codesign signature from %s...', file.relative_filepath)
command = ['codesign', '--remove-signature', file.absolute_filepath]
self.run_command_or_mock(command, util.Platform.MACOS)
def codesign_file(
self, file: AbsoluteAndRelativeFileName) -> None:
"""
Sign given file
NOTE: File must not have any signatures.
"""
logger_server.info(
'Codesigning %s...', file.relative_filepath)
entitlements_file = self.config.MACOS_ENTITLEMENTS_FILE
command = ['codesign',
'--timestamp',
'--options', 'runtime',
f'--entitlements={entitlements_file}',
'--sign', self.config.MACOS_CODESIGN_IDENTITY,
file.absolute_filepath]
self.run_command_or_mock(command, util.Platform.MACOS)
def codesign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
Run codesign tool on all eligible files in the given list.
Will ignore all files which are not to be signed. For the rest will
remove possible existing signature and add a new signature.
"""
num_files = len(files)
have_ignored_files = False
signed_files = []
for file_index, file in enumerate(files):
# Ignore file if it is not to be signed.
# Allows manually constructing a ZIP of a bundle and getting it signed.
if not self.check_file_is_to_be_signed(file):
logger_server.info(
'Ignoring file [%d/%d] %s',
file_index + 1, num_files, file.relative_filepath)
have_ignored_files = True
continue
logger_server.info(
'Running codesigning routines for file [%d/%d] %s...',
file_index + 1, num_files, file.relative_filepath)
self.codesign_remove_signature(file)
self.codesign_file(file)
signed_files.append(file)
if have_ignored_files:
logger_server.info('Signed %d files:', len(signed_files))
num_signed_files = len(signed_files)
for file_index, signed_file in enumerate(signed_files):
logger_server.info(
'- [%d/%d] %s',
file_index + 1, num_signed_files,
signed_file.relative_filepath)
def codesign_bundles(
self, files: List[AbsoluteAndRelativeFileName]) -> None:
"""
Codesign all .app bundles in the given list of files.
The bundle is deduced from the paths of the files, and every bundle is only
signed once.
"""
signed_bundles = set()
extra_files = []
for file in files:
if not is_file_from_bundle(file):
continue
bundle = get_bundle_from_file(file)
bundle_name = bundle.relative_filepath
if bundle_name in signed_bundles:
continue
logger_server.info('Running codesign routines on bundle %s',
bundle_name)
# It is not possible to remove signature from DMG.
if bundle.relative_filepath.name.endswith('.app'):
self.codesign_remove_signature(bundle)
self.codesign_file(bundle)
signed_bundles.add(bundle_name)
# Codesign on a bundle adds an extra folder with information.
# It needs to be copied to the source.
code_signature_directory = \
bundle.absolute_filepath / 'Contents' / '_CodeSignature'
code_signature_files = \
AbsoluteAndRelativeFileName.recursively_from_directory(
code_signature_directory)
for code_signature_file in code_signature_files:
bundle_relative_file = AbsoluteAndRelativeFileName(
bundle.base_dir,
code_signature_directory /
code_signature_file.relative_filepath)
extra_files.append(bundle_relative_file)
files.extend(extra_files)
############################################################################
# Notarization.
def notarize_get_bundle_id(self, file: AbsoluteAndRelativeFileName) -> str:
"""
Get bundle ID which will be used to notarize DMG
"""
name = file.relative_filepath.name
app_name = name.split('-', 2)[0].lower()
app_name_words = app_name.split()
if len(app_name_words) > 1:
app_name_id = ''.join(word.capitalize() for word in app_name_words)
else:
app_name_id = app_name_words[0]
# TODO(sergey): Consider using "alpha" for buildbot builds.
return f'org.blenderfoundation.{app_name_id}.release'
def notarize_request(self, file) -> str:
"""
Request notarization of the given file.
Returns the UUID of the notarization request. If an error occurred, None is
returned instead of the UUID.
"""
bundle_id = self.notarize_get_bundle_id(file)
logger_server.info('Bundle ID: %s', bundle_id)
logger_server.info('Submitting file to the notarial office.')
command = [
'xcrun', 'altool', '--notarize-app', '--verbose',
'-f', file.absolute_filepath,
'--primary-bundle-id', bundle_id,
'--username', self.config.MACOS_XCRUN_USERNAME,
'--password', self.config.MACOS_XCRUN_PASSWORD]
output = self.check_output_or_mock(
command, util.Platform.MACOS, allow_nonzero_exit_code=True)
for line in output.splitlines():
line = line.strip()
if line.startswith('RequestUUID = '):
request_uuid = line[14:]
return request_uuid
# Check whether the package has been already submitted.
if 'The software asset has already been uploaded.' in line:
request_uuid = re.sub(
'.*The upload ID is ([A-Fa-f0-9\-]+).*', '\\1', line)
logger_server.warning(
f'The package has been already submitted under UUID {request_uuid}')
return request_uuid
logger_server.error(output)
logger_server.error('xcrun command did not report RequestUUID')
return None
def notarize_review_status(self, xcrun_output: str) -> bool:
"""
Review status returned by xcrun's notarization info
Returns True if the notarization process has finished.
If there are errors during notarization, a NotarizationException()
exception is thrown with status message from the notarial office.
"""
# Parse status and message
status = xcrun_field_value_from_output('Status', xcrun_output)
status_message = xcrun_field_value_from_output(
'Status Message', xcrun_output)
if status == 'success':
logger_server.info(
'Package successfully notarized: %s', status_message)
return True
if status == 'invalid':
logger_server.error(xcrun_output)
logger_server.error(
'Package notarization has failed: %s', status_message)
raise NotarizationException(status_message)
if status == 'in progress':
return False
logger_server.info(
'Unknown notarization status %s (%s)', status, status_message)
return False
def notarize_wait_result(self, request_uuid: str) -> None:
"""
Wait until the notarial office has a reply
"""
logger_server.info(
'Waiting for a result from the notarization office.')
command = ['xcrun', 'altool',
'--notarization-info', request_uuid,
'--username', self.config.MACOS_XCRUN_USERNAME,
'--password', self.config.MACOS_XCRUN_PASSWORD]
time_start = time.monotonic()
timeout_in_seconds = self.config.MACOS_NOTARIZE_TIMEOUT_IN_SECONDS
while True:
xcrun_output = self.check_output_or_mock(
command, util.Platform.MACOS, allow_nonzero_exit_code=True)
if self.notarize_review_status(xcrun_output):
break
logger_server.info('Keep waiting for notarization office.')
time.sleep(30)
time_slept_in_seconds = time.monotonic() - time_start
if time_slept_in_seconds > timeout_in_seconds:
logger_server.error(
"Notarial office didn't reply in %f seconds.",
timeout_in_seconds)
def notarize_staple(self, file: AbsoluteAndRelativeFileName) -> bool:
"""
Staple notarial label on the file
"""
logger_server.info('Stapling notarial stamp.')
command = ['xcrun', 'stapler', 'staple', '-v', file.absolute_filepath]
self.check_output_or_mock(command, util.Platform.MACOS)
def notarize_dmg(self, file: AbsoluteAndRelativeFileName) -> bool:
"""
Run entire pipeline to get DMG notarized.
"""
logger_server.info('Begin notarization routines on %s',
file.relative_filepath)
# Submit file for notarization.
request_uuid = self.notarize_request(file)
if not request_uuid:
return False
logger_server.info('Received Request UUID: %s', request_uuid)
# Wait for the status from the notarization office.
if not self.notarize_wait_result(request_uuid):
return False
# Staple.
self.notarize_staple(file)
def notarize_all_dmg(
self, files: List[AbsoluteAndRelativeFileName]) -> bool:
"""
Notarize all DMG images from the input.
Images are supposed to be codesigned already.
"""
for file in files:
if not file.relative_filepath.name.endswith('.dmg'):
continue
if not self.check_file_is_to_be_signed(file):
continue
self.notarize_dmg(file)
############################################################################
# Entry point.
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# TODO(sergey): Handle errors somehow.
self.codesign_all_files(files)
self.codesign_bundles(files)
self.notarize_all_dmg(files)


@@ -1,52 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import logging.config
import sys
from pathlib import Path
from typing import Optional
import codesign.config_builder
import codesign.util as util
from codesign.base_code_signer import BaseCodeSigner
class SimpleCodeSigner:
code_signer: Optional[BaseCodeSigner]
def __init__(self):
platform = util.get_current_platform()
if platform == util.Platform.LINUX:
from codesign.linux_code_signer import LinuxCodeSigner
self.code_signer = LinuxCodeSigner(codesign.config_builder)
elif platform == util.Platform.MACOS:
from codesign.macos_code_signer import MacOSCodeSigner
self.code_signer = MacOSCodeSigner(codesign.config_builder)
elif platform == util.Platform.WINDOWS:
from codesign.windows_code_signer import WindowsCodeSigner
self.code_signer = WindowsCodeSigner(codesign.config_builder)
else:
self.code_signer = None
def sign_file_or_directory(self, path: Path) -> None:
logging.config.dictConfig(codesign.config_builder.LOGGING)
self.code_signer.run_buildbot_path_sign_pipeline(path)
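For illustration, a caller might use this wrapper roughly as follows. The install path is a placeholder, and the import path assumes this file lives in the `codesign` package alongside the other modules:
```
# Hypothetical usage sketch; the path is a placeholder.
from pathlib import Path

from codesign.simple_code_signer import SimpleCodeSigner

SimpleCodeSigner().sign_file_or_directory(Path('/path/to/install'))
```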


@@ -1,54 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import sys
from enum import Enum
from pathlib import Path
class Platform(Enum):
LINUX = 1
MACOS = 2
WINDOWS = 3
def get_current_platform() -> Platform:
if sys.platform == 'linux':
return Platform.LINUX
elif sys.platform == 'darwin':
return Platform.MACOS
elif sys.platform == 'win32':
return Platform.WINDOWS
raise Exception(f'Unknown platform {sys.platform}')
def ensure_file_does_not_exist_or_die(filepath: Path) -> None:
"""
If the file exists, unlink it.
If the file path exists and is not a file, a SystemExit is raised.
If the file path does not exist, nothing happens.
"""
if not filepath.exists():
return
if not filepath.is_file():
# TODO(sergey): Provide information about what the filepath actually is.
raise SystemExit(f'{filepath} is expected to be a file, but is not')
filepath.unlink()
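A tiny, hypothetical usage sketch of these helpers (the file path is a placeholder):
```
# Hypothetical usage sketch; the file path is a placeholder.
from pathlib import Path

import codesign.util as util

print('Running on', util.get_current_platform().name)
util.ensure_file_does_not_exist_or_die(Path('/tmp/stale.ready'))
```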


@@ -1,117 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import logging
import subprocess  # Needed for the subprocess.CalledProcessError handling below.
from pathlib import Path
from typing import List
import codesign.util as util
from buildbot_utils import Builder
from codesign.absolute_and_relative_filename import AbsoluteAndRelativeFileName
from codesign.base_code_signer import BaseCodeSigner
from codesign.exception import CodeSignException
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
# NOTE: Check is done as filename.endswith(), so keep the dot
EXTENSIONS_TO_BE_SIGNED = {'.exe', '.dll', '.pyd', '.msi'}
BLACKLIST_FILE_PREFIXES = (
'api-ms-', 'concrt', 'msvcp', 'ucrtbase', 'vcomp', 'vcruntime')
class SigntoolException(CodeSignException):
pass
class WindowsCodeSigner(BaseCodeSigner):
def check_file_is_to_be_signed(
self, file: AbsoluteAndRelativeFileName) -> bool:
base_name = file.relative_filepath.name
if any(base_name.startswith(prefix)
for prefix in BLACKLIST_FILE_PREFIXES):
return False
return file.relative_filepath.suffix in EXTENSIONS_TO_BE_SIGNED
def get_sign_command_prefix(self) -> List[str]:
return [
'signtool', 'sign', '/v',
'/f', self.config.WIN_CERTIFICATE_FILEPATH,
'/tr', self.config.WIN_TIMESTAMP_AUTHORITY_URL]
def run_codesign_tool(self, filepath: Path) -> None:
command = self.get_sign_command_prefix() + [filepath]
try:
codesign_output = self.check_output_or_mock(command, util.Platform.WINDOWS)
except subprocess.CalledProcessError as e:
raise SigntoolException(f'Error running signtool {e}')
logger_server.info(f'signtool output:\n{codesign_output}')
got_number_of_success = False
for line in codesign_output.split('\n'):
line_clean = line.strip()
line_clean_lower = line_clean.lower()
if line_clean_lower.startswith('number of warnings') or \
line_clean_lower.startswith('number of errors'):
number = int(line_clean_lower.split(':')[1])
if number != 0:
raise SigntoolException('Non-clean success of signtool')
if line_clean_lower.startswith('number of files successfully signed'):
got_number_of_success = True
number = int(line_clean_lower.split(':')[1])
if number != 1:
raise SigntoolException('Signtool did not consider codesign a success')
if not got_number_of_success:
raise SigntoolException('Signtool did not report number of files signed')
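# For reference, the parsing above expects (assumed, abbreviated) signtool
# output of roughly this shape; any non-zero warning/error count is rejected:
#
#   Successfully signed: C:\path\to\blender.exe
#   Number of files successfully signed: 1
#   Number of warnings: 0
#   Number of errors: 0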
def sign_all_files(self, files: List[AbsoluteAndRelativeFileName]) -> None:
# NOTE: Sign files one by one to avoid possible command line length
# overflow (which could happen if we ever decide to sign every binary
# in the install folder, for example).
#
# TODO(sergey): Consider doing batched signing of a handful of files in
# one go (but only if this is actually known to be much faster).
num_files = len(files)
for file_index, file in enumerate(files):
# Ignore file if it is not to be signed.
# This allows manually constructing a ZIP of the package and getting it signed.
if not self.check_file_is_to_be_signed(file):
logger_server.info(
'Ignoring file [%d/%d] %s',
file_index + 1, num_files, file.relative_filepath)
continue
logger_server.info(
'Running signtool command for file [%d/%d] %s...',
file_index + 1, num_files, file.relative_filepath)
self.run_codesign_tool(file.absolute_filepath)


@@ -1,37 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# NOTE: This is a no-op signer (since there isn't really a procedure to sign
# Linux binaries yet). Used to debug and verify the code signing routines in
# a Linux environment.
import logging.config
from pathlib import Path
from typing import List
from codesign.linux_code_signer import LinuxCodeSigner
import codesign.config_server
if __name__ == "__main__":
logging.config.dictConfig(codesign.config_server.LOGGING)
code_signer = LinuxCodeSigner(codesign.config_server)
code_signer.run_signing_server()


@@ -1,41 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import logging.config
from pathlib import Path
from typing import List
from codesign.macos_code_signer import MacOSCodeSigner
import codesign.config_server
if __name__ == "__main__":
entitlements_file = codesign.config_server.MACOS_ENTITLEMENTS_FILE
if not entitlements_file.exists():
raise SystemExit(
f'Entitlements file {entitlements_file} does not exist.')
if not entitlements_file.is_file():
raise SystemExit(
f'Entitlements file {entitlements_file} is not a file.')
logging.config.dictConfig(codesign.config_server.LOGGING)
code_signer = MacOSCodeSigner(codesign.config_server)
code_signer.run_signing_server()


@@ -1,11 +0,0 @@
@echo off
rem This is an entry point of the codesign server for Windows.
rem It makes sure that signtool.exe is within the current PATH and can be
rem used by the Python script.
SETLOCAL
set PATH=C:\Program Files (x86)\Windows Kits\10\App Certification Kit;%PATH%
codesign_server_windows.py


@@ -1,54 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# Implementation of codesign server for Windows.
#
# NOTE: If signtool.exe is not in the PATH use codesign_server_windows.bat
import logging.config
import shutil
from pathlib import Path
from typing import List
import codesign.util as util
from codesign.windows_code_signer import WindowsCodeSigner
import codesign.config_server
if __name__ == "__main__":
logging.config.dictConfig(codesign.config_server.LOGGING)
logger = logging.getLogger(__name__)
logger_server = logger.getChild('server')
# TODO(sergey): Consider moving such sanity checks into
# CodeSigner.check_environment_or_die().
if not shutil.which('signtool.exe'):
if util.get_current_platform() == util.Platform.WINDOWS:
raise SystemExit("signtool.exe is not found in %PATH%")
logger_server.info(
'signtool.exe not found, '
'but will not be used on this foreign platform')
code_signer = WindowsCodeSigner(codesign.config_server)
code_signer.run_signing_server()


@@ -1,551 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
import argparse
import re
import shutil
import subprocess
import sys
import time
from pathlib import Path
from tempfile import TemporaryDirectory, NamedTemporaryFile
from typing import List
BUILDBOT_DIRECTORY = Path(__file__).absolute().parent
CODESIGN_SCRIPT = BUILDBOT_DIRECTORY / 'worker_codesign.py'
BLENDER_GIT_ROOT_DIRECTORY = BUILDBOT_DIRECTORY.parent.parent
DARWIN_DIRECTORY = BLENDER_GIT_ROOT_DIRECTORY / 'release' / 'darwin'
# Extra size which is added on top of the actual file sizes when estimating
# the size of the destination DMG.
EXTRA_DMG_SIZE_IN_BYTES = 800 * 1024 * 1024
################################################################################
# Common utilities
def get_directory_size(root_directory: Path) -> int:
"""
Get size of directory on disk
"""
total_size = 0
for file in root_directory.glob('**/*'):
total_size += file.lstat().st_size
return total_size
################################################################################
# DMG bundling specific logic
def create_argument_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
'source_dir',
type=Path,
help='Source directory which points to either an existing .app bundle '
'or to a directory with .app bundles.')
parser.add_argument(
'--background-image',
type=Path,
help="Optional background picture which will be set on the DMG. "
"If not provided, the default Blender one is used.")
parser.add_argument(
'--volume-name',
type=str,
help='Optional name of a volume which will be used for DMG.')
parser.add_argument(
'--dmg',
type=Path,
help='Optional argument which points to a final DMG file name.')
parser.add_argument(
'--applescript',
type=Path,
help="Optional path to an applescript which sets up the folder look of the DMG. "
"If not provided, the default Blender one is used.")
parser.add_argument(
'--codesign',
action="store_true",
help="Code sign and notarize DMG contents.")
return parser
def collect_app_bundles(source_dir: Path) -> List[Path]:
"""
Collect all app bundles which are to be put into DMG
If the source directory points to FOO.app it will be the only app bundle
packed.
Otherwise all .app bundles from the given directory are placed into a single
DMG.
"""
if source_dir.name.endswith('.app'):
return [source_dir]
app_bundles = []
for filename in source_dir.glob('*'):
if not filename.is_dir():
continue
if not filename.name.endswith('.app'):
continue
app_bundles.append(filename)
return app_bundles
def collect_and_log_app_bundles(source_dir: Path) -> List[Path]:
app_bundles = collect_app_bundles(source_dir)
if not app_bundles:
print('No app bundles found for packing')
return
print(f'Found {len(app_bundles)} app bundle(s) to pack:')
for app_bundle in app_bundles:
print(f'- {app_bundle}')
return app_bundles
def estimate_dmg_size(app_bundles: List[Path]) -> int:
"""
Estimate size of DMG to hold requested app bundles
The size is based on actual size of all files in all bundles plus some
space to compensate for different size-on-disk plus some space to hold
codesign signatures.
It is better to err on the high side since the empty space is compressed, but
a lack of space might cause silent failures later on.
"""
app_bundles_size = 0
for app_bundle in app_bundles:
app_bundles_size += get_directory_size(app_bundle)
return app_bundles_size + EXTRA_DMG_SIZE_IN_BYTES
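# Illustrative arithmetic (numbers are assumed): a single 550 MB Blender.app
# would give an estimate of 550 MB + 800 MB = ~1.35 GB for the writable DMG.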
def copy_app_bundles_to_directory(app_bundles: List[Path],
directory: Path) -> None:
"""
Copy all bundles to a given directory
This directory is what the DMG will be created from.
"""
for app_bundle in app_bundles:
print(f'Copying {app_bundle.name}...')
shutil.copytree(app_bundle, directory / app_bundle.name)
def get_main_app_bundle(app_bundles: List[Path]) -> Path:
"""
Get the main application bundle for the installation
"""
return app_bundles[0]
def create_dmg_image(app_bundles: List[Path],
dmg_filepath: Path,
volume_name: str) -> None:
"""
Create DMG disk image and put app bundles in it
No DMG configuration or codesigning is happening here.
"""
if dmg_filepath.exists():
print(f'Removing existing writable DMG {dmg_filepath}...')
dmg_filepath.unlink()
print('Preparing directory with app bundles for the DMG...')
with TemporaryDirectory(prefix='blender-dmg-content-') as content_dir_str:
# Copy all bundles to a clean directory.
content_dir = Path(content_dir_str)
copy_app_bundles_to_directory(app_bundles, content_dir)
# Estimate size of the DMG.
dmg_size = estimate_dmg_size(app_bundles)
print(f'Estimated DMG size: {dmg_size:,} bytes.')
# Create the DMG.
print(f'Creating writable DMG {dmg_filepath}')
command = ('hdiutil',
'create',
'-size', str(dmg_size),
'-fs', 'HFS+',
'-srcfolder', content_dir,
'-volname', volume_name,
'-format', 'UDRW',
dmg_filepath)
subprocess.run(command)
def get_writable_dmg_filepath(dmg_filepath: Path):
"""
Get file path for writable DMG image
"""
parent = dmg_filepath.parent
return parent / (dmg_filepath.stem + '-temp.dmg')
def mount_readwrite_dmg(dmg_filepath: Path) -> None:
"""
Mount writable DMG
Mounting point would be /Volumes/<volume name>
"""
print(f'Mounting read-write DMG {dmg_filepath}')
command = ('hdiutil',
'attach', '-readwrite',
'-noverify',
'-noautoopen',
dmg_filepath)
subprocess.run(command)
def get_mount_directory_for_volume_name(volume_name: str) -> Path:
"""
Get directory under which the volume will be mounted
"""
return Path('/Volumes') / volume_name
def eject_volume(volume_name: str) -> None:
"""
Eject given volume, if mounted
"""
mount_directory = get_mount_directory_for_volume_name(volume_name)
if not mount_directory.exists():
return
mount_directory_str = str(mount_directory)
print(f'Ejecting volume {volume_name}')
# Figure out which device to eject.
mount_output = subprocess.check_output(['mount']).decode()
device = ''
for line in mount_output.splitlines():
if f'on {mount_directory_str} (' not in line:
continue
tokens = line.split(' ', 3)
if len(tokens) < 3:
continue
if tokens[1] != 'on':
continue
if device:
raise Exception(
f'Multiple devices found for mounting point {mount_directory}')
device = tokens[0]
if not device:
raise Exception(
f'No device found for mounting point {mount_directory}')
print(f'{mount_directory} is mounted as device {device}, ejecting...')
subprocess.run(['diskutil', 'eject', device])
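# The `mount` output parsed above looks roughly like this on macOS (device
# and volume names are assumed):
#
#   /dev/disk2s1 on /Volumes/Blender (hfs, local, nodev, nosuid, mounted by worker)
#
# in which case tokens[0] == '/dev/disk2s1' is the device handed to
# `diskutil eject`.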
def copy_background_if_needed(background_image_filepath: Path,
mount_directory: Path) -> None:
"""
Copy background to the DMG
If the background image is not specified it will not be copied.
"""
if not background_image_filepath:
print('No background image provided.')
return
print(f'Copying background image {background_image_filepath}')
destination_dir = mount_directory / '.background'
destination_dir.mkdir(exist_ok=True)
destination_filepath = destination_dir / background_image_filepath.name
shutil.copy(background_image_filepath, destination_filepath)
def create_applications_link(mount_directory: Path) -> None:
"""
Create link to /Applications in the given location
"""
print('Creating link to /Applications')
command = ('ln', '-s', '/Applications', mount_directory / ' ')
subprocess.run(command)
def run_applescript(applescript: Path,
volume_name: str,
app_bundles: List[Path],
background_image_filepath: Path) -> None:
"""
Run given applescript to adjust look and feel of the DMG
"""
main_app_bundle = get_main_app_bundle(app_bundles)
with NamedTemporaryFile(
mode='w', suffix='.applescript') as temp_applescript:
print('Adjusting applescript for volume name...')
# Adjust script to the specific volume name.
with open(applescript, mode='r') as input:
for line in input.readlines():
stripped_line = line.strip()
if stripped_line.startswith('tell disk'):
line = re.sub('tell disk ".*"',
f'tell disk "{volume_name}"',
line)
elif stripped_line.startswith('set background picture'):
if not background_image_filepath:
continue
else:
background_image_short = \
'.background:' + background_image_filepath.name
line = re.sub('to file ".*"',
f'to file "{background_image_short}"',
line)
line = line.replace('blender.app', main_app_bundle.name)
temp_applescript.write(line)
temp_applescript.flush()
print('Running applescript...')
command = ('osascript', temp_applescript.name)
subprocess.run(command)
print('Waiting for applescript...')
# NOTE: This is copied from bundle.sh. The exact reason for the sleep
# remains a mystery.
time.sleep(5)
def codesign(subject: Path):
"""
Codesign file or directory
NOTE: For DMG it will also notarize.
"""
command = (CODESIGN_SCRIPT, subject)
subprocess.run(command)
def codesign_app_bundles_in_dmg(mount_directory: str) -> None:
"""
Code sign all binaries and bundles in the mounted directory
"""
print(f'Codesigning all app bundles in {mount_directory}')
codesign(mount_directory)
def codesign_and_notarize_dmg(dmg_filepath: Path) -> None:
"""
Run codesign and notarization pipeline on the DMG
"""
print(f'Codesigning and notarizing DMG {dmg_filepath}')
codesign(dmg_filepath)
def compress_dmg(writable_dmg_filepath: Path,
final_dmg_filepath: Path) -> None:
"""
Compress temporary read-write DMG
"""
command = ('hdiutil', 'convert',
writable_dmg_filepath,
'-format', 'UDZO',
'-o', final_dmg_filepath)
if final_dmg_filepath.exists():
print(f'Removing old compressed DMG {final_dmg_filepath}')
final_dmg_filepath.unlink()
print('Compressing disk image...')
subprocess.run(command)
def create_final_dmg(app_bundles: List[Path],
dmg_filepath: Path,
background_image_filepath: Path,
volume_name: str,
applescript: Path,
codesign: bool) -> None:
"""
Create DMG with all app bundles
Takes care of configuring the background, signing all binaries and app
bundles, and notarizing the DMG.
"""
print('Running all routines to create final DMG')
writable_dmg_filepath = get_writable_dmg_filepath(dmg_filepath)
mount_directory = get_mount_directory_for_volume_name(volume_name)
# Make sure volume is not mounted.
# If it is mounted it will prevent removing old DMG files and could make
# it so app bundles are copied to the wrong place.
eject_volume(volume_name)
create_dmg_image(app_bundles, writable_dmg_filepath, volume_name)
mount_readwrite_dmg(writable_dmg_filepath)
# Run codesign first, prior to copying anything else.
#
# This allows recursing into the contents of the bundles without worrying
# about possible interference from the Applications symlink.
if codesign:
codesign_app_bundles_in_dmg(mount_directory)
copy_background_if_needed(background_image_filepath, mount_directory)
create_applications_link(mount_directory)
run_applescript(applescript, volume_name, app_bundles,
background_image_filepath)
print('Ejecting read-write DMG image...')
eject_volume(volume_name)
compress_dmg(writable_dmg_filepath, dmg_filepath)
writable_dmg_filepath.unlink()
if codesign:
codesign_and_notarize_dmg(dmg_filepath)
def ensure_dmg_extension(filepath: Path) -> Path:
"""
Make sure the given file has a .dmg extension
"""
if filepath.suffix != '.dmg':
return filepath.with_suffix(f'{filepath.suffix}.dmg')
return filepath
def get_dmg_filepath(requested_name: Path, app_bundles: List[Path]) -> Path:
"""
Get full file path for the final DMG image
Will use the provided one when possible, otherwise it is deduced from the
app bundles.
If the name is deduced, the DMG is stored in the current directory.
"""
if requested_name:
return ensure_dmg_extension(requested_name.absolute())
# TODO(sergey): This is not necessarily the main one.
main_bundle = app_bundles[0]
# Strip .app from the name
return Path(main_bundle.name[:-4] + '.dmg').absolute()
def get_background_image(requested_background_image: Path) -> Path:
"""
Get effective filepath for the background image
"""
if requested_background_image:
return requested_background_image.absolute()
return DARWIN_DIRECTORY / 'background.tif'
def get_applescript(requested_applescript: Path) -> Path:
"""
Get effective filepath for the applescript
"""
if requested_applescript:
return requested_applescript.absolute()
return DARWIN_DIRECTORY / 'blender.applescript'
def get_volume_name_from_dmg_filepath(dmg_filepath: Path) -> str:
"""
Deduce the volume name from the DMG path.
The first dash-separated part of the DMG file name is used.
"""
tokens = dmg_filepath.stem.split('-')
words = tokens[0].split()
return ' '.join(word.capitalize() for word in words)
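# A minimal worked example of the deduction above (file name is hypothetical):
#
#   get_volume_name_from_dmg_filepath(Path('blender-2.93.0-beta+master.dmg'))
#   # stem 'blender-2.93.0-beta+master' -> first token 'blender' -> 'Blender'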
def get_volume_name(requested_volume_name: str,
dmg_filepath: Path) -> str:
"""
Get effective name for DMG volume
"""
if requested_volume_name:
return requested_volume_name
return get_volume_name_from_dmg_filepath(dmg_filepath)
def main():
parser = create_argument_parser()
args = parser.parse_args()
# Get normalized input parameters.
source_dir = args.source_dir.absolute()
background_image_filepath = get_background_image(args.background_image)
applescript = get_applescript(args.applescript)
codesign = args.codesign
app_bundles = collect_and_log_app_bundles(source_dir)
if not app_bundles:
return
dmg_filepath = get_dmg_filepath(args.dmg, app_bundles)
volume_name = get_volume_name(args.volume_name, dmg_filepath)
print(f'Will produce DMG "{dmg_filepath.name}" (without quotes)')
create_final_dmg(app_bundles,
dmg_filepath,
background_image_filepath,
volume_name,
applescript,
codesign)
if __name__ == "__main__":
main()


@@ -1,44 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# This is a script which is used as a POST-INSTALL step for regular CMake's
# INSTALL target.
# It is used by buildbot workers to sign every binary which is going into
# the final bundle.
# On Windows there is only python.exe for Python 3, no python3.exe.
#
# On other platforms it is possible to have python2 and python3, and a
# symbolic link to python to either of them. So on those platforms use
# an explicit Python version.
if(WIN32)
set(PYTHON_EXECUTABLE python)
else()
set(PYTHON_EXECUTABLE python3)
endif()
execute_process(
COMMAND ${PYTHON_EXECUTABLE} "${CMAKE_CURRENT_LIST_DIR}/worker_codesign.py"
"${CMAKE_INSTALL_PREFIX}"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE exit_code
)
if(NOT exit_code EQUAL "0")
message(FATAL_ERROR "Non-zero exit code of codesign tool")
endif()


@@ -1,74 +0,0 @@
#!/usr/bin/env python3
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# Helper script which takes care of signing provided location.
#
# The location can either be a directory (in which case all eligible binaries
# will be signed) or a single file (in which case a single file will be signed).
#
# This script takes care of all the complexity of communicating between the
# process which requests a file to be signed and the code signing server.
#
# NOTE: Signing happens in-place.
import argparse
import sys
from pathlib import Path
from codesign.simple_code_signer import SimpleCodeSigner
def create_argument_parser():
parser = argparse.ArgumentParser()
parser.add_argument('path_to_sign', type=Path)
return parser
def main():
parser = create_argument_parser()
args = parser.parse_args()
path_to_sign = args.path_to_sign.absolute()
if sys.platform == 'win32':
# When WIX packaging is used to generate the .msi on Windows, CPack will
# install two different projects and install them to different
# installation prefixes:
#
# - C:\b\build\_CPack_Packages\WIX\Blender
# - C:\b\build\_CPack_Packages\WIX\Unspecified
#
# The annoying part is that CMake's post-install script will only be run
# once, with the install prefix corresponding to the project which was
# installed last. But we want to sign binaries from all projects.
# So we detect that we are running for the CPack project used for WIX
# and force the parent directory (which includes both projects) to be
# signed.
#
# Here we force both projects to be signed.
if path_to_sign.name == 'Unspecified' and 'WIX' in str(path_to_sign):
path_to_sign = path_to_sign.parent
code_signer = SimpleCodeSigner()
code_signer.sign_file_or_directory(path_to_sign)
if __name__ == "__main__":
main()


@@ -1,135 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import os
import shutil
import buildbot_utils
def get_cmake_options(builder):
codesign_script = os.path.join(
builder.blender_dir, 'build_files', 'buildbot', 'worker_codesign.cmake')
config_file = "build_files/cmake/config/blender_release.cmake"
options = ['-DCMAKE_BUILD_TYPE:STRING=Release',
'-DWITH_GTESTS=ON']
if builder.platform == 'mac':
options.append('-DCMAKE_OSX_ARCHITECTURES:STRING=x86_64')
options.append('-DCMAKE_OSX_DEPLOYMENT_TARGET=10.9')
elif builder.platform == 'win':
options.extend(['-G', 'Visual Studio 16 2019', '-A', 'x64'])
if builder.codesign:
options.extend(['-DPOSTINSTALL_SCRIPT:PATH=' + codesign_script])
elif builder.platform == 'linux':
config_file = "build_files/buildbot/config/blender_linux.cmake"
optix_sdk_dir = os.path.join(builder.blender_dir, '..', '..', 'NVIDIA-Optix-SDK-7.1')
options.append('-DOPTIX_ROOT_DIR:PATH=' + optix_sdk_dir)
# Workaround to build sm_30 kernels with CUDA 10, since CUDA 11 no longer supports that architecture
if builder.platform == 'win':
options.append('-DCUDA10_TOOLKIT_ROOT_DIR:PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1')
options.append('-DCUDA10_NVCC_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe')
options.append('-DCUDA11_TOOLKIT_ROOT_DIR:PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1')
options.append('-DCUDA11_NVCC_EXECUTABLE:FILEPATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1/bin/nvcc.exe')
elif builder.platform == 'linux':
options.append('-DCUDA10_TOOLKIT_ROOT_DIR:PATH=/usr/local/cuda-10.1')
options.append('-DCUDA10_NVCC_EXECUTABLE:FILEPATH=/usr/local/cuda-10.1/bin/nvcc')
options.append('-DCUDA11_TOOLKIT_ROOT_DIR:PATH=/usr/local/cuda-11.1')
options.append('-DCUDA11_NVCC_EXECUTABLE:FILEPATH=/usr/local/cuda-11.1/bin/nvcc')
options.append("-C" + os.path.join(builder.blender_dir, config_file))
options.append("-DCMAKE_INSTALL_PREFIX=%s" % (builder.install_dir))
return options
def update_git(builder):
# Do extra git fetch because not all platform/git/buildbot combinations
# update the origin remote, causing buildinfo to detect local changes.
os.chdir(builder.blender_dir)
print("Fetching remotes")
command = ['git', 'fetch', '--all']
buildbot_utils.call(builder.command_prefix + command)
def clean_directories(builder):
# Make sure no garbage remained from the previous run
if os.path.isdir(builder.install_dir):
shutil.rmtree(builder.install_dir)
# Make sure build directory exists and enter it
os.makedirs(builder.build_dir, exist_ok=True)
# Remove buildinfo files to force buildbot to re-generate them.
for buildinfo in ('buildinfo.h', 'buildinfo.h.txt', ):
full_path = os.path.join(builder.build_dir, 'source', 'creator', buildinfo)
if os.path.exists(full_path):
print("Removing {}" . format(buildinfo))
os.remove(full_path)
def cmake_configure(builder):
# CMake configuration
os.chdir(builder.build_dir)
cmake_cache = os.path.join(builder.build_dir, 'CMakeCache.txt')
if os.path.exists(cmake_cache):
print("Removing CMake cache")
os.remove(cmake_cache)
print("CMake configure:")
cmake_options = get_cmake_options(builder)
command = ['cmake', builder.blender_dir] + cmake_options
buildbot_utils.call(builder.command_prefix + command)
def cmake_build(builder):
# CMake build
os.chdir(builder.build_dir)
# NOTE: CPack will build an INSTALL target, which would mean that code
# signing will happen twice when using `make install` and CPack.
# The tricky bit here is that it is not possible to know whether INSTALL
# target is used by CPack or by the buildbot itself. An extra complication is
# that on Windows it is required to build the INSTALL target in order
# to have unit test binaries to run.
# So on the one hand we do an extra, unneeded code sign on Windows, but on
# the positive side we don't add complexity and don't make the build process
# more fragile trying to avoid it. The signing process is much faster than
# a clean buildbot build, especially with regression tests enabled.
if builder.platform == 'win':
command = ['cmake', '--build', '.', '--target', 'install', '--config', 'Release']
else:
command = ['make', '-s', '-j16', 'install']
print("CMake build:")
buildbot_utils.call(builder.command_prefix + command)
if __name__ == "__main__":
builder = buildbot_utils.create_builder_from_arguments()
update_git(builder)
clean_directories(builder)
cmake_configure(builder)
cmake_build(builder)


@@ -1,208 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
# Runs on buildbot worker, creating a release package using the build
# system and zipping it into buildbot_upload.zip. This is then uploaded
# to the master in the next buildbot step.
import os
import sys
from pathlib import Path
import buildbot_utils
def get_package_name(builder, platform=None):
info = buildbot_utils.VersionInfo(builder)
package_name = 'blender-' + info.full_version
if platform:
package_name += '-' + platform
if not (builder.branch == 'master' or builder.is_release_branch):
if info.is_development_build:
package_name = builder.branch + "-" + package_name
return package_name
def sign_file_or_directory(path):
from codesign.simple_code_signer import SimpleCodeSigner
code_signer = SimpleCodeSigner()
code_signer.sign_file_or_directory(Path(path))
def create_buildbot_upload_zip(builder, package_files):
import zipfile
buildbot_upload_zip = os.path.join(builder.upload_dir, "buildbot_upload.zip")
if os.path.exists(buildbot_upload_zip):
os.remove(buildbot_upload_zip)
try:
z = zipfile.ZipFile(buildbot_upload_zip, "w", compression=zipfile.ZIP_STORED)
for filepath, filename in package_files:
print("Packaged", filename)
z.write(filepath, arcname=filename)
z.close()
except Exception as ex:
sys.stderr.write('Create buildbot_upload.zip failed: ' + str(ex) + '\n')
sys.exit(1)
def create_tar_xz(src, dest, package_name):
# One extra character so the leading os.sep is removed when building package_root
ln = len(src) + 1
flist = list()
# Create list of tuples containing file and archive name
for root, dirs, files in os.walk(src):
package_root = os.path.join(package_name, root[ln:])
flist.extend([(os.path.join(root, file), os.path.join(package_root, file)) for file in files])
import tarfile
# Set UID/GID of archived files to 0, otherwise they'd be owned by whatever
# user compiled the package. If root then unpacks it to /usr/local/ you get
# a security issue.
def _fakeroot(tarinfo):
tarinfo.gid = 0
tarinfo.gname = "root"
tarinfo.uid = 0
tarinfo.uname = "root"
return tarinfo
package = tarfile.open(dest, 'w:xz', preset=9)
for entry in flist:
package.add(entry[0], entry[1], recursive=False, filter=_fakeroot)
package.close()
def cleanup_files(dirpath, extension):
for f in os.listdir(dirpath):
filepath = os.path.join(dirpath, f)
if os.path.isfile(filepath) and f.endswith(extension):
os.remove(filepath)
def pack_mac(builder):
info = buildbot_utils.VersionInfo(builder)
os.chdir(builder.build_dir)
cleanup_files(builder.build_dir, '.dmg')
package_name = get_package_name(builder, 'macOS')
package_filename = package_name + '.dmg'
package_filepath = os.path.join(builder.build_dir, package_filename)
release_dir = os.path.join(builder.blender_dir, 'release', 'darwin')
buildbot_dir = os.path.join(builder.blender_dir, 'build_files', 'buildbot')
bundle_script = os.path.join(buildbot_dir, 'worker_bundle_dmg.py')
command = [bundle_script]
command += ['--dmg', package_filepath]
if info.is_development_build:
background_image = os.path.join(release_dir, 'buildbot', 'background.tif')
command += ['--background-image', background_image]
if builder.codesign:
command += ['--codesign']
command += [builder.install_dir]
buildbot_utils.call(command)
create_buildbot_upload_zip(builder, [(package_filepath, package_filename)])
def pack_win(builder):
info = buildbot_utils.VersionInfo(builder)
os.chdir(builder.build_dir)
cleanup_files(builder.build_dir, '.zip')
# CPack will add the platform name
cpack_name = get_package_name(builder, None)
package_name = get_package_name(builder, 'windows' + str(builder.bits))
command = ['cmake', '-DCPACK_OVERRIDE_PACKAGENAME:STRING=' + cpack_name, '.']
buildbot_utils.call(builder.command_prefix + command)
command = ['cpack', '-G', 'ZIP']
buildbot_utils.call(builder.command_prefix + command)
package_filename = package_name + '.zip'
package_filepath = os.path.join(builder.build_dir, package_filename)
package_files = [(package_filepath, package_filename)]
if info.version_cycle == 'release':
# Installer only for final release builds, otherwise will get
# 'this product is already installed' messages.
command = ['cpack', '-G', 'WIX']
buildbot_utils.call(builder.command_prefix + command)
package_filename = package_name + '.msi'
package_filepath = os.path.join(builder.build_dir, package_filename)
if builder.codesign:
sign_file_or_directory(package_filepath)
package_files += [(package_filepath, package_filename)]
create_buildbot_upload_zip(builder, package_files)
def pack_linux(builder):
blender_executable = os.path.join(builder.install_dir, 'blender')
info = buildbot_utils.VersionInfo(builder)
# Strip all unused symbols from the binaries
print("Stripping binaries...")
buildbot_utils.call(builder.command_prefix + ['strip', '--strip-all', blender_executable])
print("Stripping python...")
py_target = os.path.join(builder.install_dir, info.short_version)
buildbot_utils.call(
builder.command_prefix + [
'find', py_target, '-iname', '*.so', '-exec', 'strip', '-s', '{}', ';',
],
)
# Construct package name
platform_name = 'linux64'
package_name = get_package_name(builder, platform_name)
package_filename = package_name + ".tar.xz"
print("Creating .tar.xz archive")
package_filepath = builder.install_dir + '.tar.xz'
create_tar_xz(builder.install_dir, package_filepath, package_name)
# Create buildbot_upload.zip
create_buildbot_upload_zip(builder, [(package_filepath, package_filename)])
if __name__ == "__main__":
builder = buildbot_utils.create_builder_from_arguments()
# Make sure install directory always exists
os.makedirs(builder.install_dir, exist_ok=True)
if builder.platform == 'mac':
pack_mac(builder)
elif builder.platform == 'win':
pack_win(builder)
elif builder.platform == 'linux':
pack_linux(builder)


@@ -1,42 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import buildbot_utils
import os
import sys
def get_ctest_arguments(builder):
args = ['--output-on-failure']
if builder.platform == 'win':
args += ['-C', 'Release']
return args
def test(builder):
os.chdir(builder.build_dir)
command = builder.command_prefix + ['ctest'] + get_ctest_arguments(builder)
buildbot_utils.call(command)
if __name__ == "__main__":
builder = buildbot_utils.create_builder_from_arguments()
test(builder)


@@ -1,31 +0,0 @@
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
import buildbot_utils
import os
import sys
if __name__ == "__main__":
builder = buildbot_utils.create_builder_from_arguments()
os.chdir(builder.blender_dir)
# Run make update which handles all libraries and submodules.
make_update = os.path.join(builder.blender_dir, "build_files", "utils", "make_update.py")
buildbot_utils.call([sys.executable, make_update, '--no-blender', "--use-tests", "--use-centos-libraries"])


@@ -20,8 +20,24 @@ if(NOT CLANG_ROOT_DIR AND NOT $ENV{CLANG_ROOT_DIR} STREQUAL "")
set(CLANG_ROOT_DIR $ENV{CLANG_ROOT_DIR})
endif()
if(NOT LLVM_ROOT_DIR)
if(DEFINED LLVM_VERSION)
message(running llvm-config-${LLVM_VERSION})
find_program(LLVM_CONFIG llvm-config-${LLVM_VERSION})
endif()
if(NOT LLVM_CONFIG)
find_program(LLVM_CONFIG llvm-config)
endif()
execute_process(COMMAND ${LLVM_CONFIG} --prefix
OUTPUT_VARIABLE LLVM_ROOT_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
set(LLVM_ROOT_DIR ${LLVM_ROOT_DIR} CACHE PATH "Path to the LLVM installation")
endif()
set(_CLANG_SEARCH_DIRS
${CLANG_ROOT_DIR}
${LLVM_ROOT_DIR}
/opt/lib/clang
)


@@ -596,14 +596,6 @@ function(SETUP_LIBDIRS)
link_directories(${GMP_LIBPATH})
endif()
if(WITH_GHOST_WAYLAND)
link_directories(
${wayland-client_LIBRARY_DIRS}
${wayland-egl_LIBRARY_DIRS}
${xkbcommon_LIBRARY_DIRS}
${wayland-cursor_LIBRARY_DIRS})
endif()
if(WIN32 AND NOT UNIX)
link_directories(${PTHREADS_LIBPATH})
endif()


@@ -388,6 +388,10 @@ endif()
if(WITH_TBB)
find_package(TBB)
if(NOT TBB_FOUND)
message(WARNING "TBB not found, disabling WITH_TBB")
set(WITH_TBB OFF)
endif()
endif()
if(WITH_POTRACE)


@@ -457,6 +457,10 @@ endif()
if(WITH_TBB)
find_package_wrapper(TBB)
if(NOT TBB_FOUND)
message(WARNING "TBB not found, disabling WITH_TBB")
set(WITH_TBB OFF)
endif()
endif()
if(WITH_XR_OPENXR)
@@ -575,17 +579,17 @@ if(WITH_GHOST_WAYLAND)
pkg_check_modules(wayland-scanner REQUIRED wayland-scanner)
pkg_check_modules(xkbcommon REQUIRED xkbcommon)
pkg_check_modules(wayland-cursor REQUIRED wayland-cursor)
pkg_check_modules(dbus REQUIRED dbus-1)
set(WITH_GL_EGL ON)
if(WITH_GHOST_WAYLAND)
list(APPEND PLATFORM_LINKLIBS
${wayland-client_LIBRARIES}
${wayland-egl_LIBRARIES}
${xkbcommon_LIBRARIES}
${wayland-cursor_LIBRARIES}
)
endif()
list(APPEND PLATFORM_LINKLIBS
${wayland-client_LINK_LIBRARIES}
${wayland-egl_LINK_LIBRARIES}
${xkbcommon_LINK_LIBRARIES}
${wayland-cursor_LINK_LIBRARIES}
${dbus_LINK_LIBRARIES}
)
endif()
if(WITH_GHOST_X11)


@@ -675,10 +675,11 @@ if(WITH_SYSTEM_AUDASPACE)
endif()
if(WITH_TBB)
set(TBB_LIBRARIES optimized ${LIBDIR}/tbb/lib/tbb.lib debug ${LIBDIR}/tbb/lib/debug/tbb_debug.lib)
set(TBB_LIBRARIES optimized ${LIBDIR}/tbb/lib/tbb.lib debug ${LIBDIR}/tbb/lib/tbb_debug.lib)
set(TBB_INCLUDE_DIR ${LIBDIR}/tbb/include)
set(TBB_INCLUDE_DIRS ${TBB_INCLUDE_DIR})
if(WITH_TBB_MALLOC_PROXY)
set(TBB_MALLOC_LIBRARIES optimized ${LIBDIR}/tbb/lib/tbbmalloc.lib debug ${LIBDIR}/tbb/lib/tbbmalloc_debug.lib)
add_definitions(-DWITH_TBB_MALLOC)
endif()
endif()


@@ -15,6 +15,15 @@ if(WITH_WINDOWS_BUNDLE_CRT)
include(InstallRequiredSystemLibraries)
# ucrtbase(d).dll cannot be in the manifest, due to the way windows 10 handles
# redirects for this dll, for details see T88813.
foreach(lib ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS})
string(FIND ${lib} "ucrtbase" pos)
if(NOT pos EQUAL -1)
list(REMOVE_ITEM CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS ${lib})
install(FILES ${lib} DESTINATION . COMPONENT Libraries)
endif()
endforeach()
# Install the CRT to the blender.crt Sub folder.
install(FILES ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS} DESTINATION ./blender.crt COMPONENT Libraries)


@@ -0,0 +1,8 @@
Pipeline Config
===============
This configuration file is used by the new buildbot pipeline for the `update-code` step.
It will soon be used by the ../utils/make_update.py script.
Both buildbot and developers will eventually use the same configuration file.
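
As a rough illustration (not part of the current tooling; the file location and consumer are assumed), the configuration could be consumed like this:

import json
from pathlib import Path

# Load the pipeline config and list the git submodules it pins.
config = json.loads(Path('pipeline_config.json').read_text())
for submodule in config['update-code']['git']['submodules']:
    print(submodule['path'], submodule['branch'], submodule['commit_id'])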


@@ -0,0 +1,87 @@
{
"update-code":
{
"git" :
{
"submodules":
[
{ "path": "release/scripts/addons", "branch": "master", "commit_id": "HEAD" },
{ "path": "release/scripts/addons_contrib", "branch": "master", "commit_id": "HEAD" },
{ "path": "release/datafiles/locale", "branch": "master", "commit_id": "HEAD" },
{ "path": "source/tools", "branch": "master", "commit_id": "HEAD" }
]
},
"svn":
{
"tests": { "path": "lib/tests", "branch": "trunk", "commit_id": "HEAD" },
"libraries":
{
"darwin-x86_64": { "path": "lib/darwin", "branch": "trunk", "commit_id": "HEAD" },
"darwin-arm64": { "path": "lib/darwin_arm64", "branch": "trunk", "commit_id": "HEAD" },
"linux-x86_64": { "path": "lib/linux_centos7_x86_64", "branch": "trunk", "commit_id": "HEAD" },
"windows-amd64": { "path": "lib/win64_vc15", "branch": "trunk", "commit_id": "HEAD" }
}
}
},
"buildbot":
{
"gcc":
{
"version": "9.0"
},
"sdks":
{
"optix":
{
"version": "7.1.0"
},
"cuda10":
{
"version": "10.1"
},
"cuda11":
{
"version": "11.3"
}
},
"cmake":
{
"default":
{
"version": "any",
"overrides":
{
}
},
"darwin-x86_64":
{
"overrides":
{
}
},
"darwin-arm64":
{
"overrides":
{
}
},
"linux-x86_64":
{
"overrides":
{
}
},
"windows-amd64":
{
"overrides":
{
}
}
}
}
}


@@ -0,0 +1,5 @@
Make Utility Scripts
====================
Scripts used only by developers for now


@@ -1,4 +1,4 @@
# Doxyfile 1.8.16
# Doxyfile 1.9.1
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
@@ -38,7 +38,7 @@ PROJECT_NAME = Blender
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = "V3.0"
PROJECT_NUMBER = V3.0
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -227,6 +227,14 @@ QT_AUTOBRIEF = NO
MULTILINE_CPP_IS_BRIEF = NO
# By default Python docstrings are displayed as preformatted text and doxygen's
# special commands cannot be used. By setting PYTHON_DOCSTRING to NO the
# doxygen's special commands can be used and the contents of the docstring
# documentation blocks is shown as doxygen documentation.
# The default value is: YES.
PYTHON_DOCSTRING = YES
# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.
@@ -263,12 +271,6 @@ TAB_SIZE = 4
ALIASES =
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.
TCL_SUBST =
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
@@ -309,19 +311,22 @@ OPTIMIZE_OUTPUT_SLICE = NO
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice,
# language is one of the parsers supported by doxygen: IDL, Java, JavaScript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice, VHDL,
# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:
# FortranFree, unknown formatted Fortran: Fortran. In the later case the parser
# tries to guess whether the code is fixed or free formatted code, this is the
# default for Fortran type files), VHDL, tcl. For instance to make doxygen treat
# .inc files as Fortran files (default is PHP), and .f files as C (default is
# Fortran), use: inc=Fortran f=C.
# default for Fortran type files). For instance to make doxygen treat .inc files
# as Fortran files (default is PHP), and .f files as C (default is Fortran),
# use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.
# the files are not read by doxygen. When specifying no_extension you should add
# * to the FILE_PATTERNS.
#
# Note see also the list of default file extension mappings.
EXTENSION_MAPPING =
@@ -455,6 +460,19 @@ TYPEDEF_HIDES_STRUCT = NO
LOOKUP_CACHE_SIZE = 3
# The NUM_PROC_THREADS specifies the number threads doxygen is allowed to use
# during processing. When set to 0 doxygen will based this on the number of
# cores available in the system. You can set it explicitly to a value larger
# than 0 to get more control over the balance between CPU load and processing
# speed. At this moment only the input processing can be done using multiple
# threads. Since this is still an experimental feature the default is set to 1,
# which effectively disables parallel processing. Please report any issues you
# encounter. Generating dot graphs in parallel is controlled by the
# DOT_NUM_THREADS setting.
# Minimum value: 0, maximum value: 32, default value: 1.
NUM_PROC_THREADS = 1
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
@@ -518,6 +536,13 @@ EXTRACT_LOCAL_METHODS = NO
EXTRACT_ANON_NSPACES = NO
# If this flag is set to YES, the name of an unnamed parameter in a declaration
# will be determined by the corresponding definition. By default unnamed
# parameters remain unnamed in the output.
# The default value is: YES.
RESOLVE_UNNAMED_PARAMS = YES
# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
@@ -535,8 +560,8 @@ HIDE_UNDOC_MEMBERS = NO
HIDE_UNDOC_CLASSES = NO
# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# declarations. If set to NO, these declarations will be included in the
# documentation.
# The default value is: NO.
HIDE_FRIEND_COMPOUNDS = NO
@@ -555,11 +580,18 @@ HIDE_IN_BODY_DOCS = NO
INTERNAL_DOCS = YES
# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# (including Cygwin) ands Mac users are advised to set this option to NO.
# With the correct setting of option CASE_SENSE_NAMES doxygen will better be
# able to match the capabilities of the underlying filesystem. In case the
# filesystem is case sensitive (i.e. it supports files in the same directory
# whose names only differ in casing), the option must be set to YES to properly
# deal with such files in case they appear in the input. For filesystems that
# are not case sensitive the option should be set to NO to properly deal with
# output files written for symbols that only differ in casing, such as for two
# classes, one named CLASS and the other named Class, and to also support
# references to files without having to specify the exact matching casing. On
# Windows (including Cygwin) and MacOS, users should typically set this option
# to NO, whereas on Linux or other Unix flavors it should typically be set to
# YES.
# The default value is: system dependent.
CASE_SENSE_NAMES = YES
@@ -798,7 +830,10 @@ WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = NO
# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered.
# a warning is encountered. If the WARN_AS_ERROR tag is set to FAIL_ON_WARNINGS
# then doxygen will continue running as if WARN_AS_ERROR tag is set to NO, but
# at the end of the doxygen process doxygen will return with a non-zero status.
# Possible values are: NO, YES and FAIL_ON_WARNINGS.
# The default value is: NO.
WARN_AS_ERROR = NO
@@ -840,8 +875,8 @@ INPUT = doxygen.main.h \
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
# possible encodings.
# documentation (see:
# https://www.gnu.org/software/libiconv/) for the list of possible encodings.
# The default value is: UTF-8.
INPUT_ENCODING = UTF-8
@@ -854,11 +889,15 @@ INPUT_ENCODING = UTF-8
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# Note the list of default checked file patterns might differ from the list of
# default file extension mappings.
#
# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, *.qsf and *.ice.
# *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C comment),
# *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for, *.vhd, *.vhdl,
# *.ucf, *.qsf and *.ice.
FILE_PATTERNS =
@@ -1086,13 +1125,6 @@ VERBATIM_HEADERS = YES
ALPHABETICAL_INDEX = YES
# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
COLS_IN_ALPHA_INDEX = 5
# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
@@ -1231,9 +1263,9 @@ HTML_TIMESTAMP = YES
# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
# documentation will contain a main index with vertical navigation menus that
# are dynamically created via Javascript. If disabled, the navigation index will
# are dynamically created via JavaScript. If disabled, the navigation index will
# consists of multiple levels of tabs that are statically embedded in every HTML
# page. Disable this option to support browsers that do not have Javascript,
# page. Disable this option to support browsers that do not have JavaScript,
# like the Qt help browser.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
@@ -1263,10 +1295,11 @@ HTML_INDEX_NUM_ENTRIES = 100
# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: https://developer.apple.com/xcode/), introduced with OSX
# 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# environment (see:
# https://developer.apple.com/xcode/), introduced with OSX 10.5 (Leopard). To
# create a documentation set, doxygen will generate a Makefile in the HTML
# output directory. Running make will produce the docset in that directory and
# running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
# genXcode/_index.html for more information.
@@ -1308,8 +1341,8 @@ DOCSET_PUBLISHER_NAME = Publisher
# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: https://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
# (see:
# https://www.microsoft.com/en-us/download/details.aspx?id=21138) on Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
@@ -1339,7 +1372,7 @@ CHM_FILE = blender.chm
HHC_LOCATION = "C:/Program Files (x86)/HTML Help Workshop/hhc.exe"
# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
# (YES) or that it should be included in the main .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
@@ -1384,7 +1417,8 @@ QCH_FILE =
# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1392,8 +1426,8 @@ QHP_NAMESPACE = org.doxygen.Project
# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-
# folders).
# Folders (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.
@@ -1401,16 +1435,16 @@ QHP_VIRTUAL_FOLDER = doc
# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_NAME =
# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_ATTRS =
@@ -1422,9 +1456,9 @@ QHP_CUST_FILTER_ATTRS =
QHP_SECT_FILTER_ATTRS =
# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# The QHG_LOCATION tag can be used to specify the location (absolute path
# including file name) of Qt's qhelpgenerator. If non-empty doxygen will try to
# run qhelpgenerator on the generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHG_LOCATION =
@@ -1501,6 +1535,17 @@ TREEVIEW_WIDTH = 246
EXT_LINKS_IN_WINDOW = NO
# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg
# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see
# https://inkscape.org) to generate formulas as SVG images instead of PNGs for
# the HTML output. These images will generally look nicer at scaled resolutions.
# Possible values are: png (the default) and svg (looks nicer but requires the
# pdf2svg or inkscape tool).
# The default value is: png.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FORMULA_FORMAT = png
# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
@@ -1521,8 +1566,14 @@ FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands
# to create new LaTeX commands to be used in formulas as building blocks. See
# the section "Including formulas" for details.
FORMULA_MACROFILE =
# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# https://www.mathjax.org) which uses client side Javascript for the rendering
# https://www.mathjax.org) which uses client side JavaScript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output. When
# enabled you may also need to install MathJax separately and configure the path
@@ -1534,7 +1585,7 @@ USE_MATHJAX = NO
# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
@@ -1550,7 +1601,7 @@ MATHJAX_FORMAT = HTML-CSS
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from https://www.mathjax.org before deployment.
# The default value is: https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/.
# The default value is: https://cdn.jsdelivr.net/npm/mathjax@2.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_RELPATH = http://www.mathjax.org/mathjax
@@ -1564,7 +1615,8 @@ MATHJAX_EXTENSIONS =
# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# (see:
# http://docs.mathjax.org/en/v2.7-latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.
@@ -1592,7 +1644,7 @@ MATHJAX_CODEFILE =
SEARCHENGINE = NO
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a web server instead of a web client using Javascript. There
# implemented using a web server instead of a web client using JavaScript. There
# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
# setting. When disabled, doxygen will generate a PHP script for searching and
# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
@@ -1611,7 +1663,8 @@ SERVER_BASED_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/).
# Xapian (see:
# https://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
@@ -1624,8 +1677,9 @@ EXTERNAL_SEARCH = NO
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/). See the section "External Indexing and
# Searching" for details.
# Xapian (see:
# https://xapian.org/). See the section "External Indexing and Searching" for
# details.
# This tag requires that the tag SEARCHENGINE is set to YES.
SEARCHENGINE_URL =
@@ -1789,9 +1843,11 @@ LATEX_EXTRA_FILES =
PDF_HYPERLINKS = NO
# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
# the PDF file directly from the LaTeX files. Set this option to YES, to get a
# higher quality PDF documentation.
# If the USE_PDFLATEX tag is set to YES, doxygen will use the engine as
# specified with LATEX_CMD_NAME to generate the PDF file directly from the LaTeX
# files. Set this option to YES, to get a higher quality PDF documentation.
#
# See also section LATEX_CMD_NAME for selecting the engine.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.
@@ -2126,7 +2182,8 @@ INCLUDE_FILE_PATTERNS =
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
PREDEFINED = BUILD_DATE
PREDEFINED = BUILD_DATE \
DOXYGEN=1
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
@@ -2303,10 +2360,32 @@ UML_LOOK = YES
# but if the number exceeds 15, the total amount of fields shown is limited to
# 10.
# Minimum value: 0, maximum value: 100, default value: 10.
# This tag requires that the tag HAVE_DOT is set to YES.
# This tag requires that the tag UML_LOOK is set to YES.
UML_LIMIT_NUM_FIELDS = 10
# If the DOT_UML_DETAILS tag is set to NO, doxygen will show attributes and
# methods without types and arguments in the UML graphs. If the DOT_UML_DETAILS
# tag is set to YES, doxygen will add type and arguments for attributes and
# methods in the UML graphs. If the DOT_UML_DETAILS tag is set to NONE, doxygen
# will not generate fields with class member information in the UML graphs. The
# class diagrams will look similar to the default class diagrams but using UML
# notation for the relationships.
# Possible values are: NO, YES and NONE.
# The default value is: NO.
# This tag requires that the tag UML_LOOK is set to YES.
DOT_UML_DETAILS = NO
# The DOT_WRAP_THRESHOLD tag can be used to set the maximum number of characters
# to display on a single line. If the actual line length exceeds this threshold
# significantly it will be wrapped across multiple lines. Some heuristics are
# applied to avoid ugly line breaks.
# Minimum value: 0, maximum value: 1000, default value: 17.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_WRAP_THRESHOLD = 17
# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
# collaboration graphs will show the relations between templates and their
# instances.
@@ -2496,9 +2575,11 @@ DOT_MULTI_TARGETS = YES
GENERATE_LEGEND = YES
# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate
# files that are used to generate the various graphs.
#
# Note: This setting is not only used for dot files but also for msc and
# plantuml temporary files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_CLEANUP = YES

View File

@@ -29,7 +29,7 @@ with offscreen.bind():
amount = 10
for i in range(-amount, amount + 1):
x_pos = i / amount
draw_circle_2d((x_pos, 0.0), (1, 1, 1, 1), 0.5, 200)
draw_circle_2d((x_pos, 0.0), (1, 1, 1, 1), 0.5, segments=200)
# Drawing the generated texture in 3D space

View File

@@ -34,7 +34,9 @@ with offscreen.bind():
for i in range(RING_AMOUNT):
draw_circle_2d(
(random.uniform(-1, 1), random.uniform(-1, 1)),
(1, 1, 1, 1), random.uniform(0.1, 1), 20)
(1, 1, 1, 1), random.uniform(0.1, 1),
segments=20,
)
buffer = fb.read_color(0, 0, WIDTH, HEIGHT, 4, 0, 'UBYTE')
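Both hunks above update the example scripts so the circle segment count is passed to `draw_circle_2d` by keyword. A minimal sketch of the updated calling convention, assuming Blender's `gpu_extras.presets` module and a bound drawing context as in the offscreen examples:

```python
# Minimal sketch (assumes this runs inside Blender, within a bound GPU/offscreen
# context); 'segments' is passed by keyword, matching the updated examples above.
from gpu_extras.presets import draw_circle_2d

def draw_circle_row(amount=10, radius=0.5):
    # One white circle per step along the X axis.
    for i in range(-amount, amount + 1):
        x_pos = i / amount
        draw_circle_2d((x_pos, 0.0), (1.0, 1.0, 1.0, 1.0), radius, segments=200)
```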

View File

@@ -132,7 +132,7 @@ def init():
_workaround_buggy_drivers()
path = os.path.dirname(__file__)
user_path = os.path.dirname(os.path.abspath(bpy.utils.user_resource('CONFIG', '')))
user_path = os.path.dirname(os.path.abspath(bpy.utils.user_resource('CONFIG', path='')))
_cycles.init(path, user_path, bpy.app.background)
_parse_command_line()
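The hunk above switches `bpy.utils.user_resource` to pass `path` as a keyword argument. A minimal sketch of the updated call, assuming Blender's `bpy` API:

```python
# Minimal sketch (assumes Blender's bpy module); 'path' is given as a keyword
# argument, as in the updated Cycles init code above.
import os
import bpy

config_dir = bpy.utils.user_resource('CONFIG', path='')
user_path = os.path.dirname(os.path.abspath(config_dir))
print(user_path)
```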

View File

@@ -34,12 +34,17 @@ void BlenderSync::sync_light(BL::Object &b_parent,
bool *use_portal)
{
/* test if we need to sync */
Light *light;
ObjectKey key(b_parent, persistent_id, b_ob_instance, false);
BL::Light b_light(b_ob.data());
Light *light = light_map.find(key);
/* Check if the transform was modified, in case a linked collection is moved we do not get a
* specific depsgraph update (T88515). This also mimics the behavior for Objects. */
const bool tfm_updated = (light && light->get_tfm() != tfm);
/* Update if either object or light data changed. */
if (!light_map.add_or_update(&light, b_ob, b_parent, key)) {
if (!tfm_updated && !light_map.add_or_update(&light, b_ob, b_parent, key)) {
Shader *shader;
if (!shader_map.add_or_update(&shader, b_light)) {
if (light->get_is_portal())

View File

@@ -289,11 +289,10 @@ static PyObject *render_func(PyObject * /*self*/, PyObject *args)
RNA_pointer_create(NULL, &RNA_Depsgraph, (ID *)PyLong_AsVoidPtr(pydepsgraph), &depsgraphptr);
BL::Depsgraph b_depsgraph(depsgraphptr);
/* Allow Blender to execute other Python scripts, and isolate TBB tasks so we
* don't get deadlocks with Blender threads accessing shared data like images. */
/* Allow Blender to execute other Python scripts. */
python_thread_state_save(&session->python_thread_state);
tbb::this_task_arena::isolate([&] { session->render(b_depsgraph); });
session->render(b_depsgraph);
python_thread_state_restore(&session->python_thread_state);
@@ -330,8 +329,7 @@ static PyObject *bake_func(PyObject * /*self*/, PyObject *args)
python_thread_state_save(&session->python_thread_state);
tbb::this_task_arena::isolate(
[&] { session->bake(b_depsgraph, b_object, pass_type, pass_filter, width, height); });
session->bake(b_depsgraph, b_object, pass_type, pass_filter, width, height);
python_thread_state_restore(&session->python_thread_state);
@@ -377,7 +375,7 @@ static PyObject *reset_func(PyObject * /*self*/, PyObject *args)
python_thread_state_save(&session->python_thread_state);
tbb::this_task_arena::isolate([&] { session->reset_session(b_data, b_depsgraph); });
session->reset_session(b_data, b_depsgraph);
python_thread_state_restore(&session->python_thread_state);
@@ -399,7 +397,7 @@ static PyObject *sync_func(PyObject * /*self*/, PyObject *args)
python_thread_state_save(&session->python_thread_state);
tbb::this_task_arena::isolate([&] { session->synchronize(b_depsgraph); });
session->synchronize(b_depsgraph);
python_thread_state_restore(&session->python_thread_state);

View File

@@ -52,6 +52,9 @@ shader node_vector_math(string math_type = "add",
else if (math_type == "faceforward") {
Vector = compatible_faceforward(Vector1, Vector2, Vector3);
}
else if (math_type == "multiply_add") {
Vector = Vector1 * Vector2 + Vector3;
}
else if (math_type == "dot_product") {
Value = dot(Vector1, Vector2);
}

View File

@@ -58,7 +58,8 @@ ccl_device void svm_node_vector_math(KernelGlobals *kg,
float3 vector;
/* 3 Vector Operators */
if (type == NODE_VECTOR_MATH_WRAP || type == NODE_VECTOR_MATH_FACEFORWARD) {
if (type == NODE_VECTOR_MATH_WRAP || type == NODE_VECTOR_MATH_FACEFORWARD ||
type == NODE_VECTOR_MATH_MULTIPLY_ADD) {
uint4 extra_node = read_node(kg, offset);
c = stack_load_float3(stack, extra_node.x);
}

View File

@@ -52,6 +52,9 @@ ccl_device void svm_vector_math(float *value,
case NODE_VECTOR_MATH_FACEFORWARD:
*vector = faceforward(a, b, c);
break;
case NODE_VECTOR_MATH_MULTIPLY_ADD:
*vector = a * b + c;
break;
case NODE_VECTOR_MATH_DOT_PRODUCT:
*value = dot(a, b);
break;

View File

@@ -341,6 +341,7 @@ typedef enum NodeVectorMathType {
NODE_VECTOR_MATH_TANGENT,
NODE_VECTOR_MATH_REFRACT,
NODE_VECTOR_MATH_FACEFORWARD,
NODE_VECTOR_MATH_MULTIPLY_ADD,
} NodeVectorMathType;
typedef enum NodeClampType {

View File

@@ -606,7 +606,8 @@ void read_geometry_data(AlembicProcedural *proc,
template<typename T> struct value_type_converter {
using cycles_type = float;
static constexpr TypeDesc type_desc = TypeFloat;
/* Use `TypeDesc::FLOAT` instead of `TypeFloat` to work around a compiler bug in gcc 11. */
static constexpr TypeDesc type_desc = TypeDesc::FLOAT;
static constexpr const char *type_name = "float (default)";
static cycles_type convert_value(T value)

View File

@@ -6093,6 +6093,7 @@ NODE_DEFINE(VectorMathNode)
type_enum.insert("reflect", NODE_VECTOR_MATH_REFLECT);
type_enum.insert("refract", NODE_VECTOR_MATH_REFRACT);
type_enum.insert("faceforward", NODE_VECTOR_MATH_FACEFORWARD);
type_enum.insert("multiply_add", NODE_VECTOR_MATH_MULTIPLY_ADD);
type_enum.insert("dot_product", NODE_VECTOR_MATH_DOT_PRODUCT);
@@ -6165,7 +6166,8 @@ void VectorMathNode::compile(SVMCompiler &compiler)
int vector_stack_offset = compiler.stack_assign_if_linked(vector_out);
/* 3 Vector Operators */
if (math_type == NODE_VECTOR_MATH_WRAP || math_type == NODE_VECTOR_MATH_FACEFORWARD) {
if (math_type == NODE_VECTOR_MATH_WRAP || math_type == NODE_VECTOR_MATH_FACEFORWARD ||
math_type == NODE_VECTOR_MATH_MULTIPLY_ADD) {
ShaderInput *vector3_in = input("Vector3");
int vector3_stack_offset = compiler.stack_assign(vector3_in);
compiler.add_node(
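Taken together, the hunks above wire a new `multiply_add` operation through Cycles' vector math: the OSL shader, the SVM kernel, the node type enum, and the SVM compiler. Numerically it is just the component-wise `a * b + c`; a tiny Python illustration of the arithmetic only (not Cycles code):

```python
# Component-wise multiply-add, as implemented by the kernel hunks above:
# result = Vector1 * Vector2 + Vector3.
def vector_multiply_add(a, b, c):
    return tuple(av * bv + cv for av, bv, cv in zip(a, b, c))

assert vector_multiply_add((1, 2, 3), (4, 5, 6), (7, 8, 9)) == (11, 18, 27)
```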

View File

@@ -93,42 +93,23 @@ void my_guess_pkt_duration(AVFormatContext *s, AVStream *st, AVPacket *pkt)
#endif
FFMPEG_INLINE
void my_update_cur_dts(AVFormatContext *s, AVStream *ref_st, int64_t timestamp)
int64_t timestamp_from_pts_or_dts(int64_t pts, int64_t dts)
{
int i;
for (i = 0; i < s->nb_streams; i++) {
AVStream *st = s->streams[i];
st->cur_dts = av_rescale(timestamp,
st->time_base.den * (int64_t)ref_st->time_base.num,
st->time_base.num * (int64_t)ref_st->time_base.den);
}
}
FFMPEG_INLINE
void av_update_cur_dts(AVFormatContext *s, AVStream *ref_st, int64_t timestamp)
{
my_update_cur_dts(s, ref_st, timestamp);
}
FFMPEG_INLINE
int64_t av_get_pts_from_frame(AVFormatContext *avctx, AVFrame *picture)
{
int64_t pts;
pts = picture->pts;
/* Some videos do not have any pts values, use dts instead in those cases if
* possible. Usually when this happens dts can act as pts because all frames
* should then be presented in the order they were decoded, i.e. pts == dts. */
if (pts == AV_NOPTS_VALUE) {
pts = picture->pkt_dts;
return dts;
}
if (pts == AV_NOPTS_VALUE) {
pts = 0;
}
(void)avctx;
return pts;
}
FFMPEG_INLINE
int64_t av_get_pts_from_frame(AVFrame *picture)
{
return timestamp_from_pts_or_dts(picture->pts, picture->pkt_dts);
}
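The refactor above replaces the per-stream `my_update_cur_dts` helper with `timestamp_from_pts_or_dts`, and `av_get_pts_from_frame` now simply delegates to it. A minimal Python sketch of the visible fallback logic (illustration only, not FFmpeg API code):

```python
AV_NOPTS_VALUE = None  # stand-in for FFmpeg's "no timestamp" sentinel

def timestamp_from_pts_or_dts(pts, dts):
    # Prefer pts; when it is missing, fall back to dts, which matches pts for
    # streams whose frames are presented in decode order.
    return dts if pts is AV_NOPTS_VALUE else pts
```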
/* -------------------------------------------------------------------- */
/** \name Deinterlace code block
*

View File

@@ -282,6 +282,7 @@ elseif(WITH_GHOST_X11 OR WITH_GHOST_WAYLAND)
${wayland-egl_INCLUDE_DIRS}
${xkbcommon_INCLUDE_DIRS}
${wayland-cursor_INCLUDE_DIRS}
${dbus_INCLUDE_DIRS}
)
list(APPEND SRC
@@ -321,6 +322,11 @@ elseif(WITH_GHOST_X11 OR WITH_GHOST_WAYLAND)
xdg-shell
"${WAYLAND_PROTOCOLS_DIR}/stable/xdg-shell/xdg-shell.xml"
)
# xdg-decoration.
generate_protocol_bindings(
xdg-decoration
"${WAYLAND_PROTOCOLS_DIR}/unstable/xdg-decoration/xdg-decoration-unstable-v1.xml"
)
# Pointer-constraints.
generate_protocol_bindings(
pointer-constraints

View File

@@ -40,6 +40,7 @@
#include <unordered_map>
#include <unordered_set>
#include "GHOST_WaylandCursorSettings.h"
#include <pointer-constraints-client-protocol.h>
#include <relative-pointer-client-protocol.h>
#include <wayland-cursor.h>
@@ -52,15 +53,6 @@
#include <cstring>
struct output_t {
struct wl_output *output;
int32_t width, height;
int transform;
int scale;
std::string make;
std::string model;
};
struct buffer_t {
void *data;
size_t size;
@@ -72,6 +64,12 @@ struct cursor_t {
struct wl_buffer *buffer;
struct wl_cursor_image image;
struct buffer_t *file_buffer = nullptr;
struct wl_cursor_theme *theme = nullptr;
int size;
std::string theme_name;
// outputs on which the cursor is visible
std::unordered_set<const output_t *> outputs;
int scale = 1;
};
struct data_offer_t {
@@ -142,10 +140,14 @@ struct display_t {
struct wl_display *display;
struct wl_compositor *compositor = nullptr;
struct xdg_wm_base *xdg_shell = nullptr;
struct zxdg_decoration_manager_v1 *xdg_decoration_manager = nullptr;
struct wl_shm *shm = nullptr;
std::vector<output_t *> outputs;
std::vector<input_t *> inputs;
struct wl_cursor_theme *cursor_theme = nullptr;
struct {
std::string theme;
int size;
} cursor;
struct wl_data_device_manager *data_device_manager = nullptr;
struct zwp_relative_pointer_manager_v1 *relative_pointer_manager = nullptr;
struct zwp_pointer_constraints_v1 *pointer_constraints = nullptr;
@@ -154,6 +156,8 @@ struct display_t {
std::vector<struct wl_egl_window *> os_egl_windows;
};
static GHOST_WindowManager *window_manager = nullptr;
static void display_destroy(display_t *d)
{
if (d->data_device_manager) {
@@ -188,6 +192,9 @@ static void display_destroy(display_t *d)
if (input->cursor.surface) {
wl_surface_destroy(input->cursor.surface);
}
if (input->cursor.theme) {
wl_cursor_theme_destroy(input->cursor.theme);
}
if (input->pointer) {
wl_pointer_destroy(input->pointer);
}
@@ -210,10 +217,6 @@ static void display_destroy(display_t *d)
delete input;
}
if (d->cursor_theme) {
wl_cursor_theme_destroy(d->cursor_theme);
}
if (d->shm) {
wl_shm_destroy(d->shm);
}
@@ -238,6 +241,10 @@ static void display_destroy(display_t *d)
wl_compositor_destroy(d->compositor);
}
if (d->xdg_decoration_manager) {
zxdg_decoration_manager_v1_destroy(d->xdg_decoration_manager);
}
if (d->xdg_shell) {
xdg_wm_base_destroy(d->xdg_shell);
}
@@ -478,7 +485,9 @@ static void dnd_events(const input_t *const input, const GHOST_TEventType event)
static std::string read_pipe(data_offer_t *data_offer, const std::string mime_receive)
{
int pipefd[2];
pipe(pipefd);
if (pipe(pipefd) != 0) {
return {};
}
wl_data_offer_receive(data_offer->id, mime_receive.c_str(), pipefd[1]);
close(pipefd[1]);
@@ -513,7 +522,9 @@ static void data_source_send(void *data,
int32_t fd)
{
const char *const buffer = static_cast<char *>(data);
write(fd, buffer, strlen(buffer) + 1);
if (write(fd, buffer, strlen(buffer) + 1) < 0) {
GHOST_PRINT("error writing to clipboard: " << std::strerror(errno) << std::endl);
}
close(fd);
}
@@ -789,13 +800,80 @@ static void cursor_buffer_release(void *data, struct wl_buffer *wl_buffer)
cursor_t *cursor = static_cast<cursor_t *>(data);
wl_buffer_destroy(wl_buffer);
cursor->buffer = nullptr;
if (wl_buffer == cursor->buffer) {
/* the mapped buffer was from a custom cursor */
cursor->buffer = nullptr;
}
}
const struct wl_buffer_listener cursor_buffer_listener = {
cursor_buffer_release,
};
static GHOST_IWindow *get_window(struct wl_surface *surface)
{
if (!surface) {
return nullptr;
}
for (GHOST_IWindow *win : window_manager->getWindows()) {
if (surface == static_cast<const GHOST_WindowWayland *>(win)->surface()) {
return win;
}
}
return nullptr;
}
static bool update_cursor_scale(cursor_t &cursor, wl_shm *shm)
{
int scale = 0;
for (const output_t *output : cursor.outputs) {
if (output->scale > scale)
scale = output->scale;
}
if (scale > 0 && cursor.scale != scale) {
cursor.scale = scale;
wl_surface_set_buffer_scale(cursor.surface, scale);
wl_cursor_theme_destroy(cursor.theme);
cursor.theme = wl_cursor_theme_load(cursor.theme_name.c_str(), scale * cursor.size, shm);
return true;
}
return false;
}
static void cursor_surface_enter(void *data,
struct wl_surface * /*wl_surface*/,
struct wl_output *output)
{
input_t *input = static_cast<input_t *>(data);
for (const output_t *reg_output : input->system->outputs()) {
if (reg_output->output == output) {
input->cursor.outputs.insert(reg_output);
}
}
update_cursor_scale(input->cursor, input->system->shm());
}
static void cursor_surface_leave(void *data,
struct wl_surface * /*wl_surface*/,
struct wl_output *output)
{
input_t *input = static_cast<input_t *>(data);
for (const output_t *reg_output : input->system->outputs()) {
if (reg_output->output == output) {
input->cursor.outputs.erase(reg_output);
}
}
update_cursor_scale(input->cursor, input->system->shm());
}
struct wl_surface_listener cursor_surface_listener = {
cursor_surface_enter,
cursor_surface_leave,
};
static void pointer_enter(void *data,
struct wl_pointer * /*wl_pointer*/,
uint32_t serial,
@@ -803,22 +881,28 @@ static void pointer_enter(void *data,
wl_fixed_t surface_x,
wl_fixed_t surface_y)
{
if (!surface) {
GHOST_WindowWayland *win = static_cast<GHOST_WindowWayland *>(get_window(surface));
if (!win) {
return;
}
win->activate();
input_t *input = static_cast<input_t *>(data);
input->pointer_serial = serial;
input->x = wl_fixed_to_int(surface_x);
input->y = wl_fixed_to_int(surface_y);
input->x = win->scale() * wl_fixed_to_int(surface_x);
input->y = win->scale() * wl_fixed_to_int(surface_y);
input->focus_pointer = surface;
input->system->pushEvent(
new GHOST_EventCursor(input->system->getMilliSeconds(),
GHOST_kEventCursorMove,
static_cast<GHOST_WindowWayland *>(wl_surface_get_user_data(surface)),
input->x,
input->y,
GHOST_TABLET_DATA_NONE));
win->setCursorShape(win->getCursorShape());
input->system->pushEvent(new GHOST_EventCursor(input->system->getMilliSeconds(),
GHOST_kEventCursorMove,
static_cast<GHOST_WindowWayland *>(win),
input->x,
input->y,
GHOST_TABLET_DATA_NONE));
}
static void pointer_leave(void *data,
@@ -826,9 +910,14 @@ static void pointer_leave(void *data,
uint32_t /*serial*/,
struct wl_surface *surface)
{
if (surface != nullptr) {
static_cast<input_t *>(data)->focus_pointer = nullptr;
GHOST_IWindow *win = get_window(surface);
if (!win) {
return;
}
static_cast<input_t *>(data)->focus_pointer = nullptr;
static_cast<GHOST_WindowWayland *>(win)->deactivate();
}
static void pointer_motion(void *data,
@@ -839,21 +928,20 @@ static void pointer_motion(void *data,
{
input_t *input = static_cast<input_t *>(data);
GHOST_IWindow *win = static_cast<GHOST_WindowWayland *>(
wl_surface_get_user_data(input->focus_pointer));
GHOST_WindowWayland *win = static_cast<GHOST_WindowWayland *>(get_window(input->focus_pointer));
if (!win) {
return;
}
input->x = wl_fixed_to_int(surface_x);
input->y = wl_fixed_to_int(surface_y);
input->x = win->scale() * wl_fixed_to_int(surface_x);
input->y = win->scale() * wl_fixed_to_int(surface_y);
input->system->pushEvent(new GHOST_EventCursor(input->system->getMilliSeconds(),
GHOST_kEventCursorMove,
win,
wl_fixed_to_int(surface_x),
wl_fixed_to_int(surface_y),
input->x,
input->y,
GHOST_TABLET_DATA_NONE));
}
@@ -864,6 +952,14 @@ static void pointer_button(void *data,
uint32_t button,
uint32_t state)
{
input_t *input = static_cast<input_t *>(data);
GHOST_IWindow *win = get_window(input->focus_pointer);
if (!win) {
return;
}
GHOST_TEventType etype = GHOST_kEventUnknown;
switch (state) {
case WL_POINTER_BUTTON_STATE_RELEASED:
@@ -887,9 +983,6 @@ static void pointer_button(void *data,
break;
}
input_t *input = static_cast<input_t *>(data);
GHOST_IWindow *win = static_cast<GHOST_WindowWayland *>(
wl_surface_get_user_data(input->focus_pointer));
input->data_source->source_serial = serial;
input->buttons.set(ebutton, state == WL_POINTER_BUTTON_STATE_PRESSED);
input->system->pushEvent(new GHOST_EventButton(
@@ -902,12 +995,18 @@ static void pointer_axis(void *data,
uint32_t axis,
wl_fixed_t value)
{
input_t *input = static_cast<input_t *>(data);
GHOST_IWindow *win = get_window(input->focus_pointer);
if (!win) {
return;
}
if (axis != WL_POINTER_AXIS_VERTICAL_SCROLL) {
return;
}
input_t *input = static_cast<input_t *>(data);
GHOST_IWindow *win = static_cast<GHOST_WindowWayland *>(
wl_surface_get_user_data(input->focus_pointer));
input->system->pushEvent(
new GHOST_EventWheel(input->system->getMilliSeconds(), win, std::signbit(value) ? +1 : -1));
}
@@ -1137,7 +1236,12 @@ static void seat_capabilities(void *data, struct wl_seat *wl_seat, uint32_t capa
input->cursor.visible = true;
input->cursor.buffer = nullptr;
input->cursor.file_buffer = new buffer_t;
if (!get_cursor_settings(input->cursor.theme_name, input->cursor.size)) {
input->cursor.theme_name = std::string();
input->cursor.size = default_cursor_size;
}
wl_pointer_add_listener(input->pointer, &pointer_listener, data);
wl_surface_add_listener(input->cursor.surface, &cursor_surface_listener, data);
}
if (capabilities & WL_SEAT_CAPABILITY_KEYBOARD) {
@@ -1160,8 +1264,8 @@ static void output_geometry(void *data,
struct wl_output * /*wl_output*/,
int32_t /*x*/,
int32_t /*y*/,
int32_t /*physical_width*/,
int32_t /*physical_height*/,
int32_t physical_width,
int32_t physical_height,
int32_t /*subpixel*/,
const char *make,
const char *model,
@@ -1171,6 +1275,8 @@ static void output_geometry(void *data,
output->transform = transform;
output->make = std::string(make);
output->model = std::string(model);
output->width_mm = physical_width;
output->height_mm = physical_height;
}
static void output_mode(void *data,
@@ -1181,8 +1287,8 @@ static void output_mode(void *data,
int32_t /*refresh*/)
{
output_t *output = static_cast<output_t *>(data);
output->width = width;
output->height = height;
output->width_pxl = width;
output->height_pxl = height;
}
/**
@@ -1227,13 +1333,17 @@ static void global_add(void *data,
struct display_t *display = static_cast<struct display_t *>(data);
if (!strcmp(interface, wl_compositor_interface.name)) {
display->compositor = static_cast<wl_compositor *>(
wl_registry_bind(wl_registry, name, &wl_compositor_interface, 1));
wl_registry_bind(wl_registry, name, &wl_compositor_interface, 3));
}
else if (!strcmp(interface, xdg_wm_base_interface.name)) {
display->xdg_shell = static_cast<xdg_wm_base *>(
wl_registry_bind(wl_registry, name, &xdg_wm_base_interface, 1));
xdg_wm_base_add_listener(display->xdg_shell, &shell_listener, nullptr);
}
else if (!strcmp(interface, zxdg_decoration_manager_v1_interface.name)) {
display->xdg_decoration_manager = static_cast<zxdg_decoration_manager_v1 *>(
wl_registry_bind(wl_registry, name, &zxdg_decoration_manager_v1_interface, 1));
}
else if (!strcmp(interface, wl_output_interface.name)) {
output_t *output = new output_t;
output->scale = 1;
@@ -1335,16 +1445,6 @@ GHOST_SystemWayland::GHOST_SystemWayland() : GHOST_System(), d(new display_t)
wl_data_device_add_listener(input->data_device, &data_device_listener, input);
}
}
const char *theme = std::getenv("XCURSOR_THEME");
const char *size = std::getenv("XCURSOR_SIZE");
const int sizei = size ? std::stoi(size) : default_cursor_size;
d->cursor_theme = wl_cursor_theme_load(theme, sizei, d->shm);
if (!d->cursor_theme) {
display_destroy(d);
throw std::runtime_error("Wayland: unable to access cursor themes!");
}
}
GHOST_SystemWayland::~GHOST_SystemWayland()
@@ -1471,8 +1571,8 @@ void GHOST_SystemWayland::getMainDisplayDimensions(GHOST_TUns32 &width, GHOST_TU
{
if (getNumDisplays() > 0) {
/* We assume first output as main. */
width = uint32_t(d->outputs[0]->width);
height = uint32_t(d->outputs[0]->height);
width = uint32_t(d->outputs[0]->width_pxl) / d->outputs[0]->scale;
height = uint32_t(d->outputs[0]->height_pxl) / d->outputs[0]->scale;
}
}
@@ -1481,7 +1581,7 @@ void GHOST_SystemWayland::getAllDisplayDimensions(GHOST_TUns32 &width, GHOST_TUn
getMainDisplayDimensions(width, height);
}
GHOST_IContext *GHOST_SystemWayland::createOffscreenContext(GHOST_GLSettings glSettings)
GHOST_IContext *GHOST_SystemWayland::createOffscreenContext(GHOST_GLSettings /*glSettings*/)
{
/* Create new off-screen window. */
wl_surface *os_surface = wl_compositor_create_surface(compositor());
@@ -1530,6 +1630,11 @@ GHOST_IWindow *GHOST_SystemWayland::createWindow(const char *title,
const bool is_dialog,
const GHOST_IWindow *parentWindow)
{
/* globally store pointer to window manager */
if (!window_manager) {
window_manager = getWindowManager();
}
GHOST_WindowWayland *window = new GHOST_WindowWayland(
this,
title,
@@ -1574,6 +1679,21 @@ xdg_wm_base *GHOST_SystemWayland::shell()
return d->xdg_shell;
}
zxdg_decoration_manager_v1 *GHOST_SystemWayland::decoration_manager()
{
return d->xdg_decoration_manager;
}
const std::vector<output_t *> &GHOST_SystemWayland::outputs() const
{
return d->outputs;
}
wl_shm *GHOST_SystemWayland::shm() const
{
return d->shm;
}
void GHOST_SystemWayland::setSelection(const std::string &selection)
{
this->selection = selection;
@@ -1581,23 +1701,20 @@ void GHOST_SystemWayland::setSelection(const std::string &selection)
static void set_cursor_buffer(input_t *input, wl_buffer *buffer)
{
input->cursor.visible = (buffer != nullptr);
cursor_t *c = &input->cursor;
wl_surface_attach(input->cursor.surface, buffer, 0, 0);
wl_surface_commit(input->cursor.surface);
c->visible = (buffer != nullptr);
if (input->cursor.visible) {
wl_surface_damage(input->cursor.surface,
0,
0,
int32_t(input->cursor.image.width),
int32_t(input->cursor.image.height));
wl_pointer_set_cursor(input->pointer,
input->pointer_serial,
input->cursor.surface,
int32_t(input->cursor.image.hotspot_x),
int32_t(input->cursor.image.hotspot_y));
}
wl_surface_attach(c->surface, buffer, 0, 0);
wl_surface_damage(c->surface, 0, 0, int32_t(c->image.width), int32_t(c->image.height));
wl_pointer_set_cursor(input->pointer,
input->pointer_serial,
c->visible ? c->surface : nullptr,
int32_t(c->image.hotspot_x) / c->scale,
int32_t(c->image.hotspot_y) / c->scale);
wl_surface_commit(c->surface);
}
GHOST_TSuccess GHOST_SystemWayland::setCursorShape(GHOST_TStandardCursor shape)
@@ -1608,7 +1725,15 @@ GHOST_TSuccess GHOST_SystemWayland::setCursorShape(GHOST_TStandardCursor shape)
const std::string cursor_name = cursors.count(shape) ? cursors.at(shape) :
cursors.at(GHOST_kStandardCursorDefault);
wl_cursor *cursor = wl_cursor_theme_get_cursor(d->cursor_theme, cursor_name.c_str());
input_t *input = d->inputs[0];
cursor_t *c = &input->cursor;
if (!c->theme) {
/* The cursor surface hasn't entered an output yet. Initialize theme with scale 1. */
c->theme = wl_cursor_theme_load(c->theme_name.c_str(), c->size, d->inputs[0]->system->shm());
}
wl_cursor *cursor = wl_cursor_theme_get_cursor(c->theme, cursor_name.c_str());
if (!cursor) {
GHOST_PRINT("cursor '" << cursor_name << "' does not exist" << std::endl);
@@ -1620,11 +1745,11 @@ GHOST_TSuccess GHOST_SystemWayland::setCursorShape(GHOST_TStandardCursor shape)
if (!buffer) {
return GHOST_kFailure;
}
cursor_t *c = &d->inputs[0]->cursor;
c->buffer = buffer;
c->image = *image;
set_cursor_buffer(d->inputs[0], buffer);
set_cursor_buffer(input, buffer);
return GHOST_kSuccess;
}
@@ -1735,6 +1860,11 @@ GHOST_TSuccess GHOST_SystemWayland::setCursorVisibility(bool visible)
GHOST_TSuccess GHOST_SystemWayland::setCursorGrab(const GHOST_TGrabCursorMode mode,
wl_surface *surface)
{
/* ignore, if the required protocols are not supported */
if (!d->relative_pointer_manager || !d->pointer_constraints) {
return GHOST_kFailure;
}
if (d->inputs.empty()) {
return GHOST_kFailure;
}
@@ -1754,6 +1884,7 @@ GHOST_TSuccess GHOST_SystemWayland::setCursorGrab(const GHOST_TGrabCursorMode mo
break;
case GHOST_kGrabNormal:
break;
case GHOST_kGrabWrap:
input->relative_pointer = zwp_relative_pointer_manager_v1_get_relative_pointer(
d->relative_pointer_manager, input->pointer);

View File

@@ -26,6 +26,7 @@
#include "GHOST_WindowWayland.h"
#include <wayland-client.h>
#include <xdg-decoration-client-protocol.h>
#include <xdg-shell-client-protocol.h>
#include <string>
@@ -34,6 +35,16 @@ class GHOST_WindowWayland;
struct display_t;
struct output_t {
struct wl_output *output;
int32_t width_pxl, height_pxl; // dimensions in pixel
int32_t width_mm, height_mm; // dimensions in millimeter
int transform;
int scale;
std::string make;
std::string model;
};
class GHOST_SystemWayland : public GHOST_System {
public:
GHOST_SystemWayland();
@@ -84,6 +95,12 @@ class GHOST_SystemWayland : public GHOST_System {
xdg_wm_base *shell();
zxdg_decoration_manager_v1 *decoration_manager();
const std::vector<output_t *> &outputs() const;
wl_shm *shm() const;
void setSelection(const std::string &selection);
GHOST_TSuccess setCursorShape(GHOST_TStandardCursor shape);

View File

@@ -139,7 +139,6 @@ static void initRawInput()
#undef DEVICE_COUNT
}
typedef HRESULT(API *GHOST_WIN32_SetProcessDpiAwareness)(PROCESS_DPI_AWARENESS);
typedef BOOL(API *GHOST_WIN32_EnableNonClientDpiScaling)(HWND);
GHOST_SystemWin32::GHOST_SystemWin32()
@@ -1462,14 +1461,7 @@ LRESULT WINAPI GHOST_SystemWin32::s_wndProc(HWND hwnd, UINT msg, WPARAM wParam,
* since DefWindowProc propagates it up the parent chain
* until it finds a window that processes it.
*/
/* Get the window under the mouse and send event to its queue. */
POINT mouse_pos = {GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)};
HWND mouse_hwnd = ChildWindowFromPoint(HWND_DESKTOP, mouse_pos);
GHOST_WindowWin32 *mouse_window = (GHOST_WindowWin32 *)::GetWindowLongPtr(mouse_hwnd,
GWLP_USERDATA);
processWheelEvent(mouse_window ? mouse_window : window, wParam, lParam);
processWheelEvent(window, wParam, lParam);
eventHandled = true;
#ifdef BROKEN_PEEK_TOUCHPAD
PostMessage(hwnd, WM_USER, 0, 0);

View File

@@ -0,0 +1,130 @@
/*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
/** \file
* \ingroup GHOST
*/
#pragma once
#include <dbus/dbus.h>
#include <string>
static DBusMessage *get_setting_sync(DBusConnection *const connection,
const char *key,
const char *value)
{
DBusError error;
dbus_bool_t success;
DBusMessage *message;
DBusMessage *reply;
dbus_error_init(&error);
message = dbus_message_new_method_call("org.freedesktop.portal.Desktop",
"/org/freedesktop/portal/desktop",
"org.freedesktop.portal.Settings",
"Read");
success = dbus_message_append_args(
message, DBUS_TYPE_STRING, &key, DBUS_TYPE_STRING, &value, DBUS_TYPE_INVALID);
if (!success) {
return NULL;
}
reply = dbus_connection_send_with_reply_and_block(
connection, message, DBUS_TIMEOUT_USE_DEFAULT, &error);
dbus_message_unref(message);
if (dbus_error_is_set(&error)) {
return NULL;
}
return reply;
}
static bool parse_type(DBusMessage *const reply, const int type, void *value)
{
DBusMessageIter iter[3];
dbus_message_iter_init(reply, &iter[0]);
if (dbus_message_iter_get_arg_type(&iter[0]) != DBUS_TYPE_VARIANT) {
return false;
}
dbus_message_iter_recurse(&iter[0], &iter[1]);
if (dbus_message_iter_get_arg_type(&iter[1]) != DBUS_TYPE_VARIANT) {
return false;
}
dbus_message_iter_recurse(&iter[1], &iter[2]);
if (dbus_message_iter_get_arg_type(&iter[2]) != type) {
return false;
}
dbus_message_iter_get_basic(&iter[2], value);
return true;
}
static bool get_cursor_settings(std::string &theme, int &size)
{
static const char name[] = "org.gnome.desktop.interface";
static const char key_theme[] = "cursor-theme";
static const char key_size[] = "cursor-size";
DBusError error;
DBusConnection *connection;
DBusMessage *reply;
const char *value_theme = NULL;
dbus_error_init(&error);
connection = dbus_bus_get(DBUS_BUS_SESSION, &error);
if (dbus_error_is_set(&error)) {
return false;
}
reply = get_setting_sync(connection, name, key_theme);
if (!reply) {
return false;
}
if (!parse_type(reply, DBUS_TYPE_STRING, &value_theme)) {
dbus_message_unref(reply);
return false;
}
theme = std::string(value_theme);
dbus_message_unref(reply);
reply = get_setting_sync(connection, name, key_size);
if (!reply) {
return false;
}
if (!parse_type(reply, DBUS_TYPE_INT32, &size)) {
dbus_message_unref(reply);
return false;
}
dbus_message_unref(reply);
return true;
}
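The new header above asks the desktop portal's `org.freedesktop.portal.Settings` interface for the GNOME `cursor-theme` and `cursor-size` keys. Purely for orientation, the same two keys can be inspected outside of D-Bus with the `gsettings` CLI (this sketch assumes a GNOME session where that tool is available; it is not what the C code does):

```python
# Illustration only: read the same GNOME settings that get_cursor_settings()
# above retrieves via the org.freedesktop.portal.Settings D-Bus interface.
import subprocess

def gnome_cursor_settings():
    schema = "org.gnome.desktop.interface"
    theme = subprocess.check_output(
        ["gsettings", "get", schema, "cursor-theme"], text=True).strip().strip("'")
    size = int(subprocess.check_output(
        ["gsettings", "get", schema, "cursor-size"], text=True).strip())
    return theme, size
```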

View File

@@ -29,11 +29,19 @@
#include <wayland-egl.h>
static constexpr size_t base_dpi = 96;
struct window_t {
GHOST_WindowWayland *w;
wl_surface *surface;
// outputs on which the window is currently shown
std::unordered_set<const output_t *> outputs;
GHOST_TUns16 dpi = 0;
int scale = 1;
struct xdg_surface *xdg_surface;
struct xdg_toplevel *xdg_toplevel;
struct zxdg_toplevel_decoration_v1 *xdg_toplevel_decoration = nullptr;
enum zxdg_toplevel_decoration_v1_mode decoration_mode;
wl_egl_window *egl_window;
int32_t pending_width, pending_height;
bool is_maximised;
@@ -93,17 +101,30 @@ static const xdg_toplevel_listener toplevel_listener = {
toplevel_close,
};
static void toplevel_decoration_configure(
void *data,
struct zxdg_toplevel_decoration_v1 * /*zxdg_toplevel_decoration_v1*/,
uint32_t mode)
{
static_cast<window_t *>(data)->decoration_mode = zxdg_toplevel_decoration_v1_mode(mode);
}
static const zxdg_toplevel_decoration_v1_listener toplevel_decoration_v1_listener = {
toplevel_decoration_configure,
};
static void surface_configure(void *data, xdg_surface *xdg_surface, uint32_t serial)
{
window_t *win = static_cast<window_t *>(data);
int w, h;
wl_egl_window_get_attached_size(win->egl_window, &w, &h);
if (win->pending_width != 0 && win->pending_height != 0 && win->pending_width != w &&
win->pending_height != h) {
win->width = win->pending_width;
win->height = win->pending_height;
wl_egl_window_resize(win->egl_window, win->pending_width, win->pending_height, 0, 0);
if (win->xdg_surface != xdg_surface) {
return;
}
if (win->pending_width != 0 && win->pending_height != 0) {
win->width = win->scale * win->pending_width;
win->height = win->scale * win->pending_height;
wl_egl_window_resize(win->egl_window, win->width, win->height, 0, 0);
win->pending_width = 0;
win->pending_height = 0;
win->w->notify_size();
@@ -123,6 +144,52 @@ static const xdg_surface_listener surface_listener = {
surface_configure,
};
static bool update_scale(GHOST_WindowWayland *window)
{
int scale = 0;
for (const output_t *output : window->outputs_active()) {
if (output->scale > scale)
scale = output->scale;
}
if (scale > 0 && window->scale() != scale) {
window->scale() = scale;
// using the real DPI would cause wrong scaling of the UI,
// so report a multiple of the default DPI as a workaround
window->dpi() = scale * base_dpi;
wl_surface_set_buffer_scale(window->surface(), scale);
return true;
}
return false;
}
static void surface_enter(void *data, struct wl_surface * /*wl_surface*/, struct wl_output *output)
{
GHOST_WindowWayland *w = static_cast<GHOST_WindowWayland *>(data);
for (const output_t *reg_output : w->outputs()) {
if (reg_output->output == output) {
w->outputs_active().insert(reg_output);
}
}
update_scale(w);
}
static void surface_leave(void *data, struct wl_surface * /*wl_surface*/, struct wl_output *output)
{
GHOST_WindowWayland *w = static_cast<GHOST_WindowWayland *>(data);
for (const output_t *reg_output : w->outputs()) {
if (reg_output->output == output) {
w->outputs_active().erase(reg_output);
}
}
update_scale(w);
}
struct wl_surface_listener wl_surface_listener = {
surface_enter,
surface_leave,
};
/** \} */
/* -------------------------------------------------------------------- */
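The `update_scale` helper above picks the highest scale factor among the outputs the window currently overlaps and derives the DPI hint from it rather than from the physical DPI. A simplified sketch of that policy (Python for illustration only; the actual implementation is the C++ above):

```python
BASE_DPI = 96  # matches the base_dpi constant in the window code above

def effective_scale_and_dpi(output_scales):
    # Simplified: the real code keeps the previous scale when no output is known.
    scale = max(output_scales, default=1)
    return scale, scale * BASE_DPI
```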
@@ -161,17 +228,28 @@ GHOST_WindowWayland::GHOST_WindowWayland(GHOST_SystemWayland *system,
/* Window surfaces. */
w->surface = wl_compositor_create_surface(m_system->compositor());
wl_surface_add_listener(w->surface, &wl_surface_listener, this);
w->egl_window = wl_egl_window_create(w->surface, int(width), int(height));
w->xdg_surface = xdg_wm_base_get_xdg_surface(m_system->shell(), w->surface);
w->xdg_toplevel = xdg_surface_get_toplevel(w->xdg_surface);
if (m_system->decoration_manager()) {
w->xdg_toplevel_decoration = zxdg_decoration_manager_v1_get_toplevel_decoration(
m_system->decoration_manager(), w->xdg_toplevel);
zxdg_toplevel_decoration_v1_add_listener(
w->xdg_toplevel_decoration, &toplevel_decoration_v1_listener, w);
zxdg_toplevel_decoration_v1_set_mode(w->xdg_toplevel_decoration,
ZXDG_TOPLEVEL_DECORATION_V1_MODE_SERVER_SIDE);
}
wl_surface_set_user_data(w->surface, this);
xdg_surface_add_listener(w->xdg_surface, &surface_listener, w);
xdg_toplevel_add_listener(w->xdg_toplevel, &toplevel_listener, w);
if (parentWindow) {
if (parentWindow && is_dialog) {
xdg_toplevel_set_parent(
w->xdg_toplevel, dynamic_cast<const GHOST_WindowWayland *>(parentWindow)->w->xdg_toplevel);
}
@@ -192,6 +270,9 @@ GHOST_WindowWayland::GHOST_WindowWayland(GHOST_SystemWayland *system,
if (setDrawingContextType(type) == GHOST_kFailure) {
GHOST_PRINT("Failed to create EGL context" << std::endl);
}
/* set swap interval to 0 to prevent blocking */
setSwapInterval(0);
}
GHOST_TSuccess GHOST_WindowWayland::close()
@@ -226,6 +307,31 @@ GHOST_TSuccess GHOST_WindowWayland::notify_size()
new GHOST_Event(m_system->getMilliSeconds(), GHOST_kEventWindowSize, this));
}
wl_surface *GHOST_WindowWayland::surface() const
{
return w->surface;
}
const std::vector<output_t *> &GHOST_WindowWayland::outputs() const
{
return m_system->outputs();
}
std::unordered_set<const output_t *> &GHOST_WindowWayland::outputs_active()
{
return w->outputs;
}
uint16_t &GHOST_WindowWayland::dpi()
{
return w->dpi;
}
int &GHOST_WindowWayland::scale()
{
return w->scale;
}
GHOST_TSuccess GHOST_WindowWayland::setWindowCursorGrab(GHOST_TGrabCursorMode mode)
{
return m_system->setCursorGrab(mode, w->surface);
@@ -310,6 +416,9 @@ GHOST_WindowWayland::~GHOST_WindowWayland()
releaseNativeHandles();
wl_egl_window_destroy(w->egl_window);
if (w->xdg_toplevel_decoration) {
zxdg_toplevel_decoration_v1_destroy(w->xdg_toplevel_decoration);
}
xdg_toplevel_destroy(w->xdg_toplevel);
xdg_surface_destroy(w->xdg_surface);
wl_surface_destroy(w->surface);
@@ -317,6 +426,11 @@ GHOST_WindowWayland::~GHOST_WindowWayland()
delete w;
}
GHOST_TUns16 GHOST_WindowWayland::getDPIHint()
{
return w->dpi;
}
GHOST_TSuccess GHOST_WindowWayland::setWindowCursorVisibility(bool visible)
{
return m_system->setCursorVisibility(visible);

View File

@@ -24,9 +24,14 @@
#include "GHOST_Window.h"
#include <unordered_set>
#include <vector>
class GHOST_SystemWayland;
struct window_t;
struct wl_surface;
struct output_t;
class GHOST_WindowWayland : public GHOST_Window {
public:
@@ -47,6 +52,8 @@ class GHOST_WindowWayland : public GHOST_Window {
~GHOST_WindowWayland() override;
GHOST_TUns16 getDPIHint() override;
GHOST_TSuccess close();
GHOST_TSuccess activate();
@@ -55,6 +62,16 @@ class GHOST_WindowWayland : public GHOST_Window {
GHOST_TSuccess notify_size();
wl_surface *surface() const;
const std::vector<output_t *> &outputs() const;
std::unordered_set<const output_t *> &outputs_active();
uint16_t &dpi();
int &scale();
protected:
GHOST_TSuccess setWindowCursorGrab(GHOST_TGrabCursorMode mode) override;

View File

@@ -55,9 +55,6 @@ typedef BOOL(API *GHOST_WIN32_WTOverlap)(HCTX, BOOL);
// typedefs for user32 functions to allow dynamic loading of Windows 10 DPI scaling functions
typedef UINT(API *GHOST_WIN32_GetDpiForWindow)(HWND);
#ifndef USER_DEFAULT_SCREEN_DPI
# define USER_DEFAULT_SCREEN_DPI 96
#endif // USER_DEFAULT_SCREEN_DPI
struct GHOST_PointerInfoWin32 {
GHOST_TInt32 pointerId;

View File

@@ -77,7 +77,7 @@ class scoped_array {
void reset(T* new_array) {
if (sizeof(T)) {
delete array_;
delete[] array_;
}
array_ = new_array;
}

View File

@@ -112,7 +112,7 @@ void MeshTopology::getEdgeVertexIndices(int edge_index, int *v1, int *v2) const
if (edge_index >= edges_.size()) {
*v1 = -1;
*v1 = -1;
*v2 = -1;
return;
}

View File

@@ -31,9 +31,13 @@ typedef struct plConvexHull__ {
plConvexHull plConvexHullCompute(float (*coords)[3], int count);
void plConvexHullDelete(plConvexHull hull);
int plConvexHullNumVertices(plConvexHull hull);
int plConvexHullNumLoops(plConvexHull hull);
int plConvexHullNumFaces(plConvexHull hull);
void plConvexHullGetVertex(plConvexHull hull, int n, float coords[3], int *original_index);
void plConvexHullGetLoop(plConvexHull hull, int n, int *v_from, int *v_to);
int plConvexHullGetReversedLoopIndex(plConvexHull hull, int n);
int plConvexHullGetFaceSize(plConvexHull hull, int n);
void plConvexHullGetFaceLoops(plConvexHull hull, int n, int *loops);
void plConvexHullGetFaceVertices(plConvexHull hull, int n, int *vertices);
#ifdef __cplusplus

View File

@@ -39,6 +39,12 @@ int plConvexHullNumVertices(plConvexHull hull)
return computer->vertices.size();
}
int plConvexHullNumLoops(plConvexHull hull)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
return computer->edges.size();
}
int plConvexHullNumFaces(plConvexHull hull)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
@@ -55,6 +61,19 @@ void plConvexHullGetVertex(plConvexHull hull, int n, float coords[3], int *origi
(*original_index) = computer->original_vertex_index[n];
}
void plConvexHullGetLoop(plConvexHull hull, int n, int *v_from, int *v_to)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
(*v_from) = computer->edges[n].getSourceVertex();
(*v_to) = computer->edges[n].getTargetVertex();
}
int plConvexHullGetReversedLoopIndex(plConvexHull hull, int n)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
return computer->edges[n].getReverseEdge() - &computer->edges[0];
}
int plConvexHullGetFaceSize(plConvexHull hull, int n)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
@@ -69,6 +88,19 @@ int plConvexHullGetFaceSize(plConvexHull hull, int n)
return count;
}
void plConvexHullGetFaceLoops(plConvexHull hull, int n, int *loops)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));
const btConvexHullComputer::Edge *e_orig, *e;
int count;
for (e_orig = &computer->edges[computer->faces[n]], e = e_orig, count = 0;
count == 0 || e != e_orig;
e = e->getNextEdgeOfFace(), count++) {
loops[count] = e - &computer->edges[0];
}
}
void plConvexHullGetFaceVertices(plConvexHull hull, int n, int *vertices)
{
btConvexHullComputer *computer(reinterpret_cast<btConvexHullComputer *>(hull));

5
release/darwin/README.md Normal file
View File

@@ -0,0 +1,5 @@
Buildbot Configuration
======================
Files used by Buildbot's `package-code-binaries` step for the darwin platform.

View File

@@ -1,55 +0,0 @@
macOS app bundling guide
========================
Install Code Signing Certificate
--------------------------------
* Go to https://developer.apple.com/account/resources/certificates/list
* Download the Developer ID Application certificate.
* Double click the file and add to key chain (default options).
* Delete the file from the Downloads folder.
* You will also need to install a .p12 public/private key file for the
certificate. This is only available for the owner of the Blender account,
or can be exported and copied from another system that already has code
signing set up.
Find the codesigning identity by running:
$ security find-identity -v -p codesigning
"Developer ID Application: Stichting Blender Foundation" is the identity needed.
The long code at the start of the line is used as <identity> below.
Setup Apple ID
--------------
* The Apple ID must have two step verification enabled.
* Create an app specific password for the code signing app (label can be anything):
https://support.apple.com/en-us/HT204397
* Add the app specific password to keychain:
$ security add-generic-password -a <apple-id> -w <app-specific-password> -s altool-password
When running the bundle script, there will be a popup. To avoid that either:
* Click Always Allow in the popup
* In the Keychain Access app, change the Access Control settings on altool-password
Bundle
------
Then the bundle is created as follows:
$ ./bundle.sh --source <sourcedir> --dmg <dmg> --bundle-id <bundleid> --username <apple-id> --password "@keychain:altool-password" --codesign <identity>
<sourcedir> directory where built Blender.app is
<dmg> location and name of the final disk image
<bundleid> id on notarization, for example org.blenderfoundation.blender.release
<apple-id> your appleid email
<identity> codesigning identity
When specifying only --sourcedir and --dmg, the build will not be signed.
Example :
$ ./bundle.sh --source /data/build/bin --dmg /data/Blender-2.8-alpha-macOS-10.11.dmg --bundle-id org.blenderfoundation.blender.release --username "foo@mac.com" --password "@keychain:altool-password" --codesign AE825E26F12D08B692F360133210AF46F4CF7B97

View File

@@ -1,18 +0,0 @@
tell application "Finder"
tell disk "Blender"
open
set current view of container window to icon view
set toolbar visible of container window to false
set statusbar visible of container window to false
set the bounds of container window to {100, 100, 640, 472}
set theViewOptions to icon view options of container window
set arrangement of theViewOptions to not arranged
set icon size of theViewOptions to 128
set background picture of theViewOptions to file ".background:background.tif"
set position of item " " of container window to {400, 190}
set position of item "blender.app" of container window to {135, 190}
update without registering applications
delay 5
close
end tell
end tell

View File

@@ -1,212 +0,0 @@
#!/usr/bin/env bash
#
# Script to create a macOS dmg file for Blender builds, including code
# signing and notarization for releases.
# Check that we have all needed tools.
for i in osascript git codesign hdiutil xcrun ; do
if [ ! -x "$(which ${i})" ]; then
echo "Unable to execute command $i, macOS broken?"
exit 1
fi
done
# Defaults settings.
_script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
_volume_name="Blender"
_tmp_dir="$(mktemp -d)"
_tmp_dmg="/tmp/blender-tmp.dmg"
_background_image="${_script_dir}/background.tif"
_mount_dir="/Volumes/${_volume_name}"
_entitlements="${_script_dir}/entitlements.plist"
# Handle arguments.
while [[ $# -gt 0 ]]; do
key=$1
case $key in
-s|--source)
SRC_DIR="$2"
shift
shift
;;
-d|--dmg)
DEST_DMG="$2"
shift
shift
;;
-b|--bundle-id)
N_BUNDLE_ID="$2"
shift
shift
;;
-u|--username)
N_USERNAME="$2"
shift
shift
;;
-p|--password)
N_PASSWORD="$2"
shift
shift
;;
-c|--codesign)
C_CERT="$2"
shift
shift
;;
--background-image)
_background_image="$2"
shift
shift
;;
-h|--help)
echo "Usage:"
echo " $(basename "$0") --source DIR --dmg IMAGENAME "
echo " optional arguments:"
echo " --codesign <certname>"
echo " --username <username>"
echo " --password <password>"
echo " --bundle-id <bundleid>"
echo " Check https://developer.apple.com/documentation/security/notarizing_your_app_before_distribution/customizing_the_notarization_workflow "
exit 1
;;
esac
done
if [ ! -d "${SRC_DIR}/Blender.app" ]; then
echo "use --source parameter to set source directory where Blender.app can be found"
exit 1
fi
if [ -z "${DEST_DMG}" ]; then
echo "use --dmg parameter to set output dmg name"
exit 1
fi
# Destroy destination dmg if there is any.
test -f "${DEST_DMG}" && rm "${DEST_DMG}"
if [ -d "${_mount_dir}" ]; then
echo -n "Ejecting existing blender volume.."
DEV_FILE=$(mount | grep "${_mount_dir}" | awk '{ print $1 }')
diskutil eject "${DEV_FILE}" || exit 1
echo
fi
# Copy dmg contents.
echo -n "Copying Blender.app..."
cp -r "${SRC_DIR}/Blender.app" "${_tmp_dir}/" || exit 1
echo
# Create the disk image.
_directory_size=$(du -sh ${_tmp_dir} | awk -F'[^0-9]*' '$0=$1')
_image_size=$(echo "${_directory_size}" + 400 | bc) # extra 400 needed for codesign to work (why on earth?)
echo
echo -n "Creating disk image of size ${_image_size}M.."
test -f "${_tmp_dmg}" && rm "${_tmp_dmg}"
hdiutil create -size "${_image_size}m" -fs HFS+ -srcfolder "${_tmp_dir}" -volname "${_volume_name}" -format UDRW "${_tmp_dmg}" -mode 755
echo "Mounting readwrite image..."
hdiutil attach -readwrite -noverify -noautoopen "${_tmp_dmg}"
echo "Setting background picture.."
if ! test -z "${_background_image}"; then
echo "Copying background image ..."
test -d "${_mount_dir}/.background" || mkdir "${_mount_dir}/.background"
_background_image_NAME=$(basename "${_background_image}")
cp "${_background_image}" "${_mount_dir}/.background/${_background_image_NAME}"
fi
echo "Creating link to /Applications ..."
ln -s /Applications "${_mount_dir}/Applications"
echo "Renaming Applications to empty string."
mv ${_mount_dir}/Applications "${_mount_dir}/ "
echo "Running applescript to set folder looks ..."
cat "${_script_dir}/blender.applescript" | osascript
echo "Waiting after applescript ..."
sleep 5
if [ ! -z "${C_CERT}" ]; then
# Codesigning requires all libs and binaries to be signed separately.
echo -n "Codesigning Python"
for f in $(find "${_mount_dir}/Blender.app/Contents/Resources" -name "python*"); do
if [ -x ${f} ] && [ ! -d ${f} ]; then
codesign --remove-signature "${f}"
codesign --timestamp --options runtime --entitlements="${_entitlements}" --sign "${C_CERT}" "${f}"
fi
done
echo ; echo -n "Codesigning .dylib and .so libraries"
for f in $(find "${_mount_dir}/Blender.app" -name "*.dylib" -o -name "*.so"); do
codesign --remove-signature "${f}"
codesign --timestamp --options runtime --entitlements="${_entitlements}" --sign "${C_CERT}" "${f}"
done
echo ; echo -n "Codesigning Blender.app"
codesign --remove-signature "${_mount_dir}/Blender.app"
codesign --timestamp --options runtime --entitlements="${_entitlements}" --sign "${C_CERT}" "${_mount_dir}/Blender.app"
echo
else
echo "No codesigning cert given, skipping..."
fi
# Need to eject dev files to remove /dev files and free .dmg for converting
echo "Unmounting rw disk image ..."
DEV_FILE=$(mount | grep "${_mount_dir}" | awk '{ print $1 }')
diskutil eject "${DEV_FILE}"
sleep 3
echo "Compressing disk image ..."
hdiutil convert "${_tmp_dmg}" -format UDZO -o "${DEST_DMG}"
# Codesign the dmg
if [ ! -z "${C_CERT}" ]; then
echo -n "Codesigning dmg..."
codesign --timestamp --force --sign "${C_CERT}" "${DEST_DMG}"
echo
fi
# Cleanup
rm -rf "${_tmp_dir}"
rm "${_tmp_dmg}"
# Notarize
if [ ! -z "${N_USERNAME}" ] && [ ! -z "${N_PASSWORD}" ] && [ ! -z "${N_BUNDLE_ID}" ]; then
# Send to Apple
echo "Sending ${DEST_DMG} for notarization..."
_tmpout=$(mktemp)
echo xcrun altool --notarize-app --verbose -f "${DEST_DMG}" --primary-bundle-id "${N_BUNDLE_ID}" --username "${N_USERNAME}" --password "${N_PASSWORD}"
xcrun altool --notarize-app --verbose -f "${DEST_DMG}" --primary-bundle-id "${N_BUNDLE_ID}" --username "${N_USERNAME}" --password "${N_PASSWORD}" >${_tmpout} 2>&1
# Parse request uuid
_requuid=$(cat "${_tmpout}" | grep "RequestUUID" | awk '{ print $3 }')
echo "RequestUUID: ${_requuid}"
if [ ! -z "${_requuid}" ]; then
# Wait for Apple to confirm notarization is complete
echo "Waiting for notarization to be complete.."
for c in {20..0};do
sleep 600
xcrun altool --notarization-info "${_requuid}" --username "${N_USERNAME}" --password "${N_PASSWORD}" >${_tmpout} 2>&1
_status=$(cat "${_tmpout}" | grep "Status:" | awk '{ print $2 }')
if [ "${_status}" == "invalid" ]; then
echo "Got invalid notarization!"
break;
fi
if [ "${_status}" == "success" ]; then
echo -n "Notarization successful! Stapling..."
xcrun stapler staple -v "${DEST_DMG}"
break;
fi
echo "Notarization in progress, waiting..."
done
else
cat ${_tmpout}
echo "Error getting RequestUUID, notarization unsuccessful"
fi
else
echo "No notarization credentials supplied, skipping..."
fi
echo "..done. You should have ${DEST_DMG} ready to upload"

View File

@@ -40,6 +40,25 @@
</screenshot>
</screenshots>
<releases>
<release version="2.93" date="2021-06-02">
<description>
<p>New features:</p>
<ul>
<li>Mesh primitive nodes</li>
<li>Line Art</li>
<li>EEVEE Realistic depth of field and volumetrics</li>
<li>Spreadsheet editor</li>
</ul>
<p>Enhancements:</p>
<ul>
<li>Geometry Nodes: 22 new nodes and improved attribute search</li>
<li>Mask loops, textures and patterns for sculpting</li>
<li>Grease Pencil interpolate refactored, plus SVG and PDF support</li>
<li>Persistent Data rendering settings for Cycles</li>
<li>Video Sequencer Editor auto-proxy system</li>
</ul>
</description>
</release>
<release version="2.92" date="2021-02-25">
<description>
<p>New features:</p>

View File

@@ -0,0 +1,17 @@
Snap Configuration
===================
Files used by Buildbot's `package-code-store-snap` and `deliver-code-store-snap` steps.
Build pipeline snap tracks and channels
```
<track>/stable
- Latest stable release for the specified track
<track>/candidate
- Test builds for the upcoming stable release - *not used for now*
<track>/beta
- Nightly automated builds provided by a release branch
<track>/edge/<branch>
- Nightly or on-demand builds - will also make use of the branch name
```

View File

@@ -1,38 +0,0 @@
Snap Package Instructions
=========================
This folder contains the scripts for creating and uploading the snap on:
https://snapcraft.io/blender
Setup
-----
This has only been tested to work on Ubuntu.
# Install required packages
sudo apt install snapd snapcraft
Steps
-----
# Build the snap file
python3 bundle.py --version 2.XX --url https://download.blender.org/release/Blender2.XX/blender-2.XX-x86_64.tar.bz2
# Install snap to test
# --dangerous is needed since the snap has not been signed yet
# --classic is required for installing Blender in general
sudo snap install --dangerous --classic blender_2.XX_amd64.snap
# Upload
snapcraft push --release=stable blender_2.XX_amd64.snap
Release Values
--------------
stable: final release
candidate: release candidates

View File

@@ -10,12 +10,7 @@ description: |
scientists, students, VFX experts, animators, game artists, modders, and
the list goes on.
The standard snap channels are used in the following way:
stable - Latest stable release.
candidate - Test builds for the upcoming stable release.
icon: ../icons/scalable/apps/blender.svg
icon: @ICON_PATH@
passthrough:
license: GPL-3.0
@@ -27,13 +22,14 @@ apps:
command: ./blender-wrapper
desktop: ./blender.desktop
base: core18
version: '@VERSION@'
grade: @GRADE@
parts:
blender:
plugin: dump
source: @URL@
source: @PACKAGE_PATH@
build-attributes: [keep-execstack, no-patchelf]
override-build: |
snapcraftctl build
@@ -47,7 +43,7 @@ parts:
- libxrender1
- libxxf86vm1
wrapper:
plugin: copy
plugin: dump
source: .
files:
blender-wrapper: blender-wrapper
stage:
- ./blender-wrapper

View File

@@ -1,21 +0,0 @@
#!/usr/bin/env python3
import argparse
import os
import pathlib
import subprocess
parser = argparse.ArgumentParser()
parser.add_argument("--version", required=True)
parser.add_argument("--url", required=True)
parser.add_argument("--grade", default="stable", choices=["stable", "devel"])
args = parser.parse_args()
yaml_text = pathlib.Path("snapcraft.yaml.in").read_text()
yaml_text = yaml_text.replace("@VERSION@", args.version)
yaml_text = yaml_text.replace("@URL@", args.url)
yaml_text = yaml_text.replace("@GRADE@", args.grade)
pathlib.Path("snapcraft.yaml").write_text(yaml_text)
subprocess.call(["snapcraft", "clean"])
subprocess.call(["snapcraft", "snap"])

View File

@@ -49,16 +49,16 @@ def _initialize():
def paths():
# RELEASE SCRIPTS: official scripts distributed in Blender releases
addon_paths = _bpy.utils.script_paths("addons")
addon_paths = _bpy.utils.script_paths(subdir="addons")
# CONTRIB SCRIPTS: good for testing but not official scripts yet
# if folder addons_contrib/ exists, scripts in there will be loaded too
addon_paths += _bpy.utils.script_paths("addons_contrib")
addon_paths += _bpy.utils.script_paths(subdir="addons_contrib")
return addon_paths
def modules_refresh(module_cache=addons_fake_modules):
def modules_refresh(*, module_cache=addons_fake_modules):
global error_encoding
import os
@@ -203,9 +203,9 @@ def modules_refresh(module_cache=addons_fake_modules):
del modules_stale
def modules(module_cache=addons_fake_modules, *, refresh=True):
def modules(*, module_cache=addons_fake_modules, refresh=True):
if refresh or ((module_cache is addons_fake_modules) and modules._is_first):
modules_refresh(module_cache)
modules_refresh(module_cache=module_cache)
modules._is_first = False
mod_list = list(module_cache.values())
@@ -512,7 +512,7 @@ def _blender_manual_url_prefix():
return "https://docs.blender.org/manual/en/" + manual_version
def module_bl_info(mod, info_basis=None):
def module_bl_info(mod, *, info_basis=None):
if info_basis is None:
info_basis = {
"name": "",

View File

@@ -134,7 +134,7 @@ def _disable(template_id, *, handle_error=None):
print("\tapp_template_utils.disable", template_id)
def import_from_path(path, ignore_not_found=False):
def import_from_path(path, *, ignore_not_found=False):
import os
from importlib import import_module
base_module, template_id = path.rsplit(os.sep, 2)[-2:]
@@ -148,9 +148,9 @@ def import_from_path(path, ignore_not_found=False):
raise ex
def import_from_id(template_id, ignore_not_found=False):
def import_from_id(template_id, *, ignore_not_found=False):
import os
path = next(iter(_bpy.utils.app_template_paths(template_id)), None)
path = next(iter(_bpy.utils.app_template_paths(path=template_id)), None)
if path is None:
if ignore_not_found:
return None
@@ -163,7 +163,7 @@ def import_from_id(template_id, ignore_not_found=False):
return import_from_path(path, ignore_not_found=ignore_not_found)
def activate(template_id=None):
def activate(*, template_id=None):
template_id_prev = _app_template["id"]
# not needed but may as well avoids redundant
@@ -190,4 +190,4 @@ def reset(*, reload_scripts=False):
# TODO reload_scripts
activate(template_id)
activate(template_id=template_id)
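Most of the Python hunks in this diff follow the same pattern: optional parameters are made keyword-only by inserting a bare `*` into the signature, and every caller is updated to pass those arguments by name. A minimal sketch of the idea, using a trimmed-down, made-up function rather than any real Blender API:
```
# Before: optional arguments can be passed positionally,
# which is hard to read at the call site.
def script_paths_old(subdir=None, check_all=False):
    return subdir, check_all

print(script_paths_old("addons", True))  # works, but opaque

# After: the bare "*" makes everything that follows keyword-only.
def script_paths_new(*, subdir=None, check_all=False):
    return subdir, check_all

print(script_paths_new(subdir="addons", check_all=True))
# script_paths_new("addons") would now raise TypeError.
```
The payoff is that call sites such as `script_paths(subdir="addons")` or `activate(template_id=template_id)` document themselves, and later reordering or inserting parameters cannot silently break positional callers.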

View File

@@ -1572,7 +1572,7 @@ class I18n:
if not os.path.isfile(dst):
print("WARNING: trying to write as python code into {}, which is not a file! Aborting.".format(dst))
return
prev, txt, nxt, has_trans = self._parser_check_file(dst)
prev, txt, nxt, _has_trans = self._parser_check_file(dst)
if prev is None and nxt is None:
print("WARNING: Looks like given python file {} has no auto-generated translations yet, will be added "
"at the end of the file, you can move that section later if needed...".format(dst))

View File

@@ -71,7 +71,7 @@ def rtl_process_po(args, settings):
po.write(kind="PO", dest=args.dst)
def language_menu(args, settings):
def language_menu(_args, settings):
# 'DEFAULT' and en_US are always valid, fully-translated "languages"!
stats = {"DEFAULT": 1.0, "en_US": 1.0}

View File

@@ -84,10 +84,10 @@ def protect_format_seq(msg):
# LRM = "\u200E"
# RLM = "\u200F"
LRE = "\u202A"
RLE = "\u202B"
# RLE = "\u202B"
PDF = "\u202C"
LRO = "\u202D"
RLO = "\u202E"
# RLO = "\u202E"
# uctrl = {LRE, RLE, PDF, LRO, RLO}
# Most likely incomplete, but seems to cover current needs.
format_codes = set("tslfd")

View File

@@ -26,7 +26,7 @@ __all__ = (
)
def generate(context, space_type, use_fallback_keys=True, use_reset=True):
def generate(context, space_type, *, use_fallback_keys=True, use_reset=True):
"""
Keymap for popup toolbar, currently generated each time.
"""

View File

@@ -17,7 +17,7 @@
# ##### END GPL LICENSE BLOCK #####
def keyconfig_data_oskey_from_ctrl(keyconfig_data_src, filter_fn=None):
def keyconfig_data_oskey_from_ctrl(keyconfig_data_src, *, filter_fn=None):
keyconfig_data_dst = []
for km_name, km_parms, km_items_data_src in keyconfig_data_src:
km_items_data_dst = km_items_data_src.copy()
@@ -61,4 +61,4 @@ def keyconfig_data_oskey_from_ctrl_for_macos(keyconfig_data_src):
return False
return True
return keyconfig_data_oskey_from_ctrl(keyconfig_data_src, filter_fn)
return keyconfig_data_oskey_from_ctrl(keyconfig_data_src, filter_fn=filter_fn)

View File

@@ -19,7 +19,7 @@
# <pep8-80 compliant>
def url_prefill_from_blender(addon_info=None):
def url_prefill_from_blender(*, addon_info=None):
import bpy
import gpu
import struct

View File

@@ -56,7 +56,7 @@ def _getattr_bytes(var, attr):
return var.path_resolve(attr, False).as_bytes()
def abspath(path, start=None, library=None):
def abspath(path, *, start=None, library=None):
"""
Returns the absolute path relative to the current blend file
using the "//" prefix.
@@ -92,7 +92,7 @@ def abspath(path, start=None, library=None):
return path
def relpath(path, start=None):
def relpath(path, *, start=None):
"""
Returns the path relative to the current blend file using the "//" prefix.
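As a usage sketch (to be run inside Blender with a saved .blend file; the paths are placeholders), `start` and `library` are now passed by keyword:
```
import bpy

# "//" means: relative to the directory of the current .blend file.
abs_path = bpy.path.abspath("//textures/wood.png")

# Resolve against an explicit base directory instead (placeholder path).
abs_path_alt = bpy.path.abspath("//textures/wood.png", start="/tmp/project")

# Convert an absolute path back to the "//"-prefixed form.
rel_path = bpy.path.relpath(abs_path)
```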
@@ -134,7 +134,7 @@ def is_subdir(path, directory):
return False
def clean_name(name, replace="_"):
def clean_name(name, *, replace="_"):
"""
Returns a name with characters replaced that
may cause problems under various circumstances,
@@ -198,6 +198,7 @@ def _clean_utf8(name):
_display_name_literals = {
":": "_colon_",
"+": "_plus_",
"/": "_slash_",
}
@@ -310,7 +311,7 @@ def resolve_ncase(path):
return ncase_path if found else path
def ensure_ext(filepath, ext, case_sensitive=False):
def ensure_ext(filepath, ext, *, case_sensitive=False):
"""
Return the path with the extension added if it is not already set.
@@ -331,7 +332,7 @@ def ensure_ext(filepath, ext, case_sensitive=False):
return filepath + ext
def module_names(path, recursive=False):
def module_names(path, *, recursive=False):
"""
Return a list of modules which can be imported from *path*.
@@ -360,7 +361,7 @@ def module_names(path, recursive=False):
if isfile(fullpath):
modules.append((filename, fullpath))
if recursive:
for mod_name, mod_path in module_names(directory, True):
for mod_name, mod_path in module_names(directory, recursive=True):
modules.append(("%s.%s" % (filename, mod_name),
mod_path,
))

View File

@@ -81,7 +81,7 @@ _script_module_dirs = "startup", "modules"
_is_factory_startup = _bpy.app.factory_startup
def execfile(filepath, mod=None):
def execfile(filepath, *, mod=None):
"""
Execute a file path as a Python script.
@@ -193,7 +193,7 @@ _global_loaded_modules = [] # store loaded module names for reloading.
import bpy_types as _bpy_types # keep for comparisons, never ever reload this.
def load_scripts(reload_scripts=False, refresh_scripts=False):
def load_scripts(*, reload_scripts=False, refresh_scripts=False):
"""
Load scripts and run each modules register function.
@@ -357,7 +357,7 @@ def script_path_pref():
return _os.path.normpath(path) if path else None
def script_paths(subdir=None, user_pref=True, check_all=False, use_user=True):
def script_paths(*, subdir=None, user_pref=True, check_all=False, use_user=True):
"""
Returns a list of valid script paths.
@@ -446,16 +446,16 @@ def refresh_script_paths():
_sys_path_ensure_append(path)
def app_template_paths(subdir=None):
def app_template_paths(*, path=None):
"""
Returns valid application template paths.
:arg subdir: Optional subdir.
:type subdir: string
:arg path: Optional subdir.
:type path: string
:return: app template paths.
:rtype: generator
"""
subdir_args = (subdir,) if subdir is not None else ()
subdir_args = (path,) if path is not None else ()
# Note: keep in sync with: Blender's 'BKE_appdir_app_template_any'.
# Uses 'BLENDER_USER_SCRIPTS', 'BLENDER_SYSTEM_SCRIPTS'
# ... in this case 'system' accounts for 'local' too.
@@ -463,9 +463,9 @@ def app_template_paths(subdir=None):
(_user_resource, "bl_app_templates_user"),
(system_resource, "bl_app_templates_system"),
):
path = resource_fn('SCRIPTS', _os.path.join("startup", module_name, *subdir_args))
if path and _os.path.isdir(path):
yield path
path_test = resource_fn('SCRIPTS', path=_os.path.join("startup", module_name, *subdir_args))
if path_test and _os.path.isdir(path_test):
yield path_test
def preset_paths(subdir):
@@ -478,7 +478,7 @@ def preset_paths(subdir):
:rtype: list
"""
dirs = []
for path in script_paths("presets", check_all=True):
for path in script_paths(subdir="presets", check_all=True):
directory = _os.path.join(path, subdir)
if not directory.startswith(path):
raise Exception("invalid subdir given %r" % subdir)
@@ -532,7 +532,7 @@ def is_path_builtin(path):
return False
def smpte_from_seconds(time, fps=None, fps_base=None):
def smpte_from_seconds(time, *, fps=None, fps_base=None):
"""
Returns an SMPTE formatted string from the *time*:
``HH:MM:SS:FF``.
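As a quick worked example (assuming an `fps_base` of 1.0, since the scene value is used when it is not passed): 90.5 seconds at 24 fps is 1 minute, 30 seconds and 12 frames, because 0.5 s × 24 fps = 12.
```
import bpy

# Expected result: "00:01:30:12" (assuming fps_base 1.0).
print(bpy.utils.smpte_from_seconds(90.5, fps=24))
```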
@@ -552,7 +552,7 @@ def smpte_from_seconds(time, fps=None, fps_base=None):
)
def smpte_from_frame(frame, fps=None, fps_base=None):
def smpte_from_frame(frame, *, fps=None, fps_base=None):
"""
Returns an SMPTE formatted string from the *frame*:
``HH:MM:SS:FF``.
@@ -585,7 +585,7 @@ def smpte_from_frame(frame, fps=None, fps_base=None):
))
def time_from_frame(frame, fps=None, fps_base=None):
def time_from_frame(frame, *, fps=None, fps_base=None):
"""
Returns the time from a frame number.
@@ -610,7 +610,7 @@ def time_from_frame(frame, fps=None, fps_base=None):
return timedelta(0, frame / fps)
def time_to_frame(time, fps=None, fps_base=None):
def time_to_frame(time, *, fps=None, fps_base=None):
"""
Returns a float frame number from a time given in seconds or
as a datetime.timedelta object.
@@ -639,7 +639,7 @@ def time_to_frame(time, fps=None, fps_base=None):
return time * fps
def preset_find(name, preset_path, display_name=False, ext=".py"):
def preset_find(name, preset_path, *, display_name=False, ext=".py"):
if not name:
return None
@@ -676,7 +676,7 @@ def keyconfig_init():
keyconfig_set(filepath)
def keyconfig_set(filepath, report=None):
def keyconfig_set(filepath, *, report=None):
from os.path import basename, splitext
if _bpy.app.debug_python:
@@ -712,14 +712,14 @@ def keyconfig_set(filepath, report=None):
return True
def user_resource(resource_type, path="", create=False):
def user_resource(resource_type, *, path="", create=False):
"""
Return a user resource path (normally from the users home directory).
:arg type: Resource type in ['DATAFILES', 'CONFIG', 'SCRIPTS', 'AUTOSAVE'].
:type type: string
:arg subdir: Optional subdirectory.
:type subdir: string
:arg path: Optional subdirectory.
:type path: string
:arg create: Treat the path as a directory and create
it if it does not exist.
:type create: boolean
@@ -727,7 +727,7 @@ def user_resource(resource_type, path="", create=False):
:rtype: string
"""
target_path = _user_resource(resource_type, path)
target_path = _user_resource(resource_type, path=path)
if create:
# should always be true.
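A small usage sketch for the new keyword-only form (the add-on name is a placeholder; run inside Blender):
```
import bpy

# Per-user CONFIG directory for a hypothetical add-on, created on demand.
cfg_dir = bpy.utils.user_resource('CONFIG', path="my_addon", create=True)
print(cfg_dir)
```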

View File

@@ -447,7 +447,7 @@ def path_reference(
"""
import os
is_relative = filepath.startswith("//")
filepath_abs = bpy.path.abspath(filepath, base_src, library)
filepath_abs = bpy.path.abspath(filepath, start=base_src, library=library)
filepath_abs = os.path.normpath(filepath_abs)
if mode in {'ABSOLUTE', 'RELATIVE', 'STRIP'}:

View File

@@ -64,7 +64,7 @@ def region_2d_to_vector_3d(region, rv3d, coord):
return view_vector
def region_2d_to_origin_3d(region, rv3d, coord, clamp=None):
def region_2d_to_origin_3d(region, rv3d, coord, *, clamp=None):
"""
Return the 3d view origin from the region relative 2d coords.
@@ -167,7 +167,7 @@ def region_2d_to_location_3d(region, rv3d, coord, depth_location):
)[0]
def location_3d_to_region_2d(region, rv3d, coord, default=None):
def location_3d_to_region_2d(region, rv3d, coord, *, default=None):
"""
Return the *region* relative 2d location of a 3d position.
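For context, a sketch of how these helpers are typically combined to build a view ray from a mouse position, e.g. inside a modal operator's event handler (the surrounding operator is assumed, not shown):
```
from bpy_extras import view3d_utils

def view_ray_from_mouse(context, event):
    region = context.region           # 2D region under the cursor
    rv3d = context.region_data        # its RegionView3D
    coord = (event.mouse_region_x, event.mouse_region_y)

    origin = view3d_utils.region_2d_to_origin_3d(region, rv3d, coord)
    direction = view3d_utils.region_2d_to_vector_3d(region, rv3d, coord)
    return origin, direction
```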

View File

@@ -150,7 +150,11 @@ class Object(bpy_types.ID):
class WindowManager(bpy_types.ID):
__slots__ = ()
def popup_menu(self, draw_func, title="", icon='NONE'):
def popup_menu(
self, draw_func, *,
title="",
icon='NONE',
):
import bpy
popup = self.popmenu_begin__internal(title, icon=icon)
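With this change `title` and `icon` must be given by keyword when calling the method; a minimal usage sketch:
```
import bpy

def draw(self, context):
    self.layout.label(text="Something happened")

# title and icon are now keyword-only.
bpy.context.window_manager.popup_menu(draw, title="Report", icon='INFO')
```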
@@ -176,7 +180,11 @@ class WindowManager(bpy_types.ID):
finally:
self.popover_end__internal(popup, keymap=keymap)
def popup_menu_pie(self, event, draw_func, title="", icon='NONE'):
def popup_menu_pie(
self, event, draw_func, *,
title="",
icon='NONE',
):
import bpy
pie = self.piemenu_begin__internal(title, icon=icon, event=event)
@@ -392,7 +400,7 @@ class EditBone(StructRNA, _GenericBone, metaclass=StructMetaPropGroup):
self.tail = self.head + vec
self.roll = other.roll
def transform(self, matrix, scale=True, roll=True):
def transform(self, matrix, *, scale=True, roll=True):
"""
Transform the bone's head, tail, roll and envelope
(when the matrix has a scale component).
@@ -560,9 +568,17 @@ class Text(bpy_types.ID):
self.write(string)
def as_module(self):
from os.path import splitext
import bpy
from os.path import splitext, join
from types import ModuleType
mod = ModuleType(splitext(self.name)[0])
name = self.name
mod = ModuleType(splitext(name)[0])
# This is a fake file-path, set this since some scripts check `__file__`,
# error messages may include this as well.
# NOTE: the file path may be a blank string if the file hasn't been saved.
mod.__dict__.update({
"__file__": join(bpy.data.filepath, name),
})
# TODO: We could use Text.compiled (C struct member)
# if this is called often it will be much faster.
exec(self.as_string(), mod.__dict__)
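A usage sketch for `Text.as_module` (the text datablock name is a placeholder; run inside Blender):
```
import bpy

text = bpy.data.texts["tools.py"]   # placeholder text datablock
mod = text.as_module()

# Names defined in the text block are attributes of the returned module;
# __file__ now holds a pseudo path built from the .blend file path
# (it may be partly blank if the file has not been saved).
print(mod.__file__)
```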
@@ -731,7 +747,7 @@ class Operator(StructRNA, metaclass=RNAMeta):
return delattr(properties, attr)
return super().__delattr__(attr)
def as_keywords(self, ignore=()):
def as_keywords(self, *, ignore=()):
"""Return a copy of the properties as a dictionary"""
ignore = ignore + ("rna_type",)
return {attr: getattr(self, attr)

View File

@@ -86,7 +86,7 @@ def get_doc(obj):
return result and RE_EMPTY_LINE.sub('', result.rstrip()) or ''
def get_argspec(func, strip_self=True, doc=None, source=None):
def get_argspec(func, *, strip_self=True, doc=None, source=None):
"""Get argument specifications.
:param strip_self: strip `self` from argspec

View File

@@ -143,7 +143,7 @@ def complete(line):
"""
import inspect
def try_import(mod, only_modules=False):
def try_import(mod, *, only_modules=False):
def is_importable(module, attr):
if only_modules:
@@ -184,7 +184,7 @@ def complete(line):
mod = words[1].split('.')
if len(mod) < 2:
return filter_prefix(get_root_modules(), words[-1])
completion_list = try_import('.'.join(mod[:-1]), True)
completion_list = try_import('.'.join(mod[:-1]), only_modules=True)
completion_list = ['.'.join(mod[:-1] + [el]) for el in completion_list]
return filter_prefix(completion_list, words[-1])
if len(words) >= 3 and words[0] == 'from':

View File

@@ -62,7 +62,7 @@ def complete_names(word, namespace):
return sorted(set(completer.matches))
def complete_indices(word, namespace, obj=None, base=None):
def complete_indices(word, namespace, *, obj=None, base=None):
"""Complete a list or dictionary with its indices:
* integer numbers for list
@@ -117,7 +117,7 @@ def complete_indices(word, namespace, obj=None, base=None):
return matches
def complete(word, namespace, private=True):
def complete(word, namespace, *, private=True):
"""Complete word within a namespace with the standard rlcompleter
module. Also supports index or key access [].
@@ -191,7 +191,7 @@ def complete(word, namespace, private=True):
# an extra char '[', '(' or '.' will be added
if hasattr(obj, '__getitem__') and not is_struct_seq(obj):
# list or dictionary
matches = complete_indices(word, namespace, obj)
matches = complete_indices(word, namespace, obj=obj)
elif hasattr(obj, '__call__'):
# callables
matches = [word + '(']

View File

@@ -87,7 +87,7 @@ def complete(line, cursor, namespace, private):
matches.sort()
else:
from . import complete_namespace
matches = complete_namespace.complete(word, namespace, private)
matches = complete_namespace.complete(word, namespace, private=private)
else:
# for now we don't have completers for strings
# TODO: add file auto completer for strings
@@ -96,7 +96,7 @@ def complete(line, cursor, namespace, private):
return matches, word
def expand(line, cursor, namespace, private=True):
def expand(line, cursor, namespace, *, private=True):
"""This method is invoked when the user asks autocompletion,
e.g. when Ctrl+Space is clicked.
@@ -150,5 +150,5 @@ def expand(line, cursor, namespace, private=True):
line = line[:cursor] + prefix + line[cursor:]
cursor += len(prefix.encode('utf-8'))
if no_calltip and prefix.endswith('('):
return expand(line, cursor, namespace, private)
return expand(line, cursor, namespace, private=private)
return line, cursor, scrollback

View File

@@ -21,7 +21,7 @@ __all__ = (
)
def batch_for_shader(shader, type, content, indices=None):
def batch_for_shader(shader, type, content, *, indices=None):
"""
Return a batch already configured and compatible with the shader.
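A minimal usage sketch, assuming a Blender 2.9x builtin shader name; note that `indices` is now keyword-only, and the draw function is meant to be registered as a draw handler (registration not shown):
```
import gpu
from gpu_extras.batch import batch_for_shader

coords = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
shader = gpu.shader.from_builtin('3D_UNIFORM_COLOR')
batch = batch_for_shader(shader, 'LINES', {"pos": coords})

def draw():
    shader.bind()
    shader.uniform_float("color", (1.0, 0.5, 0.0, 1.0))
    batch.draw(shader)
```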

View File

@@ -16,7 +16,7 @@
#
# ***** END GPL LICENSE BLOCK *****
def draw_circle_2d(position, color, radius, segments=32):
def draw_circle_2d(position, color, radius, *, segments=32):
"""
Draw a circle.

View File

@@ -240,7 +240,7 @@ def RKS_GEN_custom_props(_ksi, _context, ks, data):
prop_path = '["%s"]' % bpy.utils.escape_identifier(cprop_name)
try:
rna_property = data.path_resolve(prop_path, False)
except ValueError as ex:
except ValueError:
# This happens when a custom property is set to None. In that case it cannot
# be converted to an FCurve-compatible value, so we can't keyframe it anyway.
continue

View File

@@ -25,7 +25,7 @@ class NodeCategory:
def poll(cls, _context):
return True
def __init__(self, identifier, name, description="", items=None):
def __init__(self, identifier, name, *, description="", items=None):
self.identifier = identifier
self.name = name
self.description = description
@@ -43,7 +43,7 @@ class NodeCategory:
class NodeItem:
def __init__(self, nodetype, label=None, settings=None, poll=None):
def __init__(self, nodetype, *, label=None, settings=None, poll=None):
if settings is None:
settings = {}
@@ -92,7 +92,7 @@ class NodeItem:
class NodeItemCustom:
def __init__(self, poll=None, draw=None):
def __init__(self, *, poll=None, draw=None):
self.poll = poll
self.draw = draw

View File

@@ -30,7 +30,7 @@ ARRAY_TYPES = (list, tuple, IDPropertyArray, Vector)
MAX_DISPLAY_ROWS = 4
def rna_idprop_ui_get(item, create=True):
def rna_idprop_ui_get(item, *, create=True):
try:
return item['_RNA_UI']
except:
@@ -59,9 +59,9 @@ def rna_idprop_ui_prop_update(item, prop):
prop_rna.update()
def rna_idprop_ui_prop_get(item, prop, create=True):
def rna_idprop_ui_prop_get(item, prop, *, create=True):
rna_ui = rna_idprop_ui_get(item, create)
rna_ui = rna_idprop_ui_get(item, create=create)
if rna_ui is None:
return None
@@ -73,8 +73,8 @@ def rna_idprop_ui_prop_get(item, prop, create=True):
return rna_ui[prop]
def rna_idprop_ui_prop_clear(item, prop, remove=True):
rna_ui = rna_idprop_ui_get(item, False)
def rna_idprop_ui_prop_clear(item, prop, *, remove=True):
rna_ui = rna_idprop_ui_get(item, create=False)
if rna_ui is None:
return
@@ -143,7 +143,7 @@ def rna_idprop_ui_prop_default_set(item, prop, value):
pass
if defvalue:
rna_ui = rna_idprop_ui_prop_get(item, prop, True)
rna_ui = rna_idprop_ui_prop_get(item, prop, create=True)
rna_ui["default"] = defvalue
else:
rna_ui = rna_idprop_ui_prop_get(item, prop)
@@ -181,7 +181,7 @@ def rna_idprop_ui_create(
rna_idprop_ui_prop_update(item, prop)
# Clear the UI settings
rna_ui_group = rna_idprop_ui_get(item, True)
rna_ui_group = rna_idprop_ui_get(item, create=True)
rna_ui_group[prop] = {}
rna_ui = rna_ui_group[prop]
@@ -210,7 +210,7 @@ def rna_idprop_ui_create(
return rna_ui
def draw(layout, context, context_member, property_type, use_edit=True):
def draw(layout, context, context_member, property_type, *, use_edit=True):
def assign_props(prop, val, key):
prop.data_path = context_member

View File

@@ -245,9 +245,10 @@ def rna2xml(
fw("%s</%s>\n" % (root_ident, root_node))
def xml2rna(root_xml,
root_rna=None, # must be set
):
def xml2rna(
root_xml, *,
root_rna=None, # must be set
):
def rna2xml_node(xml_node, value):
# print("evaluating:", xml_node.nodeName)
@@ -394,7 +395,7 @@ def xml_file_run(context, filepath, rna_map):
xml2rna(xml_node, root_rna=value)
def xml_file_write(context, filepath, rna_map, skip_typemap=None):
def xml_file_write(context, filepath, rna_map, *, skip_typemap=None):
file = open(filepath, "w", encoding="utf-8")
fw = file.write

View File

@@ -1,4 +0,0 @@
import bpy
bpy.context.camera.sensor_width = 6.16
bpy.context.camera.sensor_height = 4.62
bpy.context.camera.sensor_fit = 'HORIZONTAL'

View File

@@ -0,0 +1,4 @@
import bpy
bpy.context.camera.sensor_width = 13.2
bpy.context.camera.sensor_height = 8.80
bpy.context.camera.sensor_fit = 'HORIZONTAL'

Some files were not shown because too many files have changed in this diff.